---
license: cc-by-4.0
language:
- zh
- en
tags:
- multimodal-mathematical-reasoning
- geometry
- tikz
- latex
- image-to-tikz
- benchmark
size_categories:
- 10K<n<100K
task_categories:
- image-to-text
- visual-question-answering
- text-generation
pretty_name: TriGeoBench
---

# TriGeoBench

TriGeoBench is a geometry-centric multimodal mathematics benchmark designed for evaluating mathematical reasoning with visual diagrams and image-to-TikZ generation. The dataset contains mathematical problems, solutions, diagram images, and corresponding TikZ annotations.

This repository is anonymized for peer review. Author and institution information will be added upon acceptance.

## Dataset Files

The dataset contains four Parquet files:

```text
TriGeoBench
├── README.md
├── image2tikz/
│   ├── train.parquet
│   └── test.parquet
└── question/
    ├── train.parquet
    └── test.parquet
```

The dataset supports two tasks:

1. **Image-to-TikZ generation**: generating TikZ code from a geometric diagram image.
2. **Multimodal mathematical reasoning**: solving math problems with textual questions, solutions, and associated figures.

## Image-to-TikZ Data

Files:

```text
image2tikz/train.parquet
image2tikz/test.parquet
```

Each row corresponds to one diagram image and its ground-truth TikZ code.

### Fields

| Field        | Description                                                  |
| ------------ | ------------------------------------------------------------ |
| `key`        | Unique figure identifier. It is composed of `<problem_id>_<position>_<figure_index>`, where `position` indicates whether the figure appears in the question or the solution. This key can be linked to the corresponding problem in the question-level data. |
| `image`      | Base64-encoded image.                                        |
| `latex_gt`   | Ground-truth TikZ code corresponding to the image.           |
| `difficulty` | Figure complexity level. Possible values are `容易` (easy), `中等` (medium), and `困难` (hard). |
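Since the `key` follows the `<problem_id>_<position>_<figure_index>` pattern, it can be parsed to recover the problem identifier and joined against `sample_id` in the question-level data. A minimal sketch (the helper name and the example key are illustrative, not part of the dataset; splitting from the right keeps problem IDs that themselves contain underscores intact):

```python
def parse_key(key: str) -> dict:
    """Split a figure key of the form <problem_id>_<position>_<figure_index>.

    rsplit from the right preserves underscores inside problem_id.
    """
    problem_id, position, figure_index = key.rsplit("_", 2)
    return {
        "problem_id": problem_id,       # join target for `sample_id`
        "position": position,           # question-side or solution-side marker
        "figure_index": int(figure_index),
    }

# Hypothetical key, for illustration only:
parsed = parse_key("12345_q_1")
```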

## Question-Level Data

Files:

```text
question/train.parquet
question/test.parquet
```

Each row corresponds to one mathematical problem, including the problem text, solution, metadata, and associated figures.

### Fields

| Field             | Description                                                  |
| ----------------- | ------------------------------------------------------------ |
| `sample_id`       | Unique problem identifier. It can be linked to the `key` field in the image-to-TikZ data. |
| `difficulty`      | Problem difficulty level. Possible values are `容易` (easy), `中等` (medium), and `困难` (hard). |
| `question_type`   | Problem type. Possible values include `选择题` (multiple choice), `填空题` (fill in the blank), `解答题` (free response), and `证明题` (proof). |
| `knowledge_point` | Main mathematical knowledge area. Possible values include `向量` (vectors), `函数` (functions), `平面几何` (plane geometry), `立体几何` (solid geometry), and `解析几何` (analytic geometry). |
| `question`        | Problem statement in LaTeX format.                           |
| `solution`        | Solution or answer in LaTeX format.                          |
| `q_figX`          | Base64-encoded image of the X-th figure appearing in the question. |
| `q_figX_latex_gt` | Ground-truth TikZ code of the X-th question figure.          |
| `s_figY`          | Base64-encoded image of the Y-th figure appearing in the solution. |
| `s_figY_latex_gt` | Ground-truth TikZ code of the Y-th solution figure.          |

Here, `X` and `Y` denote figure indices. A problem may contain different numbers of question-side and solution-side figures.
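Because the number of `q_figX`/`s_figY` columns varies per problem, a row's figures can be gathered by scanning its column names. A sketch under the assumption that absent figures are stored as nulls (the example row is hypothetical):

```python
import re

def collect_figures(row) -> list:
    """Gather (column_name, image, tikz) triples present in one question row.

    Figure columns follow the q_figX / s_figY pattern; a column is
    skipped when the row carries no value for it.
    """
    figures = []
    for col, value in row.items():
        if re.fullmatch(r"[qs]_fig\d+", col) and value is not None:
            tikz = row.get(f"{col}_latex_gt")  # paired ground-truth TikZ
            figures.append((col, value, tikz))
    return figures

# Hypothetical row, for illustration only:
row = {
    "sample_id": "12345",
    "q_fig1": "<base64>",
    "q_fig1_latex_gt": "\\draw ...",
    "s_fig1": None,
}
figs = collect_figures(row)
```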

## Data Splits

The dataset is split into training and test sets for both tasks:

| Task                   | Train File                 | Test File                 |
| ---------------------- | -------------------------- | ------------------------- |
| Image-to-TikZ          | `image2tikz/train.parquet` | `image2tikz/test.parquet` |
| Mathematical Reasoning | `question/train.parquet`   | `question/test.parquet`   |

## Loading the Dataset

The Parquet files can be loaded with `pandas`:

```python
import pandas as pd

image2tikz_train = pd.read_parquet("image2tikz/train.parquet")
image2tikz_test = pd.read_parquet("image2tikz/test.parquet")

question_train = pd.read_parquet("question/train.parquet")
question_test = pd.read_parquet("question/test.parquet")
```

Base64-encoded images can be decoded as follows:

```python
import base64
from io import BytesIO
from PIL import Image

def decode_base64_image(image_base64):
    image_bytes = base64.b64decode(image_base64)
    return Image.open(BytesIO(image_bytes)).convert("RGB")

img = decode_base64_image(image2tikz_train.iloc[0]["image"])
img.show()
```
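To recompile a `latex_gt` snippet into an image for visual comparison against the stored diagram, one option is to wrap it in a `standalone` LaTeX document and run a TeX toolchain. The wrapper below is a sketch; depending on how a given snippet is stored, it may or may not already include its own `tikzpicture` environment:

```python
STANDALONE_TEMPLATE = r"""\documentclass[tikz,border=2pt]{standalone}
\begin{document}
%s
\end{document}
"""

def wrap_tikz(tikz_code: str) -> str:
    """Embed raw TikZ code in a minimal standalone LaTeX document."""
    return STANDALONE_TEMPLATE % tikz_code

doc = wrap_tikz(r"\begin{tikzpicture}\draw (0,0) -- (1,1);\end{tikzpicture}")
# Write `doc` to a .tex file and compile it with pdflatex or lualatex.
```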

## Intended Use

TriGeoBench is intended for research on:

* multimodal mathematical reasoning;
* geometry-centric visual question answering;
* image-to-TikZ generation;
* evaluating whether models can reason over precise geometric structures;
* studying the interaction between textual math problems, visual diagrams, and symbolic diagram representations.

## Limitations

The dataset focuses on geometry-centric middle- and high-school mathematics problems. The annotations include LaTeX-formatted problem texts and TikZ code for figures. Although the dataset has been processed and checked, residual annotation errors may remain.

## Anonymous Review Notice

This repository is anonymized for peer review. Please do not attempt to identify the authors during the review process.