Commit 3be2fbf · verified · committed by sxluo · 1 Parent(s): b648eed

Update README.md

Files changed (1): README.md (+64 −3)
---
license: apache-2.0
---

# GeoGramBench: Benchmarking the Geometric Program Reasoning in Modern LLMs

GeoGramBench is a benchmark dataset for evaluating the geometric spatial reasoning capabilities of large language models (LLMs) over procedural drawing code. It introduces a novel task, **Program-to-Geometry**, which requires models to translate programmatic drawing code into abstract geometric reasoning in order to solve problems.

## Features of GeoGramBench

- **500 Curated Problems:** Each sample pairs procedural drawing code with a geometry reasoning problem. Problems are rigorously curated for quality, fairness, and diversity.
- **Taxonomy-Based Evaluation:** Problems are categorized into three difficulty levels:
  - **Primitive Recognition:** Basic problems requiring direct recognition of a few geometric elements.
  - **Local Relation Composition:** Reasoning about relationships between multiple geometric components.
  - **Global Abstract Integration:** Complex problems requiring global spatial synthesis, parameterization, or 3D reasoning.
- **Six Subtypes:** Problems span six mathematical subfields: `Angle`, `Length`, `Area`, `Volume`, `Ratio`, and `Count`, supporting fine-grained diagnostics.
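The card's format and library tags (JSON, 🤗 Datasets, pandas) suggest one record per problem. A minimal pandas sketch with a toy record follows — the field names (`problem`, `code`, `level`, `subtype`, `answer`) are illustrative assumptions, not the dataset's documented schema:

```python
import pandas as pd

# A toy record mimicking the dataset's JSON layout. The field names
# (problem/code/level/subtype/answer) are illustrative assumptions,
# not the dataset's documented schema.
sample = [
    {
        "problem": "Find the area of the triangle drawn by the code.",
        "code": "draw((0,0)--(4,0)--(0,3)--cycle);",  # Asymptote snippet
        "level": "Primitive Recognition",
        "subtype": "Area",
        "answer": "6",
    }
]

# Load records into a DataFrame to filter by taxonomy level or subtype.
df = pd.DataFrame(sample)
area_problems = df[df["subtype"] == "Area"]
print(len(area_problems))  # 1
```

Loading the hosted version would instead go through `datasets.load_dataset` with the repository id, which yields the same record-level access.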

## Benchmark Highlights

- GeoGramBench differs from traditional math benchmarks by emphasizing the symbolic-to-spatial abstraction capabilities of LLMs, leveraging procedural code expressed in formats such as `Asymptote`.
- An initial evaluation of 17 state-of-the-art LLMs revealed substantial gaps, particularly on higher-abstraction tasks:
  - Models achieved less than **50%** accuracy on the most challenging **Global Abstract Integration** category.
  - Even advanced models struggle to bridge procedural code with reliable spatial reasoning.
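The per-level accuracy reporting described above can be sketched with toy results — the correctness values below are invented for illustration; only the three level names come from the GeoGramBench taxonomy:

```python
from collections import defaultdict

# Invented evaluation outcomes for illustration; only the level names
# come from the GeoGramBench taxonomy.
results = [
    {"level": "Primitive Recognition", "correct": True},
    {"level": "Primitive Recognition", "correct": True},
    {"level": "Local Relation Composition", "correct": True},
    {"level": "Local Relation Composition", "correct": False},
    {"level": "Global Abstract Integration", "correct": False},
    {"level": "Global Abstract Integration", "correct": True},
    {"level": "Global Abstract Integration", "correct": False},
]

# Tally correct/total per taxonomy level, then report accuracy.
tally = defaultdict(lambda: [0, 0])
for r in results:
    tally[r["level"]][0] += r["correct"]
    tally[r["level"]][1] += 1

accuracy = {lvl: c / n for lvl, (c, n) in tally.items()}
for lvl, acc in accuracy.items():
    print(f"{lvl}: {acc:.2f}")
```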

## Dataset Composition

| Complexity Level                | Problem Count | Example Tasks                           |
|---------------------------------|---------------|-----------------------------------------|
| **Primitive Recognition**       | 102           | Compute the area of a triangle.         |
| **Local Relation Composition**  | 279           | Solve for angles in composite diagrams. |
| **Global Abstract Integration** | 119           | Analyze 3D projections and symmetry.    |

### Subtype Distribution Across Levels

| Subtype | Primitive | Compositional | Abstract |
|---------|-----------|---------------|----------|
| Angle   | 22        | 20            | 7        |
| Length  | 25        | 88            | 20       |
| Area    | 26        | 89            | 46       |
| Ratio   | 14        | 51            | 4        |
| Count   | 15        | 31            | 15       |
| Volume  | 0         | 0             | 27       |
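As a sanity check, the subtype table's columns can be summed and compared against the per-level totals (all numbers copied verbatim from the tables above):

```python
# Subtype counts per level, copied from the tables above.
subtype_counts = {
    "Angle":  {"Primitive": 22, "Compositional": 20, "Abstract": 7},
    "Length": {"Primitive": 25, "Compositional": 88, "Abstract": 20},
    "Area":   {"Primitive": 26, "Compositional": 89, "Abstract": 46},
    "Ratio":  {"Primitive": 14, "Compositional": 51, "Abstract": 4},
    "Count":  {"Primitive": 15, "Compositional": 31, "Abstract": 15},
    "Volume": {"Primitive": 0,  "Compositional": 0,  "Abstract": 27},
}

# Column sums should match the per-level totals: 102 / 279 / 119.
level_totals = {
    lvl: sum(row[lvl] for row in subtype_counts.values())
    for lvl in ("Primitive", "Compositional", "Abstract")
}
print(level_totals)                # {'Primitive': 102, 'Compositional': 279, 'Abstract': 119}
print(sum(level_totals.values()))  # 500
```

The columns do sum to 102, 279, and 119, and the grand total matches the 500 curated problems.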

## Use Cases

GeoGramBench is designed for:

- Researchers developing **geometry-aware LLMs** or fine-tuning models for symbolic-to-spatial reasoning.
- Model diagnostics that pinpoint weaknesses in code-driven geometric reasoning or abstract spatial relations.
- Tracking and advancing LLM performance on spatial-reasoning tasks.

## Citation

If you use GeoGramBench in your research, please cite:

```bibtex
@article{luo2025geogrambench,
  title={{GeoGramBench}: Benchmarking the Geometric Program Reasoning in Modern {LLMs}},
  author={Luo, Shixian and Zhu, Zezhou and Yuan, Yu and Yang, Yuncheng and Shan, Lianlei and Wu, Yong},
  journal={arXiv preprint arXiv:2505.17653},
  year={2025}
}
```