Modalities: Text · Formats: json · Size: < 1K · Libraries: Datasets, pandas
sxluo committed dd32df6 (verified · 1 parent: 3be2fbf)

Update README.md

Files changed (1): README.md (+31 −16)

README.md CHANGED
@@ -15,23 +15,8 @@ GeoGramBench is a tailored benchmark dataset designed for evaluating the geometr
 - **Global Abstract Integration:** Complex problems requiring global spatial synthesis, parameterization, or 3D reasoning.
 - **Six Subtypes:** Problems span six mathematical subfields: `Angle`, `Length`, `Area`, `Volume`, `Ratio`, and `Count`, supporting fine-grained diagnostics.
 
-## Benchmark Highlights
-
-- GeoGramBench differs from traditional math benchmarks by emphasizing the symbolic-to-spatial abstraction capabilities of LLMs, leveraging procedural code expressed in formats such as `Asymptote`.
-- Initial evaluation using 17 state-of-the-art LLMs revealed substantial gaps, particularly for higher abstraction tasks:
-  - Models achieved less than **50%** accuracy on the most challenging **Global Abstract Integration** category.
-  - Even advanced models struggle to bridge procedural code with reliable spatial reasoning.
-
 ## Dataset Composition
 
-| Complexity Level | Problem Count | Example Tasks |
-|----------------------------|---------------|-----------------------------------|
-| **Primitive Recognition** | 102 | Compute the area of a triangle. |
-| **Local Relation Composition** | 279 | Solve for angles in composite diagrams. |
-| **Global Abstract Integration** | 119 | Analyze 3D projections and symmetry. |
-
-### Subtype Distribution Across Levels
-
 | Subtype | Primitive | Compositional | Abstract |
 |-----------|-----------|---------------|----------|
 | Angle | 22 | 20 | 7 |
@@ -41,10 +26,40 @@
 | Count | 15 | 31 | 15 |
 | Volume | 0 | 0 | 27 |
 
+## Benchmark Highlights
+
+- GeoGramBench differs from traditional math benchmarks by emphasizing the symbolic-to-spatial abstraction capabilities of LLMs, leveraging procedural code expressed in formats such as `Asymptote`.
+- Initial evaluation using 17 state-of-the-art LLMs revealed substantial gaps, particularly for higher abstraction tasks:
+  - Models achieved less than **50%** accuracy on the most challenging **Global Abstract Integration** category.
+  - Even advanced models struggle to bridge procedural code with reliable spatial reasoning.
+
+| Model | Primitive | Compositional | Abstract | ALL |
+|-------|-----------|---------------|----------|-----|
+| <strong>Closed-source Models</strong> |
+| GPT-o3-mini | 84.33 | 75.66 | 42.16 | 70.00 |
+| GPT-o1 | <strong>86.76</strong> | <strong>76.02</strong> | <strong>43.35</strong> | <strong>70.92</strong> |
+| GPT-o1-preview | 74.79 | 55.98 | 26.20 | 53.15 |
+| GPT-o1-mini | 79.62 | 63.21 | 29.09 | 58.94 |
+| GPT-4o | 39.81 | 21.29 | 4.96 | 21.40 |
+| Gemini-Pro-1.5 | 49.26 | 31.79 | 15.92 | 31.64 |
+| <strong>Open-source Models</strong> |
+| Qwen3-235B-Thinking-2507 | <strong>89.09</strong> | <strong>79.12</strong> | <strong>49.05</strong> | <strong>74.00</strong> |
+| DeepSeek-R1 | 85.66 | 75.27 | 40.38 | 69.17 |
+| DeepSeek-v3-0324 | 80.57 | 68.89 | 27.67 | 62.05 |
+| QwQ-32B | 85.17 | 73.12 | 37.92 | 67.20 |
+| DeepSeek-R1-Distill-Qwen-32B | 79.78 | 67.83 | 35.92 | 62.68 |
+| Bespoke-Stratos-32B | 62.50 | 42.56 | 17.02 | 40.55 |
+| s1.1-32B | 75.37 | 58.96 | 26.58 | 54.60 |
+| DeepSeek-R1-Distill-Qwen-7B | 72.79 | 58.74 | 24.16 | 53.38 |
+| Sky-T1-mini-7B | 71.45 | 57.75 | 24.79 | 52.70 |
+| DeepSeek-R1-Distill-Qwen-1.5B | 60.29 | 39.02 | 11.03 | 36.70 |
+| DeepScaleR-1.5B-preview | 65.44 | 47.89 | 15.76 | 43.83 |
+
+
 ## Use Cases
 
 GeoGramBench is designed for:
-- Researchers developing **geometry-aware LLMs** or fine-tuning models for symbolic-to-spatial reasoning.
+- Researchers developing **geometry-aware LLMs** for symbolic-to-spatial reasoning.
 - Model diagnostics to pinpoint weaknesses in handling code-driven geometric reasoning or abstract spatial relations.
 - Evaluation and advancement of LLMs' performance on tasks involving spatial reasoning.
 
65