huzican committed · Commit bfe0aa6 · verified · 1 Parent(s): 1a98e9e

Update README.md

Files changed (1):
  1. README.md (+21 −45)
README.md CHANGED
@@ -19,17 +19,6 @@ dataset_info:
     dtype: string
   - name: full_text_only_thought
     dtype: string
-  - name: meta_info
-    struct:
-    - name: map
-      sequence:
-        sequence: string
-    - name: icon_info
-      dtype: string
-    - name: task_type
-      dtype: string
-    - name: goal_object
-      dtype: string
   splits:
   - name: test
     num_bytes: 10179370
@@ -65,46 +54,33 @@ A multimodal chain-of-thought reasoning dataset for game navigation and puzzle-s
 
 This dataset contains 100 visual reasoning examples from various game scenarios, featuring step-by-step reasoning with intermediate visual representations.
 
-### Supported Tasks
-
-- **Navigation**: Path planning in grid-based game environments
-- **Puzzle-solving**: Goal-oriented reasoning tasks
-
 ## Dataset Structure
 
-Each example contains:
-
 | Field | Type | Description |
 |-------|------|-------------|
-| `pid` | string | Unique problem ID (`{scene}_{task}_{width}_{height}_{index}`) |
+| `pid` | string | Unique problem ID |
 | `question` | string | Task description |
 | `answer` | string | Expert path / solution |
-| `problem_image_0` | image | Initial game state visualization |
-| `resoning_thought_0` | string | First round reasoning text |
-| `reasoning_image_0` | image | Intermediate reasoning visualization |
-| `resoning_thought_1` | string | Second round reasoning text |
-| `full_text_only_thought` | string | Complete text-only reasoning chain |
-| `task` | string | Task identifier (`{scene}_{task}`) |
-| `meta_info` | dict | Metadata for verification |
+| `problem_image_0` | image | Initial game state |
+| `resoning_thought_0` | string | First round reasoning |
+| `reasoning_image_0` | image | Reasoning visualization |
+| `resoning_thought_1` | string | Second round reasoning |
+| `task` | string | Task identifier |
+| `full_text_only_thought` | string | Complete reasoning chain |
+| `meta_info` | dict | Metadata (map, icon_info, task_type, goal_object) |
 
-### Meta Info Structure
+## Usage
 
 ```python
-{
-  "map": [
-    ["1", "1", "f", "1", "f"],
-    ["0", "1", "0", "s", "0"],
-    ["1", "f", "1", "0", "1"],
-    ["1", "0", "1", "1", "1"]
-  ],
-  # Grid layout: "0"=empty, "1"=wall, "s"=start, "f"=goal, etc.
-
-  "icon_info": {
-    "key": {"flags": ["A"], "pos": [(0, 2)]},
-    "treasure": {"flags": ["B", "C"], "pos": [(2, 1), (0, 4)]}
-  },
-  # Object definitions with flags and positions
-
-  "task_type": "navigation",
-  "goal_object": "treasure"
-}
+from datasets import load_dataset
+
+dataset = load_dataset("huzican/eval_game")
+example = dataset["test"][0]
+
+# View images
+example['problem_image_0'].show()
+example['reasoning_image_0'].show()
+
+# Access reasoning
+print(example['resoning_thought_0'])
+print(example['resoning_thought_1'])
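The `meta_info` grid encoding documented in the removed "Meta Info Structure" section (`"0"` = empty, `"1"` = wall, `"s"` = start, `"f"` = goal) is enough to sanity-check the navigation tasks. Below is a minimal sketch, assuming 4-connected movement and that every non-wall cell is walkable; `find_cells` and `shortest_path` are illustrative helpers, not part of the dataset, and `EXAMPLE_MAP` is the sample map from the old README:

```python
from collections import deque

# Sample grid from the removed "Meta Info Structure" section:
# "0" = empty, "1" = wall, "s" = start, "f" = goal.
EXAMPLE_MAP = [
    ["1", "1", "f", "1", "f"],
    ["0", "1", "0", "s", "0"],
    ["1", "f", "1", "0", "1"],
    ["1", "0", "1", "1", "1"],
]

def find_cells(grid, symbol):
    """Return the (row, col) positions of every cell equal to `symbol`."""
    return [(r, c) for r, row in enumerate(grid)
            for c, cell in enumerate(row) if cell == symbol]

def shortest_path(grid, start, goals):
    """Breadth-first search from `start` to the nearest cell in `goals`.

    Assumes 4-connected moves and treats any non-wall ("1") cell as
    walkable. Returns the path as a list of (row, col) tuples, or None
    if no goal is reachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell in goals:
            # Walk the parent chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "1" and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

start = find_cells(EXAMPLE_MAP, "s")[0]    # (1, 3)
goals = set(find_cells(EXAMPLE_MAP, "f"))  # {(0, 2), (0, 4), (2, 1)}
path = shortest_path(EXAMPLE_MAP, start, goals)
# → [(1, 3), (1, 2), (0, 2)]
```

BFS visits cells in order of distance from the start, so the first goal it reaches is the nearest one; a verified expert path should never be shorter than the BFS result.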