eganscha committed on
Commit 38cb764 · 1 Parent(s): 451b5e1

restructuring and updated readme.
README.md CHANGED
@@ -13,14 +13,30 @@ task_categories:
 - image-text-to-text
 pretty_name: "Gomoku VLM Dataset (LoRA finetuning)"
 configs:
-- config_name: train_basic_visual_strategy_split
+- config_name: visual
   data_files:
   - split: train
-    path: "train/*.parquet"
-- config_name: eval
+    path: "visual/train/*.parquet"
+  - split: validation
+    path: "visual/eval/*.parquet"
+
+- config_name: strategy
   data_files:
+  - split: train
+    path: "strategy/train/*.parquet"
   - split: validation
-    path: "eval/*.parquet"
+    path: "strategy/eval/*.parquet"
+
+# Curriculum
+- config_name: visual_curriculum
+  data_files:
+  - split: train
+    path:
+    - "curriculum/step_1.parquet"
+    - "curriculum/step_2.parquet"
+    - "curriculum/step_3.parquet"
+    - "curriculum/step_4.parquet"
+
 - config_name: test
   data_files:
   - split: test
@@ -30,10 +46,16 @@ configs:
 # Gomoku VLM Dataset (LoRA finetuning)
 
 This repository contains a synthetic, image-grounded instruction dataset for training and evaluating **vision-language models (VLMs)** on **Gomoku (15×15)**.
-The dataset is designed for **LoRA finetuning** of image-text-to-text models (e.g. google/gemma-3-4b-it style) on two main skill families:
+The dataset is designed for **LoRA finetuning** of image-text-to-text vision-language models on **two complementary capabilities**:
+
+- **Visual**
+  Tasks where the model must read the board image and produce a structured answer about the current position.
+  This includes purely perceptual objectives (cell classification, counting) as well as *visually grounded reasoning* such as run/line detection, matrix reconstruction, end-state recognition, and yes/no tactical assessments that can be decided from the current snapshot (e.g., "immediate win exists", "opponent threatens immediate win").
 
-- **Perception**: read board state from an image and answer factual questions (counts, locations, piece colors, etc.)
-- **Strategy**: answer tactical/strategic questions (win-in-1, recommended moves, etc.)
+- **Strategy / policy (action selection)**
+  Tasks that require choosing an action (e.g., best move / win-in-1 move selection) and decision-making that approximates a bot's policy.
+
+A **curriculum** variant of the visual data is also provided: the training examples are split into four steps that move progressively from simpler to more complex board states and visually grounded objectives (e.g., from basic cell/count tasks toward more advanced structured board understanding).
 
 Each example includes:
 1) a rendered board image,
@@ -44,19 +66,68 @@ Each example includes:
 
 ## Dataset structure
 
-This dataset is published as multiple *configs* (subsets) to support different training regimes:
+This dataset is organized into multiple Hugging Face **configs** that mirror the repository folders:
+
+### Configs
+
+- **`visual`**
+  Perception-focused questions (board reading, counting, localization, etc.).
+  Splits:
+  - `train` → `visual/train/*.parquet`
+  - `validation` → `visual/eval/*.parquet`
 
-- `train_basic_visual_strategy_split`: main training subset (split into visual + strategy)
-- `train_curriculum_strategy`: curriculum-style strategy training (sharded in steps)
-- `train_curriculum_visual`: curriculum-style perception/visual training (sharded in steps)
-- `eval`: validation set
-- `eval_reduced`: smaller validation set for faster iteration
-- `test`: test set
+- **`strategy`**
+  Tactical / strategic questions (e.g., win-in-1 style tasks, move selection based on a bot policy).
+  Splits:
+  - `train` → `strategy/train/*.parquet`
+  - `validation` → `strategy/eval/*.parquet`
 
-You can select a config using Hugging Face `datasets`:
+- **`visual_curriculum`**
+  Step-wise curriculum training data, provided as **four steps of increasing difficulty**:
+  - `curriculum/step_1.parquet`
+  - `curriculum/step_2.parquet`
+  - `curriculum/step_3.parquet`
+  - `curriculum/step_4.parquet`
+
+- **`test`**
+  Held-out test split:
+  - `test` → `test/combined.parquet`
+
+## Loading the dataset
 
 ```python
 from datasets import load_dataset
 
-ds = load_dataset("eganscha/gomoku_vlm_ds", "train_basic_visual_strategy_split")
-print(ds)
+# Visual / perception
+ds_visual = load_dataset("eganscha/gomoku_vlm_ds", "visual")
+print(ds_visual)
+
+# Strategy
+ds_strategy = load_dataset("eganscha/gomoku_vlm_ds", "strategy")
+print(ds_strategy)
+
+# Curriculum steps (visual curriculum)
+ds_curr = load_dataset("eganscha/gomoku_vlm_ds", "visual_curriculum")
+print(ds_curr)
+
+# Test
+ds_test = load_dataset("eganscha/gomoku_vlm_ds", "test")
+print(ds_test)
+```
+
+## Downloading the dataset locally
+
+### Install the hf CLI
+```bash
+curl -LsSf https://hf.co/cli/install.sh | bash
+```
+
+### Reload your shell configuration
+```bash
+source ~/.bashrc
+```
+
+### Download the dataset
+```bash
+hf download eganscha/gomoku_vlm_ds --repo-type=dataset --local-dir ./gomoku_vlm_ds
+```
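Note that loading the `visual_curriculum` config merges all four step files into a single `train` split. For actual step-wise curriculum training you would typically load one step at a time; a minimal sketch (the helper names are illustrative, only the parquet paths come from the config in this commit):

```python
# Sketch: load the visual curriculum one step at a time instead of as a
# single merged train split. Helper names are illustrative; the parquet
# paths are the ones declared in the `visual_curriculum` config.

def curriculum_step_files(n_steps: int = 4) -> list[str]:
    """Per-step parquet paths inside the dataset repo."""
    return [f"curriculum/step_{i}.parquet" for i in range(1, n_steps + 1)]

def load_step(step: int):
    """Load a single curriculum step as its own `train` split.

    Passing `data_files` along with the repo id makes `datasets`
    resolve the paths inside the Hub repository (requires network access).
    """
    from datasets import load_dataset
    path = curriculum_step_files()[step - 1]
    return load_dataset("eganscha/gomoku_vlm_ds", data_files={"train": path})

# Typical loop: train on each step in order, carrying the adapter forward.
# for step in range(1, 5):
#     ds = load_step(step)
#     ...train on ds["train"], then continue with the next step...
```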
main_ds/eval/strategy.parquet → curriculum/step_1.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ee8fa40df2c83402cc472843708a9d797c85af6b138bf2cb1170aa0f05596d32
-size 16528890
+oid sha256:d8a167b6eb22e59e77a9c5605767849d0ecc775680fecd3b44cee5d43b62883b
+size 51472120
main_ds/train/strategy.parquet → curriculum/step_2.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e9ebb5b2b0dc7d1f1efe758fe8a108e1fdf152fe2a8b690dd61abc4b18c08fb8
-size 236815969
+oid sha256:d970880fe81b496457d7e9d4ed7011d39438db352fa83ecccafb913d2ab9a154
+size 160992477
rehearsal_ds/eval/visual.parquet → curriculum/step_3.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:93e0a685796604f77b99d4f03c6052df3ca656fba56e3516c6f3ee2e914a3dd8
-size 64300392
+oid sha256:f9aee8836e8e330a75e014976bd7a20de146e18cfd5023a05c0ba5abc3e0283f
+size 247712785
rehearsal_ds/train/visual.parquet → curriculum/step_4.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a63a8a284657dfa702c92179c9397347b4d669a917a63aa0b05fc82e351c466a
-size 804581284
+oid sha256:453787ae10ec65066952a6631dfc7dcfad038405d322598cd44472251bad7267
+size 401155761
rehearsal_ds/eval/strategy.parquet → strategy/strategy_eval.parquet RENAMED
File without changes
rehearsal_ds/train/strategy.parquet → strategy/strategy_train.parquet RENAMED
File without changes
main_ds/eval/visual.parquet → visual/visual_eval.parquet RENAMED
File without changes
main_ds/train/visual.parquet → visual/visual_train.parquet RENAMED
File without changes
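Since this commit moves the parquet files into the new folder layout, a quick local sanity check after `hf download` can confirm the restructuring arrived intact. A minimal sketch; the expected paths are taken from the rename list above:

```python
# Sketch: verify the restructured layout under the download directory.
# Expected paths come from the file renames in this commit; adjust the
# list if the repository layout changes again.
from pathlib import Path

EXPECTED_FILES = [
    "curriculum/step_1.parquet",
    "curriculum/step_2.parquet",
    "curriculum/step_3.parquet",
    "curriculum/step_4.parquet",
    "strategy/strategy_train.parquet",
    "strategy/strategy_eval.parquet",
    "visual/visual_train.parquet",
    "visual/visual_eval.parquet",
]

def missing_files(local_dir: str) -> list[str]:
    """Return the expected parquet files that are absent under `local_dir`."""
    root = Path(local_dir)
    return [rel for rel in EXPECTED_FILES if not (root / rel).is_file()]

# Example: an empty list means the download is complete.
# print(missing_files("./gomoku_vlm_ds"))
```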