vichetkao committed on
Commit 39cd1fe · verified · 1 parent: d7662ca

Upload the README and metadata

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +92 -0
  3. annotation_data.ipynb +300 -0
  4. info.json +3 -0
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ info.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,92 @@
+ # Wild Khmer Grounding Dataset (OCR + Visual Grounding)
+
+ This dataset is a high-quality collection of Khmer text images "in the wild." It is a **derivative work** specifically enhanced for **Visual Grounding** and **High-Accuracy OCR** fine-tuning.
+
+ ## 📌 Provenance & Credits
+
+ - **Original Source:** [Khmer Word Dataset (Kaggle)](https://www.kaggle.com/datasets/keosaly/wildkhmerst-dataset)
+ - **Original Owner:** **Saly KEO**
+ - **Modification:** The original dataset (which provided a single text label per image) has been re-annotated and processed to include **normalized bounding boxes** for multi-region text detection and grounding tasks.
+
+ ## 📄 License
+
+ This dataset is distributed under the **Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license.
+
+ - **Attribution:** You must give appropriate credit to Saly KEO (original source) and the current maintainer.
+ - **Non-Commercial:** You may not use the material for commercial purposes.
+
+ ---
+
+ ## Dataset Summary
+
+ This version of the dataset provides **localized bounding boxes** for every text region within an image, which lets Vision-Language Models (VLMs) learn spatial reasoning: understanding exactly _where_ a piece of text is located before reading it.
+
+ - **Primary Language:** Khmer (UTF-8)
+ - **Format:** Hugging Face Parquet (embedded images)
+ - **Normalization:** Coordinates scaled to **0-1000**
+ - **Task Types:** Visual Grounding, Document Intelligence, Khmer OCR
+
+ ## Dataset Structure
+
+ | Column  | Type     | Description                                                                           |
+ | :------ | :------- | :------------------------------------------------------------------------------------ |
+ | `image` | `Image`  | The source image (embedded bytes).                                                    |
+ | `text`  | `String` | A JSON string containing a list of text regions and their normalized bounding boxes.  |
+
+ ### JSON Label Example
+
+ ```json
+ {
+   "regions": [
+     {
+       "bbox_2d": [ymin, xmin, ymax, xmax],
+       "label": "សោភ័ណ ស៊ីថា"
+     }
+   ]
+ }
+ ```
+
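+ The `bbox_2d` values live on a fixed 0-1000 grid rather than in pixels, so a box stays valid at any display resolution. To recover pixel coordinates, rescale by the real image size. A minimal sketch (the helper name `denormalize_bbox` is illustrative, not part of the dataset):
+
+ ```python
+ def denormalize_bbox(bbox_2d, width, height):
+     """Map a [ymin, xmin, ymax, xmax] box from the 0-1000 grid back to pixels."""
+     ymin, xmin, ymax, xmax = bbox_2d
+     return (
+         int(ymin / 1000 * height),
+         int(xmin / 1000 * width),
+         int(ymax / 1000 * height),
+         int(xmax / 1000 * width),
+     )
+
+ # Example: a box [150, 200, 300, 800] on a 960x1280 (width x height) image
+ print(denormalize_bbox([150, 200, 300, 800], width=960, height=1280))
+ # -> (192, 192, 384, 768)
+ ```
+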
+ ## How to Use
+
+ ### Loading the Dataset
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("vichetkao/wild_khmer")
+ print(dataset['train'][0])
+ ```
+
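+ The `text` column is a raw JSON string, so decode it before working with the regions. A short sketch following the schema above:
+
+ ```python
+ import json
+
+ sample = dataset["train"][0]
+ regions = json.loads(sample["text"])["regions"]
+ for region in regions:
+     print(region["label"], region["bbox_2d"])  # label plus [ymin, xmin, ymax, xmax]
+ ```
+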
+ ### Formatting for Qwen3-VL / Unsloth
+
+ ```python
+ def convert_to_conversation(sample):
+     instruction = "Detect all Khmer text regions in this image and return the labels with their bounding boxes in JSON format."
+
+     # The assistant target is the raw JSON string from the `text` column,
+     # so no decoding is needed inside the converter.
+     return {
+         "messages": [
+             {"role": "user", "content": [{"type": "text", "text": instruction}, {"type": "image", "image": sample["image"]}]},
+             {"role": "assistant", "content": [{"type": "text", "text": sample["text"]}]}
+         ]
+     }
+ ```
+
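+ A simple way to prepare the full split is to run the converter over every row eagerly; this is a sketch of one option, not Unsloth's required pipeline:
+
+ ```python
+ converted = [convert_to_conversation(sample) for sample in dataset["train"]]
+ print(converted[0]["messages"][0]["role"])  # -> user
+ ```
+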
+ ## Supported Models
+
+ - **Qwen3-VL-8B / 32B**
+ - **Qwen2-VL Series**
+ - **InternVL2**
+ - **Llama-3.2-Vision**
+
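+ ## Visual Sanity Check
+
+ To eyeball the annotations, you can draw the denormalized boxes back onto an image with Pillow. This is a minimal sketch, assuming the dataset has been loaded as shown above; the drawing code is illustrative rather than part of the dataset tooling:
+
+ ```python
+ import json
+ from PIL import ImageDraw
+
+ sample = dataset["train"][0]
+ image = sample["image"].copy()  # PIL image decoded by the datasets library
+ width, height = image.size
+ draw = ImageDraw.Draw(image)
+
+ for region in json.loads(sample["text"])["regions"]:
+     ymin, xmin, ymax, xmax = region["bbox_2d"]
+     draw.rectangle(
+         [xmin / 1000 * width, ymin / 1000 * height, xmax / 1000 * width, ymax / 1000 * height],
+         outline="red",
+         width=3,
+     )
+
+ image.show()
+ ```
+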
+ ## Citation
+
+ If you use this dataset, please credit the original author:
+
+ ```text
+ Original Creator: Saly KEO (Kaggle)
+ Derivative Work: Kao Vichet
+ License: CC BY-NC 4.0
+ ```
annotation_data.ipynb ADDED
@@ -0,0 +1,300 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f175028c",
+ "metadata": {},
+ "source": [
+ "## The \"Wild Khmer\" Dataset Creation Script"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "1ed4f71e",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Loaded 10000 images from JSON\n",
+ "Processing images and labels... This may take a while.\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "42c314cacca648d6a0e8b4df6093a9d9",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Generating train split: 0 examples [00:00, ? examples/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "100%|██████████| 10000/10000 [02:28<00:00, 67.53it/s]\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "2b0fef7ef288405c908d8e5ca92348fe",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Loading dataset shards: 0%| | 0/17 [00:00<?, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Saving Parquet files...\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "bd6d7d400c8f46b5a10501516cbec7b3",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Creating parquet from Arrow format: 0%| | 0/92 [00:00<?, ?ba/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "68cb344590af4837bb993bc6e0e1bef0",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Creating parquet from Arrow format: 0%| | 0/11 [00:00<?, ?ba/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "DONE ✅\n",
+ "Wild Train file: wild_khmer_train.parquet\n"
+ ]
+ }
+ ],
+ "source": [
+ "import json\n",
+ "import os\n",
+ "import pandas as pd\n",
+ "from PIL import Image\n",
+ "from tqdm import tqdm\n",
+ "from datasets import Dataset, Features, Image as datasets_Image, Value\n",
+ "\n",
+ "# -------------------------\n",
+ "# CONFIG\n",
+ "# -------------------------\n",
+ "JSON_PATH = \"info.json\" # Your CADT JSON file\n",
+ "IMAGE_FOLDER = \"images/images\" # Folder with your JPGs\n",
+ "TRAIN_OUT = \"wild_khmer_train.parquet\"\n",
+ "TEST_OUT = \"wild_khmer_test.parquet\"\n",
+ "TEST_SPLIT = 0.1\n",
+ "\n",
+ "# -------------------------\n",
+ "# LOAD DATA\n",
+ "# -------------------------\n",
+ "with open(JSON_PATH, 'r', encoding='utf-8') as f:\n",
+ " via_data = json.load(f)\n",
+ "print(f\"Loaded {len(via_data)} images from JSON\")\n",
+ "\n",
+ "# -------------------------\n",
+ "# DATA GENERATOR\n",
+ "# -------------------------\n",
+ "def generate_examples():\n",
+ " for key, data in tqdm(via_data.items()):\n",
+ " filename = data['filename']\n",
+ " image_path = os.path.join(IMAGE_FOLDER, filename)\n",
+ "\n",
+ " if not os.path.exists(image_path):\n",
+ " continue\n",
+ "\n",
+ " # 1. Get Image Dimensions for Normalization\n",
+ " try:\n",
+ " with Image.open(image_path) as img:\n",
+ " width, height = img.size\n",
+ " # Read raw bytes for embedding\n",
+ " with open(image_path, \"rb\") as f:\n",
+ " img_bytes = f.read()\n",
+ " except Exception as e:\n",
+ " print(f\"Error loading {filename}: {e}\")\n",
+ " continue\n",
+ "\n",
+ " # 2. Process Regions (Polygons -> Normalized Bounding Boxes)\n",
+ " regions_list = []\n",
+ " for region in data['regions']:\n",
+ " try:\n",
+ " # Extract coordinates\n",
+ " xs = region['shape_attributes']['all_points_x']\n",
+ " ys = region['shape_attributes']['all_points_y']\n",
+ " label = region['region_attributes']['label']\n",
+ "\n",
+ " # Convert Polygon to Bounding Box (ymin, xmin, ymax, xmax)\n",
+ " xmin, xmax = min(xs), max(xs)\n",
+ " ymin, ymax = min(ys), max(ys)\n",
+ "\n",
+ " # Normalize to 0-1000 scale (Qwen3-VL Standard)\n",
+ " n_xmin = int((xmin / width) * 1000)\n",
+ " n_xmax = int((xmax / width) * 1000)\n",
+ " n_ymin = int((ymin / height) * 1000)\n",
+ " n_ymax = int((ymax / height) * 1000)\n",
+ "\n",
+ " # Format as a dictionary for the grounding task\n",
+ " regions_list.append({\n",
+ " \"bbox_2d\": [n_ymin, n_xmin, n_ymax, n_xmax],\n",
+ " \"label\": label\n",
+ " })\n",
+ " except KeyError:\n",
+ " continue # Skip regions without labels or points\n",
+ "\n",
+ " # 3. Create the final text label (JSON string)\n",
+ " # This will be processed by your convert_to_conversation function\n",
+ " grounding_json = json.dumps({\"regions\": regions_list}, ensure_ascii=False)\n",
+ "\n",
+ " yield {\n",
+ " \"image\": img_bytes,\n",
+ " \"text\": grounding_json\n",
+ " }\n",
+ "\n",
+ "# -------------------------\n",
+ "# DATASET FEATURES\n",
+ "# -------------------------\n",
+ "features = Features({\n",
+ " \"image\": datasets_Image(), # embedded image bytes\n",
+ " \"text\": Value(\"string\"), # JSON string of boxes and text\n",
+ "})\n",
+ "\n",
+ "# -------------------------\n",
+ "# CREATE & SAVE\n",
+ "# -------------------------\n",
+ "print(\"Processing images and labels... This may take a while.\")\n",
+ "ds = Dataset.from_generator(generate_examples, features=features)\n",
+ "\n",
+ "# Shuffle and split\n",
+ "ds = ds.train_test_split(test_size=TEST_SPLIT)\n",
+ "\n",
+ "print(\"Saving Parquet files...\")\n",
+ "ds[\"train\"].to_parquet(TRAIN_OUT)\n",
+ "ds[\"test\"].to_parquet(TEST_OUT)\n",
+ "\n",
+ "print(\"DONE ✅\")\n",
+ "print(f\"Wild Train file: {TRAIN_OUT}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b1f80720",
+ "metadata": {},
+ "source": [
+ "## Open The Image Dataset For Checking"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "57395385",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "d2da1e3e954a45fdb05b47c8d38e66b1",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Loading dataset shards: 0%| | 0/17 [00:00<?, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "<class 'PIL.JpegImagePlugin.JpegImageFile'>\n",
+ "<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=960x1280 at 0x219E9F9BEC0>\n"
+ ]
+ }
+ ],
+ "source": [
+ "from datasets import Dataset\n",
+ "ds = Dataset.from_parquet(\"wild_khmer_train.parquet\")\n",
+ "sample = ds[1][\"image\"]\n",
+ "print(type(sample))\n",
+ "print(sample)\n",
+ "sample.show()\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2307a106",
+ "metadata": {},
+ "source": [
+ "## DUMMY DATASET ROW"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "aa809986",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "{\n",
+ " 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x800>,\n",
+ " 'text': '{\"regions\": [{\"bbox_2d\": [150, 200, 300, 800], \"label\": \"អាហារដ្ឋានមិត្តភាព\"}, {\"bbox_2d\": [850, 400, 920, 600], \"label\": \"012 345 678\"}]}'\n",
+ "}\n",
+ "```"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "base",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.7"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
info.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54c180017eda6eac702890b35f8018d4e7818ac80099de52a09ecd4109784c54
+ size 19973399