yk3701208 committed
Commit bf27c9f · verified · 1 parent: 22036e2

Remove README

Files changed (1):
  1. README.md (+0, -208)
README.md DELETED

---
license: cc-by-4.0
task_categories:
- image-to-text
- object-detection
- visual-question-answering
size_categories:
- 1M<n<10M
language:
- en
- zh
tags:
- multimodal
- vision
- grounding
- detection
- ocr
pretty_name: Locany Multimodal Dataset
---

# Locany Multimodal Dataset

## Dataset Description

This is a large-scale multimodal dataset for vision-language tasks, including object detection, grounding, OCR, and UI understanding.

## Dataset Statistics

- **Total Unique Images**: 9,860,204
- **Total Annotations**: 28,768,853
- **Total Size**: 2,867.81 GB
- **Categories**: 6

### Breakdown by Category

| Category  | Datasets | Annotations |
|-----------|----------|-------------|
| Detection | 32       | 16,334,270  |
| Grounding | 8        | 2,110,597   |
| Layout    | 15       | 1,141,706   |
| OCR       | 13       | 2,060,741   |
| Pointing  | 3        | 876,872     |
| UI        | 16       | 6,244,667   |

## Dataset Structure

```
locany-dataset/
├── images_hf/          # Parquet files organized by category/subfolder
│   ├── Detection/
│   │   ├── COCO/
│   │   ├── Object365/
│   │   └── ...
│   ├── Grounding/
│   ├── OCR/
│   ├── UI/
│   └── ...
└── annotations_hf/     # JSONL annotation files by category
    ├── Detection/
    ├── Grounding/
    ├── OCR/
    └── ...
```

### Parquet Files (Images)

Images are stored in Parquet format with two columns:
- `image`: raw image bytes
- `image_path`: path string (e.g., `images_hf/Detection/COCO/000000178538.jpg`)

Each Parquet file is approximately 5 GB.

### Annotation Files (JSONL)

Each line in the JSONL files contains:
```json
{
  "image": "images_hf/Detection/COCO/000000178538.jpg",
  "conversations": [
    {"from": "human", "value": "<image>\nDetect all objects..."},
    {"from": "gpt", "value": "<ref>person</ref><box>x1 y1 x2 y2</box>..."}
  ]
}
```

### Coordinate Format

Bounding boxes use coordinates normalized to the range [0, 1000]:
- `<box>x1 y1 x2 y2</box>` for rectangles
- `<box>x y</box>` for points

To convert to absolute pixel coordinates:
```python
absolute_x = normalized_x * image_width / 1000
absolute_y = normalized_y * image_height / 1000
```
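
The conversion above can be wrapped in a small helper (a minimal sketch; the function name is ours, not part of the dataset tooling):

```python
def box_to_pixels(box, image_width, image_height):
    """Convert a [0, 1000]-normalized box (x1, y1, x2, y2) to pixel coordinates."""
    x1, y1, x2, y2 = box
    return (
        x1 * image_width / 1000,
        y1 * image_height / 1000,
        x2 * image_width / 1000,
        y2 * image_height / 1000,
    )

# Example: a box spanning the center of a 640x480 image
print(box_to_pixels((250, 250, 750, 750), 640, 480))
# (160.0, 120.0, 480.0, 360.0)
```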

## Usage

### Loading Images from Parquet

```python
import io

import pyarrow.parquet as pq
from PIL import Image

# Read the Parquet file
table = pq.read_table('images_hf/Detection/COCO/images_0001.parquet')
df = table.to_pandas()

# Build an image lookup keyed by path
image_dict = dict(zip(df['image_path'], df['image']))

# Load a specific image
img_path = "images_hf/Detection/COCO/000000178538.jpg"
image = Image.open(io.BytesIO(image_dict[img_path]))
```

### Loading Annotations

```python
import json

with open('annotations_hf/Detection/coco_llava_train_epoch1_part1.jsonl', 'r') as f:
    for line in f:
        data = json.loads(line)
        image_path = data['image']
        conversations = data['conversations']
        # Process the annotation...
```

### Example: Training Pipeline

```python
import io
import json

import pyarrow.parquet as pq
from PIL import Image

# 1. Load Parquet images into a path -> bytes lookup
table = pq.read_table('images_hf/Detection/COCO/images_0001.parquet')
df = table.to_pandas()
image_lookup = dict(zip(df['image_path'], df['image']))

# 2. Read annotations
with open('annotations_hf/Detection/coco_llava_train_epoch1_part1.jsonl') as f:
    for line in f:
        data = json.loads(line)

        # Get the image
        image = Image.open(io.BytesIO(image_lookup[data['image']]))

        # Parse conversations
        for conv in data['conversations']:
            if conv['from'] == 'gpt':
                # Extract boxes: <ref>text</ref><box>x1 y1 x2 y2</box>
                # Train your model...
                pass
```
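
Extracting the `<ref>…</ref><box>…</box>` pairs from a `gpt` turn can be done with a regular expression (a sketch, assuming consecutive pairs exactly as shown in the annotation example above):

```python
import re

# Matches consecutive <ref>label</ref><box>x1 y1 x2 y2</box> pairs
PAIR_RE = re.compile(r'<ref>(.*?)</ref><box>([\d\s]+)</box>')

def parse_ref_boxes(text):
    """Return (label, [coords]) pairs from an annotation string."""
    return [
        (label, [int(v) for v in coords.split()])
        for label, coords in PAIR_RE.findall(text)
    ]

sample = '<ref>person</ref><box>120 80 430 900</box><ref>dog</ref><box>500 600 700 880</box>'
print(parse_ref_boxes(sample))
# [('person', [120, 80, 430, 900]), ('dog', [500, 600, 700, 880])]
```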

## Categories

### Detection
Object detection datasets with bounding boxes identifying object categories.

### Grounding
Phrase grounding datasets connecting natural language descriptions to image regions.

### OCR
Text detection and recognition in images.

### UI
User interface understanding, including desktop, mobile, and web screenshots.

### Layout
Document layout analysis for forms, tables, and structured documents.

### Pointing
Point-based localization tasks.

## Deduplication

This dataset uses hash-based deduplication across all categories. Each unique image (by content) is stored only once, even if it appears in multiple datasets or categories. This reduces storage by approximately 30-50% compared to storing every duplicate copy.
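
Content-hash deduplication of this kind can be sketched with `hashlib` (an illustration of the idea, not the pipeline actually used to build the dataset):

```python
import hashlib

def dedupe_images(images):
    """Keep one copy per unique image content, keyed by SHA-256 of the bytes."""
    unique = {}
    for path, data in images:
        digest = hashlib.sha256(data).hexdigest()
        unique.setdefault(digest, (path, data))  # first occurrence wins
    return list(unique.values())

# Two paths with identical bytes collapse to a single stored image
images = [
    ('coco/a.jpg', b'same-bytes'),
    ('o365/b.jpg', b'same-bytes'),
    ('coco/c.jpg', b'other-bytes'),
]
print(len(dedupe_images(images)))  # 2
```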

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{locany_multimodal_2026,
  title={Locany Multimodal Dataset},
  author={Your Name},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/yk3701208/locany-dataset}
}
```

## License

This dataset is released under the CC-BY-4.0 license.

## Acknowledgments

This dataset aggregates and processes data from multiple public sources, including COCO, Object365, OpenImages, and others. Please see the individual dataset licenses for specific restrictions.