momanz1 Zedge committed on
Commit ba9c7ab · 0 Parent(s)

Duplicate from Dataseeds/DataSeeds.AI-Sample-Dataset-DSD

Co-authored-by: IT Admind <Zedge@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
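The rules above route large binary and media files through Git LFS. As a minimal sketch of how to confirm an attribute pattern takes effect (this assumes `git` is installed; the `*.png` rule is copied from the file above, and the temp repo and `photo.png` name are illustrative):

```python
import os
import subprocess
import tempfile

# Create a throwaway repo containing one of the LFS rules from above.
tmp = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q", tmp], check=True)
with open(os.path.join(tmp, ".gitattributes"), "w") as f:
    f.write("*.png filter=lfs diff=lfs merge=lfs -text\n")

# Ask git which filter applies to a hypothetical matching file.
out = subprocess.run(
    ["git", "check-attr", "filter", "--", "photo.png"],
    cwd=tmp, capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # photo.png: filter: lfs
```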
GSD-example.jpeg ADDED

Git LFS Details

  • SHA256: d215c51f9aec4de7253e0357799c79facae8b8a6a1c947a610818963aa310e25
  • Pointer size: 132 Bytes
  • Size of remote file: 3.17 MB
README.md ADDED
@@ -0,0 +1,298 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ size_categories:
+ - 1K<n<10K
+ task_categories:
+ - image-classification
+ - object-detection
+ - image-to-text
+ tags:
+ - computer-vision
+ - photography
+ - annotations
+ - EXIF
+ - scene-understanding
+ - multimodal
+ dataset_info:
+   features:
+   - name: image_id
+     dtype: string
+   - name: image
+     dtype: image
+   - name: image_title
+     dtype: string
+   - name: image_description
+     dtype: string
+   - name: scene_description
+     dtype: string
+   - name: all_labels
+     sequence: string
+   - name: segmented_objects
+     sequence: string
+   - name: segmentation_masks
+     sequence:
+       sequence: float64
+   - name: exif_make
+     dtype: string
+   - name: exif_model
+     dtype: string
+   - name: exif_f_number
+     dtype: string
+   - name: exif_exposure_time
+     dtype: string
+   - name: exif_exposure_mode
+     dtype: string
+   - name: exif_exposure_program
+     dtype: string
+   - name: exif_metering_mode
+     dtype: string
+   - name: exif_lens
+     dtype: string
+   - name: exif_focal_length
+     dtype: string
+   - name: exif_iso
+     dtype: string
+   - name: exif_date_original
+     dtype: string
+   - name: exif_software
+     dtype: string
+   - name: exif_orientation
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 3715850996.79
+     num_examples: 7010
+   - name: validation
+     num_bytes: 408185964.0
+     num_examples: 762
+   download_size: 4134168610
+   dataset_size: 4124036960.79
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: validation
+     path: data/validation-*
+ ---
+
+ # DataSeeds.AI Sample Dataset (DSD)
+
+ ![DSD Example](./GSD-example.jpeg)
+
+ ## Dataset Summary
+
+ The DataSeeds.AI Sample Dataset (DSD) is a high-fidelity, human-curated, computer-vision-ready dataset comprising 7,772 peer-ranked, fully annotated photographic images, more than 350,000 words of descriptive text, and comprehensive metadata. While the DSD is released under an open-source license, a sister dataset of over 10,000 fully annotated and segmented images is available for immediate commercial licensing, and the broader GuruShots ecosystem contains over 100 million images in its catalog.
+
+ Each image includes multi-tier human annotations and semantic segmentation masks. Generously contributed to the community by the GuruShots photography platform, where users engage in themed competitions, the DSD uniquely captures aesthetic preference signals and high-quality technical metadata (EXIF) across an expansive diversity of photographic styles, camera types, and subject matter. The dataset is optimized for fine-tuning and evaluating multimodal vision-language models, especially on scene description and stylistic comprehension tasks.
+
+ * **Technical Report** - [Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from DataSeeds' Annotated Imagery](https://huggingface.co/papers/2506.05673)
+ * **GitHub Repo** - Access the complete weights and code used to evaluate the DSD: [https://github.com/DataSeeds-ai/DSD-finetune-blip-llava](https://github.com/DataSeeds-ai/DSD-finetune-blip-llava)
+
+ This dataset is ready for commercial and non-commercial use.
+
+ ## Dataset Structure
+
+ * **Size**: 7,772 images (7,010 train, 762 validation)
+ * **Format**: Apache Parquet files for metadata, with images in JPG format
+ * **Total Size**: ~4.1 GB
+ * **Languages**: English (annotations)
+ * **Annotation Quality**: All annotations were verified through a multi-tier human-in-the-loop process
+
+ ### Data Fields
+
+ | Column Name | Description | Data Type |
+ |-------------|-------------|-----------|
+ | `image_id` | Unique identifier for the image | string |
+ | `image` | Image file (PIL object) | image |
+ | `image_title` | Human-written title summarizing the content or subject | string |
+ | `image_description` | Human-written narrative describing what is visibly present | string |
+ | `scene_description` | Technical and compositional details about image capture | string |
+ | `all_labels` | All object categories identified in the image | list of strings |
+ | `segmented_objects` | Objects/elements that have segmentation masks | list of strings |
+ | `segmentation_masks` | Segmentation polygons as coordinate points [x, y, ...] | list of lists of floats |
+ | `exif_make` | Camera manufacturer | string |
+ | `exif_model` | Camera model | string |
+ | `exif_f_number` | Aperture value (lower = wider aperture) | string |
+ | `exif_exposure_time` | Sensor exposure time (e.g., 1/500 sec) | string |
+ | `exif_exposure_mode` | Camera exposure setting (Auto/Manual/etc.) | string |
+ | `exif_exposure_program` | Exposure program mode | string |
+ | `exif_metering_mode` | Light metering mode | string |
+ | `exif_lens` | Lens information and specifications | string |
+ | `exif_focal_length` | Lens focal length (millimeters) | string |
+ | `exif_iso` | Camera sensor sensitivity (ISO) | string |
+ | `exif_date_original` | Original timestamp when the image was taken | string |
+ | `exif_software` | Post-processing software used | string |
+ | `exif_orientation` | Image layout (horizontal/vertical) | string |
+
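Each entry in `segmentation_masks` is a flat list of alternating x/y coordinates describing one polygon, aligned index-for-index with `segmented_objects`. As a minimal sketch of how to consume that layout (the helper name and the sample labels/coordinates below are illustrative, not taken from the dataset), each polygon can be reduced to an axis-aligned bounding box:

```python
def polygon_bbox(flat_coords):
    """Compute (x_min, y_min, x_max, y_max) from a flat
    [x1, y1, x2, y2, ...] polygon coordinate list."""
    xs = flat_coords[0::2]  # even indices hold x coordinates
    ys = flat_coords[1::2]  # odd indices hold y coordinates
    return (min(xs), min(ys), max(xs), max(ys))

# Hypothetical sample mimicking the dataset's column layout:
segmented_objects = ["dog", "ball"]
segmentation_masks = [
    [10.0, 20.0, 50.0, 20.0, 30.0, 60.0],               # triangle
    [70.0, 70.0, 90.0, 70.0, 90.0, 90.0, 70.0, 90.0],   # rectangle
]

for label, polygon in zip(segmented_objects, segmentation_masks):
    print(label, polygon_bbox(polygon))
```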
+ ## How to Use
+
+ ### Basic Loading
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the training split of the dataset
+ dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")
+
+ # Access the first sample
+ sample = dataset[0]
+
+ # Extract the different features from the sample
+ image = sample["image"]  # The PIL Image object
+ title = sample["image_title"]
+ description = sample["image_description"]
+ segments = sample["segmented_objects"]
+ masks = sample["segmentation_masks"]  # List of polygons, one flat [x, y, ...] list per object
+
+ print(f"Title: {title}")
+ print(f"Description: {description}")
+ print(f"Segmented objects: {segments}")
+ ```
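Because every EXIF column is stored as a string, numeric use requires parsing. A minimal sketch for exposure time (the helper name is illustrative, and it assumes values follow patterns like `1/500` or `0.5`, possibly with a trailing unit):

```python
from fractions import Fraction

def exposure_seconds(exif_exposure_time):
    """Parse an exposure-time string such as '1/500' or '0.5 sec'
    into a float number of seconds; return None if empty."""
    if not exif_exposure_time:
        return None
    # Keep only the numeric token, dropping any trailing unit like "sec".
    return float(Fraction(exif_exposure_time.split()[0]))

print(exposure_seconds("1/500"))  # 0.002
print(exposure_seconds("0.5"))    # 0.5
```

The same pattern applies to fields like `exif_focal_length` or `exif_iso` once their formatting is known.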
+
+ ### PyTorch DataLoader
+
+ ```python
+ from datasets import load_dataset
+ from torch.utils.data import DataLoader
+ import torch
+
+ # Load dataset
+ dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")
+
+ # Convert to PyTorch format
+ dataset.set_format(type="torch", columns=["image", "image_title", "segmentation_masks"])
+
+ # Create DataLoader
+ # Note: the photos vary in resolution, so batching requires a resize
+ # transform or a custom collate_fn to stack them into one tensor.
+ dataloader = DataLoader(dataset, batch_size=16, shuffle=True)
+ ```
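Because the photographs have varying resolutions, PyTorch's default collate cannot stack them into a single batch tensor. One possible approach, sketched here on synthetic tensors (the `pad_collate` name and the sample shapes are illustrative, not part of the dataset card), is to zero-pad each batch to its largest image:

```python
import torch
import torch.nn.functional as F

def pad_collate(batch):
    """Zero-pad a batch of CHW image tensors to the largest
    height/width so variable-resolution photos stack cleanly."""
    images = [item["image"] for item in batch]
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    padded = [
        # F.pad order for the last two dims: (left, right, top, bottom)
        F.pad(img, (0, max_w - img.shape[2], 0, max_h - img.shape[1]))
        for img in images
    ]
    return {
        "image": torch.stack(padded),
        "image_title": [item["image_title"] for item in batch],
    }

# Synthetic stand-ins for two photos of different sizes:
batch = [
    {"image": torch.zeros(3, 100, 120), "image_title": "a"},
    {"image": torch.zeros(3, 80, 150), "image_title": "b"},
]
out = pad_collate(batch)
print(out["image"].shape)  # torch.Size([2, 3, 100, 150])
```

Passing `collate_fn=pad_collate` to the `DataLoader` above would apply this per batch; resizing to a fixed size is the simpler alternative when exact pixels do not matter.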
+
+ ### TensorFlow
+
+ ```python
+ import numpy as np
+ import tensorflow as tf
+ from datasets import load_dataset
+
+ TARGET_IMG_SIZE = (224, 224)
+ BATCH_SIZE = 16
+ dataset = load_dataset("Dataseeds/DataSeeds.AI-Sample-Dataset-DSD", split="train")
+
+ def hf_dataset_generator():
+     for example in dataset:
+         # Convert the PIL image to a NumPy array so TensorFlow can ingest it
+         yield np.array(example['image']), example['image_title']
+
+ def preprocess(image, title):
+     # Resize the image to a fixed size
+     image = tf.image.resize(image, TARGET_IMG_SIZE)
+     image = tf.cast(image, tf.uint8)
+     return image, title
+
+ # The output_signature defines the data types and shapes
+ tf_dataset = tf.data.Dataset.from_generator(
+     hf_dataset_generator,
+     output_signature=(
+         tf.TensorSpec(shape=(None, None, 3), dtype=tf.uint8),
+         tf.TensorSpec(shape=(), dtype=tf.string),
+     )
+ )
+
+ # Apply the preprocessing, shuffle, and batch
+ tf_dataset = (
+     tf_dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
+     .shuffle(buffer_size=100)
+     .batch(BATCH_SIZE)
+     .prefetch(tf.data.AUTOTUNE)
+ )
+
+ print("Dataset is ready.")
+ for images, titles in tf_dataset.take(1):
+     print("Image batch shape:", images.shape)
+     print("A title from the batch:", titles.numpy()[0].decode('utf-8'))
+ ```
+
+ ## Dataset Characterization
+
+ **Data Collection Method**: Manual curation from the GuruShots photography platform
+
+ **Labeling Method**: Human annotators with a multi-tier verification process
+
+ ## Benchmark Results
+
+ To validate the impact of data quality, we fine-tuned two state-of-the-art vision-language models, **LLaVA-NEXT** and **BLIP2**, on the DSD scene description task, and observed consistent, measurable improvements over the base models:
+
+ ### LLaVA-NEXT Results
+
+ | Model | BLEU-4 | ROUGE-L | BERTScore F1 | CLIPScore |
+ |-------|--------|---------|--------------|-----------|
+ | Base | 0.0199 | 0.2089 | 0.2751 | 0.3247 |
+ | Fine-tuned | 0.0246 | 0.2140 | 0.2789 | 0.3260 |
+ | **Relative Improvement** | **+24.09%** | **+2.44%** | **+1.40%** | **+0.41%** |
+
+ ### BLIP2 Results
+
+ | Model | BLEU-4 | ROUGE-L | BERTScore F1 | CLIPScore |
+ |-------|--------|---------|--------------|-----------|
+ | Base | 0.001 | 0.126 | 0.0545 | 0.2854 |
+ | Fine-tuned | 0.047 | 0.242 | -0.0537 | 0.2583 |
+ | **Relative Improvement** | **+4600%** | **+92.06%** | -198.53% | -9.49% |
+
+ These results demonstrate the dataset's value for improving scene understanding and the textual grounding of visual features, especially in fine-grained photographic tasks.
+
+ ## Use Cases
+
+ The DSD is well suited to fine-tuning multimodal models for:
+
+ * **Image captioning** - Rich human-written descriptions
+ * **Scene description** - Technical photography analysis
+ * **Semantic segmentation** - Pixel-level object understanding
+ * **Aesthetic evaluation** - Style classification based on peer rankings
+ * **EXIF-aware analysis** - Technical metadata integration
+ * **Multimodal training** - Vision-language model development
+
+ ## Commercial Dataset Access & On-Demand Licensing
+
+ While the DSD is released under an open-source license, it represents only a small fraction of the broader commercial capabilities of the GuruShots ecosystem.
+
+ DataSeeds.AI operates a live, ongoing photography catalog that has amassed over 100 million images, sourced from both amateur and professional photographers participating in thousands of themed challenges across diverse geographic and stylistic contexts. Unlike most public datasets, this corpus is:
+
+ * Fully licensed for downstream use in AI training
+ * Backed by structured consent frameworks and traceable rights, with active opt-in from creators
+ * Rich in EXIF metadata, including camera model, lens type, and occasionally location data
+ * Curated through a built-in human preference signal based on competitive ranking, yielding rare insight into subjective aesthetic quality
+
+ ### On-Demand Dataset Creation
+
+ Uniquely, DataSeeds.AI can source new image datasets to spec via a just-in-time, first-party data acquisition engine. Clients (e.g., AI labs, model developers, media companies) can request:
+
+ * Specific content themes (e.g., "urban decay at dusk," "elderly people with dogs in snowy environments")
+ * Defined technical attributes (camera type, exposure time, geographic constraints)
+ * Ethical/region-specific filtering (e.g., GDPR-compliant imagery, no identifiable faces, kosher food imagery)
+ * Matching segmentation masks, EXIF metadata, and tiered annotations
+
+ Within days, the DataSeeds.AI platform can launch curated challenges to its global network of contributors and deliver targeted datasets with commercial-grade licensing terms.
+
+ ### Sales Inquiries
+
+ To inquire about licensing or customized dataset sourcing, contact:
+ **[sales@dataseeds.ai](mailto:sales@dataseeds.ai)**
+
+ ## License & Citation
+
+ **License**: Apache 2.0
+
+ **For commercial licenses, annotation, or access to the full 100M+ image catalog with on-demand annotations**: [sales@dataseeds.ai](mailto:sales@dataseeds.ai)
+
+ ### Citation
+
+ If you find the data useful, please cite:
+
+ ```bibtex
+ @article{abdoli2025peerranked,
+   title={Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from GuruShots' Annotated Imagery},
+   author={Sajjad Abdoli and Freeman Lewin and Gediminas Vasiliauskas and Fabian Schonholz},
+   journal={arXiv preprint arXiv:2506.05673},
+   year={2025},
+ }
+ ```
data/train-00000-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b24b0021e02ac7898e77bbac17e19b21292962d37c8ec31dc967f3f08c85e91
+ size 461959139
data/train-00001-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4fce7ee4706b43dc0e81738eb9abcb2bc91fda102a3189670ab10576d9b9efdc
+ size 508310766
data/train-00002-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:614d7739fb4721fbe335313c5d38a23b9880790072a58b5b559ab0d7990c300f
+ size 471812387
data/train-00003-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e53932d548cf663d38b5880fde00a69416dbb81800d86f668f1b6985be3c9c7
+ size 448246211
data/train-00004-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84127a5648964bd8f488929e49971526eb858db3941145299c53362d62329632
+ size 458747898
data/train-00005-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b871c2247c219d11f0ec2396d93321e1e35bb2f8b0bc4e987d87494b03b49e7
+ size 469573116
data/train-00006-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57ec0c3d15c2e4fdb58049a2b5f2425b041c5df8cb82eef252c556e484058e95
+ size 473280694
data/train-00007-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78d23fd913d2081eec9fdfb34bf1984a5f298add0dac6f1faa7f1cd72c52c993
+ size 438512696
data/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:531394c26c2eb4256788281a75cdb9171ba9f25c7cf2a19bab31d601a73b0c79
+ size 403725703