The data is filtered and balanced using the algorithm applied by MetaCLIP:
1. Build a collection of 500k strings (the metadata) from the German Wikipedia.
2. Filter the data pool for German and English data using [fastText](https://github.com/facebookresearch/fastText).
3. Apply substring matching between the captions and the metadata.
4. Sample the image-text pairs using the MetaCLIP algorithm with a (magic) target of 20k pairs per metadata entry.
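Steps 3 and 4 can be sketched as follows. This is a minimal illustration, not the exact MetaCLIP implementation: the helper names (`match_entries`, `balance`) and the keep-probability rule (`min(1, t / count)` per matched entry) are assumptions made for the sketch.

```python
import random
from collections import Counter


def match_entries(caption, metadata):
    """Step 3: substring matching — which metadata entries occur in the caption."""
    lowered = caption.lower()
    return [m for m in metadata if m in lowered]


def balance(pairs, metadata, t=20_000, seed=0):
    """Step 4: balanced sampling in the spirit of MetaCLIP.

    Pairs whose captions match only frequent ("head") metadata entries are
    downsampled so that each entry contributes at most ~t pairs in expectation;
    pairs matching rare ("tail") entries are always kept.
    """
    rng = random.Random(seed)

    # First pass: count how many pairs match each metadata entry.
    counts = Counter()
    matches = []
    for _url, caption in pairs:
        ms = match_entries(caption, metadata)
        matches.append(ms)
        counts.update(ms)

    # Second pass: keep each matched pair with probability
    # max over its entries of min(1, t / count); unmatched pairs are dropped.
    kept = []
    for pair, ms in zip(pairs, matches):
        if not ms:
            continue
        p = max(min(1.0, t / counts[m]) for m in ms)
        if rng.random() < p:
            kept.append(pair)
    return kept
```

With the real data, `metadata` would be the 500k Wikipedia strings and `t=20_000`; here the language filter of step 2 (fastText) is assumed to have already run.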
We follow [DataComp](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge) and distribute the image URL-text samples and metadata under a standard Creative Commons CC-BY-4.0 license. The individual images remain under their own copyrights.