Update README.md

README.md CHANGED
@@ -1,3 +1,10 @@
----
-license: cc-by-4.0
----
+---
+license: cc-by-4.0
+---
+
+G400M is a German-language image-text dataset of 400M image-text pairs extracted from the [xlarge pool of DataComp](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge).
+The data is filtered and balanced using the algorithm applied by [MetaCLIP](https://github.com/facebookresearch/MetaCLIP), that is:
+1. Build a collection of 500k strings (the metadata) from the German Wikipedia.
+2. Filter the data pool for German and English data using [fastText](https://github.com/facebookresearch/fastText) language identification.
+3. Match the captions against the metadata via substring matching.
+4. Sample the image-text pairs using the MetaCLIP algorithm, with the (magic) target of 20k matches per metadata entry.
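
The matching and balancing steps (3 and 4) can be sketched roughly as follows. This is an illustrative toy version, not the actual MetaCLIP code: the function names `match_entries` and `balanced_sample` are made up for this sketch, the real pipeline runs over 400M pairs rather than in-memory lists, and the keep/subsample rule (keep every pair matching an entry with at most `t` matches, otherwise keep with probability `t / count`) follows the MetaCLIP balancing idea in simplified form.

```python
import random

def match_entries(caption, metadata):
    """Step 3 (sketch): metadata entries occurring as substrings of the caption."""
    text = caption.lower()
    return [m for m in metadata if m in text]

def balanced_sample(pairs, metadata, t=20_000, seed=0):
    """Step 4 (sketch): balance pairs toward at most ~t matches per entry.

    Entries matched by <= t captions keep all their pairs; for more frequent
    ("head") entries, each matching pair is kept with probability t / count.
    """
    rng = random.Random(seed)
    counts = {m: 0 for m in metadata}
    matches = []
    for _url, caption in pairs:
        ms = match_entries(caption, metadata)
        matches.append(ms)
        for m in ms:
            counts[m] += 1
    kept = []
    for pair, ms in zip(pairs, matches):
        # A pair survives if at least one of its matched entries keeps it.
        if any(counts[m] <= t or rng.random() < t / counts[m] for m in ms):
            kept.append(pair)
    return kept

metadata = ["hund", "katze"]
pairs = [(f"u{i}", "ein hund") for i in range(100)] + [("k", "eine katze")]
kept = balanced_sample(pairs, metadata, t=10, seed=0)
```

In this toy run the single "katze" pair is always kept (its entry has only one match), while the 100 "hund" pairs are subsampled toward roughly `t = 10`, which is the flattening effect the 20k target has on head entries in the real dataset.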