---
license: cc-by-4.0
task_categories:
- image-to-text
language:
- de
- en
task_ids:
- image-captioning
pretty_name: G400M
size_categories:
- 100M<n<1B
source_datasets:
- mlfoundations/datacomp_xlarge
---
# Dataset Card for G400M
G400M is a German language image-text dataset with 400M image-text pairs extracted from the [xlarge pool of DataComp](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge).
The data is filtered and balanced using the curation algorithm of [MetaCLIP](https://github.com/facebookresearch/MetaCLIP):
1. Build a collection of 500k strings (the metadata) from the German Wikipedia.
2. Filter the data pool for German and English data using [fastText](https://github.com/facebookresearch/fastText).
3. Apply substring matching to the captions with the metadata.
4. Sample the image-text pairs using the MetaCLIP balancing algorithm with a (magic) target number of 20k pairs per metadata entry.
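The matching and balancing steps above can be sketched roughly as follows. This is a simplified toy version under our own assumptions (naive substring matching, per-entry subsampling with threshold `t`); the actual pipeline uses the exact MetaCLIP implementation and scales to hundreds of millions of pairs:

```python
import random

def curate(pairs, metadata, t=20_000):
    """Simplified MetaCLIP-style curation sketch (not the production code).

    pairs:    list of (image_url, caption) tuples
    metadata: list of query strings (e.g. from the German Wikipedia)
    t:        target number of pairs per metadata entry
    """
    # Step 3: substring matching -- map each metadata entry to the
    # indices of all pairs whose caption contains it.
    matches = {m: [] for m in metadata}
    for i, (_, caption) in enumerate(pairs):
        for m in metadata:
            if m in caption:  # naive substring match for illustration
                matches[m].append(i)

    # Step 4: balancing -- a pair is kept if at least one of its matched
    # entries selects it. Entries with <= t matches keep everything;
    # head entries with more matches are subsampled down to ~t, which
    # flattens the long-tailed distribution of matched entries.
    keep = set()
    for m, idxs in matches.items():
        if len(idxs) <= t:
            keep.update(idxs)
        else:
            keep.update(random.sample(idxs, t))
    return [pairs[i] for i in sorted(keep)]
```

Unmatched pairs (captions containing no metadata entry) are dropped entirely, which is what makes the metadata act as a quality filter as well as a balancing signal.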
We follow [DataComp](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge) and distribute the image URL-text samples and metadata under a standard Creative Commons CC-BY-4.0 license. The individual images remain under their own copyrights.