Update README.md

README.md CHANGED
@@ -14,6 +14,8 @@ extra_gated_prompt: |
   By clicking on “Access repository” below, you also agree to individual licensing terms for each of the subset datasets of the PMD as noted at https://huggingface.co/datasets/facebook/pmd#additional-information.
 ---
 
+# Dataset Card for G400M
+
 G400M is a German language image-text dataset with 400M image-text pairs extracted from the [xlarge pool of DataComp](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge).
 The data is filtered and balanced using the algorithm applied by [MetaCLIP](https://github.com/facebookresearch/MetaCLIP), that is:
 1. Build a collection of 500k strings (namely the metadata) from the German Wikipedia.
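The MetaCLIP-style curation the card describes (match captions against a metadata string collection, then balance so that frequent entries do not dominate) could be sketched roughly as follows. This is an illustrative assumption, not the dataset's actual pipeline: `metaclip_filter_balance`, `max_per_entry`, and the toy data are hypothetical names, and a real implementation would use an efficient multi-pattern matcher rather than a naive scan.

```python
from collections import defaultdict
import random

def metaclip_filter_balance(pairs, metadata, max_per_entry=20000, seed=0):
    """Hypothetical sketch of MetaCLIP-style curation (not the official code):
    keep an image-text pair only if its caption contains at least one metadata
    entry as a substring, then cap how many pairs any single entry may
    contribute, so head entries are subsampled and tail entries kept.

    pairs    : list of (image_url, caption) tuples
    metadata : list of strings (e.g. entries built from the German Wikipedia)
    """
    rng = random.Random(seed)
    matches = defaultdict(list)  # metadata entry -> indices of matching pairs
    for i, (_, caption) in enumerate(pairs):
        for entry in metadata:  # naive scan; real pipelines use an indexed matcher
            if entry in caption:
                matches[entry].append(i)
    kept = set()
    for entry, idxs in matches.items():
        if len(idxs) > max_per_entry:
            idxs = rng.sample(idxs, max_per_entry)  # balancing step
        kept.update(idxs)
    # unmatched pairs are filtered out entirely
    return [pairs[i] for i in sorted(kept)]
```

With a cap of 1 per entry, a caption matching no metadata entry is dropped, while matched captions survive up to the cap.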