# Filtered WIT, an Image-Text Dataset

A reliable dataset for training and evaluating image-text models.

You can find WIT, the Wikipedia Image-Text dataset, [here](https://github.com/google-research-datasets/wit).
Data was taken from [dalle-mini/wit](https://huggingface.co/datasets/dalle-mini/wit).

## Author

- [Aarush Katta](https://github.com/ARKseal)

## Data Structure

The data is stored as tars, with 10,000 samples per tar.
Each sample consists of a `.jpg`, a `.txt`, and a `.json` file:
the image is stored in the `.jpg`, the caption in the `.txt`, and the metadata in the `.json`.
The preferred way to read the data is [WebDataset](https://github.com/webdataset/webdataset).
Here's an example:

```python
import webdataset as wds

dataset = wds.WebDataset('data/00000.tar').to_tuple('txt', 'jpg', 'json')

for text, image, meta in dataset:
    print(
        text[:50],
        image[:50],
        meta[:50]
    )
```

## Filtration

Each sample has 8 possible captions, which were compared to the image using [CLIP ViT-B32](https://arxiv.org/abs/2103.00020).
The text was encoded using the [multilingual CLIP text encoder](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1).
Each possible caption was compared to the encoded image using cosine similarity,
and kept if the similarity was greater than `0.26`.
The new caption is the concatenation of the filtered captions, and samples with no surviving caption were dropped.
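
The thresholding step above can be sketched as follows. This is a minimal illustration, not the actual filtering script: the function name `filter_captions` and the space-separated join are assumptions (the README does not specify how kept captions are concatenated), and the embeddings would in practice come from the CLIP image encoder and the multilingual text encoder.

```python
import numpy as np

SIM_THRESHOLD = 0.26  # similarity cutoff stated in this README


def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def filter_captions(image_emb, captions, caption_embs, threshold=SIM_THRESHOLD):
    """Keep captions whose cosine similarity with the image embedding
    exceeds `threshold`; return them concatenated (space-joined here,
    as an assumption), or None if no caption survives, in which case
    the sample would be dropped."""
    kept = [
        cap for cap, emb in zip(captions, caption_embs)
        if cosine_sim(image_emb, emb) > threshold
    ]
    return ' '.join(kept) if kept else None
```

For example, with a toy 2-D image embedding `[1, 0]`, a caption embedded as `[1, 0]` (similarity 1.0) is kept while one embedded as `[0, 1]` (similarity 0.0) is discarded.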