# Flickr30k CLIP-Preprocessed Dataset
This is the Flickr30k dataset preprocessed with the CLIP ViT-Large-Patch14 image processor.
## Files
- `img_data.parquet`: Preprocessed images stored as flattened numpy arrays (original shape `[3, 224, 224]`, i.e. 150,528 values per image)
- `train_captions.parquet`: Training-split captions, with an `image_id` column mapping each caption to its image
- `val_captions.parquet`: Validation-split captions, with an `image_id` column mapping each caption to its image
## Usage
```python
import pandas as pd
import torch
import numpy as np

# Load the data
images_df = pd.read_parquet('img_data.parquet')
train_captions_df = pd.read_parquet('train_captions.parquet')
val_captions_df = pd.read_parquet('val_captions.parquet')

# Access an image and restore its [3, 224, 224] shape
image_id = 0
flat_image = images_df.iloc[image_id]['image']
image_tensor = torch.from_numpy(np.asarray(flat_image, dtype=np.float32).reshape(3, 224, 224))

# Access all captions for that image
captions_for_image = train_captions_df[train_captions_df['image_id'] == image_id]['caption'].tolist()
```
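
The per-row reshape extends naturally to batches. Below is a minimal sketch (the `rows_to_batch` helper is ours, not part of the dataset) that stacks flattened rows into an `[N, 3, 224, 224]` array ready for `torch.from_numpy`:

```python
import numpy as np

def rows_to_batch(flat_images, c=3, h=224, w=224):
    """Stack flattened image rows into an [N, C, H, W] float32 array."""
    return np.stack(
        [np.asarray(f, dtype=np.float32).reshape(c, h, w) for f in flat_images]
    )

# Synthetic rows shaped like the dataset's flattened images
rows = [np.zeros(3 * 224 * 224, dtype=np.float32) for _ in range(4)]
batch = rows_to_batch(rows)
print(batch.shape)  # (4, 3, 224, 224)
```

With the real files loaded as above, you could pass `images_df['image'].iloc[:4]` instead of the synthetic rows.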
## Original Dataset
Original dataset: [nlphuji/flickr30k](https://huggingface.co/datasets/nlphuji/flickr30k)
## Preprocessing
Images were processed using the CLIP ViT-Large-Patch14 image processor:
- Resized to 224x224
- CLIP normalization applied
- Converted to tensors and flattened for storage efficiency
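
For reference, the normalization step can be sketched in plain numpy. The mean/std constants below are CLIP's published per-channel values; the resize and center-crop steps (handled by the actual image processor) are omitted, and `normalize_and_flatten` is a hypothetical helper, not code shipped with this dataset:

```python
import numpy as np

# CLIP's per-channel normalization constants
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
CLIP_STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def normalize_and_flatten(image):
    """image: uint8 [224, 224, 3] array, already resized."""
    x = image.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    x = (x - CLIP_MEAN) / CLIP_STD         # channel-wise CLIP normalization
    x = x.transpose(2, 0, 1)               # HWC -> CHW, matching [3, 224, 224]
    return x.reshape(-1)                   # flatten for parquet storage

flat = normalize_and_flatten(np.zeros((224, 224, 3), dtype=np.uint8))
print(flat.shape)  # (150528,)
```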
## Dataset Statistics
- Total images: the number of rows in `img_data.parquet`
- Train captions: the number of rows in `train_captions.parquet`
- Validation captions: the number of rows in `val_captions.parquet`
- Train/Validation split: 90/10
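
The counts above are not hard-coded here, but they are easy to recover once the files are loaded (see Usage). A small sketch, with `dataset_stats` as a hypothetical helper:

```python
import pandas as pd

def dataset_stats(images_df, train_df, val_df):
    """Row counts of the three dataframes, plus the caption-level train fraction."""
    n_train, n_val = len(train_df), len(val_df)
    return {
        "total_images": len(images_df),
        "train_captions": n_train,
        "val_captions": n_val,
        "train_fraction": n_train / (n_train + n_val),  # expected ~0.9
    }

# Tiny synthetic frames with the same columns as the parquet files
imgs = pd.DataFrame({"image": [None] * 10})
train = pd.DataFrame({"image_id": list(range(9)), "caption": ["a"] * 9})
val = pd.DataFrame({"image_id": [9], "caption": ["b"]})
print(dataset_stats(imgs, train, val))
```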