---
language:
- fa
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- image-to-text
pretty_name: Flickr30K Fa
tags:
- hezar
dataset_info:
  features:
  - name: image_path
    dtype: image
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 3417564667.896
    num_examples: 29146
  - name: test
    num_bytes: 376609317.44
    num_examples: 3236
  download_size: 3780108327
  dataset_size: 3794173985.336
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

The Flickr30K dataset, filtered and translated to Persian.

This dataset was originally created by **Sajjad Ayoubi** and uploaded to Kaggle at [https://www.kaggle.com/datasets/sajjadayobi360/flickrfa](https://www.kaggle.com/datasets/sajjadayobi360/flickrfa).
This repo contains the same dataset split into train/test sets using a custom sampling criterion, and it can be loaded directly with Hugging Face Datasets or from Hezar.

### Usage
#### Hugging Face Datasets
```
pip install datasets
```
```python
from datasets import load_dataset

dataset = load_dataset("hezarai/flickr30k-fa")
```

#### Hezar
```
pip install hezar
```
```python
from hezar.data import Dataset

dataset = Dataset.load("hezarai/flickr30k-fa", split="train")
```