---
license: apache-2.0
task_categories:
- image-to-text
- visual-question-answering
tags:
- llava
- vision-language
- pretrain
- multimodal
size_categories:
- 100K<n<1M
---
# LLaVA-Pretrain Dataset
Pretraining data for LLaVA (Large Language and Vision Assistant).
## Description
This dataset contains the data used in LLaVA's pretraining (feature-alignment) stage, including:
- `blip_laion_cc_sbu_558k.json` - Annotation file with 558K image-caption pairs
- `images/` - Corresponding images
## Usage
```python
from huggingface_hub import snapshot_download

# Download the dataset
snapshot_download(
    repo_id="pppop7/LLaVA-Pretrain",
    repo_type="dataset",
    local_dir="./llava_pretrain",
)
```
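After downloading, the annotation file can be read with the standard `json` module. The record below is a hypothetical example illustrating the conversation-style schema commonly used by LLaVA annotation files (fields `id`, `image`, `conversations`); verify the field names against the actual `blip_laion_cc_sbu_558k.json` before relying on them.

```python
import json

# Hypothetical record mimicking the typical LLaVA annotation schema.
# Field names and values are placeholders, not taken from the real file.
sample = {
    "id": "0001",
    "image": "images/0001.jpg",
    "conversations": [
        {"from": "human", "value": "Describe the image.\n<image>"},
        {"from": "gpt", "value": "a photo of a cat sitting on a sofa"},
    ],
}

def caption_of(record):
    """Return the first model-side (caption) turn from a record."""
    for turn in record["conversations"]:
        if turn["from"] == "gpt":
            return turn["value"]
    return None

# In practice you would load the full annotation list, e.g.:
# with open("./llava_pretrain/blip_laion_cc_sbu_558k.json") as f:
#     records = json.load(f)
print(caption_of(sample))
```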
## Related Datasets
- [pppop7/LLaVA-Instruct-150K](https://huggingface.co/datasets/pppop7/LLaVA-Instruct-150K) - Instruction tuning data
## Reference
- [LLaVA Official Repository](https://github.com/haotian-liu/LLaVA)
- [LLaVA Paper](https://arxiv.org/abs/2304.08485)