---
license: apache-2.0
task_categories:
- image-to-text
- visual-question-answering
tags:
- llava
- vision-language
- pretrain
- multimodal
size_categories:
- 100K<n<1M
---
|
# LLaVA-Pretrain Dataset

Pretraining data for LLaVA (Large Language and Vision Assistant).

## Description

This dataset contains the data used in the LLaVA pretraining (feature-alignment) stage:

- `blip_laion_cc_sbu_558k.json` - Annotation file with 558K image-caption pairs
- `images/` - Corresponding images
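
Each record in `blip_laion_cc_sbu_558k.json` follows the conversation-style schema used across the LLaVA data releases: a `human` turn containing the `<image>` placeholder and a `gpt` turn carrying the caption. The values below are made up for illustration and are not taken from the dataset:

```json
{
  "id": "000000001",
  "image": "00000/000000001.jpg",
  "conversations": [
    {"from": "human", "value": "<image>\nDescribe the image concisely."},
    {"from": "gpt", "value": "a short caption describing the image"}
  ]
}
```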
|
## Usage

```python
from huggingface_hub import snapshot_download

# Download the full dataset (annotations + images) to a local directory
snapshot_download(
    repo_id="pppop7/LLaVA-Pretrain",
    repo_type="dataset",
    local_dir="./llava_pretrain",
)
```
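
After downloading, the annotation file can be joined with the image folder to produce (image path, caption) pairs. The sketch below assumes the standard LLaVA annotation schema (a `conversations` list whose `gpt` turn holds the caption); the helper name `load_pairs` and the demo record are invented for illustration:

```python
import json
import os
import tempfile

def load_pairs(root, ann_name="blip_laion_cc_sbu_558k.json"):
    """Return (image_path, caption) pairs from a LLaVA-style annotation file.

    Assumes each record has an "image" field (a path relative to images/)
    and a "conversations" list whose "gpt" turn carries the caption.
    """
    with open(os.path.join(root, ann_name)) as f:
        records = json.load(f)
    pairs = []
    for rec in records:
        caption = next(t["value"] for t in rec["conversations"] if t["from"] == "gpt")
        pairs.append((os.path.join(root, "images", rec["image"]), caption))
    return pairs

# Demo on a tiny synthetic annotation file with the same shape as the real one.
root = tempfile.mkdtemp()
sample = [{
    "id": "000000001",
    "image": "00000/000000001.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nDescribe the image concisely."},
        {"from": "gpt", "value": "a short caption describing the image"},
    ],
}]
with open(os.path.join(root, "blip_laion_cc_sbu_558k.json"), "w") as f:
    json.dump(sample, f)

pairs = load_pairs(root)
print(pairs[0][1])  # caption of the first record
```

With the real download, `root` would be the `local_dir` passed to `snapshot_download` above.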
|
## Related Datasets

- [pppop7/LLaVA-Instruct-150K](https://huggingface.co/datasets/pppop7/LLaVA-Instruct-150K) - Instruction-tuning data
|
## References

- [LLaVA Official Repository](https://github.com/haotian-liu/LLaVA)
- [LLaVA Paper](https://arxiv.org/abs/2304.08485)
|