---
datasets:
- Lin-Chen/ShareGPT4V
pipeline_tag: visual-question-answering
---

<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>

[XTuner](https://github.com/InternLM/xtuner)

</div>

## Model

llava-phi-3-mini-pretrain is a LLaVA projector pretrained from [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V/blob/main/share-captioner_coco_lcs_sam_1246k_1107.json) dataset by [XTuner](https://github.com/InternLM/xtuner).

The fine-tuned LLaVA model can be found at [xtuner/llava-phi-3-mini](https://huggingface.co/xtuner/llava-phi-3-mini).
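If you want to inspect or reuse the pretrained projector weights, a minimal sketch using `huggingface_hub` is shown below. It assumes this repository is published as `xtuner/llava-phi-3-mini-pretrain`; adjust the repo id if your copy lives elsewhere.

```python
# Minimal sketch: download this repository and list its files.
# The repo id "xtuner/llava-phi-3-mini-pretrain" is assumed from this card's name.
from pathlib import Path

from huggingface_hub import snapshot_download

# Download every file in the repository to the local Hugging Face cache
# and return the path of the snapshot directory.
local_dir = snapshot_download(repo_id="xtuner/llava-phi-3-mini-pretrain")

# Print the downloaded files so you can locate the projector checkpoint.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```

For plugging the projector into a full LLaVA model, refer to the XTuner repository linked above.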
## Citation
```bibtex
@misc{2023xtuner,
  title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
  author={XTuner Contributors},
  howpublished = {\url{https://github.com/InternLM/xtuner}},
  year={2023}
}
```