---
license: apache-2.0
metrics:
- accuracy
base_model:
- liuhaotian/llava-v1.5-7b
---

# LLaVA-3D
## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary

LLaVA-3D is a 7B-parameter model trained on LLaVA-3D-Instruct-1M, built on LLaVA-v1.5-7B.

- **Repository:** [ZCMax/LLaVA-3D](https://github.com/ZCMax/LLaVA-3D)
- **Project Website:** [zcmax.github.io/projects/LLaVA-3D](https://zcmax.github.io/projects/LLaVA-3D/)
- **Paper:** [LLaVA-3D](https://arxiv.org/abs/2409.18125)
- **Point of Contact:** [Chenming Zhu](mailto:zcm952742165@gmail.com)
- **Languages:** English
## Use

### Intended use

The model was trained on LLaVA-3D-Instruct-1M. It can interact with a single image for 2D tasks and with posed RGB-D images for 3D tasks, as sketched below.
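
A minimal loading sketch, assuming the repository keeps LLaVA's loader API (`llava.model.builder.load_pretrained_model`); the checkpoint path and model name are illustrative, and the exact 3D entry points (how posed RGB-D frames, camera poses, and intrinsics are passed) live in the GitHub repository:

```python
# Hypothetical loading sketch -- the loader call follows LLaVA conventions,
# but check ZCMax/LLaVA-3D for the exact interface and checkpoint names.
import torch
from llava.model.builder import load_pretrained_model

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path="path/to/LLaVA-3D-7B",  # illustrative checkpoint location
    model_base=None,
    model_name="llava-3d-7b",
)
model = model.to(dtype=torch.bfloat16, device="cuda")  # trained in bfloat16

# 2D tasks consume a single image; 3D tasks additionally consume posed
# RGB-D frames (color + depth + camera pose/intrinsics), per the card.
```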
**Feel free to share your generations in the Community tab!**
## Training

### Model

- **Pretraining Stage:** scene-level and region-level caption data, 1 epoch, projector only
- **Instruction Tuning Stage:** a mixture of 1M high-quality 2D and 3D instruction data, 1 epoch, full model (both stages sketched below)
- **Precision:** bfloat16
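
A rough sketch of the two-stage recipe above. The module path `get_model().mm_projector` follows LLaVA codebase conventions and is an assumption, not confirmed for this fork:

```python
# Sketch of the two-stage recipe: stage 1 trains only the projector,
# stage 2 instruction-tunes the full model. Module names are assumptions;
# check the LLaVA-3D code for the real attribute paths.

def configure_stage(model, stage: str) -> None:
    """Freeze/unfreeze parameters for the given training stage."""
    if stage == "pretrain":
        # Stage 1: caption data, only the projector is trained.
        for param in model.parameters():
            param.requires_grad = False
        for param in model.get_model().mm_projector.parameters():
            param.requires_grad = True
    elif stage == "finetune":
        # Stage 2: 1M instruction mixture, the full model is trained.
        for param in model.parameters():
            param.requires_grad = True
    else:
        raise ValueError(f"unknown stage: {stage}")
```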
### Hardware & Software

- **GPUs:** 8 × NVIDIA A100 (for training the whole model series)
- **Orchestration:** [Hugging Face Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) (illustrative arguments below)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
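
As a hedged illustration of how the bfloat16, single-epoch setup maps onto the Hugging Face Trainer; batch size, learning rate, and paths are placeholders, not the repo's actual config:

```python
# Illustrative TrainingArguments consistent with the card (bfloat16, 1 epoch);
# all other values are placeholders -- see the repo's training scripts.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./checkpoints/llava-3d-7b",  # placeholder output path
    num_train_epochs=1,                      # 1 epoch per stage, per the card
    bf16=True,                               # bfloat16 precision
    per_device_train_batch_size=16,          # placeholder; 8 GPUs via torchrun
    gradient_accumulation_steps=1,
    learning_rate=2e-5,                      # placeholder
    report_to="none",
)
```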
## Citation

```bibtex
@article{zhu2024llava,
  title={LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness},
  author={Zhu, Chenming and Wang, Tai and Zhang, Wenwei and Pang, Jiangmiao and Liu, Xihui},
  journal={arXiv preprint arXiv:2409.18125},
  year={2024}
}
```