---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
---

<p align="center">
📃 <a href="https://arxiv.org/abs/2409.02889" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 📃 <a href="https://github.com/FreedomIntelligence/LongLLaVA" target="_blank">Github</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/LongLLaVA-53B-A13B" target="_blank">LongLLaVA-53B-A13B</a>
</p>

## 🌈 Update

* **[2024.09.05]** The LongLLaVA repo is published! 🎉 The code will be released soon.

## Architecture

<details>
<summary>Click to view the architecture image</summary>

</details>

## Results

<details>
<summary>Click to view the Results</summary>

- Main Results
- Diagnostic Results
- Video-NIAH

</details>

## Results reproduction

### Evaluation

- Preparation

Get the model inference code from [Github](https://github.com/FreedomIntelligence/LongLLaVA).

```bash
git clone https://github.com/FreedomIntelligence/LongLLaVA.git
```
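
The remaining steps assume you run them from the cloned repository root:

```bash
cd LongLLaVA
```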

- Environment Setup

```bash
pip install -r requirements.txt
```

- Command Line Interface

```bash
python cli.py --model_dir path-to-longllava
```
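
`--model_dir` expects a local path to the model weights. If you have not downloaded them yet, one way to fetch this checkpoint is with `huggingface-cli`; a minimal sketch, where the local directory name is only an example:

```bash
# Download the released checkpoint from the Hugging Face Hub,
# then point the CLI at the downloaded directory.
huggingface-cli download FreedomIntelligence/LongLLaVA-53B-A13B --local-dir ./LongLLaVA-53B-A13B
python cli.py --model_dir ./LongLLaVA-53B-A13B
```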

- Model Inference

```python
from cli import Chatbot

query = 'What does the picture show?'
image_paths = ['image_path1']  # image or video path

bot = Chatbot('path-to-longllava')  # local path to the LongLLaVA weights
output = bot.chat(query, image_paths)
print(output)  # prints the model's response
```
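
Since LongLLaVA targets long multi-image contexts, the same interface can take many paths at once. A minimal sketch, assuming `Chatbot.chat` accepts a list of paths exactly as in the single-image example above (the file names are placeholders):

```python
from cli import Chatbot

# Sampled video frames or an image sequence; placeholder paths.
image_paths = [f'frames/frame_{i:03d}.jpg' for i in range(32)]
query = 'Summarize what happens across these frames.'

bot = Chatbot('path-to-longllava')
print(bot.chat(query, image_paths))
```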

## Acknowledgement

- [LLaVA](https://github.com/haotian-liu/LLaVA): Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

## Citation

```bibtex
@misc{wang2024longllavascalingmultimodalllms,
      title={LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture},
      author={Xidong Wang and Dingjie Song and Shunian Chen and Chen Zhang and Benyou Wang},
      year={2024},
      eprint={2409.02889},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02889},
}
```