---
license: apache-2.0
language:
- en
---

# AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability

[[Project Page](https://aligngpt-vl.github.io/)] [[Paper](https://arxiv.org/abs/2405.14129)] [[Demo](http://47.116.173.89:7870/)] [[Model](https://huggingface.co/nlpzhaof)]

Authors: [Fei Zhao*](https://scholar.google.com/citations?user=V01xzWQAAAAJ&hl=zh-CN), Taotian Pang*, Chunhui Li, [Zhen Wu](https://scholar.google.com/citations?user=IoGlgtoAAAAJ&hl=zh-CN), Junjie Guo, Shangyu Xing, [Xinyu Dai](https://scholar.google.com/citations?user=zpWB1CgAAAAJ&hl=zh-CN)

<div align="center">
<img src="./assert/architecture.png" width="800px">
</div>

## News and Updates
- [5/24] 🔥 We released **AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability**. Check out the [paper](https://arxiv.org/abs/2405.14129) and [demo](http://47.116.173.89:7870/).
- [5/24] 🔥 The data is not ready yet; we will upload it within a week.

## Contents
- [Install](#install)
- [Model Zoo](#model-zoo)
- [Demo](#demo)
- [Training](#training)
- [Evaluation](#evaluation)
- [Performance](#performance)

## Install

### Docker

We recommend using Docker to prepare the environment.

1. Clone this repository and navigate to the AlignGPT folder:

```bash
git clone https://github.com/AlignGPT-VL/AlignGPT.git
cd AlignGPT
```

2. Build the Docker image:

```bash
cd deploy
docker build -t aligngpt:1.0 .
```

If your machine cannot connect to GitHub to download the flash-attention pip wheel, you can download it manually from https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.5/flash_attn-2.5.5+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl and place it at `deploy/flash_attn-2.5.5+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl`.
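
For example, a one-line sketch using `wget` (the `-P` flag drops the wheel into `deploy/`, the location mentioned above):

```bash
wget -P deploy/ https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.5/flash_attn-2.5.5+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```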

3. To start the container, run the following command in the project root directory:

```bash
docker run --gpus all --ipc=host --network=host --rm -it -v .:/workspace aligngpt:1.0
```

More `-v` options can be added to mount the data and output directories.
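
A minimal sketch, assuming hypothetical host directories `/path/to/data` and `/path/to/output` (substitute your own):

```bash
# Mount the repo, plus hypothetical data and output directories.
docker run --gpus all --ipc=host --network=host --rm -it \
  -v .:/workspace \
  -v /path/to/data:/workspace/playground/data \
  -v /path/to/output:/workspace/output \
  aligngpt:1.0
```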

### Conda

1. Clone this repository and navigate to the AlignGPT folder:

```bash
git clone https://github.com/AlignGPT-VL/AlignGPT.git
cd AlignGPT
```

2. Install packages:

```Shell
conda create -n aligngpt python=3.10 -y
conda activate aligngpt
pip install --upgrade pip  # enable PEP 660 support
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r deploy/requirements.txt
```

Finally, you need to install flash-attention manually before running the model.
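
Two common routes, sketched under the assumption that you want the same flash-attention version the Docker image pins (2.5.5):

```bash
# Option 1: install from PyPI (requires a CUDA build toolchain).
pip install flash-attn==2.5.5 --no-build-isolation
# Option 2: install the prebuilt wheel downloaded earlier.
pip install deploy/flash_attn-2.5.5+cu118torch2.1cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
```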

## Model Zoo

Please download the weights for the LLM and vision backbone and place them in the `./playground/model` folder. We also provide all the weights for the AlignGPT checkpoints; an example download command is sketched after the table.

| Model | LLM | Vision Backbone | Pre-training | Instruct-tuning |
|----------|----------|-----------|---|---|
| AlignGPT-7B | [Vicuna 7B](https://huggingface.co/lmsys/vicuna-7b-v1.5) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) |[aligngpt-7b-pretrain](https://huggingface.co/nlpzhaof/aligngpt-7b-pretrain/tree/main)| [aligngpt-7b](https://huggingface.co/nlpzhaof/aligngpt-7b/tree/main)|
| AlignGPT-LLaMA2 | [LLaMA-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) |To be released| To be released|
| AlignGPT-LLaMA3 | [LLaMA-3-8B-Base](https://huggingface.co/meta-llama/Meta-Llama-3-8B) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) |To be released|To be released|
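
As an example, the released checkpoints can be fetched with the `huggingface_hub` CLI; the local directory names below are assumptions that follow the `./playground/model` convention:

```bash
huggingface-cli download nlpzhaof/aligngpt-7b --local-dir ./playground/model/aligngpt-7b
huggingface-cli download openai/clip-vit-large-patch14-336 --local-dir ./playground/model/clip-vit-large-patch14-336
```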

## Demo

### Start Gradio UI

You can start the Gradio service with the following command:

```
cd AlignGPT
bash start_api.sh
```

This script launches three processes: the controller, the Gradio web server, and the model worker, all of which run in the background. You can view their logs in the `log/` folder and check process status with `ps -ef | grep src.serve`.
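
For example (the exact log file names are up to `start_api.sh`, so the glob below is an assumption):

```bash
tail -f log/*.log           # follow the controller, web server, and worker logs
ps -ef | grep src.serve     # confirm the three processes are alive
```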

### CLI Inference

Chat about images using AlignGPT without the need for the Gradio interface:

```
python -m src.serve.cli \
    --model-path playground/model/aligngpt-13b \
    --image-file "image folder/image.jpg"
```

## Training

We place all training data in the `./playground/data` folder. Please download [aligngpt_pretrain_data]() from Hugging Face and place it in `./playground/data`. The details are introduced below.

### Pre-training

* **Dataset**: We use the 558K image-text pairs in the pre-training phase. Organize them in `./playground/data` as follows:

```
├── LLaVA-Pretrain
│   ├── blip_laion_cc_sbu_558k_with_similarity_number.json
│   └── images
```

* **Run**: You can launch the pre-training phase using the following command:

```
bash scripts/pretrain.sh
```

Before running the pre-training script, you should set the arguments for the **directories** of model checkpoints, data, and outputs, *i.e.*, `model_name_or_path`, `data_path`, `image_folder`, `vision_tower`, and `output_dir`.
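
For illustration, the directory-related flags inside `scripts/pretrain.sh` might then look roughly like this; every path below is a placeholder, not a value the script prescribes:

```bash
# Hypothetical layout; adapt to where you placed the weights and data.
--model_name_or_path ./playground/model/vicuna-7b-v1.5 \
--data_path ./playground/data/LLaVA-Pretrain/blip_laion_cc_sbu_558k_with_similarity_number.json \
--image_folder ./playground/data/LLaVA-Pretrain/images \
--vision_tower ./playground/model/clip-vit-large-patch14-336 \
--output_dir ./playground/model/aligngpt-7b-pretrain
```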

### Instruction-tuning

* **Dataset**: We use 665K image-text pairs and text-only data in the instruction-tuning phase. The images corresponding to these data come from `COCO`, `GQA`, `OCR-VQA`, `TextVQA`, and `VisualGenome`. Organize them in `./playground/data` as follows:

```
├── llava_v1_5_mix665k.json
├── coco
│   └── train2017
├── gqa
│   └── images
├── ocr_vqa
│   └── images
├── textvqa
│   └── train_images
└── vg
    ├── VG_100K
    └── VG_100K_2
```

* **Run**: You can launch the instruction-tuning stage using the following command:

```
bash scripts/finetune.sh
```

Before running the instruction-tuning script, you should set the argument `pretrain_mm_mlp_align` to the path where you stored the weights from the pre-training phase.
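
Illustratively (the path is a placeholder matching the pre-training sketch above; whether the argument expects a directory or a specific weights file follows the script):

```bash
--pretrain_mm_mlp_align ./playground/model/aligngpt-7b-pretrain
```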

## Evaluation

We conduct evaluation on 12 benchmarks. The datasets to be evaluated are placed in `./playground/data/eval`. Please download [aligngpt_eval_data]() from Hugging Face and place it in `./playground/data/eval`; it contains custom annotations, scripts, and prediction files for AlignGPT. Here, we demonstrate how to evaluate our model on the `MME` dataset, using the following command:

```
CUDA_VISIBLE_DEVICES=0 bash scripts/eval/mme.sh
```

You should set the directories of the model checkpoints and datasets in the script before running it. The evaluation of other datasets is described in [Evaluation.md](docs/Evaluation.md).

## Performance

| Model | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-Bench-CN | SEED | LLaVA-Bench-Wild | MM-Vet |
|---|---|---|---|---|---|---|---|---|---|---|---|---|

## Citation

If you find AlignGPT useful for your research and applications, please cite using this BibTeX:

```
@misc{zhao2024aligngpt,
      title={AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability},
      author={Fei Zhao and Taotian Pang and Chunhui Li and Zhen Wu and Junjie Guo and Shangyu Xing and Xinyu Dai},
      year={2024},
      eprint={2405.14129},
      archivePrefix={arXiv}
}
```

## Acknowledgement

Our project is built on [LLaVA: Large Language and Vision Assistant](https://github.com/haotian-liu/LLaVA).

## License

[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)

The data and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna, and GPT-4. The dataset is CC BY-NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.