# Quick Start Guide
This hub contains weights trained with [LLaMA2-Accessory](https://github.com/Alpha-VLLM/LLaMA2-Accessory). To get started, follow the steps below:
1. Clone the LLaMA2-Accessory repository from GitHub:
|
| 6 |
+
```bash
git clone https://github.com/Alpha-VLLM/LLaMA2-Accessory.git
```
2. Download the weights from this hub.
3. The following instructions will guide you through running the models with each checkpoint.
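
As a sketch of step 2 -- assuming this hub is a repository served over git-lfs, which you should verify against the hub page -- the download could look like the following, with `<this-repo>` kept as a placeholder for the repository path shown on this page:

```bash
# Fetch the weight files with git-lfs; replace <this-repo> with the
# repository path shown on this hub page.
git lfs install
git clone https://huggingface.co/<this-repo>
```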
Please note that the checkpoints provided here are trained with a quantization-assisted method: the base model is quantized and frozen, while a small set of parameters remains trainable. This approach significantly reduces VRAM usage.
For the checkpoints located in the `finetune/mm/` directory, use the following commands:
```bash
# Run the 13B multi-modal single-turn checkpoint
torchrun --nproc-per-node=1 demos/single_turn_mm.py \
--llama_config <path-to-params.json> \
--tokenizer_path <path-to-tokenizer.model> \
--pretrained_path <stage1-of-lamaQformerv2_13b> <this-repo>/finetune/mm/alpacaLlava_llamaQformerv2Peft_QF_13B/ \
--quant \
--llama_type llama_qformerv2_peft
# Explanation of flags:
# --llama_config : Path to the corresponding params.json
# --tokenizer_path : Path to the corresponding tokenizer.model
# --pretrained_path : Combination of <base weights> and <peft weights>
# --quant : Apply quantization method
# --llama_type : Choose from [llama, llama_adapter, llama_peft, llama_qformerv2, llama_qformerv2_peft]
```
Make sure to replace placeholders like `<path-to-params.json>`, `<path-to-tokenizer.model>`, and `<stage1-of-lamaQformerv2_13b>` with the actual paths.
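
For instance, with purely hypothetical paths (every path below is illustrative only, not part of this hub), the invocation could be assembled and inspected like this:

```bash
# All paths here are hypothetical -- point them at wherever you
# actually stored the config, tokenizer, and checkpoint directories.
LLAMA_CONFIG=/data/llama2-13b/params.json
TOKENIZER=/data/llama2-13b/tokenizer.model
BASE_WEIGHTS=/data/ckpts/stage1_llamaQformerv2_13b
PEFT_WEIGHTS=/data/ckpts/alpacaLlava_llamaQformerv2Peft_QF_13B

# Assemble the full command so it can be reviewed before launching.
CMD="torchrun --nproc-per-node=1 demos/single_turn_mm.py \
  --llama_config $LLAMA_CONFIG \
  --tokenizer_path $TOKENIZER \
  --pretrained_path $BASE_WEIGHTS $PEFT_WEIGHTS \
  --quant --llama_type llama_qformerv2_peft"

# Print the fully expanded command before running it.
echo "$CMD"
```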
Follow these steps to successfully run the checkpoints using the provided commands and flags. For more details, refer to the documentation in the LLaMA2-Accessory repository.