# Quick Start Guide
This hub hosts model weights trained with [LLaMA2-Accessory](https://github.com/Alpha-VLLM/LLaMA2-Accessory). To get started, follow the steps below:
1. Clone the LLaMA2-Accessory repository from GitHub:
```bash
git clone https://github.com/Alpha-VLLM/LLaMA2-Accessory.git
```
2. Download the weights from this hub.
3. Follow the instructions below to run the models with each checkpoint.
Please note that the checkpoints provided here were trained with a quantization-assisted method: the base model is quantized and frozen, and only a small set of trainable parameters is updated. This approach significantly reduces VRAM usage.
For the checkpoints located in the `finetune/mm/` directory, use the following commands:
```bash
# Run the 13B multi-modal single-turn checkpoint
torchrun --nproc-per-node=1 demos/single_turn_mm.py \
  --llama_config <path-to-params.json> configs/model/finetune/sg/llamaPeft_normBiasLora.json \
  --tokenizer_path <path-to-tokenizer.model> \
  --pretrained_path <stage1-of-lamaQformerv2_13b> <this-repo>/finetune/mm/alpacaLlava_llamaQformerv2Peft_QF_13B/epoch2 \
  --quant \
  --llama_type llama_qformerv2_peft

# Explanation of flags:
# --llama_config    : path to the corresponding params.json, followed by the PEFT model config
# --tokenizer_path  : path to the corresponding tokenizer.model
# --pretrained_path : the <base weights> followed by the <PEFT weights>
# --quant           : apply quantization
# --llama_type      : one of [llama, llama_adapter, llama_peft, llama_qformerv2, llama_qformerv2_peft]
```
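As a concrete illustration, the command above can be wrapped in a small script that fills the placeholders from variables. Every path below is hypothetical and must be replaced with the actual locations on your system; echoing the expanded command first makes path typos easy to spot before `torchrun` is launched.

```shell
# All paths below are hypothetical placeholders -- substitute your own.
LLAMA_CONFIG=/data/llama2/13B/params.json
TOKENIZER=/data/llama2/tokenizer.model
BASE_WEIGHTS=/data/ckpt/llamaQformerv2_13b_stage1
PEFT_WEIGHTS=/data/ckpt/finetune/mm/alpacaLlava_llamaQformerv2Peft_QF_13B/epoch2

# Build the command string and print it before running, so mistakes in the
# paths are visible up front.
CMD="torchrun --nproc-per-node=1 demos/single_turn_mm.py \
  --llama_config $LLAMA_CONFIG configs/model/finetune/sg/llamaPeft_normBiasLora.json \
  --tokenizer_path $TOKENIZER \
  --pretrained_path $BASE_WEIGHTS $PEFT_WEIGHTS \
  --quant --llama_type llama_qformerv2_peft"
echo "$CMD"
# eval "$CMD"   # uncomment to actually launch
```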
For the checkpoints located in the `finetune/sg/` directory, use the following commands:
```bash
# Run the 70B single-turn Platypus checkpoint
torchrun --nproc-per-node=1 --master-port 29500 demos/single_turn.py \
  --llama_config <path-to-Llama-2-70b/params.json> \
  --tokenizer_path <path-to-tokenizer.model> \
  --pretrained_path <path-to-Llama-2-70b> <path-to-platypus_normBias_QF_70B/epoch3> \
  --quant --llama_type llama_peft
```
Make sure to replace placeholders such as `<path-to-params.json>`, `<path-to-tokenizer.model>`, and `<stage1-of-lamaQformerv2_13b>` with the actual paths on your system.
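Since `torchrun` error messages for missing weight files can be hard to read, a small pre-flight check before launching may save time. The locations below are hypothetical, for illustration only:

```shell
# Hypothetical pre-flight check: warn about missing weight paths before
# launching torchrun. Substitute your own locations for these examples.
BASE=/data/llama2/70B                                        # assumed base weights
PEFT=/data/ckpt/finetune/sg/platypus_normBias_QF_70B/epoch3  # assumed PEFT weights

missing=0
for p in "$BASE/params.json" "$PEFT"; do
  [ -e "$p" ] || { echo "warning: not found: $p"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all paths present" || echo "fix the paths above before running"
```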
Follow these steps to run the checkpoints with the provided commands and flags. For more details, refer to the documentation in the LLaMA2-Accessory repository.