---
license: apache-2.0
---
<div style="width: 100%;">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size=5> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
This is a 4-bit GPTQ version of [TigerBot 13b chat](https://huggingface.co/TigerResearch/tigerbot-13b-chat).

It was quantized to 4-bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
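For context, quantizing a full-precision model with AutoGPTQ follows roughly the pattern below. This is an illustrative sketch, not the exact recipe used for this checkpoint: the `group_size` and `desc_act` values and the calibration text are assumptions, and `quantize_model` is a hypothetical helper.

```python
# Illustrative sketch of 4-bit GPTQ quantization with AutoGPTQ.
# Not the exact recipe used for this checkpoint; group_size and
# desc_act are assumptions.

QUANT_SETTINGS = {"bits": 4, "group_size": 128, "desc_act": False}

def quantize_model(base_model: str, out_dir: str) -> None:
    # Heavy imports kept inside the function so the sketch can be
    # read (and imported) without a GPU or the libraries installed.
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

    config = BaseQuantizeConfig(**QUANT_SETTINGS)
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoGPTQForCausalLM.from_pretrained(base_model, config)

    # GPTQ needs a representative calibration set; a single example
    # is shown here only for brevity.
    examples = [
        tokenizer(
            "TigerBot is an open-source large language model.",
            return_tensors="pt",
        )
    ]
    model.quantize(examples)
    model.save_quantized(out_dir)

# Example (requires a GPU and tens of GB of memory):
# quantize_model("TigerResearch/tigerbot-13b-chat", "tigerbot-13b-chat-4bit")
```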
## How to download and use this model

The inference scripts live in the TigerBot GitHub repository (https://github.com/TigerResearch/TigerBot). Clone it and install the dependencies:
```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```

## Inference with the command-line interface

Infer with exllama:
```
# Install exllama_lib
pip install exllama_lib@git+https://github.com/taprosoft/exllama.git

# Run inference
CUDA_VISIBLE_DEVICES=0 python other_infer/exllama_infer.py --model_path TigerResearch/tigerbot-13b-chat-4bit
```
Infer with auto-gptq:
```
# Install auto-gptq
pip install auto-gptq

# Run inference
CUDA_VISIBLE_DEVICES=0 python other_infer/gptq_infer.py --model_path TigerResearch/tigerbot-13b-chat-4bit
```
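The checkpoint can also be loaded directly from Python through auto-gptq's `from_quantized` API. The sketch below is not the repo's script: `build_prompt` and `chat` are hypothetical helpers, and the Instruction/Response prompt template is an assumption that should be checked against `other_infer/gptq_infer.py` in the TigerBot repository.

```python
# Minimal sketch: load this 4-bit checkpoint from Python via auto-gptq.
MODEL_ID = "TigerResearch/tigerbot-13b-chat-4bit"

def build_prompt(query: str) -> str:
    # Assumed instruction/response template; verify against the
    # inference scripts in the TigerBot repo before relying on it.
    return f"\n\n### Instruction:\n{query}\n\n### Response:\n"

def chat(query: str, device: str = "cuda:0") -> str:
    # Heavy imports inside the function so the sketch imports cleanly
    # on machines without a GPU or these libraries installed.
    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoGPTQForCausalLM.from_quantized(MODEL_ID, device=device)
    inputs = tokenizer(build_prompt(query), return_tensors="pt").to(device)
    output = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example (requires a CUDA GPU with enough VRAM for a 13B 4-bit model):
# print(chat("Introduce yourself."))
```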