Instructions for using TigerResearch/tigerbot-7b-sft-v1-4bit with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use TigerResearch/tigerbot-7b-sft-v1-4bit with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TigerResearch/tigerbot-7b-sft-v1-4bit")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TigerResearch/tigerbot-7b-sft-v1-4bit")
model = AutoModelForCausalLM.from_pretrained("TigerResearch/tigerbot-7b-sft-v1-4bit")
```
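Once the pipeline is loaded, generation is a single call; a minimal sketch (the prompt and sampling parameters below are illustrative, not from the model card):

```python
# Generate a completion with the pipeline; settings are example values
output = pipe("Once upon a time,", max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```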
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use TigerResearch/tigerbot-7b-sft-v1-4bit with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TigerResearch/tigerbot-7b-sft-v1-4bit"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TigerResearch/tigerbot-7b-sft-v1-4bit",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
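Because the endpoint is OpenAI-compatible, it can also be called from Python; a minimal sketch assuming the `openai` client package is installed and the server above is running on localhost:8000:

```python
# Query the vLLM server through its OpenAI-compatible completions endpoint
from openai import OpenAI

# Local servers typically accept any placeholder API key
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="TigerResearch/tigerbot-7b-sft-v1-4bit",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```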
Use Docker:

```bash
docker model run hf.co/TigerResearch/tigerbot-7b-sft-v1-4bit
```
- SGLang
How to use TigerResearch/tigerbot-7b-sft-v1-4bit with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "TigerResearch/tigerbot-7b-sft-v1-4bit" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TigerResearch/tigerbot-7b-sft-v1-4bit",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
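The same request can be issued from Python; a minimal sketch using the `requests` package against the server started above on port 30000:

```python
# POST the same completion request shown in the curl example
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "TigerResearch/tigerbot-7b-sft-v1-4bit",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```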
Use Docker images:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "TigerResearch/tigerbot-7b-sft-v1-4bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TigerResearch/tigerbot-7b-sft-v1-4bit",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use TigerResearch/tigerbot-7b-sft-v1-4bit with Docker Model Runner:
```bash
docker model run hf.co/TigerResearch/tigerbot-7b-sft-v1-4bit
```
A cutting-edge foundation for your very own LLM.
TigerBot • Hugging Face
This is a 4-bit GPTQ-quantized version of the TigerBot-7B SFT model.
It was quantized to 4-bit using the GPTQ code at https://github.com/TigerResearch/TigerBot/tree/main/gptq.
See the TigerBot GitHub repository for how to download and use this model: https://github.com/TigerResearch/TigerBot
The following commands clone the TigerBot repository and install its dependencies:
```bash
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```
Inference with the command-line interface:
```bash
cd TigerBot/gptq
CUDA_VISIBLE_DEVICES=0 python tigerbot_infer.py TigerResearch/tigerbot-7b-sft-4bit-128g \
  --wbits 4 \
  --groupsize 128 \
  --load TigerResearch/tigerbot-7b-sft-4bit-128g/tigerbot-7b-4bit-128g.pt
```