Instructions to use Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit")
model = AutoModelForCausalLM.from_pretrained("Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
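Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python instead of curl. A minimal sketch using the `openai` client library (the `base_url` assumes the default port 8000 from the `vllm serve` command above; `api_key="EMPTY"` is just a placeholder, since vLLM does not check the key unless you start the server with one):

```python
# pip install openai
from openai import OpenAI

# Point the OpenAI client at the locally running vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
```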
Use Docker

```shell
docker model run hf.co/Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit
```
- SGLang
How to use Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit with Docker Model Runner:
```shell
docker model run hf.co/Efficient-ML/LLaMA-3-8B-SmoothQuant-4bit-4bit
```
Why is this exactly the same size as the 8-bit one?
I'm guessing there is a mistake...?
In our research, to quickly validate the effectiveness of various quantization methods, we only performed fake quantization for SmoothQuant rather than storing the weights in a real 4-bit format. Therefore, the checkpoints we saved and uploaded are effectively the same size as the fp16 model.
We will continue to improve this work to make the quantization testing as realistic as possible as software and hardware support allows. More work is on the way!
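For readers unfamiliar with the term, here is a minimal illustrative sketch of what fake (simulated) quantization means, not the authors' actual SmoothQuant code: weights are quantized to a 4-bit grid and immediately dequantized, so their values are restricted to that grid but are still stored as fp16. That is why the checkpoint does not shrink: roughly 8B parameters × 2 bytes ≈ 16 GB either way.

```python
import torch

def fake_quantize(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Quantize to an n-bit grid, then immediately dequantize ("fake quant").

    The result only takes values on the 4-bit grid, but it is returned in the
    original dtype (e.g. fp16), so the saved checkpoint stays the same size.
    Illustrative per-tensor symmetric scheme, not the authors' implementation.
    """
    qmax = 2 ** (n_bits - 1) - 1                # 7 for signed 4-bit
    scale = w.abs().max().float() / qmax        # per-tensor scale
    q = torch.clamp(torch.round(w.float() / scale), -qmax - 1, qmax)
    return (q * scale).to(w.dtype)              # back to fp16 storage

w = torch.randn(4096, 4096, dtype=torch.float16)
w_fq = fake_quantize(w)
# Same dtype and same bytes per element as the original weights, which is
# why the uploaded checkpoint is as large as the fp16 model.
print(w_fq.dtype, w_fq.element_size())          # torch.float16, 2
```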
That makes sense, I guess I could've looked :-D Thank you for the clarification!