Instructions to use HuggingFaceH4/zephyr-7b-alpha with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use HuggingFaceH4/zephyr-7b-alpha with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use HuggingFaceH4/zephyr-7b-alpha with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HuggingFaceH4/zephyr-7b-alpha"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/zephyr-7b-alpha",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/HuggingFaceH4/zephyr-7b-alpha
```
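Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. This is a minimal sketch, not part of the original instructions: it assumes the server started above is running on localhost:8000 and uses the `openai` client package, which accepts any placeholder API key for local servers.

```python
# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible endpoint).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="HuggingFaceH4/zephyr-7b-alpha",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```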
- SGLang
How to use HuggingFaceH4/zephyr-7b-alpha with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "HuggingFaceH4/zephyr-7b-alpha" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/zephyr-7b-alpha",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "HuggingFaceH4/zephyr-7b-alpha" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/zephyr-7b-alpha",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Docker Model Runner
How to use HuggingFaceH4/zephyr-7b-alpha with Docker Model Runner:
```bash
docker model run hf.co/HuggingFaceH4/zephyr-7b-alpha
```
High GPU RAM usage
Has anyone experienced high GPU RAM usage with this model?
When loaded in 8-bit, the model only takes about 7 GB on the GPU, but at inference time usage shoots up to 28 GB.
Anyone else seeing this, or suggestions on how to fix it?
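For anyone trying to pin down where the jump happens, here is a minimal sketch for measuring peak GPU memory around generation. It is not from the original thread and assumes a single CUDA device plus the `model` and `inputs` variables from the loading snippets above.

```python
import torch

# Reset the peak-memory counter, run generation, then read the recorded peak.
torch.cuda.reset_peak_memory_stats()
outputs = model.generate(**inputs, max_new_tokens=256)
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak GPU memory during generation: {peak_gb:.1f} GB")
```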
Facing the same issue +1
What configuration do you use when loading the model? default or overridden?
I load it in int8 with device_map="auto".
What about the FP (dtype)?
torch_dtype=torch.float32
Is that the issue?
All in all:
```python
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-alpha",
    device_map="auto",
    torch_dtype=torch.float32,
    load_in_8bit=True,
)
```
The model's default configuration uses the bfloat16 floating-point format ("torch_dtype": "bfloat16"). Switching to half precision will definitely help reduce the memory footprint and improve performance.
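For illustration, a minimal sketch of loading in bfloat16 instead of float32 (assuming a GPU with bfloat16 support; this is a suggested fix, not code from the thread):

```python
import torch
from transformers import AutoModelForCausalLM

# Load weights in bfloat16 (the dtype declared in the model config)
# instead of upcasting to float32.
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-alpha",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```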
I experience the same problem with generation using the cache: 28 GB with bfloat16 and 39 GB with float32. For Llama it is 20 GB with bfloat16.
This model has a bigger context window.