Instructions for using stabilityai/StableBeluga-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use stabilityai/StableBeluga-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="stabilityai/StableBeluga-7B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga-7B")
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga-7B")
```
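To sanity-check the pipeline above, it can be called directly on a prompt. A minimal sketch, assuming the "### System: / ### User: / ### Assistant:" prompt convention shown in the chat-template discussion further down this page; the prompt text and sampling settings are illustrative, not values from the model card:

```python
# Minimal sketch: one generation call with the pipeline loaded above.
# The prompt follows the "### System: / ### User: / ### Assistant:" format
# used by this model; the content and sampling settings are illustrative.
prompt = (
    "### System:\nYou are a helpful assistant.\n\n"
    "### User:\nWrite a haiku about the ocean.\n\n"
    "### Assistant:\n"
)
result = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```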
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use stabilityai/StableBeluga-7B with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "stabilityai/StableBeluga-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stabilityai/StableBeluga-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
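The curl request above can also be issued from Python. A minimal sketch using the openai client package (an assumption; any HTTP client works, and vLLM ignores the API key by default):

```python
# Minimal sketch: query the vLLM server started above via its
# OpenAI-compatible API. Assumes `pip install openai` and a server
# listening on localhost:8000; the api_key is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="stabilityai/StableBeluga-7B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```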
- SGLang
How to use stabilityai/StableBeluga-7B with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "stabilityai/StableBeluga-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stabilityai/StableBeluga-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "stabilityai/StableBeluga-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stabilityai/StableBeluga-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
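As with vLLM, the OpenAI-compatible endpoint can be called from Python. A minimal sketch with the requests library (assumed installed), mirroring the curl payload above:

```python
# Minimal sketch: the same completions request as the curl example,
# sent from Python. Assumes an SGLang server on localhost:30000.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "stabilityai/StableBeluga-7B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```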
- Docker Model Runner
How to use stabilityai/StableBeluga-7B with Docker Model Runner:
```shell
docker model run hf.co/stabilityai/StableBeluga-7B
```
Add chat template
#5
opened by Rocketknight1 (HF Staff)
- README.md +25 -1
- tokenizer_config.json +1 -0
README.md CHANGED

@@ -48,6 +48,30 @@ Your prompt here
 The output of Stable Beluga 7B
 ```
 
+This formatting is also available via a pre-defined Transformers chat template, which means that lists of messages can be formatted for you with the `apply_chat_template()` method:
+
+```python
+chat = [
+  {"role": "system", "content": "This is a system prompt, please behave and help the user."},
+  {"role": "user", "content": "Your prompt here"},
+]
+tokenizer.apply_chat_template(chat, tokenize=False)
+```
+
+which will yield:
+
+```
+### System:
+This is a system prompt, please behave and help the user.
+
+### User:
+Your prompt here
+
+```
+
+If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized and formatted conversation ready to pass to `model.generate()`.
+
 ## Model Details
 
 * **Developed by**: [Stability AI](https://stability.ai/)

@@ -96,4 +120,4 @@ Beluga is a new technology that carries risks with use. Testing conducted to dat
   archivePrefix={arXiv},
   primaryClass={cs.CL}
 }
-```
+```
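The last added paragraph mentions the `tokenize=True` / `return_tensors="pt"` path; here is a minimal end-to-end sketch of that flow, assuming the model and tokenizer are loaded as shown earlier on this page (the generation settings are illustrative):

```python
# Minimal sketch: format a conversation with the new template and pass it
# straight to generate(); max_new_tokens is an illustrative assumption.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga-7B")
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga-7B")

chat = [
    {"role": "system", "content": "This is a system prompt, please behave and help the user."},
    {"role": "user", "content": "Your prompt here"},
]

# add_generation_prompt=True appends the assistant header defined in the
# template, so the model continues the conversation as the assistant.
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```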
tokenizer_config.json CHANGED

@@ -7,6 +7,7 @@
     "rstrip": false,
     "single_word": false
   },
+  "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{ '### ' + message['role'].title() + ':\n' + message['content'] + '\n\n' }}{% endfor %}{% if add_generation_prompt %}{{ '###Assistant:\n' }}{% endif %}",
   "clean_up_tokenization_spaces": false,
   "eos_token": {
     "__type": "AddedToken",
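To see exactly what the added `chat_template` renders, it can be evaluated with plain Jinja2 (Transformers renders chat templates with Jinja internally; a plain `jinja2.Template` is a close enough approximation for this simple template). The messages are the ones from the README example:

```python
# Minimal sketch: render the chat_template string added above with plain
# Jinja2 to inspect the prompt it produces (assumes `pip install jinja2`).
from jinja2 import Template

chat_template = (
    "{% if not add_generation_prompt is defined %}"
    "{% set add_generation_prompt = false %}{% endif %}"
    "{% for message in messages %}"
    "{{ '### ' + message['role'].title() + ':\n' + message['content'] + '\n\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '###Assistant:\n' }}{% endif %}"
)

messages = [
    {"role": "system", "content": "This is a system prompt, please behave and help the user."},
    {"role": "user", "content": "Your prompt here"},
]

# With add_generation_prompt=True the template appends the '###Assistant:'
# header, cueing the model to respond.
print(Template(chat_template).render(messages=messages, add_generation_prompt=True))
```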