Instructions for using bigscience/bloom-560m with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use bigscience/bloom-560m with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigscience/bloom-560m")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bigscience/bloom-560m with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigscience/bloom-560m"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom-560m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/bigscience/bloom-560m
```
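The same request can also be issued from Python using only the standard library; a minimal sketch, assuming the vLLM server from the steps above is running on localhost:8000 (the `complete` helper is illustrative, not part of vLLM):

```python
import json
import urllib.request

# Request body matching the curl example above
payload = {
    "model": "bigscience/bloom-560m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(payload, url="http://localhost:8000/v1/completions"):
    """POST a completion request to the OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# response = complete(payload)  # requires the server above to be running
# print(response["choices"][0]["text"])
```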
- SGLang
How to use bigscience/bloom-560m with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "bigscience/bloom-560m" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom-560m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "bigscience/bloom-560m" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom-560m",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use bigscience/bloom-560m with Docker Model Runner:
```shell
docker model run hf.co/bigscience/bloom-560m
```
Adding ONNX file of this model
Beep boop! I am the [ONNX export bot 🤖🏎️](https://huggingface.co/spaces/onnx/export). On behalf of [Gidz](https://huggingface.co/Gidz), I would like to add the ONNX-converted version of this model to this repository.
What is ONNX? It stands for "Open Neural Network Exchange", and is the most commonly used open standard for machine learning interoperability. You can find out more at [onnx.ai](https://onnx.ai/)!
The exported ONNX model can then be consumed by various backends such as TensorRT or TVM, or simply be used in a few lines with 🤗 Optimum through ONNX Runtime; check out how [here](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models)!
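For example, loading the files added here with Optimum might look like the following sketch. `ORTModelForCausalLM` is Optimum's documented ONNX Runtime wrapper for causal LMs, but the `subfolder="onnx"` argument and the `load_onnx_pipeline` helper are assumptions for this repository layout, not taken from this PR:

```python
def load_onnx_pipeline(model_id="bigscience/bloom-560m"):
    """Build a text-generation pipeline backed by ONNX Runtime.

    Requires: pip install optimum[onnxruntime]
    """
    from optimum.onnxruntime import ORTModelForCausalLM
    from transformers import AutoTokenizer, pipeline

    # `subfolder="onnx"` is the standard Hub loading argument, assumed
    # here to point at the onnx/ directory this PR adds.
    model = ORTModelForCausalLM.from_pretrained(model_id, subfolder="onnx")
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    return pipeline("text-generation", model=model, tokenizer=tokenizer)
```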
- .gitattributes +3 -0
- onnx/config.json +31 -0
- onnx/decoder_model.onnx +3 -0
- onnx/decoder_model.onnx_data +3 -0
- onnx/decoder_model_merged.onnx +3 -0
- onnx/decoder_model_merged.onnx_data +3 -0
- onnx/decoder_with_past_model.onnx +3 -0
- onnx/decoder_with_past_model.onnx_data +3 -0
- onnx/generation_config.json +7 -0
- onnx/special_tokens_map.json +6 -0
- onnx/tokenizer.json +3 -0
- onnx/tokenizer_config.json +11 -0
.gitattributes:

```diff
@@ -27,3 +27,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 tokenizer.json filter=lfs diff=lfs merge=lfs -text
 model.safetensors filter=lfs diff=lfs merge=lfs -text
+onnx/decoder_model_merged.onnx_data filter=lfs diff=lfs merge=lfs -text
+onnx/decoder_with_past_model.onnx_data filter=lfs diff=lfs merge=lfs -text
+onnx/decoder_model.onnx_data filter=lfs diff=lfs merge=lfs -text
```
onnx/config.json (new file):

```diff
@@ -0,0 +1,31 @@
+{
+  "_name_or_path": "bigscience/bloom-560m",
+  "apply_residual_connection_post_layernorm": false,
+  "architectures": [
+    "BloomForCausalLM"
+  ],
+  "attention_dropout": 0.0,
+  "attention_softmax_in_fp32": true,
+  "bias_dropout_fusion": true,
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_dropout": 0.0,
+  "hidden_size": 1024,
+  "initializer_range": 0.02,
+  "layer_norm_epsilon": 1e-05,
+  "masked_softmax_fusion": true,
+  "model_type": "bloom",
+  "n_head": 16,
+  "n_inner": null,
+  "n_layer": 24,
+  "offset_alibi": 100,
+  "pad_token_id": 3,
+  "pretraining_tp": 1,
+  "skip_bias_add": true,
+  "skip_bias_add_qkv": false,
+  "slow_but_exact": false,
+  "transformers_version": "4.30.2",
+  "unk_token_id": 0,
+  "use_cache": true,
+  "vocab_size": 250880
+}
```
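As a sanity check, the hyperparameters in this config roughly account for the "560m" in the model name; a back-of-the-envelope sketch (the 12·h² per-layer term counts the QKV and attention output projections plus the 4x-wide MLP; biases and layer norms are ignored):

```python
# Hyperparameters copied from onnx/config.json above
config = {"hidden_size": 1024, "n_layer": 24, "n_head": 16, "vocab_size": 250880}

h, L, V = config["hidden_size"], config["n_layer"], config["vocab_size"]
embed = V * h           # token embedding table (tied with the LM head)
per_layer = 12 * h * h  # QKV (3h^2) + attn out (h^2) + MLP up/down (8h^2)
total = embed + L * per_layer
print(f"~{total / 1e6:.0f}M parameters")  # ≈ 559M, matching the model name
```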
onnx/decoder_model.onnx (new file, LFS pointer):

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c318453eb4bdb7205e9a33c398de2e9db0e663513279fb065bccf4b00aa973ac
+size 708438
```
onnx/decoder_model.onnx_data (new file, LFS pointer):

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:046f1802b6053dc36e20f2058b1dca128cd9164cd37d34495962afa1f21ca102
+size 3264462848
```
onnx/decoder_model_merged.onnx (new file, LFS pointer):

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:274f5d10ec33937d2a60043e928c358b3918192c6a6631e68d8c4b59b36787a0
+size 1399393
```
onnx/decoder_model_merged.onnx_data (new file, LFS pointer):

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:046f1802b6053dc36e20f2058b1dca128cd9164cd37d34495962afa1f21ca102
+size 3264462848
```
onnx/decoder_with_past_model.onnx (new file, LFS pointer):

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78714ec76dae1951e26d8823252c78ab621ef2d449fb59d19bcd95d9c7ddd498
+size 714961
```
onnx/decoder_with_past_model.onnx_data (new file, LFS pointer):

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:046f1802b6053dc36e20f2058b1dca128cd9164cd37d34495962afa1f21ca102
+size 3264462848
```
onnx/generation_config.json (new file):

```diff
@@ -0,0 +1,7 @@
+{
+  "_from_model_config": true,
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "pad_token_id": 3,
+  "transformers_version": "4.30.2"
+}
```
onnx/special_tokens_map.json (new file):

```diff
@@ -0,0 +1,6 @@
+{
+  "bos_token": "<s>",
+  "eos_token": "</s>",
+  "pad_token": "<pad>",
+  "unk_token": "<unk>"
+}
```
onnx/tokenizer.json (new file, LFS pointer):

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17a208233d2ee8d8c83b23bc214df737c44806a1919f444e89b31e586cd956ba
+size 14500471
```
onnx/tokenizer_config.json (new file):

```diff
@@ -0,0 +1,11 @@
+{
+  "add_prefix_space": false,
+  "bos_token": "<s>",
+  "clean_up_tokenization_spaces": false,
+  "eos_token": "</s>",
+  "model_max_length": 1000000000000000019884624838656,
+  "pad_token": "<pad>",
+  "padding_side": "left",
+  "tokenizer_class": "BloomTokenizer",
+  "unk_token": "<unk>"
+}
```