Tags: Text Generation · Transformers · PyTorch · TensorBoard · Safetensors · bloom · Eval Results (legacy) · text-generation-inference
Instructions to use bigscience/bloomz with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use bigscience/bloomz with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigscience/bloomz")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bigscience/bloomz with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigscience/bloomz"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloomz",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/bigscience/bloomz
```
- SGLang
How to use bigscience/bloomz with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "bigscience/bloomz" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloomz",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "bigscience/bloomz" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloomz",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use bigscience/bloomz with Docker Model Runner:
```shell
docker model run hf.co/bigscience/bloomz
```
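Both vLLM and SGLang expose the same OpenAI-compatible `/v1/completions` endpoint, so the curl calls above can also be issued from Python. A minimal stdlib-only sketch (it assumes a server is already running locally; the base URL, like `http://localhost:8000` for vLLM, is an example, and the payload mirrors the curl requests above):

```python
import json
from urllib import request

# Payload mirroring the curl examples above (same model name,
# prompt, and sampling parameters).
payload = {
    "model": "bigscience/bloomz",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(base_url: str = "http://localhost:8000") -> str:
    """POST the payload to an OpenAI-compatible /v1/completions endpoint
    and return the generated text of the first choice."""
    req = request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]
```

For the SGLang server from the section above, pass `base_url="http://localhost:30000"` instead.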
Commit · 37a1107 · Parent(s): 480af3c · "Add res"
Browse files
- evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/Replace/results.json +9 -0
- evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/True_or_False/results.json +9 -0
- evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/does_underscore_refer_to/results.json +9 -0
- evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/stand_for/results.json +9 -0
- evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/underscore_refer_to/results.json +9 -0
- evaluation_bloomz/evaluation_val/wmt14_hi_en/examples.limited=3000.model=xp3capmixnewcodelonglossseq_global_step498.task=wmt14_hi_en.templates=gpt3-en-hi-target.fewshot=0.batchsize=4.seed=1234.timestamp=2022-09-11T02:12:47.jsonl +0 -0
- evaluation_bloomz/evaluation_val/wmt14_hi_en/examples.limited=3000.model=xp3capmixnewcodelonglossseq_global_step498.task=wmt14_hi_en.templates=gpt3-hi-en-target.fewshot=0.batchsize=4.seed=1234.timestamp=2022-09-11T02:20:15.jsonl +0 -0
evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/Replace/results.json
ADDED

```json
{
    "dataset_name": "Muennighoff/xwinograd",
    "dataset_config_name": "ru",
    "template_name": "Replace",
    "evaluation": {
        "accuracy": 0.6095238095238096
    },
    "arguments": "Namespace(config_name=None, dataset_config_name='ru', dataset_name='Muennighoff/xwinograd', debug=False, dtype='bfloat16', max_length=2048, model_name_or_path='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz', nospace=False, output_dir='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz/evaluation', pad_to_max_length=False, per_device_eval_batch_size=2, prefixlm=False, split='test', target_max_length=256, template_config_name='en', template_name='Replace', tokenizer_name=None, use_slow_tokenizer=False)"
}
```
evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/True_or_False/results.json
ADDED

```json
{
    "dataset_name": "Muennighoff/xwinograd",
    "dataset_config_name": "ru",
    "template_name": "True or False",
    "evaluation": {
        "accuracy": 0.4793650793650794
    },
    "arguments": "Namespace(config_name=None, dataset_config_name='ru', dataset_name='Muennighoff/xwinograd', debug=False, dtype='bfloat16', max_length=2048, model_name_or_path='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz', nospace=False, output_dir='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz/evaluation', pad_to_max_length=False, per_device_eval_batch_size=2, prefixlm=False, split='test', target_max_length=256, template_config_name='en', template_name='True or False', tokenizer_name=None, use_slow_tokenizer=False)"
}
```
evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/does_underscore_refer_to/results.json
ADDED

```json
{
    "dataset_name": "Muennighoff/xwinograd",
    "dataset_config_name": "ru",
    "template_name": "does underscore refer to",
    "evaluation": {
        "accuracy": 0.6095238095238096
    },
    "arguments": "Namespace(config_name=None, dataset_config_name='ru', dataset_name='Muennighoff/xwinograd', debug=False, dtype='bfloat16', max_length=2048, model_name_or_path='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz', nospace=False, output_dir='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz/evaluation', pad_to_max_length=False, per_device_eval_batch_size=2, prefixlm=False, split='test', target_max_length=256, template_config_name='en', template_name='does underscore refer to', tokenizer_name=None, use_slow_tokenizer=False)"
}
```
evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/stand_for/results.json
ADDED

```json
{
    "dataset_name": "Muennighoff/xwinograd",
    "dataset_config_name": "ru",
    "template_name": "stand for",
    "evaluation": {
        "accuracy": 0.5523809523809524
    },
    "arguments": "Namespace(config_name=None, dataset_config_name='ru', dataset_name='Muennighoff/xwinograd', debug=False, dtype='bfloat16', max_length=2048, model_name_or_path='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz', nospace=False, output_dir='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz/evaluation', pad_to_max_length=False, per_device_eval_batch_size=2, prefixlm=False, split='test', target_max_length=256, template_config_name='en', template_name='stand for', tokenizer_name=None, use_slow_tokenizer=False)"
}
```
evaluation_bloomz/evaluation_l2/Muennighoff_xwinograd/ru/underscore_refer_to/results.json
ADDED

```json
{
    "dataset_name": "Muennighoff/xwinograd",
    "dataset_config_name": "ru",
    "template_name": "underscore refer to",
    "evaluation": {
        "accuracy": 0.5777777777777777
    },
    "arguments": "Namespace(config_name=None, dataset_config_name='ru', dataset_name='Muennighoff/xwinograd', debug=False, dtype='bfloat16', max_length=2048, model_name_or_path='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz', nospace=False, output_dir='/gpfsssd/scratch/rech/six/commun/experiments/muennighoff/bloomckpt/176bt0/bloomz/evaluation', pad_to_max_length=False, per_device_eval_batch_size=2, prefixlm=False, split='test', target_max_length=256, template_config_name='en', template_name='underscore refer to', tokenizer_name=None, use_slow_tokenizer=False)"
}
```
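Promptsource-style evaluations like these are usually summarized by averaging accuracy across prompt templates. A short sketch that does this for the five xwinograd/ru results above (the accuracies are copied verbatim from the results.json files; the averaging itself is an illustration, not part of the committed files):

```python
# Per-template accuracies from the xwinograd/ru results.json files above.
accuracies = {
    "Replace": 0.6095238095238096,
    "True or False": 0.4793650793650794,
    "does underscore refer to": 0.6095238095238096,
    "stand for": 0.5523809523809524,
    "underscore refer to": 0.5777777777777777,
}

# Mean accuracy across prompt templates.
mean_accuracy = sum(accuracies.values()) / len(accuracies)
print(f"mean accuracy over {len(accuracies)} templates: {mean_accuracy:.4f}")
# → mean accuracy over 5 templates: 0.5657
```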
evaluation_bloomz/evaluation_val/wmt14_hi_en/examples.limited=3000.model=xp3capmixnewcodelonglossseq_global_step498.task=wmt14_hi_en.templates=gpt3-en-hi-target.fewshot=0.batchsize=4.seed=1234.timestamp=2022-09-11T02:12:47.jsonl
DELETED
File without changes

evaluation_bloomz/evaluation_val/wmt14_hi_en/examples.limited=3000.model=xp3capmixnewcodelonglossseq_global_step498.task=wmt14_hi_en.templates=gpt3-hi-en-target.fewshot=0.batchsize=4.seed=1234.timestamp=2022-09-11T02:20:15.jsonl
DELETED
File without changes