Text Generation · Transformers · PyTorch · TensorBoard · Safetensors · bloom · Eval Results (legacy) · text-generation-inference
Instructions for using bigscience/bloom with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use bigscience/bloom with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigscience/bloom")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom")
```
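A quick smoke test of the pipeline might look like the sketch below. The prompt and generation parameters are illustrative, and the full 176B checkpoint needs several hundred GB of memory, so the sketch points at the smaller `bigscience/bloom-560m` variant (an assumption for local testing; the API is identical):

```python
# Minimal sketch: generate a short continuation with a text-generation pipeline.
# bigscience/bloom-560m is used here only so the example fits on one machine;
# substitute "bigscience/bloom" if you have the hardware for the full model.
from transformers import pipeline

pipe = pipeline("text-generation", model="bigscience/bloom-560m")
out = pipe("Once upon a time,", max_new_tokens=50, do_sample=True, temperature=0.5)
print(out[0]["generated_text"])
```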
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bigscience/bloom with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigscience/bloom"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
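Since the server speaks the OpenAI-compatible completions API shown in the curl call, any OpenAI-style client can be used instead. A minimal sketch with the `openai` Python package, assuming the server from the previous step is running on localhost:8000 (the API key is a placeholder; vLLM ignores it unless one is configured):

```python
# Minimal sketch: call the vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

completion = client.completions.create(
    model="bigscience/bloom",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```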
Use Docker:

```shell
docker model run hf.co/bigscience/bloom
```
- SGLang
How to use bigscience/bloom with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "bigscience/bloom" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
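The SGLang server exposes the same OpenAI-compatible route, so the vLLM client sketch above also works here with `base_url="http://localhost:30000/v1"`. Equivalently, a minimal sketch with plain `requests`, mirroring the curl call (prompt and parameters are illustrative):

```python
# Minimal sketch: the same completions request as the curl call, via requests.
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "bigscience/bloom",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```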
Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "bigscience/bloom" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use bigscience/bloom with Docker Model Runner:
```shell
docker model run hf.co/bigscience/bloom
```
Commit f1a1828 (parent: b07b3ae): Move to subsection

README.md (CHANGED):
```diff
@@ -2140,6 +2140,31 @@ Model may:
 <details>
 <summary>Click to expand</summary>
 
+## Metrics
+*This section describes the different ways performance is calculated and why.*
+
+
+Includes:
+
+| Metric | Why chosen |
+|--------------------|--------------------------------------------------------------------|
+| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
+| Cross Entropy [Loss](#loss) | Standard objective for language models. |
+
+And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
+
+## Factors
+*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
+
+- Language, such as English or Yoruba
+
+- Domain, such as newswire or stories
+
+- Demographic characteristics, such as gender or nationality
+
+## Results
+*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
+
 See this repository for JSON files: https://github.com/bigscience-workshop/evaluation-results
 
 | Task | Language | Metric | BLOOM-176B | OPT-175B* |
@@ -2291,30 +2316,6 @@ See this repository for JSON files: https://github.com/bigscience-workshop/evalu
 | humaneval | python | pass@10 | 0.322 | 0.0 |
 | humaneval | python | pass@100 | 0.555 | 0.003 |
 
-## Metrics
-*This section describes the different ways performance is calculated and why.*
-
-
-Includes:
-
-| Metric | Why chosen |
-|--------------------|--------------------------------------------------------------------|
-| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
-| Cross Entropy [Loss](#loss) | Standard objective for language models. |
-
-And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
-
-## Factors
-*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
-
-- Language, such as English or Yoruba
-
-- Domain, such as newswire or stories
-
-- Demographic characteristics, such as gender or nationality
-
-## Results
-*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
 
 **Train-time Evaluation:**
 
@@ -2326,7 +2327,6 @@ As of 25.May.2022, 15:00 PST:
 
 - Perplexity: 8.9
 
-(More evaluation scores forthcoming.)
 
 </details>
 
```
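A note on the Metrics table moved above: the two headline metrics are directly related, since perplexity is the exponential of the mean per-token cross-entropy loss (in nats), which is why they are tracked together during training. A minimal sketch of the arithmetic, where the loss value is back-derived from the train-time figure in the diff and purely illustrative:

```python
# Minimal sketch: perplexity = exp(mean cross-entropy loss in nats).
import math

mean_cross_entropy = 2.186  # hypothetical per-token loss; ln(8.9) ≈ 2.186
print(f"perplexity ≈ {math.exp(mean_cross_entropy):.1f}")  # ≈ 8.9
```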