Instructions to use bigscience/bloom with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use bigscience/bloom with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigscience/bloom")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom")
```
A fuller generation sketch, including multi-GPU sharding, follows the Local Apps section below.
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bigscience/bloom with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigscience/bloom"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/bigscience/bloom
```
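The same completions endpoint can be called from Python. Below is a minimal sketch, assuming `pip install openai` and the vLLM server started above on localhost:8000; the API key is a placeholder since the local server does not require one.

```python
# Query the local OpenAI-compatible vLLM server from Python.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; the local server does not validate keys
)

completion = client.completions.create(
    model="bigscience/bloom",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```

The same pattern works against the SGLang server described below; only the port (30000) changes.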
- SGLang
How to use bigscience/bloom with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "bigscience/bloom" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "bigscience/bloom" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use bigscience/bloom with Docker Model Runner:
```shell
docker model run hf.co/bigscience/bloom
```
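As referenced above, here is a fuller Transformers generation sketch. This is a hedged example rather than official documentation: it assumes `transformers`, `torch`, and `accelerate` are installed and that enough total GPU memory is available (see the discussion below for the actual requirements); `device_map="auto"` shards the weights across all visible GPUs.

```python
# Load BLOOM sharded across all visible GPUs and generate a short continuation.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom",
    torch_dtype=torch.bfloat16,  # half-precision weights
    device_map="auto",           # shard layers across available GPUs (needs accelerate)
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```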
Minimum requirements for running inference on the 176B model
I am planning to run inference on the 176B model, but I could not find much information regarding the minimum requirements.
If anyone has experience with this, could you please share some insight into the minimum setup required to run inference on the 176B model?
Below are my specifications.
- 4 x A100 40GB GPUs
- Can allocate up to 10TB of free disk space
Thank you.
If you even looked at the files section, you would see that even the smallest version of the 176B model takes over 300GB of disk space.
Thanks for the input! I should have mentioned that I can allocate up to 10TB more space. Changed the original post accordingly.
What is the largest possible size the model could be?
You need to be able to fully load the model into the GPUs; the disk space is not the limiting factor.
So 400GB of GPU VRAM... at minimum.
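For reference, a quick back-of-envelope check of where that figure comes from (weights only, ignoring activations, the KV cache, and framework overhead):

```python
# Rough weight-memory estimate for a 176B-parameter model (weights only).
params = 176e9
print(f"fp16/bf16 weights: {params * 2 / 1e9:.0f} GB")  # ~352 GB, ~400 GB with overhead
print(f"int8 weights:      {params * 1 / 1e9:.0f} GB")  # ~176 GB
```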
If you load your model in 8-bit you can halve the GPU memory requirement (roughly 200GB needed instead of 400GB). Install bitsandbytes and just add load_in_8bit=True when calling from_pretrained.
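A minimal sketch of that suggestion, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed (recent Transformers releases pass the same option via `quantization_config=BitsAndBytesConfig(load_in_8bit=True)`):

```python
# Load BLOOM with 8-bit weights to roughly halve GPU memory (~200GB total).
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom",
    device_map="auto",   # shard across available GPUs (needs accelerate)
    load_in_8bit=True,   # bitsandbytes int8 quantization, as suggested above
)
```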
Thank you for the suggestion! May I also ask if there would be any significant effect on model performance if I load the model in 8-bit?
You should not observe any performance degradation; check out the paper (https://arxiv.org/abs/2208.07339) or the blog post about the integration (https://huggingface.co/blog/hf-bitsandbytes-integration).