Instructions for using HuggingFaceH4/starchat-beta with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use HuggingFaceH4/starchat-beta with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/starchat-beta")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-beta")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/starchat-beta")
```
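StarChat-beta is a dialogue model, so raw prompts work best when wrapped in its chat template. A minimal generation sketch, assuming the `<|system|>`/`<|user|>`/`<|assistant|>` dialogue format and the `<|end|>` token id from the model card (the sampling parameters are illustrative):

```python
# Wrap the query in StarChat-beta's dialogue template before generating.
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")

# eos_token_id=49155 is the <|end|> token, per the model card.
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    top_k=50,
    top_p=0.95,
    eos_token_id=49155,
)
print(outputs[0]["generated_text"])
```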
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use HuggingFaceH4/starchat-beta with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HuggingFaceH4/starchat-beta"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/starchat-beta",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```bash
# Run the vLLM OpenAI-compatible server in Docker:
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "HuggingFaceH4/starchat-beta"
```
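Since vLLM (and SGLang below) expose an OpenAI-compatible API, the official openai Python client can be used instead of curl. A minimal sketch, assuming `pip install openai`; for SGLang, change the port to 30000. The API key is a placeholder, since the local server does not check it:

```python
from openai import OpenAI

# Point the client at the local server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="HuggingFaceH4/starchat-beta",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```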
- SGLang
How to use HuggingFaceH4/starchat-beta with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "HuggingFaceH4/starchat-beta" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/starchat-beta",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
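Besides the OpenAI-compatible routes, SGLang also serves a native /generate endpoint. A sketch using the requests library (request shape as documented by SGLang; the sampling parameters are illustrative):

```python
import requests

# SGLang's native endpoint takes the prompt under "text"
# and decoding options under "sampling_params".
resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Once upon a time,",
        "sampling_params": {"max_new_tokens": 64, "temperature": 0.5},
    },
)
print(resp.json())
```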
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "HuggingFaceH4/starchat-beta" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/starchat-beta",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use HuggingFaceH4/starchat-beta with Docker Model Runner:
```bash
docker model run hf.co/HuggingFaceH4/starchat-beta
```
SFT taking high memory with Transformers (>5× the memory needed to load the model checkpoint)
I am trying to do SFT on a model, bigcode/starcoderbase-1b, on an 80 GB GPU machine (g5.12xlarge).
SFT dataset: https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k (~800 bytes/row × 1000 rows). (My actual dataset is different and bigger than this; I am using this one as a benchmark.)
I can load the model for inference with ~5 GB of GPU memory consumed, using `AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')`.
But when I run SFT with:
- num_train_epochs = 1
- per_device_train_batch_size = 1
- per_device_eval_batch_size = 1

it consumes an additional ~30 GB during training (total: ~35 GB). A sketch of the training setup is below.
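Roughly, the training code looks like this (a minimal sketch using TRL's SFTTrainer; the output_dir and any arguments not listed above are placeholders):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# The benchmark dataset mentioned above.
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

training_args = SFTConfig(
    output_dir="./sft-out",  # placeholder
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
)

trainer = SFTTrainer(
    model="bigcode/starcoderbase-1b",  # SFTTrainer accepts a model id string
    train_dataset=dataset,
    args=training_args,
)
trainer.train()
```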
Does SFT really take this much memory on top of loading the checkpoint and running inference?
And is there any way to do this with less memory, even at the cost of more time? (I plan to do SFT on a 15B model, which takes ~60 GB just to load the checkpoint.)
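For context, the overhead is not surprising: a common rule of thumb is ~16 bytes per parameter for full fp32 training with AdamW, before counting activations. A rough back-of-the-envelope, not an exact accounting:

```python
# Rough memory estimate for full fp32 fine-tuning with AdamW
# (assumes no mixed precision and no gradient checkpointing).
params = 1.1e9        # approx. parameter count of starcoderbase-1b
weights = params * 4  # fp32 weights
grads = params * 4    # fp32 gradients
optimizer = params * 8  # AdamW first + second moments (fp32)

total_gib = (weights + grads + optimizer) / 2**30
print(f"{total_gib:.1f} GiB before activations")  # ~16 GiB; activations add more
```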
(The end goal is the 15B model: its 4-bit quantised checkpoint takes 10-12 GB to load, plus an additional ~50 GB for SFT. I can't run SFT on the non-quantised model, as that fails with the error below.) For reference, this is roughly how I load the 4-bit model (a sketch assuming bitsandbytes + peft; the LoRA hyperparameters are illustrative):
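```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit (NF4) via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoderbase-1b",
    quantization_config=bnb_config,
    device_map="auto",
)

# Train small LoRA adapters on top of the frozen 4-bit weights (QLoRA-style).
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,  # illustrative hyperparameters
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

The non-quantised SFT run fails with this traceback: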
```
    return F.dropout(input, self.p, self.training, self.inplace)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/functional.py", line 1252, in dropout
    return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 22.04 GiB total capacity; 20.72 GiB already allocated; 43.12 MiB free; 20.87 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I tried `!export 'PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512'`, but I still hit the same issue.
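One thing worth noting: in a notebook, `!export ...` runs in a throwaway subshell, so the variable never reaches the Python process. Setting it from Python before CUDA is initialised would look like this (a sketch; whether it resolves this particular OOM is a separate question):

```python
import os

# Must be set before torch initialises CUDA (i.e. before the first CUDA call).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch  # imported after setting the env var
```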