Instructions to use stabilityai/stable-code-3b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use stabilityai/stable-code-3b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="stabilityai/stable-code-3b")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-3b")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stable-code-3b")
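Once the pipeline above is created, a completion-style call might look like the sketch below; the prompt and generation settings are illustrative, not official defaults.
# Minimal completion sketch using the pipe defined above; since stable-code-3b
# is a code model, a code-shaped prompt works better than free-form text.
prompt = "import requests\n\ndef fetch_page(url):\n    "
outputs = pipe(prompt, max_new_tokens=48, do_sample=True, temperature=0.2)
print(outputs[0]["generated_text"])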
- llama-cpp-python
How to use stabilityai/stable-code-3b with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="stabilityai/stable-code-3b",
    filename="stable-code-3b-Q5_K_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
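The call above returns an OpenAI-style completion dict; a minimal sketch of pulling just the generated text back out, reusing the llm object created above (the prompt below is illustrative):
# llama-cpp-python returns an OpenAI-style dict; the text lives under "choices".
completion = llm("def fibonacci(n):\n    ", max_tokens=64, temperature=0.2)
print(completion["choices"][0]["text"])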
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use stabilityai/stable-code-3b with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf stabilityai/stable-code-3b:Q5_K_M

# Run inference directly in the terminal:
llama-cli -hf stabilityai/stable-code-3b:Q5_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf stabilityai/stable-code-3b:Q5_K_M

# Run inference directly in the terminal:
llama-cli -hf stabilityai/stable-code-3b:Q5_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf stabilityai/stable-code-3b:Q5_K_M

# Run inference directly in the terminal:
./llama-cli -hf stabilityai/stable-code-3b:Q5_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf stabilityai/stable-code-3b:Q5_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf stabilityai/stable-code-3b:Q5_K_M
Use Docker
docker model run hf.co/stabilityai/stable-code-3b:Q5_K_M
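Once llama-server is running, any OpenAI-compatible client can send it completion requests. A minimal Python sketch, assuming the server was started as above and the requests package is installed; port 8080 is llama-server's default, so adjust it if you passed --port.
# Query the local llama-server via its OpenAI-compatible /v1/completions endpoint.
import requests

response = requests.post(
    "http://localhost:8080/v1/completions",
    json={
        "prompt": "def quicksort(arr):\n    ",
        "max_tokens": 128,
        "temperature": 0.2,
    },
)
print(response.json()["choices"][0]["text"])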
- LM Studio
- Jan
- vLLM
How to use stabilityai/stable-code-3b with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "stabilityai/stable-code-3b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "stabilityai/stable-code-3b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
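The same request can be made from Python with the openai client (pip install openai), since vLLM exposes an OpenAI-compatible API; the api_key value below is a placeholder, as vLLM does not require a real key by default.
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
completion = client.completions.create(
    model="stabilityai/stable-code-3b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)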
Use Docker
docker model run hf.co/stabilityai/stable-code-3b:Q5_K_M
- SGLang
How to use stabilityai/stable-code-3b with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "stabilityai/stable-code-3b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "stabilityai/stable-code-3b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "stabilityai/stable-code-3b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "stabilityai/stable-code-3b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
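SGLang speaks the same OpenAI completions protocol, so the Python client sketch shown in the vLLM section also works here; only the base_url changes to the SGLang port.
# Reuse the openai-client sketch from the vLLM section, pointed at SGLang instead.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="not-needed")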
- Ollama
How to use stabilityai/stable-code-3b with Ollama:
ollama run hf.co/stabilityai/stable-code-3b:Q5_K_M
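After the model has been pulled, Ollama also exposes a local REST API (port 11434 by default); a minimal Python sketch, assuming the model keeps the hf.co tag used above as its name.
# Query the local Ollama REST API; "stream": False returns a single JSON object
# whose "response" field holds the full completion.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/stabilityai/stable-code-3b:Q5_K_M",
        "prompt": "def quicksort(arr):\n    ",
        "stream": False,
    },
)
print(response.json()["response"])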
- Unsloth Studio
How to use stabilityai/stable-code-3b with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for stabilityai/stable-code-3b to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for stabilityai/stable-code-3b to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for stabilityai/stable-code-3b to start chatting
- Docker Model Runner
How to use stabilityai/stable-code-3b with Docker Model Runner:
docker model run hf.co/stabilityai/stable-code-3b:Q5_K_M
- Lemonade
How to use stabilityai/stable-code-3b with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull stabilityai/stable-code-3b:Q5_K_M
Run and chat with the model
lemonade run user.stable-code-3b-Q5_K_M
List all available models
lemonade list
Gibberish
Maybe I'm using this wrong, but I am running this on a machine with plenty of resources… and I still got gibberish as a response to my prompt asking the model to "show me how to write a web scraper in Python".
This was the response I got back:
I've written my first ever python script about 5 months ago and I really don't remember anything except for the fact that I used Selenium in order to scrape websites (in this case, Google). So you can probably just copy/paste all of these lines from your own Python code which contains logic to determine what value should be returned when called by another piece of software or program.
And when I say "plenty of resources", I mean the following:
ThinkStation P620
AMD ThreadRipper Pro 3945WX (12c24t)
512GB of RAM
2 x 3090 GPUs for a total of 48GB of vRAM
Hi there! This model is an autocompletion model, not a chat/instruction model, so it is suited to tasks like completing the next line of code or fill-in-the-middle, as shown in the examples in the model card. We are planning to release a chat/instruction model soon, so stay tuned!
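To illustrate the difference, the sketch below uses a completion-style prompt rather than an instruction, reusing the tokenizer and model loaded in the Transformers section above. The fill-in-the-middle control tokens are assumptions based on the model card's FIM example, so verify them against the tokenizer if in doubt.
# Plain code completion: give the model the start of the code, not an instruction.
inputs = tokenizer("import requests\n\ndef scrape_title(url):\n    ", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# Fill-in-the-middle: wrap the prompt with FIM control tokens (assumed names).
fim_prompt = "<fim_prefix>def add(a, b):\n    <fim_suffix>\n    return result<fim_middle>"
inputs = tokenizer(fim_prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0]))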