How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="ysn-rfd/OpenCoder-1.5B-Instruct-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("ysn-rfd/OpenCoder-1.5B-Instruct-GGUF", dtype="auto")
# Note: for a GGUF-only repo you typically must name a specific file; see the sketch below
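Transformers can also load one specific quant from this repo by dequantizing it. A minimal sketch, not part of the original card, assuming `pip install gguf` and reusing the Q4_0 filename that appears in the llama-cpp-python snippet below:
# A sketch, assuming Transformers' gguf_file support and the `gguf` package;
# the file is dequantized to full precision on load
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ysn-rfd/OpenCoder-1.5B-Instruct-GGUF"
gguf_file = "opencoder-1.5b-instruct-q4_0.gguf"  # filename from this repo

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)

# Build a chat prompt with the model's template and generate a reply
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a function that reverses a string."}],
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))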
How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ysn-rfd/OpenCoder-1.5B-Instruct-GGUF",
    filename="opencoder-1.5b-instruct-q4_0.gguf",
)
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
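The call returns an OpenAI-style completion dict. A small usage sketch for reading the reply:
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
# The reply text sits in the OpenAI-style choices list
print(response["choices"][0]["message"]["content"])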
How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with llama.cpp:
# Install with Homebrew (macOS/Linux):
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0

# Run inference directly in the terminal:
llama-cli -hf ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0
# Or install with winget (Windows):
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0

# Run inference directly in the terminal:
llama-cli -hf ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0
# Or download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0

# Run inference directly in the terminal:
./llama-cli -hf ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0
# Or build from source:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0
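Once llama-server is up, it speaks the OpenAI chat-completions protocol over HTTP. A minimal Python sketch, assuming the server's default port 8080:
import requests

# Assumes llama-server is running locally on its default port 8080
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "What is the capital of France?"}]},
)
print(response.json()["choices"][0]["message"]["content"])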
How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "ysn-rfd/OpenCoder-1.5B-Instruct-GGUF"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ysn-rfd/OpenCoder-1.5B-Instruct-GGUF",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
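The same endpoint works with the official OpenAI Python client. A minimal sketch, assuming the server above is on port 8000; note that vLLM's GGUF support is experimental, so you may need to download a single .gguf file and serve its local path rather than the repo id:
from openai import OpenAI

# Point the OpenAI client at the local vLLM server; the API key is unused
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="ysn-rfd/OpenCoder-1.5B-Instruct-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)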
How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "ysn-rfd/OpenCoder-1.5B-Instruct-GGUF" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ysn-rfd/OpenCoder-1.5B-Instruct-GGUF",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Alternatively, run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "ysn-rfd/OpenCoder-1.5B-Instruct-GGUF" \
--host 0.0.0.0 \
--port 30000
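Either way, the SGLang server is OpenAI-compatible, so streaming also works with the OpenAI Python client. A minimal sketch, assuming port 30000 as configured above:
from openai import OpenAI

# Point the OpenAI client at the local SGLang server; the API key is unused
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="ysn-rfd/OpenCoder-1.5B-Instruct-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta of the assistant's reply
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()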
How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with Ollama:
ollama run hf.co/ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0
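Ollama also ships a Python client. A minimal sketch, assuming `pip install ollama` and a running Ollama daemon:
import ollama

# Chat with the model pulled from Hugging Face above
response = ollama.chat(
    model="hf.co/ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])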
How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with Unsloth Studio:
# Install on macOS/Linux:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ysn-rfd/OpenCoder-1.5B-Instruct-GGUF to start chatting
# Or install on Windows (PowerShell):
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for ysn-rfd/OpenCoder-1.5B-Instruct-GGUF to start chatting
# Or use the hosted version, no setup required:
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for ysn-rfd/OpenCoder-1.5B-Instruct-GGUF to start chatting
How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with Docker Model Runner:
docker model run hf.co/ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0
How to use ysn-rfd/OpenCoder-1.5B-Instruct-GGUF with Lemonade:
# Download Lemonade from https://lemonade-server.ai/
lemonade pull ysn-rfd/OpenCoder-1.5B-Instruct-GGUF:Q4_0
# Run the model:
lemonade run user.OpenCoder-1.5B-Instruct-GGUF-Q4_0

# List available models:
lemonade list
Quantized with llama.cpp using the all-gguf-same-where workflow.
Recommended quants:
- Q4_K_M (Best balance of speed/quality)
- Q4_0 (Optimized for ARM CPUs)
- Q8_0 (Near-original quality)

| 🚀 Download | 🔢 Type | 📝 Notes |
|---|---|---|
| Download | Q2_K | Basic quantization |
| Download | Q3_K_S | Small size |
| Download | Q3_K_M | Balanced quality |
| Download | Q3_K_L | Better quality |
| Download | Q4_0 | Fast on ARM |
| Download | Q4_K_S | Fast, recommended |
| Download | Q4_K_M | Best balance |
| Download | Q5_0 | Good quality |
| Download | Q5_K_S | Balanced |
| Download | Q5_K_M | High quality |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Fast, best quality |
| Download | F16 | Maximum accuracy |
💡 Tip: Use F16 for maximum precision when quality is critical
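To grab one specific quant from the table programmatically, huggingface_hub works. A minimal sketch, reusing the Q4_0 filename from the snippets above:
from huggingface_hub import hf_hub_download

# Downloads (or reuses from cache) a single GGUF file from this repo
path = hf_hub_download(
    repo_id="ysn-rfd/OpenCoder-1.5B-Instruct-GGUF",
    filename="opencoder-1.5b-instruct-q4_0.gguf",
)
print(path)  # local path to the GGUF file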
| Application | Description | Download Link |
|---|---|---|
| Llama.cpp | A fast and efficient inference engine for GGUF models. | GitHub Repository |
| Ollama | A streamlined solution for running LLMs locally. | Website |
| AnythingLLM | An AI-powered knowledge management tool. | GitHub Repository |
| Open WebUI | A user-friendly web interface for running local LLMs. | GitHub Repository |
| GPT4All | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | GitHub Repository |
| LM Studio | A desktop application designed to run and manage local LLMs, supporting GGUF format. | Website |
| GPT4All Chat | A chat application compatible with GGUF models for local, offline interactions. | GitHub Repository |
| Application | Description | Download Link |
|---|---|---|
| ChatterUI | A simple and lightweight LLM app for mobile devices. | GitHub Repository |
| Maid | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | GitHub Repository |
| PocketPal AI | A mobile AI assistant powered by local models. | GitHub Repository |
| Layla | A flexible platform for running various AI models on mobile devices. | Website |
| Application | Description | Download Link |
|---|---|---|
| Stable Diffusion | An open-source AI model for generating images from text. | GitHub Repository |
| Stable Diffusion WebUI | A web application providing access to Stable Diffusion models via a browser interface. | GitHub Repository |
| Local Dream | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | GitHub Repository |
| Stable-Diffusion-Android (SDAI) | An open-source AI art application for Android devices, enabling digital art creation. | GitHub Repository |
Base model: infly/OpenCoder-1.5B-Base