How to use from llama.cpp

Install from WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16

# Run inference directly in the terminal:
llama-cli -hf Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16
```

Install from brew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16

# Run inference directly in the terminal:
llama-cli -hf Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16
```

Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16

# Run inference directly in the terminal:
./llama-cli -hf Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16
```

Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16
```

Use Docker
```sh
docker model run hf.co/Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF:F16
```
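However you start it, llama-server exposes an OpenAI-compatible HTTP API alongside the web UI. As a minimal sketch, assuming the server is running on its default address (http://127.0.0.1:8080), you can query it with only the Python standard library:

```python
import json
import urllib.request

# Ask the locally running llama-server for a chat completion.
# 127.0.0.1:8080 is llama-server's default listen address.
payload = {
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    "max_tokens": 512,
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```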
Quick Links

- starcoder2-instruct (not my model, I just quantized it)
We've fine-tuned starcoder2-15b with an additional 0.7 billion high-quality, code-related tokens for 3 epochs, using DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate training. It achieves 77.4 pass@1 on HumanEval-Python. The model uses the Alpaca instruction format (without the system prompt).
Usage
Here are some examples of how to use the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Alpaca-style instruction template (no system prompt).
PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"
prompt = PROMPT.format(instruction=instruction)

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/starcoder2-15b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/starcoder2-15b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
```
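For long completions it can help to stream tokens as they are generated instead of waiting for generate() to finish. A small sketch using transformers' TextStreamer, reusing the tokenizer, model, and prompt from the block above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt avoids
# re-printing the instruction template.
streamer = TextStreamer(tokenizer, skip_prompt=True)
inputs = tokenizer.encode(prompt, return_tensors="pt")
_ = model.generate(
    input_ids=inputs.to(model.device),
    max_new_tokens=2048,
    streamer=streamer,
)
```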
With the text-generation pipeline:
```python
from transformers import pipeline
import torch

# Alpaca-style instruction template (no system prompt).
PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"
prompt = PROMPT.format(instruction=instruction)

generator = pipeline(
    model="TechxGenus/starcoder2-15b-instruct",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=2048)
print(result[0]["generated_text"])
```
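Note that by default the pipeline returns the prompt together with the completion. If you only want the model's answer, a small helper (hypothetical, not part of the upstream card) can split on the response marker of the Alpaca-style template:

```python
def extract_response(generated_text: str) -> str:
    # Keep only what follows the "### Response" marker and strip the
    # end-of-text token that StarCoder2 may append.
    response = generated_text.split("### Response", 1)[-1]
    return response.replace("<|endoftext|>", "").strip()

print(extract_response(result[0]["generated_text"]))
```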
Note

The model may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding. It has undergone very limited testing; perform additional safety testing before any real-world deployment.