# 🧠 Fine-tuned CodeLlama on LeetCode Problems
This model is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf on the greengerong/leetcode dataset. It has been instruction-tuned to generate Python solutions from LeetCode-style problem descriptions.
## 📦 Model Formats Available

- **Transformers-compatible** (`.safetensors`): for use via 🤗 Transformers.
- **GGUF** (`.gguf`): for use via llama.cpp, including `llama-server`, `llama-cpp-python`, and other compatible tools (a file-listing sketch follows below).
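If you want to check which weight files the repo actually ships before picking a format, here is a minimal sketch using `huggingface_hub.list_repo_files`; it only inspects the repo, and the split into the two formats is just a filename-suffix check:

```python
from huggingface_hub import list_repo_files

# List every file in the model repo so you can pick the format you need
files = list_repo_files("harshism1/codellama-leetcode-finetuned")

# Transformers weights end in .safetensors; llama.cpp weights end in .gguf
safetensors_files = [f for f in files if f.endswith(".safetensors")]
gguf_files = [f for f in files if f.endswith(".gguf")]

print("Transformers weights:", safetensors_files)
print("GGUF weights:", gguf_files)
```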
## 🚀 Example Usage (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "harshism1/codellama-leetcode-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = """You are an AI assistant. Solve the following problem:
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
## Solution
"""

result = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```
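The snippet above loads the model in full precision on whatever device Transformers picks by default. For a 7B model you will usually want half precision on a GPU; a minimal sketch, assuming a CUDA GPU and that `torch` and `accelerate` are installed (these are standard Transformers loading options, not anything specific to this model):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "harshism1/codellama-leetcode-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load weights in float16 and let accelerate place them on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```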
## ⚙️ Usage with llama.cpp

You can run the model using tools in the llama.cpp ecosystem. Make sure you have the `.gguf` version of the model (e.g., `codellama-leetcode.gguf`).
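If you don't have the GGUF file locally yet, one way to fetch it is `huggingface_hub.hf_hub_download`. Note that `codellama-leetcode.gguf` is just the example filename used throughout this card, so substitute whatever `.gguf` filename the repo actually contains:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from the Hub and get its local path.
# "codellama-leetcode.gguf" is the example name used in this card;
# replace it with the actual .gguf filename listed in the repo.
gguf_path = hf_hub_download(
    repo_id="harshism1/codellama-leetcode-finetuned",
    filename="codellama-leetcode.gguf",
)

print(gguf_path)  # pass this path to llama.cpp tools
```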
### 🐍 Using llama-cpp-python

Install:

```bash
pip install llama-cpp-python
```

Then use:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-leetcode.gguf",
    n_ctx=4096,
    n_gpu_layers=99  # adjust based on your GPU
)

prompt = """### Problem
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
## Solution
"""

output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```
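If you'd rather see tokens as they are generated instead of waiting for the whole completion, `llama-cpp-python` also supports streaming via `stream=True` on the same call. A minimal sketch under the same assumptions as above (same example GGUF filename, same prompt format):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-leetcode.gguf",  # same example GGUF file as above
    n_ctx=4096,
    n_gpu_layers=99,
)

prompt = """### Problem
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
## Solution
"""

# With stream=True the call returns an iterator of partial completions
for chunk in llm(prompt, max_tokens=256, stream=True):
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```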
### 🖥️ Using llama-server

Start the server:

```bash
llama-server --model codellama-leetcode.gguf --port 8000 --n-gpu-layers 99
```

Then send a request:

```bash
curl http://localhost:8000/completion -d '{
  "prompt": "### Problem\nGiven an array of integers...\n\n## Solution\n",
  "n_predict": 256
}'
```
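Recent llama.cpp builds of `llama-server` also expose an OpenAI-compatible API under `/v1`, so you can use the official `openai` Python client against the local server instead of raw `curl`. A minimal sketch, assuming `openai` v1+ is installed and the server was started as above; the `api_key` and `model` values are placeholders, since the local server does not validate them:

```python
from openai import OpenAI

# Point the client at the local llama-server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="codellama-leetcode",  # placeholder; the server uses the loaded GGUF
    messages=[
        {
            "role": "user",
            "content": (
                "### Problem\n"
                "Given an array of integers, return indices of the two numbers "
                "such that they add up to a specific target.\n"
                "## Solution\n"
            ),
        }
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```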