## How to use with llama.cpp

### Install with Homebrew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf harshism1/codellama-leetcode-finetuned

# Run inference directly in the terminal:
llama-cli -hf harshism1/codellama-leetcode-finetuned
```
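
The `-hf` flag downloads the GGUF weights from Hugging Face and caches them locally. If the repository publishes more than one quantization, a specific one can be pinned by appending its tag; the `Q4_K_M` tag below is illustrative, so check the repo's file list for what actually exists:

```sh
# Hypothetical quantization tag; substitute one that exists in the repo
llama-server -hf harshism1/codellama-leetcode-finetuned:Q4_K_M
```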
### Install with WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf harshism1/codellama-leetcode-finetuned

# Run inference directly in the terminal:
llama-cli -hf harshism1/codellama-leetcode-finetuned
```
### Use a pre-built binary

```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf harshism1/codellama-leetcode-finetuned

# Run inference directly in the terminal:
./llama-cli -hf harshism1/codellama-leetcode-finetuned
```
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf harshism1/codellama-leetcode-finetuned
# Run inference directly in the terminal:
./build/bin/llama-cli -hf harshism1/codellama-leetcode-finetuned
### Use Docker

```sh
docker model run hf.co/harshism1/codellama-leetcode-finetuned
```

# 🧠 Fine-tuned CodeLlama on LeetCode Problems

This model is a fine-tuned version of [`codellama/CodeLlama-7b-Instruct-hf`](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the [`greengerong/leetcode`](https://huggingface.co/datasets/greengerong/leetcode) dataset. It has been instruction-tuned to generate Python solutions from LeetCode-style problem descriptions.

## πŸ“¦ Model Formats Available

  • Transformers-compatible (.safetensors) β€” for use via πŸ€— Transformers.
  • GGUF (.gguf) β€” for use via llama.cpp, including llama-server, llama-cpp-python, and other compatible tools.

## πŸ”— Example Usage (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "harshism1/codellama-leetcode-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = """You are an AI assistant. Solve the following problem:

Given an array of integers, return indices of the two numbers such that they add up to a specific target.

## Solution
"""

result = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```
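
By default, `text-generation` pipelines return the prompt followed by the completion in `generated_text`. If you only want the generated solution, the pipeline's `return_full_text` flag strips the echoed prompt:

```python
result = pipe(prompt, max_new_tokens=256, do_sample=True,
              temperature=0.7, return_full_text=False)
print(result[0]["generated_text"])  # completion only, without the prompt
```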

βš™οΈ Usage with llama.cpp

You can run the model using tools in the llama.cpp ecosystem. Make sure you have the `.gguf` version of the model (e.g., `codellama-leetcode.gguf`).
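
If you need to produce the GGUF yourself (for example, at a different precision), llama.cpp ships a conversion script. A sketch, assuming you have cloned llama.cpp as shown above and downloaded the full Transformers checkpoint locally:

```sh
# Run from the llama.cpp checkout
pip install -r requirements.txt
python convert_hf_to_gguf.py /path/to/codellama-leetcode-finetuned \
  --outfile codellama-leetcode.gguf --outtype f16
```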

### 🐍 Using llama-cpp-python

Install:

```sh
pip install llama-cpp-python
```

Then use:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="codellama-leetcode.gguf",
    n_ctx=4096,
    n_gpu_layers=99  # adjust based on your GPU
)

prompt = """### Problem
Given an array of integers, return indices of the two numbers such that they add up to a specific target.

## Solution
"""

output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```
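
llama-cpp-python also exposes an OpenAI-style chat API. A minimal sketch using `create_chat_completion` (the message phrasing is illustrative; this fine-tune was trained on problem/solution prompts, so the plain completion call above may track the training format more closely):

```python
out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Given an array of integers, return indices of the two "
                   "numbers that add up to a target.",
    }],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```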

### πŸ–₯️ Using llama-server

Start the server:

```sh
llama-server --model codellama-leetcode.gguf --port 8000 --n-gpu-layers 99
```

Then send a request:

```sh
curl http://localhost:8000/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "### Problem\nGiven an array of integers...\n\n## Solution\n",
    "n_predict": 256
  }'
```
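
Because llama-server is OpenAI-compatible, the official `openai` Python client can also talk to it under the `/v1` prefix. A sketch; the `api_key` is a placeholder since the local server does not check it by default, and the `model` field is informational:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.completions.create(
    model="codellama-leetcode",  # llama-server serves whatever model it loaded
    prompt="### Problem\nGiven an array of integers, return indices of the "
           "two numbers that add up to a target.\n\n## Solution\n",
    max_tokens=256,
)
print(resp.choices[0].text)
```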