Instructions to use seniruk/commitGen-gguf with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use seniruk/commitGen-gguf with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="seniruk/commitGen-gguf",
    filename="commitGen.gguf",
)

# Pass the instruction and the git diff together as one user message
# ("<your git diff>" is a placeholder; see the Inference section below
# for the full prompt format):
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "create a commit message for given git difference" + "<your git diff>"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use seniruk/commitGen-gguf with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf seniruk/commitGen-gguf

# Run inference directly in the terminal:
llama-cli -hf seniruk/commitGen-gguf
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf seniruk/commitGen-gguf

# Run inference directly in the terminal:
llama-cli -hf seniruk/commitGen-gguf
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf seniruk/commitGen-gguf

# Run inference directly in the terminal:
./llama-cli -hf seniruk/commitGen-gguf
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf seniruk/commitGen-gguf

# Run inference directly in the terminal:
./build/bin/llama-cli -hf seniruk/commitGen-gguf
```
Use Docker
```sh
docker model run hf.co/seniruk/commitGen-gguf
```
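However the server is started, `llama-server` exposes an OpenAI-compatible API (by default at `http://localhost:8080/v1`), so the model can also be called programmatically. A minimal sketch using only the standard library; the `build_chat_request` and `chat` helpers are illustrative, not part of llama.cpp, and assume the server was started with `-hf seniruk/commitGen-gguf` on the default port:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # llama-server's default address


def build_chat_request(diff: str) -> dict:
    """Assemble an OpenAI-style chat payload asking for a commit message."""
    return {
        "model": "seniruk/commitGen-gguf",
        "messages": [
            {
                "role": "user",
                # Instruction and diff are concatenated, matching the
                # prompt format shown in the Inference section.
                "content": "create a commit message for given git difference" + diff,
            }
        ],
        "temperature": 0.5,
    }


def chat(diff: str) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(diff)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Call `chat("<your git diff>")` while the server is running to get a generated commit message back.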
- LM Studio
- Jan
- Ollama
How to use seniruk/commitGen-gguf with Ollama:
```sh
ollama run hf.co/seniruk/commitGen-gguf
```
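Ollama also serves a local REST API (by default at `http://localhost:11434`), so the same model can be used from code once it has been pulled. A hedged sketch; the `commit_message_request` and `generate` helpers are illustrative and assume the default port and the model name used by `ollama run` above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint
MODEL = "hf.co/seniruk/commitGen-gguf"          # same name as used by `ollama run`


def commit_message_request(diff: str) -> dict:
    """Assemble an Ollama chat request asking for a commit message."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": "create a commit message for given git difference" + diff,
            }
        ],
        "stream": False,  # return one JSON object instead of streamed chunks
    }


def generate(diff: str) -> str:
    """Send the request to the local Ollama server and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(commit_message_request(diff)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```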
- Unsloth Studio
How to use seniruk/commitGen-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for seniruk/commitGen-gguf to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for seniruk/commitGen-gguf to start chatting.
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for seniruk/commitGen-gguf to start chatting.
- Pi
How to use seniruk/commitGen-gguf with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf seniruk/commitGen-gguf
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "seniruk/commitGen-gguf" }
      ]
    }
  }
}
```

Run Pi

```sh
# Start Pi in your project directory:
pi
```
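The models.json fragment is plain JSON, so a quick parse check before launching Pi catches trailing commas or mismatched braces. A small sketch; the `CONFIG` string simply repeats the fragment shown above:

```python
import json

# The models.json fragment from the Pi configuration step above.
CONFIG = """
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [{ "id": "seniruk/commitGen-gguf" }]
    }
  }
}
"""

config = json.loads(CONFIG)  # raises ValueError on malformed JSON
provider = config["providers"]["llama-cpp"]
assert provider["baseUrl"].endswith("/v1")
assert provider["models"][0]["id"] == "seniruk/commitGen-gguf"
```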
- Hermes Agent
How to use seniruk/commitGen-gguf with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf seniruk/commitGen-gguf
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default seniruk/commitGen-gguf
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use seniruk/commitGen-gguf with Docker Model Runner:
```sh
docker model run hf.co/seniruk/commitGen-gguf
```
- Lemonade
How to use seniruk/commitGen-gguf with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull seniruk/commitGen-gguf
```
Run and chat with the model
```sh
lemonade run user.commitGen-gguf-{{QUANT_TAG}}
```

List all available models

```sh
lemonade list
```
Hi, I'm Seniru Epasinghe
I'm an AI undergraduate and AI enthusiast, working on machine learning projects and open-source contributions.
I enjoy exploring AI pipelines, natural language processing, and building tools that make development easier.
Connect with me
Purpose
Generates high-quality commit messages for a given git diff.
Model Description
Generated by fine-tuning Qwen2.5-Coder-1.5B-Instruct on the bigcode/commitpackft dataset for 2 epochs.
- Trained on a total of 277 languages.
- Achieved a final training loss in the range of 1.0 to 1.7 (the dataset does not contain an equal number of rows for each language).
- For common languages (Python, Java, JavaScript, C, etc.) the loss reached a minimum of 1.0335.
Environmental Impact
- Hardware Type: GeForce RTX 4060 Ti (16 GB)
- Hours used: 10 Hours
- Cloud Provider: local
Results
Inference
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="seniruk/commitGen-gguf",
    filename="commitGen.gguf",
)

diff = ""         # the git diff
instruction = ""  # the instruction, e.g. 'create a commit message for given git difference'
prompt = "{}{}".format(instruction, diff)

messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt},
]

output = llm.create_chat_completion(
    messages=messages,
    temperature=0.5,
)

llm_message = output['choices'][0]['message']['content']
print(llm_message)
```
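To run the example above against a real repository, the diff can be read straight from git. A small sketch, assuming it runs inside a repo with staged changes; the `get_staged_diff` and `build_prompt` helpers are illustrative, not part of the model card:

```python
import subprocess

# The instruction string the prompt format above expects.
INSTRUCTION = "create a commit message for given git difference"


def get_staged_diff() -> str:
    """Return the staged changes as a unified diff string."""
    result = subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def build_prompt(diff: str) -> str:
    """Concatenate instruction and diff, matching the training format above."""
    return "{}{}".format(INSTRUCTION, diff)


# Usage: prompt = build_prompt(get_staged_diff())
# then pass it as the user message to llm.create_chat_completion.
```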


```sh
# Gated model: log in with a HF token that has gated-access permission
hf auth login
```