How to use from Unsloth Studio

Install Unsloth Studio (macOS, Linux, WSL):

```sh
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for seniruk/commitGen-gguf to start chatting
```

Install Unsloth Studio (Windows):

```powershell
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for seniruk/commitGen-gguf to start chatting
```

Using Hugging Face Spaces for Unsloth:

```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for seniruk/commitGen-gguf to start chatting
```

Quick Links
Hi, I'm Seniru Epasinghe 👋
I'm an AI undergraduate and AI enthusiast, working on machine learning projects and open-source contributions.
I enjoy exploring AI pipelines, natural language processing, and building tools that make development easier.
Connect with me
Purpose
Generates high-quality commit messages for a given git diff.
Model Description
- Generated by fine-tuning Qwen2.5-Coder-1.5B-Instruct on the bigcode/commitpackft dataset for 2 epochs.
- Trained on a total of 277 languages.
- Achieved a final training loss in the range of 1.0 to 1.7 (the dataset does not contain an equal number of rows for each language).
- For common languages (Python, Java, JavaScript, C, etc.) the loss reached a minimum of 1.0335.
Environmental Impact
- Hardware Type: GeForce RTX 4060 Ti (16 GB)
- Hours Used: 10 hours
- Cloud Provider: none (trained locally)
Results
Inference
```python
from llama_cpp import Llama

# Download and load the GGUF model from the Hugging Face Hub
llm = Llama.from_pretrained(
    repo_id="seniruk/commitGen-gguf",
    filename="commitGen.gguf",
)

diff = ""  # the git diff
instruction = ""  # the instruction, e.g. 'create a commit message for the given git difference'
prompt = "{}{}".format(instruction, diff)

messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt},
]

output = llm.create_chat_completion(
    messages=messages,
    temperature=0.5,
)
llm_message = output["choices"][0]["message"]["content"]
print(llm_message)
```
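To fill the `diff` and `instruction` placeholders from a real repository, the staged diff can be read with `git`. A minimal sketch; the helper names, the `git diff --staged` choice, and the sample instruction text are assumptions for illustration, not part of the model card:

```python
import subprocess

# Example instruction in the style the card describes (assumed wording)
INSTRUCTION = "create a commit message for the given git difference"

def get_staged_diff() -> str:
    """Return the staged diff of the current git repository as text."""
    result = subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

def build_prompt(instruction: str, diff: str) -> str:
    """Concatenate instruction and diff, matching the card's "{}{}" prompt format."""
    return "{}{}".format(instruction, diff)
```

Usage would then be `prompt = build_prompt(INSTRUCTION, get_staged_diff())` before building `messages` as above.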
```sh
# Gated model: log in with a Hugging Face token that has gated-access permission
hf auth login
```