KiteResolve-20B is a fine-tuned version of GPT-OSS-20B built for automated Git merge conflict resolution. It turns the tedious, manual work of resolving conflicts into an automated workflow that understands code semantics across multiple programming languages.
## ✨ Key Features

- 🎯 **22% Exact Match Accuracy** on real-world merge conflicts
- 📈 **12% Relative Token-F1 Improvement** over the base model (0.617 vs. 0.549)
- 🌐 **Multi-Language Support**: Java, JavaScript, Python, C#, TypeScript, and more
- ⚡ **Fast Inference**: Optimized for CLI and webhook integrations (see the sketch below)
- 🔧 **Production Ready**: Designed for enterprise Git workflows
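The CLI path is the simplest to wire up. As a hypothetical sketch (none of these helpers ship with the model), the snippet below lists files Git has left in a conflicted state and rewrites each conflict hunk with the model's resolution; `resolve_conflict` stands in for any of the inference backends shown under Usage.

```python
import re
import subprocess

def conflicted_files() -> list[str]:
    """List files Git has left in a conflicted (unmerged) state."""
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=U"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

# Match one whole conflict hunk, markers included (standard two-way
# markers only; diff3-style hunks would need a different pattern).
CONFLICT_RE = re.compile(r"<<<<<<<.*?=======.*?>>>>>>>[^\n]*", re.DOTALL)

def resolve_file(path: str, resolve_conflict) -> None:
    """Replace each conflict hunk with the model's resolution.

    `resolve_conflict(hunk) -> str` is a stand-in for whichever
    inference backend you use from the Usage section below.
    """
    text = open(path, encoding="utf-8").read()
    resolved = CONFLICT_RE.sub(lambda m: resolve_conflict(m.group(0)), text)
    open(path, "w", encoding="utf-8").write(resolved)
```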
## 📊 Performance Metrics

| Model | Exact Match | Token F1 | BLEU | ROUGE-L | Char Sim |
|---|---|---|---|---|---|
| codellama:13b | 0.00 | 0.193 | 13.28 | 0.208 | 0.710 |
| llama3.1:8b | 0.04 | 0.583 | 50.59 | 0.610 | 0.818 |
| gpt-oss:20b | 0.24 | 0.549 | 47.19 | 0.572 | 0.736 |
| **KiteResolve-20B** | 0.22 | **0.617** | **50.82** | 0.586 | 0.765 |

*Evaluated on 50 held-out samples from real-world merge conflicts.*
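For context on the metrics: Exact Match scores 1 only when the generated resolution matches the reference exactly, and Token F1 is presumably the standard token-overlap F1. A minimal sketch of that computation, assuming whitespace tokenization (the actual evaluation may tokenize differently):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 (an assumption about the metric used here)."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    # Multiset intersection counts shared tokens with multiplicity
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```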
## 🛠️ Usage

### Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from unsloth.chat_templates import get_chat_template

# Load the model and apply the gpt-oss chat template
model = AutoModelForCausalLM.from_pretrained("SoarAILabs/KiteResolve-20B")
tokenizer = AutoTokenizer.from_pretrained("SoarAILabs/KiteResolve-20B")
tokenizer = get_chat_template(tokenizer, chat_template="gpt-oss")

# Resolve a merge conflict
conflict = """<<<<<<< ours
function calculateTotal(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}
=======
function calculateTotal(items) {
  return items.map(item => item.price).reduce((a, b) => a + b, 0);
}
>>>>>>> theirs"""

messages = [{"role": "user", "content": f"Resolve this merge conflict:\n```{conflict}```"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)

# Decode only the newly generated tokens, not the echoed prompt
resolution = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(resolution)
```
### Ollama 🦙️

```bash
# Pull and run the Q4_K_M GGUF quantization directly from Hugging Face
ollama run hf.co/SoarAILabs/KiteResolve-20B:Q4_K_M
```
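Once pulled, the model is also reachable through Ollama's local REST API. A minimal sketch, assuming the default model name Ollama assigns to Hugging Face pulls and a placeholder conflict in the prompt:

```python
import json
import urllib.request

payload = {
    # Model name assumed from the `ollama run` command above
    "model": "hf.co/SoarAILabs/KiteResolve-20B:Q4_K_M",
    "prompt": "Resolve this merge conflict:\n```<<<<<<< ours\n...\n>>>>>>> theirs```",
    "stream": False,  # return one JSON object instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```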
### llama-cpp-python

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SoarAILabs/KiteResolve-20B",
    filename="model-q4_k_m.gguf",
)
```
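From there, inference goes through llama-cpp-python's chat API; a short usage example with a placeholder conflict:

```python
# Chat-style inference with the loaded GGUF model
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Resolve this merge conflict:\n```...```"}],
    max_tokens=200,
    temperature=0.0,  # greedy decoding, matching the Quick Start above
)
print(response["choices"][0]["message"]["content"])
```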