How to use with llama.cpp

Install from WinGet (Windows)

```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SoarAILabs/KiteResolve-20B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SoarAILabs/KiteResolve-20B:Q4_K_M
```

Use a pre-built binary

```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf SoarAILabs/KiteResolve-20B:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf SoarAILabs/KiteResolve-20B:Q4_K_M
```

Build from source

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf SoarAILabs/KiteResolve-20B:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf SoarAILabs/KiteResolve-20B:Q4_K_M
```

Use Docker

```shell
docker model run hf.co/SoarAILabs/KiteResolve-20B:Q4_K_M
```
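Once `llama-server` is running, it exposes an OpenAI-compatible API. Below is a minimal client sketch using only the Python standard library, assuming the server's default port 8080; the `build_payload` and `resolve` helper names are illustrative, not part of any published API:

```python
import json
from urllib import request

# llama-server's default port; adjust if you started it with --port.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(conflict: str) -> dict:
    """Wrap a conflict block in the same chat format as the Quick Start example."""
    return {
        "messages": [
            {
                "role": "user",
                "content": f"Resolve this merge conflict:\n```{conflict}```",
            }
        ],
        "temperature": 0.0,  # deterministic output is preferable for code
        "max_tokens": 200,
    }

def resolve(conflict: str) -> str:
    """Send one conflict to the local server and return the model's resolution."""
    req = request.Request(
        SERVER_URL,
        data=json.dumps(build_payload(conflict)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same payload works against any OpenAI-compatible endpoint, so the client does not change if you later move from a local server to a hosted one.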
🪁 KiteResolve-20B: AI-Powered Merge Conflict Resolution
Developed by Soar AI Labs
🚀 Model Description
KiteResolve-20B is a fine-tuned version of GPT-OSS-20B specifically engineered for automated Git merge conflict resolution. This model transforms the tedious process of manually resolving merge conflicts into an intelligent, automated workflow that understands code semantics across multiple programming languages.
✨ Key Features
- 🎯 22% Exact Match Accuracy on real-world merge conflicts
- 📈 12% Token-F1 Score Improvement over base model
- 🌐 Multi-Language Support: Java, JavaScript, Python, C#, TypeScript, and more
- ⚡ Fast Inference: Optimized for CLI and webhook integrations
- 🔧 Production Ready: Designed for enterprise Git workflows
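In a workflow like the one described above, conflict hunks first have to be pulled out of a conflicted file before they can be handed to the model. A minimal sketch, assuming standard two-way conflict markers (no diff3 base section); the helper name is illustrative:

```python
import re

# Matches one standard conflict hunk:
# <<<<<<< <label> ... ======= ... >>>>>>> <label>
CONFLICT_RE = re.compile(
    r"<<<<<<<[^\n]*\n(.*?)=======\n(.*?)>>>>>>>[^\n]*\n?",
    re.DOTALL,
)

def extract_conflicts(text: str):
    """Return (ours, theirs) pairs for every conflict hunk in a file."""
    return [(m.group(1), m.group(2)) for m in CONFLICT_RE.finditer(text)]
```

Each `(ours, theirs)` pair can then be formatted into the prompt shown in the Quick Start section.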
📊 Performance Metrics
| Model | Exact Match | Token F1 | BLEU | ROUGE-L | Char Sim |
|---|---|---|---|---|---|
| codellama:13b | 0.00 | 0.193 | 13.28 | 0.208 | 0.710 |
| llama3.1:8b | 0.04 | 0.583 | 50.59 | 0.610 | 0.818 |
| gpt-oss:20b | 0.24 | 0.549 | 47.19 | 0.572 | 0.736 |
| KiteResolve-20B | 0.22 | 0.617 | 50.82 | 0.586 | 0.765 |
Evaluated on 50 held-out samples from real-world merge conflicts.
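The exact evaluation script behind the Token F1 column is not published here, but one common definition of token-level F1, computed over the multiset of whitespace-separated tokens, can be sketched as:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """F1 over the multiset of whitespace tokens (one common definition)."""
    pred, ref = prediction.split(), reference.split()
    if not pred or not ref:
        # Both empty counts as a perfect match; otherwise no overlap.
        return float(pred == ref)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Unlike exact match, this metric gives partial credit when the model's resolution is close but not byte-identical to the reference.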
🛠️ Usage
Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from unsloth.chat_templates import get_chat_template

# Load the model
model = AutoModelForCausalLM.from_pretrained("SoarAILabs/KiteResolve-20B")
tokenizer = AutoTokenizer.from_pretrained("SoarAILabs/KiteResolve-20B")
tokenizer = get_chat_template(tokenizer, chat_template="gpt-oss")

# Resolve a merge conflict
conflict = """
<<<<<<< ours
function calculateTotal(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}
=======
function calculateTotal(items) {
  return items.map(item => item.price).reduce((a, b) => a + b, 0);
}
>>>>>>> theirs
"""

messages = [{"role": "user", "content": f"Resolve this merge conflict:\n```{conflict}```"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer([prompt], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)

# Decode only the newly generated tokens, not the echoed prompt
resolution = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(resolution)
```
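To complete the loop, the generated resolution has to replace the conflict markers in the working file. A minimal sketch that overwrites the first standard conflict hunk; the function name is illustrative:

```python
import re

# One standard conflict hunk, markers included
CONFLICT_RE = re.compile(r"<<<<<<<[^\n]*\n.*?>>>>>>>[^\n]*\n?", re.DOTALL)

def apply_resolution(file_text: str, resolution: str) -> str:
    """Replace the first conflict hunk with the model's resolution."""
    if not resolution.endswith("\n"):
        resolution += "\n"
    # A callable replacement keeps backslashes in the resolution literal.
    return CONFLICT_RE.sub(lambda _: resolution, file_text, count=1)
```

Repeating this once per hunk, validating each result (e.g. by running the project's tests), turns the model into a reviewable pre-commit step rather than a blind auto-merge.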
Ollama 🦙️
```shell
ollama run hf.co/SoarAILabs/KiteResolve-20B/model-q4_k_m.gguf
```
Evaluation results (self-reported)
- Exact Match: 22.0
- Token F1: 0.617
- BLEU: 50.82
- ROUGE-L: 58.64
- Levenshtein Similarity: 0.549
- Character Similarity: 0.765
Install from brew (macOS/Linux)

```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf SoarAILabs/KiteResolve-20B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf SoarAILabs/KiteResolve-20B:Q4_K_M
```