# sixfinger-phi2-merged
This model, created by Six Finger Dev (Enes Altıparmak), is a fine-tuned and merged version of Microsoft Phi-2. It is a 2.7-billion-parameter causal language model tuned to perform well on Turkish question answering (QA), reasoning, and basic coding tasks.
## Model Details
- Developer: Six Finger Dev (Enes Altıparmak - Kayseri Science High School)
- Architecture: Phi-2 Causal LM
- Parameters: ~2.7B
- Languages: Turkish (TR), English (EN)
- License: MIT
## Training & Optimization
This model was likely fine-tuned with QLoRA on a custom Turkish instruction and multi-turn QA dataset (e.g., sixfingerdev/turkish-qa-multi-dialog-dataset). After fine-tuning, the PEFT adapters were merged back into the base model weights, so the model can be loaded directly as a standalone checkpoint without the base model or any adapter configuration.
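The sketch below illustrates how such a merge is typically performed with the `peft` library. It is a minimal example under assumed defaults, not the exact command used for this model; the adapter path is a hypothetical placeholder.

```python
# Illustrative sketch: merging a QLoRA/LoRA adapter back into the Phi-2 base
# weights with peft. The adapter path below is hypothetical.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")  # hypothetical adapter repo/path
model = model.merge_and_unload()  # fold the LoRA deltas into the base weights
model.save_pretrained("sixfinger-phi2-merged")  # save a standalone checkpoint
```

After `merge_and_unload()`, the saved checkpoint behaves like an ordinary `transformers` model and no longer depends on `peft` at inference time.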
## Usage
You can load this model and generate text directly with the `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sixfingerdev/sixfinger-phi2-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place the model on GPU if one is available
    torch_dtype=torch.float16,  # load weights in half precision to reduce memory use
    low_cpu_mem_usage=True,
)

# The model responds best to the "Soru: ... Cevap:" prompt-completion format.
prompt = "Soru: Türkiye'nin başkenti neresidir? Cevap:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=40)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Limitations & Biases
Although the model was fine-tuned on instruction data, its behavior still relies heavily on prompt-completion formatting. Explicit cues such as `Cevap:` (or `Answer:`) yield the most consistent outputs. In unstructured or lengthy multi-turn chat loops, the model may drift into repetition or lose its output format, unlike models trained on purely conversational templates.
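As an illustration, the sketch below combines the `Cevap:` cue with greedy decoding and standard `transformers` anti-repetition options. The specific parameter values are assumptions for demonstration, not settings published with the model.

```python
# Hypothetical mitigation sketch: prompt-completion cue plus greedy decoding
# and anti-repetition options; the parameter values are illustrative only.
prompt = "Soru: Python'da bir listeyi tersine çevirmenin en kısa yolu nedir? Cevap:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=False,           # greedy decoding for deterministic answers
    repetition_penalty=1.15,   # discourage verbatim loops (illustrative value)
    no_repeat_ngram_size=4,    # block repeated 4-gram sequences
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```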