---
language:
- tr
- en
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- turkish
- mistral
- instruction-tuned
- sft
- tr
- reasoning
- conversational
- low-resource
- turkish-nlp
datasets:
- ogulcanaydogan/Turkish-LLM-v10-Training
pipeline_tag: text-generation
---
# Turkish-LLM-7B-Instruct
A Turkish-enhanced 7B language model fine-tuned from Mistral-7B-Instruct on curated Turkish instruction data.
Part of the Turkish LLM Family.
## Highlights
- Lightweight - runs on consumer GPUs (8GB+ VRAM with quantization)
- GGUF available - Q4/Q5/Q8 quantizations
- Live demo - Try it on Spaces
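
The 8GB+ VRAM figure above assumes a quantized load. With Transformers this can be sketched via `bitsandbytes`; the `BitsAndBytesConfig` values below are common illustrative defaults, not settings published for this model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit (NF4) config; requires the bitsandbytes package and a CUDA GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ogulcanaydogan/Turkish-LLM-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```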
## Quick Start
### With Ollama

```shell
ollama run hf.co/ogulcanaydogan/Turkish-LLM-7B-Instruct-GGUF:Q4_K_M
```
### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ogulcanaydogan/Turkish-LLM-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ogulcanaydogan/Turkish-LLM-7B-Instruct")

messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
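
`apply_chat_template` fills in the chat format for you; for runtimes without a chat-template helper, the prompt can be assembled by hand. This sketch assumes the model keeps the Mistral-7B-Instruct-v0.2 template it was fine-tuned from (`[INST] … [/INST]` turns, no system role); check the tokenizer's `chat_template` before relying on it:

```python
def build_mistral_prompt(messages):
    """Assemble a Mistral-Instruct-style prompt from a list of
    {"role": ..., "content": ...} dicts: user turns are wrapped in
    [INST] ... [/INST], assistant turns are closed with </s>."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            prompt += f" {msg['content']}</s>"
    return prompt

messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
print(build_mistral_prompt(messages))
# → <s>[INST] Türkiye'nin başkenti neresidir? [/INST]
```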
## Turkish LLM Family
| Model | Size | GGUF |
|---|---|---|
| Turkish-LLM-7B | 7B | Download |
| Turkish-LLM-14B | 14B | Download |
| Turkish-LLM-32B | 32B | Download |
## Citation
```bibtex
@misc{aydogan2026turkishllm,
  title={Turkish LLM Family: Open-Source Turkish Language Models},
  author={Ogulcan Aydogan},
  year={2026},
  url={https://huggingface.co/collections/ogulcanaydogan/turkish-llm-family-69b303b4ef1c36caffca4e94}
}
```