---
language:
  - tr
  - en
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
  - turkish
  - mistral
  - instruction-tuned
  - sft
  - tr
  - reasoning
  - conversational
  - low-resource
  - turkish-nlp
datasets:
  - ogulcanaydogan/Turkish-LLM-v10-Training
pipeline_tag: text-generation
---

# Turkish-LLM-7B-Instruct

A Turkish-enhanced 7B language model fine-tuned from Mistral-7B-Instruct on curated Turkish instruction data.

Part of the Turkish LLM Family.

## Highlights

## Quick Start

### With Ollama

```bash
ollama run hf.co/ogulcanaydogan/Turkish-LLM-7B-Instruct-GGUF:Q4_K_M
```

### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ogulcanaydogan/Turkish-LLM-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ogulcanaydogan/Turkish-LLM-7B-Instruct")

# Format the conversation with the model's chat template,
# leaving the assistant turn open for generation.
messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
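For readers curious what `apply_chat_template` produces here, a minimal sketch of the Mistral-style instruction format, assuming this model inherits the default template of its Mistral-7B-Instruct base (the tokenizer's own template is authoritative):

```python
def build_prompt(messages):
    """Sketch of the Mistral instruct chat format: user turns are wrapped
    in [INST] ... [/INST]; assistant turns are closed with </s>.
    This assumes the base model's default template."""
    parts = ["<s>"]
    for m in messages:
        if m["role"] == "user":
            parts.append(f"[INST] {m['content']} [/INST]")
        else:  # assistant turn from earlier in the conversation
            parts.append(f"{m['content']}</s>")
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}])
# → "<s>[INST] Türkiye'nin başkenti neresidir? [/INST]"
```

The string ends after `[/INST]`, which is what `add_generation_prompt=True` arranges: the model then generates the assistant reply from that point.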

## Turkish LLM Family

## Citation

```bibtex
@misc{aydogan2026turkishllm,
  title={Turkish LLM Family: Open-Source Turkish Language Models},
  author={Ogulcan Aydogan},
  year={2026},
  url={https://huggingface.co/collections/ogulcanaydogan/turkish-llm-family-69b303b4ef1c36caffca4e94}
}
```