---

license: mit
datasets:
- ZeynepAltundal/w
language:
- tr
base_model:
- ytu-ce-cosmos/turkish-gpt2-medium-350m-instruct-v0.1
pipeline_tag: text-generation
library_name: transformers
tags:
- Turkish
- Fine-tuned
- Question-Answering
- GPT-2
---

# Model Overview:
This model is a fine-tuned version of ytu-ce-cosmos/turkish-gpt2-medium-350m-instruct-v0.1, adapted for Turkish question answering (Q&A). It was fine-tuned on a custom dataset generated from Turkish Wikipedia articles, with a focus on factual knowledge.

- **Base Model:** ytu-ce-cosmos/turkish-gpt2-medium-350m-instruct-v0.1
- **Fine-Tuning Dataset:** Custom Turkish Q&A dataset
- **Evaluation Loss:** 2.1461 (on the validation split)
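
Assuming the reported loss is the standard mean per-token cross-entropy, it corresponds to a validation perplexity of exp(2.1461) ≈ 8.55:

```python
import math

eval_loss = 2.1461  # mean cross-entropy per token (assumed)
print(f"Validation perplexity: {math.exp(eval_loss):.2f}")  # ≈ 8.55
```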


## Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "./fine_tuned_model"  # Replace with your Hugging Face model path if uploaded

# Load the fine-tuned tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

question = "Kamu sosyolojisi nedir?"  # "What is public sociology?"

input_ids = tokenizer(question, return_tensors="pt").input_ids

# do_sample=True is required for temperature to take effect;
# without it, generate() falls back to greedy decoding.
output = model.generate(
    input_ids=input_ids,
    max_length=50,
    num_return_sequences=1,
    do_sample=True,
    temperature=0.7,
)

response = tokenizer.decode(output[0], skip_special_tokens=True)
print(f"Question: {question}")
print(f"Answer: {response}")
```
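
As a lighter-weight alternative, the same model can be driven through the transformers `pipeline` API; a minimal sketch, using the same placeholder model path as above:

```python
from transformers import pipeline

# Placeholder path; replace with the Hugging Face model ID if uploaded
generator = pipeline("text-generation", model="./fine_tuned_model")

result = generator(
    "Kamu sosyolojisi nedir?",
    max_length=50,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```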

## Training Details:
- **Dataset Source:** Custom dataset generated from Turkish Wikipedia
- **Total Examples:** 2,606
- **Training Split:** 2,084 (80%)
- **Validation Split:** 522 (20%)
- **Epochs:** 3
- **Batch Size:** 8
- **Learning Rate:** 5e-5
- **Evaluation Loss:** 2.1461
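
The training script itself is not included in this card. The following is a minimal sketch of how the reported hyperparameters (3 epochs, batch size 8, learning rate 5e-5, 80/20 split) could be wired up with the transformers `Trainer` API; the `text` column name and dataset schema are assumptions, as the actual Q&A format is not documented here.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "ytu-ce-cosmos/turkish-gpt2-medium-350m-instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# GPT-2 tokenizers ship without a pad token; reuse EOS for padding
tokenizer.pad_token = tokenizer.eos_token

# 80/20 train/validation split, matching the reported sizes
dataset = load_dataset("ZeynepAltundal/w")["train"].train_test_split(test_size=0.2)

def tokenize(batch):
    # "text" is a hypothetical column name for the formatted Q&A pairs
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

# Causal LM objective: labels are shifted copies of the inputs (mlm=False)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="./fine_tuned_model",
    num_train_epochs=3,              # reported value
    per_device_train_batch_size=8,   # reported value
    learning_rate=5e-5,              # reported value
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=collator,
)
trainer.train()
```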