---
license: apache-2.0
language:
- ta
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen
- deepseek
- text-generation-inference
---
# **Reasoning-Distilled-ta-7B**  

Reasoning-Distilled-ta-7B is built on *deepseek-ai/DeepSeek-R1-Distill-Qwen-7B*, a Qwen-based model distilled from DeepSeek-R1. It has been fine-tuned on specialized datasets focused on **Tamil language-based reasoning** and chain-of-thought (CoT) problem solving. The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem solving in Tamil, making it well suited for instruction following, text generation, and complex reasoning applications for Tamil-speaking users.  

# **Quickstart with Transformers**  

Here is a code snippet using `apply_chat_template` to show you how to load the tokenizer and model and generate content in Tamil:  

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Reasoning-Distilled-ta-7B"

# Load the model and tokenizer; device_map="auto" places the weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prompt (Tamil): "Give a short introduction to large language models."
prompt = "பெரிய மொழி மாதிரிகள் பற்றி ஒரு சிறிய அறிமுகத்தை தரவும்."
# System message (Tamil): "You are Reasoning-Distilled-ta-7B, created by DeepSeek-AI.
# You are a powerful Tamil reasoning assistant."
messages = [
    {"role": "system", "content": "நீங்கள் DeepSeek-AI மூலம் உருவாக்கப்பட்ட Reasoning-Distilled-ta-7B. நீங்கள் ஒரு சக்திவாய்ந்த தமிழ் பகுத்தறிவு உதவியாளர்."},
    {"role": "user", "content": prompt}
]

# Format the conversation with the model's chat template, then tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate up to 512 new tokens.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
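
If you prefer to see tokens as they are produced instead of waiting for the full completion, a minimal streaming variant can reuse the `model`, `tokenizer`, and `model_inputs` from the snippet above with `TextStreamer`; the settings shown are illustrative:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated,
# skipping the prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer
)
```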

### **Intended Use:**  
1. **Tamil Language Instruction-Following:** The model excels in understanding and executing detailed instructions in Tamil, making it ideal for automation systems, virtual assistants, and educational tools tailored for Tamil-speaking users.  
2. **Tamil Text Generation:** It can produce coherent, logically structured, and contextually relevant text in Tamil for use in content creation, summarization, and report writing (see the `pipeline` sketch after this list).  
3. **Complex Reasoning Tasks in Tamil:** With its fine-tuning for chain-of-thought reasoning, the model is well-suited for multi-step problem-solving, logical deduction, and question-answering tasks in Tamil.  
4. **Research and Development:** It can support researchers and developers in exploring advancements in Tamil language processing, logical reasoning, and fine-tuning methodologies.  
5. **Educational Applications:** The model can assist in teaching logical reasoning and problem-solving in Tamil by generating step-by-step solutions.  
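
For plain Tamil text generation, the model can also be driven through the high-level `pipeline` API. The following is a minimal sketch; the sampling parameters are illustrative, and the prompt is the same one used in the quickstart above:

```python
from transformers import pipeline

# High-level text-generation pipeline; loads the model and tokenizer in one call.
generator = pipeline(
    "text-generation",
    model="prithivMLmods/Reasoning-Distilled-ta-7B",
    torch_dtype="auto",
    device_map="auto",
)

# Prompt (Tamil): "Give a short introduction to large language models."
prompt = "பெரிய மொழி மாதிரிகள் பற்றி ஒரு சிறிய அறிமுகத்தை தரவும்."

# Illustrative sampling settings; adjust max_new_tokens and temperature as needed.
outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```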

### **Limitations:**  
1. **Domain-Specific Knowledge:** While fine-tuned on reasoning datasets, the model may lack deep expertise in highly specialized or technical domains in Tamil.  
2. **Hallucination:** Like many large language models, it can generate incorrect or fabricated information, especially when reasoning beyond its training data.  
3. **Bias in Training Data:** The model's outputs may reflect biases present in the datasets it was fine-tuned on, which could limit its objectivity in certain contexts.  
4. **Performance on Non-Reasoning Tasks:** The model is optimized for chain-of-thought reasoning and may underperform on tasks that require simpler, less structured responses.  
5. **Resource-Intensive:** Running the model efficiently requires significant computational resources, which may limit accessibility for smaller-scale deployments (a quantized-loading sketch is shown after this list).  
6. **Dependence on Input Quality:** The model’s performance heavily depends on the clarity and quality of the input provided. Ambiguous or poorly structured prompts may yield suboptimal results.  
7. **Limited Multilingual Support:** While optimized for Tamil, the model may not perform as well in other languages, especially those with significantly different linguistic structures.  
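
For smaller GPUs, one common way to reduce the memory footprint is 4-bit quantization with `bitsandbytes`. This is a minimal sketch under the assumption that `bitsandbytes` is installed and a CUDA GPU is available; expect some trade-off in generation quality:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Reasoning-Distilled-ta-7B"

# Quantize the weights to 4-bit NF4 while computing in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```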

This model is designed to empower Tamil-speaking users with advanced reasoning and text-generation capabilities, while also addressing the unique challenges of working with the Tamil language.