---
license: apache-2.0
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- code
- reasoning
- R1
---

![vvvvvvvvvvv.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/cFNcCXQNciTqiIqEdtggJ.png)

# **Magellanic-Qwen-14B-R1**

> **Magellanic-Qwen-14B-R1** is built on the **DeepSeek-R1-Distill-Qwen-14B** architecture and enhanced specifically for **mathematical reasoning** and **coding reasoning**. The model pushes the capabilities of 14B-parameter architectures, excelling at logic-based problem solving, programming tasks, and context-rich dialogue generation. It is fine-tuned with extended chain-of-thought reasoning and domain-specific datasets for improved comprehension, structured generation, and precision in technical tasks.

## **Key Improvements**
1. **Mathematical Reasoning Enhancements**  
   Optimized with datasets targeting arithmetic, algebra, calculus, and formal logic, improving step-by-step solution generation and explanation accuracy.  

2. **Coding Reasoning Enhancements**  
   Fine-tuned on diverse programming languages and reasoning-based coding problems (e.g., LeetCode, Codeforces, and real-world engineering tasks), significantly improving code generation, debugging, and documentation.  

3. **Enhanced General Knowledge**  
   A broad knowledge base across domains enables accurate, coherent responses on a wide range of topics.  

4. **Improved Instruction Following**  
   Better handling of complex, multi-step instructions with structured and logically coherent outputs.  

5. **Versatile Adaptability**  
   Resilient across open-ended and structured prompts, adapting well to different interaction styles and subject areas.  

6. **Long-Context Support**  
   Supports up to **128K tokens** of input context and can generate up to **8K tokens** of output, making it well suited to in-depth technical and academic writing (a token-budget sketch follows this list).  
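
As a rough illustration of how to work within that window, here is a minimal sketch of packing a long document into the prompt while reserving room for the output. It assumes the 8K output shares the 128K window; the file name and budget split are illustrative assumptions, not values published with the model.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Magellanic-Qwen-14B-R1")

MAX_CONTEXT = 128_000    # advertised input context
MAX_OUTPUT = 8_000       # advertised output budget
PROMPT_BUDGET = MAX_CONTEXT - MAX_OUTPUT

long_document = open("report.txt").read()  # hypothetical long input file

# Truncate the prompt so generation can run to the full output budget.
ids = tokenizer(long_document, truncation=True, max_length=PROMPT_BUDGET)["input_ids"]
print(f"Prompt uses {len(ids)} of {PROMPT_BUDGET} available prompt tokens")
```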

## **Quickstart with transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Magellanic-Qwen-14B-R1"

# Load the weights in their native precision and shard across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain how quicksort works with an example in Python."
messages = [
    {"role": "system", "content": "You are a helpful assistant skilled in coding and reasoning tasks."},
    {"role": "user", "content": prompt}
]

# Render the chat as a single string using the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
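
For interactive use, the same `model` and `tokenizer` can stream tokens to stdout as they are generated. A minimal sketch using `TextStreamer` from transformers; the sampling settings are illustrative, not recommendations from the model authors.

```python
from transformers import TextStreamer

# Prints decoded tokens as they are produced, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,  # illustrative value
    streamer=streamer,
)
```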

## **Intended Use**

1. **Mathematics and Logic Tasks**  
   Solve and explain math problems, logical puzzles, and formula-based reasoning tasks step by step (see the example after this list).  

2. **Programming and Development**  
   Assist in generating code, debugging, documenting functions, and solving algorithmic problems across multiple languages.  

3. **General-Purpose Reasoning**  
   Handle a wide variety of questions with accurate, contextual responses based on general knowledge and logic.  

4. **Educational Assistance**  
   Help students and educators with clear, structured explanations in STEM and non-STEM subjects.  

5. **Conversational AI & Chatbots**  
   Power intelligent assistants that require contextual awareness and technically sound responses.  

6. **Multilingual Applications**  
   Translate, summarize, and generate multilingual content for global users.  

7. **Long-Form Content Generation**  
   Generate coherent long articles, research summaries, and reports, especially with structured technical content.
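
As a concrete illustration of the first use case, the high-level `pipeline` API can be pointed at the model with a step-by-step math question. This is a hedged sketch: the prompt and generation length are arbitrary examples, and the chat-style output format assumes a recent transformers release.

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Magellanic-Qwen-14B-R1",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a careful math tutor. Reason step by step."},
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"},
]

out = pipe(messages, max_new_tokens=512)
# The pipeline returns the full chat; the last message is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```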

## **Limitations**

1. **High Resource Usage**  
   Requires high-memory GPUs/TPUs for efficient inference, especially when using the full 128K context (a quantization sketch follows this list).  

2. **Bias and Hallucination Risk**  
   May reflect biases from pretraining data and occasionally hallucinate plausible-sounding but incorrect facts.  

3. **Variability in Creative Tasks**  
   Less consistent in producing high-quality creative writing or highly subjective content.  

4. **Training Cutoff Constraints**  
   No access to real-world events beyond the last training snapshot.  

5. **Error Propagation in Long Outputs**  
   Minor early mistakes can compound in very long outputs.  

6. **Prompt Sensitivity**  
   Performance may vary depending on prompt clarity and structure.
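
Where memory is the binding constraint, quantized loading can reduce the footprint at some cost in quality. A minimal sketch using the bitsandbytes integration in transformers; the 4-bit settings shown are common defaults, not values validated for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Magellanic-Qwen-14B-R1"

# NF4 4-bit weights with bfloat16 compute: a common memory-saving configuration.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```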