---
license: mit
language:
  - ko
  - en
tags:
  - korean
  - reasoning
  - instruction-tuning
  - fine-tuning
  - llama
  - deepseek
  - distillation
  - sft
---

# 🧠 DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning

> A large-scale Korean reasoning model fine-tuned from **deepseek-ai/DeepSeek-R1-Distill-Llama-70B**, designed to excel in logical and multi-hop reasoning tasks in Korean.

---

## 📌 Overview

**DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning** is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B), specifically optimized for **logical reasoning in Korean**. This model is part of a broader research initiative to explore:

- The **transition from multilingual reasoning LLMs** to **Korean-specialized reasoning models**
- The enhancement of **non-reasoning Korean language models** into **reasoning-capable variants**
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks

This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.
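
The exact dataset schema is not published; as a purely hypothetical illustration, a single supervised record might look like the following, with the assistant turn carrying an explicit DeepSeek-R1-style `<think>` trace before the final answer (field names and content are illustrative only):

```python
# Hypothetical SFT record (illustrative schema, not the actual training data).
example = {
    "messages": [
        {
            "role": "user",
            # "A is bigger than B, and B is bigger than C. How do A and C relate?"
            "content": "A는 B보다 크고 B는 C보다 크다. A와 C의 관계는?",
        },
        {
            "role": "assistant",
            # Reasoning trace first, final answer after, in DeepSeek-R1 style.
            "content": "<think>A > B이고 B > C이므로 추이성에 의해 A > C.</think>\n"
                       "따라서 A는 C보다 큽니다.",  # "Therefore, A is bigger than C."
        },
    ]
}
```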

---

## 🧑‍💻 Usage

Install Transformers >= 4.50:

```bash
pip install -U transformers
```

Basic example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "서울과 부산 중 어디가 더 커?"  # "Which is bigger, Seoul or Busan?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Reasoning traces in <think> tags can be long; allow a generous budget.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
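
Running a 70B model in bf16 requires roughly 140 GB of GPU memory, so multi-GPU sharding or quantization is usually needed. As a minimal sketch (an illustrative option, not an officially validated configuration), the checkpoint can be loaded in 4-bit NF4 with `bitsandbytes`:

```python
# Requires: pip install bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "DimensionSTP/DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning"

# 4-bit NF4 quantization reduces the ~140 GB bf16 footprint to roughly 40 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Generation then proceeds exactly as in the basic example above; expect some quality loss relative to full precision.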

---

## 🧠 Base Model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B

The base model, [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B), is a chain-of-thought (CoT) reasoning LLM developed by the DeepSeek AI team by distilling DeepSeek-R1 reasoning traces into Llama 3.3 70B Instruct.
For more technical details, refer to the [DeepSeek-R1 technical report](https://arxiv.org/pdf/2501.12948).

---

## 🧱 Model Architecture

| Property         | Value                  |
|------------------|------------------------|
| Architecture     | LlamaForCausalLM       |
| Parameters       | 70B                    |
| Context Length   | 131,072 tokens         |
| Tokenizer        | LlamaTokenizer (BPE)   |
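
These values can be cross-checked against the checkpoint's `config.json` without downloading the weights (the comments show the values expected from the table above):

```python
from transformers import AutoConfig

# Fetches only config.json, not the 70B weight shards.
config = AutoConfig.from_pretrained(
    "DimensionSTP/DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning"
)

print(config.architectures)            # expected: ['LlamaForCausalLM']
print(config.max_position_embeddings)  # expected: 131072
```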

---

## 📅 Release Date

**Mar 2025**  
This model was released in March 2025 as part of the **Ko-Reasoning Series**, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs.

---

## 📬 Contact

For questions, collaborations, or deployment inquiries, please contact:

- 🤖 Hugging Face: [https://huggingface.co/DimensionSTP](https://huggingface.co/DimensionSTP)
- ✉️ Email: ddang8jh@gmail.com

---

## 📦 Available Checkpoints

- ✅ `main`: Final stable version from the `last` branch
- ✅ All training artifacts available (tokenizer, config, model weights)
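
For reproducible downloads, the `revision` argument of `from_pretrained` pins a specific branch, tag, or commit (shown here with the stable `main` branch):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning"

# revision accepts a branch name, tag, or full commit hash.
tokenizer = AutoTokenizer.from_pretrained(model_name, revision="main")
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    revision="main",
    torch_dtype="auto",
    device_map="auto",
)
```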