---
license: mit
language:
- ko
- en
tags:
- korean
- reasoning
- instruction-tuning
- fine-tuning
- llama
- deepseek
- distillation
- sft
---

# 🧠 DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning

> A large-scale Korean reasoning model fine-tuned from **deepseek-ai/DeepSeek-R1-Distill-Llama-8B**, designed to excel in logical and multi-hop reasoning tasks in Korean.

---

## 📌 Overview

**DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning** is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B), specifically optimized for **logical reasoning in Korean**. This model is part of a broader research initiative to explore:

- The **transition from multilingual reasoning LLMs** to **Korean-specialized reasoning models**
- The enhancement of **non-reasoning Korean language models** into **reasoning-capable variants**
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks

This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.

---

## 🧪 Benchmark Results

> - 📊 All benchmarks were measured using the **0-shot CoT (Chain-of-Thought)** method.
> - 📊 The **Score** represents either the **accuracy (%)** of correct answers or a rating on a **1-10 scale** from a judge model.
> - 📊 **LLM-as-a-judge** benchmarks were evaluated using **GPT-4o (2024-08-01-preview)**.

| **Benchmark** | **Score** |
|------------------|---------------|
| GPQA Diamond | 58.8 |
| GSM8K | 55.3 |
| HAERAE | 70.8 |
| KSM | 71.2 |
| LogicKor | 7.84 |
| Math500 | 81.4 |
| MT-Bench | 7.44 |
| MT-Bench (Ko) | 7.09 |
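
For context, the accuracy-style scores above boil down to the loop sketched below. This is only an illustration of the 0-shot CoT protocol, not the actual evaluation harness: `model_fn`, the prompt wording, the dataset schema, and the answer-extraction regex are all assumptions.

```python
import re

def zero_shot_cot_accuracy(model_fn, dataset):
    """Score with 0-shot CoT: ask for step-by-step reasoning, then
    extract a final answer and compare it with the reference."""
    correct = 0
    for example in dataset:  # assumed schema: {"question": str, "answer": str}
        # 0-shot CoT: no exemplars, just an instruction to reason step by step.
        prompt = f"{example['question']}\n\nLet's think step by step."
        completion = model_fn(prompt)
        # Illustrative extraction: take the last number in the completion.
        numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
        predicted = numbers[-1] if numbers else ""
        correct += predicted == example["answer"]
    return 100.0 * correct / len(dataset)
```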

---

## 🧑‍💻 Usage

Install Transformers >= 4.50:

```bash
pip install -U transformers
```

Basic example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning"

# Load weights in their native precision and shard across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "서울과 부산 중 어디가 더 커?"  # "Which is bigger, Seoul or Busan?"
messages = [
    {"role": "user", "content": prompt}
]
# Render the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generous budget so long reasoning traces are not truncated.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
# Drop the prompt tokens, keeping only the newly generated completion.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
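
DeepSeek-R1-style checkpoints typically emit their chain of thought before the final answer, terminated by a `</think>` delimiter. Assuming this fine-tune preserves that behavior and the delimiter survives decoding (verify against real outputs), the trace can be split from the answer, continuing from `response` above:

```python
# Assumption: the R1-style "</think>" delimiter appears in the decoded text.
reasoning, sep, answer = response.partition("</think>")
if sep:  # delimiter found: separate the trace from the final answer
    print("Reasoning trace:", reasoning.strip())
    print("Final answer:", answer.strip())
else:    # no delimiter: treat the whole response as the answer
    print("Final answer:", response.strip())
```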

---

## 🧠 Base Model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B

The base model, [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B), is a CoT LLM from the DeepSeek AI team, created by distilling DeepSeek-R1 reasoning data into the Llama 3.1 8B base model.
For more technical details, refer to the [DeepSeek-R1 technical report](https://arxiv.org/pdf/2501.12948).

---

## 🧱 Model Architecture

| Property | Value |
|------------------|------------------------|
| Architecture | LlamaForCausalLM |
| Parameters | 8B |
| Context Length | 131,072 tokens |
| Tokenizer | LlamaTokenizer (BPE) |

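These properties can be sanity-checked directly from the published config without downloading the weights, for example:

```python
from transformers import AutoConfig

# Fetches only config.json; no model weights are downloaded.
config = AutoConfig.from_pretrained(
    "DimensionSTP/DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning"
)
print(config.architectures)            # expected: ['LlamaForCausalLM']
print(config.max_position_embeddings)  # expected: 131072
```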

---

## 📅 Release Date

**Mar 2025**

This model was released in March 2025 as part of the **Ko-Reasoning Series**, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs.

---

## 📬 Contact

For questions, collaborations, or deployment inquiries, please contact:

- 🤖 Hugging Face: [https://huggingface.co/DimensionSTP](https://huggingface.co/DimensionSTP)
- ✉️ Email: [ddang8jh@gmail.com](mailto:ddang8jh@gmail.com)

---

## 📦 Available Checkpoints

- ✅ `main`: Final stable version from the `last` branch (loading sketch below)
- ✅ All training artifacts available (tokenizer, config, model weights)
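
To pin a particular revision rather than the default `main`, pass `revision` to `from_pretrained`. A minimal sketch; the branch name is taken from the list above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "DimensionSTP/DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning"

# "main" is the default; swap in another branch or commit hash to pin it.
model = AutoModelForCausalLM.from_pretrained(repo, revision="main", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(repo, revision="main")
```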