---
license: mit
language:
- ko
- en
tags:
- korean
- reasoning
- instruction-tuning
- fine-tuning
- llama
- deepseek
- distillation
- sft
---
# 🧠 DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning
> A large-scale Korean reasoning model fine-tuned from **deepseek-ai/DeepSeek-R1-Distill-Llama-8B**, designed to excel in logical and multi-hop reasoning tasks in Korean.
---
## 📌 Overview
**DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning** is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B), specifically optimized for **logical reasoning in Korean**. This model is part of a broader research initiative to explore:
- The **transition from multilingual reasoning LLMs** to **Korean-specialized reasoning models**
- The enhancement of **non-reasoning Korean language models** into **reasoning-capable variants**
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks
This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.
---
## 🧪 Benchmark Results
> - 📊 All benchmarks were measured using the **0-shot CoT (Chain-of-Thought)** method.
> - 📊 **Score** is either **accuracy (%)** of correct answers or, for LLM-as-a-judge benchmarks (LogicKor, MT-Bench), an average rating on a **1-10 scale**.
> - 📊 **LLM-as-a-judge** benchmarks were evaluated using **GPT-4o (2024-08-01-preview)**.
| **Benchmark** | **Score** |
|------------------|---------------|
| GPQA Diamond     | 58.8          |
| GSM8K            | 55.3          |
| HAERAE           | 70.8          |
| KSM              | 71.2          |
| LogicKor         | 7.84          |
| MATH-500         | 81.4          |
| MT-Bench         | 7.44          |
| MT-Bench (Ko)    | 7.09          |
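
For the accuracy-based benchmarks, 0-shot CoT evaluation reduces to prompting each question once, letting the model reason freely, and matching the extracted final answer against the reference. The sketch below illustrates that loop; the `extract_final_answer` helper, the `{"question", "answer"}` dataset schema, and the generation settings are illustrative assumptions, not the exact harness used for the numbers above.

```python
import re

# Hypothetical helper: take the last number in a CoT response as the final answer.
def extract_final_answer(text: str) -> str | None:
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def score_0shot_cot(model, tokenizer, examples) -> float:
    """Accuracy (%) over examples, each an assumed {"question": ..., "answer": ...} dict."""
    correct = 0
    for ex in examples:
        messages = [{"role": "user", "content": ex["question"]}]
        text = tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        inputs = tokenizer([text], return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=4096)
        # Decode only the completion, then compare extracted final answers.
        response = tokenizer.decode(
            output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
        )
        if extract_final_answer(response) == extract_final_answer(ex["answer"]):
            correct += 1
    return 100.0 * correct / len(examples)
```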
---
## 🧑‍💻 Usage
Install Transformers >= 4.50:
```bash
pip install -U transformers
```
Basic example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning"

# Load the model and tokenizer (weights are placed across devices automatically)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "서울과 부산 중 어디가 더 커?"  # "Which is bigger, Seoul or Busan?"
messages = [
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
)
# Strip the prompt tokens, keeping only the newly generated completion
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
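For interactive use, the same setup can stream tokens to stdout as they are produced with `transformers`' built-in `TextStreamer`; a minimal sketch reusing `model`, `tokenizer`, and `model_inputs` from the example above:
```python
from transformers import TextStreamer

# Print tokens as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    streamer=streamer,
)
```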
---
## 🧠 Base Model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
The base model, [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B), is a chain-of-thought (CoT) LLM developed by the DeepSeek-AI team, fine-tuned from the Llama 3.1 8B base model on reasoning data distilled from DeepSeek-R1.
For more technical details, refer to the [DeepSeek-R1 technical report](https://arxiv.org/pdf/2501.12948).
---
## 🧱 Model Architecture
| Property | Value |
|------------------|------------------------|
| Architecture | LlamaForCausalLM |
| Parameters | 8B |
| Context Length | 131,072 tokens |
| Tokenizer        | LlamaTokenizer (BPE)    |
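
These values can be checked against the checkpoint's configuration without downloading the weights; a quick sketch:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "DimensionSTP/DeepSeek-R1-Distill-Llama-8B-Ko-Reasoning"
)
print(config.architectures)            # expected: ["LlamaForCausalLM"]
print(config.max_position_embeddings)  # expected: 131072
```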
---
## 📅 Release Date
**Mar 2025**
Released in March 2025 as part of the **Ko-Reasoning Series**, which focuses on advancing open-source Korean reasoning with modern LLMs.
---
## 📬 Contact
For questions, collaborations, or deployment inquiries, please contact:
- 🤖 Hugging Face: [https://huggingface.co/DimensionSTP](https://huggingface.co/DimensionSTP)
- ✉️ Email: [ddang8jh@gmail.com](mailto:ddang8jh@gmail.com)
---
## 📦 Available Checkpoints
- `main`: Final stable version from the `last` branch
- ✅ All training artifacts available (tokenizer, config, model weights)