|
|
---
license: mit
language:
- ko
- en
tags:
- korean
- reasoning
- instruction-tuning
- fine-tuning
- llama
- deepseek
- distillation
- sft
---
|
|
|
|
|
# 🧠 DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning |
|
|
|
|
|
> A large-scale Korean reasoning model fine-tuned from **deepseek-ai/DeepSeek-R1-Distill-Llama-70B**, designed to excel in logical and multi-hop reasoning tasks in Korean. |
|
|
|
|
|
--- |
|
|
|
|
|
## 📌 Overview |
|
|
|
|
|
**DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning** is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B), specifically optimized for **logical reasoning in Korean**. This model is part of a broader research initiative to explore: |
|
|
|
|
|
- The **transition from multilingual reasoning LLMs** to **Korean-specialized reasoning models** |
|
|
- The enhancement of **non-reasoning Korean language models** into **reasoning-capable variants** |
|
|
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks |
|
|
|
|
|
This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps. |
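
For illustration only, the sketch below shows what a single chat-format SFT record in such a dataset might look like; the field names, the example question, and the `<think>` formatting are assumptions made for this card, not the actual training data.

```python
# Hypothetical single SFT record in chat format (illustrative only; the real
# dataset schema and contents are not published with this model card).
example_record = {
    "messages": [
        {
            "role": "user",
            # "Cheolsu is older than Younghee, and Younghee is younger than Minsu.
            #  Who is the youngest of the three?"
            "content": "철수는 영희보다 나이가 많고, 영희는 민수보다 어리다. 셋 중 가장 어린 사람은 누구인가?",
        },
        {
            "role": "assistant",
            # Reasoning trace inside <think>...</think>, followed by the final answer.
            "content": "<think>철수 > 영희, 민수 > 영희이므로 영희가 가장 어리다.</think>\n\n가장 어린 사람은 영희입니다.",
        },
    ]
}
```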
|
|
|
|
|
--- |
|
|
|
|
|
## 🧑‍💻 Usage |
|
|
|
|
|
Install Transformers >= 4.50: |
|
|
|
|
|
```bash
pip install -U transformers
```
|
|
|
|
|
Basic example: |
|
|
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning"

# Load the model and tokenizer; device_map="auto" shards the 70B weights
# across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "서울과 부산 중 어디가 더 커?"  # "Which is bigger, Seoul or Busan?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Reasoning traces can be long, so allow a generous generation budget.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
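
Like its base model, this fine-tune is expected to emit its chain of thought inside a `<think>...</think>` block before the final answer (the DeepSeek-R1 output convention). A minimal sketch for separating the two, assuming that convention holds here:

```python
# Split the visible reasoning trace from the final answer, assuming the
# DeepSeek-R1 "<think>...</think>" convention; if no closing tag is present,
# the whole response is treated as the answer.
if "</think>" in response:
    reasoning, answer = response.split("</think>", 1)
    reasoning = reasoning.replace("<think>", "").strip()
    answer = answer.strip()
else:
    reasoning, answer = "", response.strip()

print(answer)
```

The upstream DeepSeek-R1 card additionally recommends sampling with a temperature of around 0.6 and putting all instructions in the user message rather than a system prompt; those recommendations likely carry over to this fine-tune.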
|
|
|
|
|
--- |
|
|
|
|
|
## 🧠 Base Model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B |
|
|
|
|
|
The base model, [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B), is a chain-of-thought (CoT) reasoning LLM released by the DeepSeek AI team. It was created by fine-tuning Llama-3.3-70B-Instruct on reasoning data distilled from DeepSeek-R1.

For more technical details, refer to the [DeepSeek-R1 technical report](https://arxiv.org/pdf/2501.12948).
|
|
|
|
|
--- |
|
|
|
|
|
## 🧱 Model Architecture |
|
|
|
|
|
| Property       | Value                |
|----------------|----------------------|
| Architecture   | LlamaForCausalLM     |
| Parameters     | 70B                  |
| Context Length | 131,072 tokens       |
| Tokenizer      | LlamaTokenizer (BPE) |
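
These values can be verified against the published configuration; a small check, assuming the standard Llama config fields in `transformers`:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DimensionSTP/DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning")
print(config.architectures)            # expected: ["LlamaForCausalLM"]
print(config.max_position_embeddings)  # expected: 131072
```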
|
|
|
|
|
--- |
|
|
|
|
|
## 📅 Release Date |
|
|
|
|
|
**Mar 2025** |
|
|
This model was released in March 2025 as part of the **Ko-Reasoning Series**, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs. |
|
|
|
|
|
--- |
|
|
|
|
|
## 📬 Contact |
|
|
|
|
|
For questions, collaborations, or deployment inquiries, please contact: |
|
|
|
|
|
- 🤖 Hugging Face: [https://huggingface.co/DimensionSTP](https://huggingface.co/DimensionSTP) |
|
|
- ✉️ Email: [ddang8jh@gmail.com](mailto:ddang8jh@gmail.com) |
|
|
|
|
|
--- |
|
|
|
|
|
## 📦 Available Checkpoints |
|
|
|
|
|
- ✅ `main`: Final stable version from the `last` branch |
|
|
- ✅ All training artifacts available (tokenizer, config, model weights) |
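
To load a checkpoint from a branch other than the default `main`, pass a `revision` argument to `from_pretrained` (the branch name below is a placeholder; check the repository's branch list for what is actually published):

```python
from transformers import AutoModelForCausalLM

# Load weights from an explicit branch or commit instead of the default "main".
model = AutoModelForCausalLM.from_pretrained(
    "DimensionSTP/DeepSeek-R1-Distill-Llama-70B-Ko-Reasoning",
    revision="main",  # replace with another branch name or commit hash if needed
    torch_dtype="auto",
    device_map="auto",
)
```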
|
|
|