---
license: apache-2.0
language:
- ko
- en
tags:
- korean
- reasoning
- instruction-tuning
- fine-tuning
- trillion
- llama
- sft
---

# Trillion-7B-preview-Ko-Reasoning

> A large-scale Korean reasoning model fine-tuned from **trillionlabs/Trillion-7B-preview**, designed to excel at logical and multi-hop reasoning tasks in Korean.

---

## Overview

**Trillion-7B-preview-Ko-Reasoning** is a fine-tuned version of [trillionlabs/Trillion-7B-preview](https://huggingface.co/trillionlabs/Trillion-7B-preview), specifically optimized for **logical reasoning in Korean**. This model is part of a broader research initiative exploring:

- the **transition from multilingual reasoning LLMs** to **Korean-specialized reasoning models**
- the enhancement of **non-reasoning Korean language models** into **reasoning-capable variants**
- the development of open-access models that rival proprietary alternatives on complex reasoning tasks

This model was fine-tuned on a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.
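
The training data format is not published with this card; purely as a hypothetical illustration, a chat-style SFT record pairing a multi-hop question with a human-crafted reasoning trace might look like this (field names and contents are assumptions, not the actual schema):

```python
# Hypothetical SFT record: field names, structure, and contents are
# illustrative assumptions only; the actual dataset schema is not published.
example_record = {
    "messages": [
        {
            "role": "user",
            "content": "Jimin was born in Seoul, and Seoul is the capital of Korea. "
                       "In which country was Jimin born?",
        },
        {
            "role": "assistant",
            # Reasoning steps are written out before the final answer.
            "content": "Seoul is the capital of Korea, so Seoul is in Korea. "
                       "Jimin was born in Seoul, therefore Jimin was born in Korea.",
        },
    ]
}
```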

---

## Benchmark Results

> - All benchmarks were measured using the **0-shot CoT (Chain-of-Thought)** method.
> - The **Score** is either **accuracy (%)** or a rating on a **1-10 scale** from a judge model (the latter for LogicKor and MT-Bench).
> - **LLM-as-a-judge** benchmarks were evaluated using **GPT-4o (2024-08-01-preview)**.

| **Benchmark** | **Score** |
|---------------|-----------|
| GPQA diamond  | 56.2      |
| GSM8K         | 53.1      |
| HAERAE        | 73.7      |
| KSM           | 57.8      |
| LogicKor      | 8.40      |
| Math500       | 72.8      |
| MT-Bench      | 7.90      |
| MT-Bench (Ko) | 7.87      |
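
For reference, a 0-shot CoT query contains only the question plus a step-by-step cue, with no few-shot examples. A minimal sketch follows (the exact harness and prompt templates behind the scores above are not specified here, so this is an assumption):

```python
# Sketch of a 0-shot CoT message: one question, no few-shot examples, with a
# step-by-step cue. The actual evaluation templates may differ.
question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?"
)
messages = [{"role": "user", "content": f"{question}\nLet's think step by step."}]
# `messages` is then rendered with tokenizer.apply_chat_template and generated
# from exactly as in the Usage section below.
```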

---

## Usage

Install Transformers >= 4.50:

```bash
pip install -U transformers
```
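
To confirm the installed version satisfies the requirement:

```python
# Verify that the installed transformers version is >= 4.50
import transformers

print(transformers.__version__)
```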

Basic example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DimensionSTP/Trillion-7B-preview-Ko-Reasoning"

# Load the model weights and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "서울과 부산 중 어디가 더 커?"  # "Which is bigger, Seoul or Busan?"
messages = [
    {"role": "user", "content": prompt}
]
# Render the conversation with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096
)
# Keep only the newly generated tokens, dropping the echoed prompt
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
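
For interactive use, the same inputs can be streamed token by token with transformers' built-in `TextStreamer` (a minimal sketch reusing `model`, `tokenizer`, and `model_inputs` from the example above):

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, instead of waiting for the
# full completion; skip the echoed prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=4096,
    streamer=streamer,
)
```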

---

## Base Model: trillionlabs/Trillion-7B-preview

The base model, [trillionlabs/Trillion-7B-preview](https://huggingface.co/trillionlabs/Trillion-7B-preview), is an LLM developed by Trillion Labs. For more technical details, refer to the [Trillion 7B Technical Report](https://arxiv.org/pdf/2504.15431).

---

## Model Architecture

| Property       | Value                |
|----------------|----------------------|
| Architecture   | LlamaForCausalLM     |
| Parameters     | 7B                   |
| Context Length | 4,096 tokens         |
| Tokenizer      | LlamaTokenizer (BPE) |
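
These properties can be cross-checked from the checkpoint's configuration without downloading the full weights, for example:

```python
from transformers import AutoConfig

# Inspect the architecture metadata shipped with the checkpoint
config = AutoConfig.from_pretrained("DimensionSTP/Trillion-7B-preview-Ko-Reasoning")
print(config.architectures)            # e.g. ['LlamaForCausalLM']
print(config.max_position_embeddings)  # context length in tokens
```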

---

## Release Date

**March 2025**

This model was released in March 2025 as part of the **Ko-Reasoning Series**, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs.

---

## Contact

For questions, collaborations, or deployment inquiries, please contact:

- Hugging Face: [https://huggingface.co/DimensionSTP](https://huggingface.co/DimensionSTP)
- Email: [ddang8jh@gmail.com](mailto:ddang8jh@gmail.com)

---

## Available Checkpoints

- `main`: final stable version from the `last` branch
- All training artifacts available (tokenizer, config, model weights)