---
license: gemma
language:
- ko
- en
tags:
- korean
- reasoning
- instruction-tuning
- fine-tuning
- gemma3
- sft
---
# 🧠 gemma-3-27b-it-Ko-Reasoning
> A large-scale Korean reasoning model fine-tuned from **google/gemma-3-27b-it**, designed to excel in logical and multi-hop reasoning tasks in Korean.
---
## πŸ“Œ Overview
**gemma-3-27b-it-Ko-Reasoning** is a fine-tuned version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it), specifically optimized for **logical reasoning in Korean**. This model is part of a broader research initiative to explore:
- The **transition from multilingual reasoning LLMs** to **Korean-specialized reasoning models**
- The enhancement of **non-reasoning Korean language models** into **reasoning-capable variants**
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks
This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.
---
## πŸ§ͺ Benchmark Results
> - πŸ“Š All benchmarks were evaluated with the **0-shot CoT (Chain-of-Thought)** method; a rough sketch of this setup follows the table below.
> - πŸ“Š The **Score** is either the **accuracy (%)** of correct answers or a **1-10** rating from a judge model.
> - πŸ“Š **LLM-as-a-judge** benchmarks were scored by **GPT-4o (2024-08-01-preview)**.
| **Benchmark** | **Score** |
|------------------|---------------|
| GPQA diamond | 72.1 |
| GSM8K | 70.5 |
| HAERAE | 85.2 |
| KSM | 78.7 |
| LogicKor | 9.47 |
| Math500 | 83.2 |
| MT-Bench | 9.48 |
| MT-Bench(Ko) | 9.20 |
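
The evaluation harness itself is not part of this repository. The following is only a rough sketch of what a 0-shot CoT query for an accuracy-style benchmark could look like with this model; the prompt wording, the sample question, and the answer-extraction regex are illustrative assumptions, not the actual evaluation code.

```python
import re
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "DimensionSTP/gemma-3-27b-it-Ko-Reasoning"
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()
processor = AutoProcessor.from_pretrained(model_id)

# Illustrative item only; real questions come from the benchmark datasets themselves.
question = "A pencil costs 300 won. How much do 7 pencils cost?"
prompt = f"{question}\n\nThink step by step, then give the final answer after 'Answer:'."

messages = [{"role": "user", "content": [{"type": "text", "text": prompt}]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
decoded = processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

# Hypothetical answer extraction for accuracy-style benchmarks.
match = re.search(r"Answer:\s*([-\d.,]+)", decoded)
print(match.group(1) if match else decoded)
```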
---
## πŸ§‘β€πŸ’» Usage
Install Transformers >= 4.50:
```bash
pip install -U transformers
```
Basic example:
```python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "DimensionSTP/gemma-3-27b-it-Ko-Reasoning"

# Load the model in bfloat16 and shard it across available devices.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}],
    },
    {
        "role": "user",
        "content": [
            # "Which is bigger, Seoul or Busan?"
            {"type": "text", "text": "μ„œμšΈκ³Ό λΆ€μ‚° 쀑 μ–΄λ””κ°€ 더 컀?"}
        ],
    },
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=8192, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
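For interactive use, the same call can stream tokens as they are generated. A minimal sketch using Transformers' `TextStreamer`, reusing `model`, `processor`, and `inputs` from the example above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated instead of waiting for the full output.
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.inference_mode():
    model.generate(**inputs, max_new_tokens=8192, do_sample=False, streamer=streamer)
```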
---
## 🧠 Base Model: google/gemma-3-27b-it
The base model, [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it), is a vision-language model (VLM) developed by Google.
For more technical details, refer to the [Gemma 3 Technical Report](https://arxiv.org/abs/2503.19786).
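Since the base model is a VLM, the checkpoint can also be queried with image inputs through the same chat template, assuming the fine-tuning preserved the base model's vision stack (not verified here). A minimal sketch, reusing `model` and `processor` from the Usage example and a sample image from the Hugging Face documentation assets:

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            # "Describe this image in detail."
            {"type": "text", "text": "이 이미지λ₯Ό μžμ„Ένžˆ μ„€λͺ…ν•΄ μ€˜."},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=512, do_sample=False)

print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```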
---
## 🧱 Model Architecture
| Property | Value |
|------------------|--------------------------------------|
| Architecture | Gemma3ForConditionalGeneration |
| Parameters | 27B |
| Context Length | 128,000 tokens |
| Tokenizer        | GemmaTokenizerFast (SentencePiece)   |
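
These values can be read directly from the hosted configuration; a small sketch (the field names assume the standard `Gemma3Config` layout used by the base model):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DimensionSTP/gemma-3-27b-it-Ko-Reasoning")

# Gemma3Config nests the language-model hyperparameters under text_config.
print(config.architectures)
print(config.text_config.num_hidden_layers, config.text_config.hidden_size)
```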
---
## πŸ“… Release Date
**Mar 2025**
This model was released in March 2025 as part of the **Ko-Reasoning Series**, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs.
---
## πŸ“¬ Contact
For questions, collaborations, or deployment inquiries, please contact:
- πŸ€– Hugging Face: [https://huggingface.co/DimensionSTP](https://huggingface.co/DimensionSTP)
- βœ‰οΈ Email: [ddang8jh@gmail.com]
---
## πŸ“¦ Available Checkpoints
- βœ… `main`: Final stable version from the `last` branch (see the loading sketch below)
- βœ… All training artifacts are included (tokenizer, config, model weights)
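
A specific branch can be pinned with the `revision` argument when loading; a minimal sketch:

```python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "DimensionSTP/gemma-3-27b-it-Ko-Reasoning"

# Pin the stable `main` branch explicitly; other published branches can be selected the same way.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, revision="main", device_map="auto", torch_dtype=torch.bfloat16
)
processor = AutoProcessor.from_pretrained(model_id, revision="main")
```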