---
license: apache-2.0
base_model: PrimeIntellect/Qwen3-0.6B
tags:
- text-generation
- chinese
- sft
- qwen3
datasets:
- ivanleomk/reverse-chinese-poems
language:
- zh
pipeline_tag: text-generation
---
# Reverse Chinese Text (SFT)
This model is a fine-tuned version of [PrimeIntellect/Qwen3-0.6B](https://huggingface.co/PrimeIntellect/Qwen3-0.6B), trained to reverse Chinese text character by character.
## Training
- **Base Model:** PrimeIntellect/Qwen3-0.6B
- **Method:** Supervised Fine-Tuning (SFT)
- **Dataset:** [ivanleomk/reverse-chinese-poems](https://huggingface.co/datasets/ivanleomk/reverse-chinese-poems)
- **Training Steps:** 200
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Framework:** [Prime-RL](https://github.com/PrimeIntellect-ai/prime-rl)
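
The actual Prime-RL config is not included in this card. As a rough orientation only, the same hyperparameters could drive a comparable SFT run with `trl`'s `SFTTrainer`; this is an illustrative stand-in, not the training script used, and the chat-style `messages` column name is an assumption to verify against the dataset card:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative equivalent of the run described above, NOT the actual
# Prime-RL script. Assumes the dataset exposes a chat-style "messages"
# column; check the dataset schema before running.
dataset = load_dataset("ivanleomk/reverse-chinese-poems", split="train")

trainer = SFTTrainer(
    model="PrimeIntellect/Qwen3-0.6B",
    train_dataset=dataset,
    args=SFTConfig(
        max_steps=200,                   # Training Steps
        learning_rate=2e-5,              # Learning Rate
        per_device_train_batch_size=16,  # Batch Size (single device assumed)
        output_dir="reverse-chinese-text-sft",
    ),
)
trainer.train()
```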
## Benchmark Results
Evaluated on 1,000 samples from the test set:
| Model | Character Accuracy | Exact Match Rate |
|-------|-------------------|------------------|
| PrimeIntellect/Qwen3-0.6B (base) | 0.10% | 0.00% |
| **ivanleomk/reverse-chinese-text (SFT)** | **63.55%** | **9.60%** |
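
The evaluation script is not published here, so the exact metric definitions are assumptions. A minimal sketch of how the two columns are commonly computed (character accuracy as position-wise matches over the longer string, exact match as full-string equality):

```python
def character_accuracy(prediction: str, reference: str) -> float:
    # Assumed definition: fraction of aligned positions that match,
    # normalized by the longer of the two strings.
    if not reference:
        return 0.0
    matches = sum(p == r for p, r in zip(prediction, reference))
    return matches / max(len(prediction), len(reference))

def exact_match(prediction: str, reference: str) -> bool:
    # True only if the entire reversed string is reproduced.
    return prediction == reference

# The correct reversal of "床前明月光" is "光月明前床".
print(character_accuracy("光月明前庆", "光月明前床"))  # 0.8 (4 of 5 match)
print(exact_match("光月明前床", "光月明前床"))          # True
```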
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ivanleomk/reverse-chinese-text")
tokenizer = AutoTokenizer.from_pretrained("ivanleomk/reverse-chinese-text")

messages = [
    {"role": "system", "content": "You are a text reversal assistant. Given Chinese text, reverse it character by character."},
    # "Please reverse the following text: 床前明月光"
    {"role": "user", "content": "请反转以下文字:床前明月光"},
]

# add_generation_prompt=True appends the assistant-turn header so the model
# generates a reply instead of continuing the user message.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
# Expected: 光月明前床
```
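Because the task has a deterministic ground truth, outputs are easy to check locally against Python's slice reversal:

```python
def reference_reversal(text: str) -> str:
    # Ground truth for this task: the string reversed character by character.
    return text[::-1]

assert reference_reversal("床前明月光") == "光月明前床"
```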
## License
Apache 2.0