# Reverse Chinese Text (SFT)
This model is a fine-tuned version of PrimeIntellect/Qwen3-0.6B, trained to reverse Chinese text character by character.
## Training
- Base Model: PrimeIntellect/Qwen3-0.6B
- Method: Supervised Fine-Tuning (SFT)
- Dataset: ivanleomk/reverse-chinese-poems
- Training Steps: 200
- Learning Rate: 2e-5
- Batch Size: 16
- Framework: Prime-RL
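The exact Prime-RL configuration is not reproduced in this card. As a rough illustration only, an equivalent SFT run with the Hugging Face `trl` library (a stand-in for the actual training code) using the hyperparameters above might look like the sketch below; the dataset split and column handling are assumptions:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative stand-in for the Prime-RL setup, reusing the
# hyperparameters listed above. Split and column names are assumptions.
dataset = load_dataset("ivanleomk/reverse-chinese-poems", split="train")

config = SFTConfig(
    output_dir="reverse-chinese-text",
    max_steps=200,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
)

trainer = SFTTrainer(
    model="PrimeIntellect/Qwen3-0.6B",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```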
## Benchmark Results
Evaluated on 1,000 samples from the test set:
| Model | Character Accuracy | Exact Match Rate |
|---|---|---|
| PrimeIntellect/Qwen3-0.6B (base) | 0.10% | 0.00% |
| ivanleomk/reverse-chinese-text (SFT) | 63.55% | 9.60% |
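Character accuracy is the fraction of output characters that match the reference, while exact match requires the entire reversed string to be correct; the gap between 63.55% and 9.60% suggests the model usually recovers most, but not all, characters. The evaluation script is not included in this card; a minimal sketch of how such metrics could be computed, assuming a position-wise definition of character accuracy:

```python
def character_accuracy(pred: str, target: str) -> float:
    """Fraction of positions where pred matches target (position-wise
    definition assumed; the actual eval script is not shown here)."""
    if not target:
        return 0.0
    return sum(p == t for p, t in zip(pred, target)) / len(target)

def exact_match(pred: str, target: str) -> bool:
    """True only if the whole reversed string is correct."""
    return pred == target

assert character_accuracy("光月明前床", "光月明前床") == 1.0
assert exact_match("光月明前", "光月明前床") is False
```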
## Usage

Load the model with `transformers` and prompt it via the chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("ivanleomk/reverse-chinese-text")
tokenizer = AutoTokenizer.from_pretrained("ivanleomk/reverse-chinese-text")

messages = [
    {"role": "system", "content": "You are a text reversal assistant. Given Chinese text, reverse it character by character."},
    # The user prompt translates to "Please reverse the following text: ..."
    {"role": "user", "content": "请反转以下文字:床前明月光"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=100)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
# Expected: 光月明前床
```
## License
Apache 2.0