---
library_name: transformers
model_name: Qemma-sft
tags:
- generated_from_trainer
- sft
- trl
license: osl-3.0
datasets:
- O1-OPEN/OpenO1-SFT
- yahma/alpaca-cleaned
- Jackrong/gpt-oss-120b-reasoning-STEM-5K
language:
- en
base_model:
- google/gemma-3-1b-it
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
---

# Model Card for Qemma

**Qemma** is a HuggingFace-native hybrid model that merges **Gemma-3 (1B)** and **Qwen-3 (0.6B)** at the weight level (no adapters).
Design: Gemma's MLP and body plus Qwen's attention and head, projected and aligned to Gemma's hidden size. The merged model is then SFT-tuned for stepwise reasoning.
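
The exact merge recipe is not published in this card. As a rough, hypothetical sketch (the `align` helper and the truncate-or-pad scheme are assumptions, not the actual Qemma procedure), projecting a Qwen attention weight into Gemma's dimensions could look like:

```python
import torch

def align(w: torch.Tensor, rows: int, cols: int) -> torch.Tensor:
    """Crudely align a 2-D weight to a target shape by truncating or
    zero-padding each axis. Illustrative only; not the real Qemma scheme."""
    out = torch.zeros(rows, cols, dtype=w.dtype)
    r, c = min(rows, w.shape[0]), min(cols, w.shape[1])
    out[:r, :c] = w[:r, :c]
    return out

gemma_hidden = 1152                  # Gemma-3 1B hidden size (see "What's inside")
qwen_q = torch.randn(2048, 1024)     # stand-in for a Qwen-3 0.6B q_proj weight
qemma_q = align(qwen_q, 4 * 256, gemma_hidden)  # regrouped to Gemma's 4x256 heads
print(qemma_q.shape)                 # torch.Size([1024, 1152])
```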

## Quick start

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "reaperdoesntknow/Qemma-sft"

# Load the bundled Gemma-3 tokenizer and the merged weights.
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

# Format the conversation with the checkpoint's chat template.
messages = [{"role": "user", "content": "Explain finite-scale discrepancy Δ_r in one paragraph."}]
inputs = tok.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

# Sample a response.
out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
```

## What’s inside

* **Architecture:** Gemma-3 backbone (26 layers, hidden size 1152, MLP width 6912) with **Qwen-style attention** regrouped to Gemma's 4×256 heads (see the config check after this list).
* **Tokenizer:** Gemma-3 tokenizer and chat template (see `chat_template.jinja`).
* **Training:** SFT for instruction following and stepwise reasoning.
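
Since the checkpoint ships a standard `transformers` config, the advertised dimensions can be sanity-checked directly (the field names below assume a Gemma-3-style config):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("reaperdoesntknow/Qemma-sft")
print(cfg.num_hidden_layers)    # expected: 26
print(cfg.hidden_size)          # expected: 1152
print(cfg.intermediate_size)    # expected: 6912 (MLP width)
print(cfg.num_attention_heads)  # expected: 4
print(cfg.head_dim)             # expected: 256
```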

## Intended use & limitations

**Use:** research, instruction following, code/help, analysis, further SFT/RLHF.
**Limits:** may hallucinate; not for safety-critical, medical, legal, or financial decisions. Follow dataset/model licenses.

## Training procedure

* ~512 warm-start steps (Alpaca-style data)
* 256 additional pretraining steps on O1-OPEN/OpenO1-SFT
* 128 SFT steps on Jackrong/gpt-oss-120b-reasoning-STEM-5K
* 256 SFT steps on O1-OPEN/OpenO1-SFT (a minimal sketch follows below)
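
Hyperparameters other than the step counts are not given. A minimal TRL sketch of a single stage, assuming default `SFTConfig` settings and that the dataset is first mapped into a column layout `SFTTrainer` accepts (e.g. `messages` or `text`), might look like:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# One hypothetical SFT stage; learning rate, batch size, packing, etc.
# are not published for Qemma and are left at TRL defaults here.
dataset = load_dataset("O1-OPEN/OpenO1-SFT", split="train")

trainer = SFTTrainer(
    model="reaperdoesntknow/Qemma-sft",  # or the pre-SFT merged checkpoint
    args=SFTConfig(output_dir="qemma-sft", max_steps=256),
    train_dataset=dataset,
)
trainer.train()
```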


### Framework versions

* TRL: 0.25.0
* Transformers: 4.57.1
* PyTorch: 2.8.0+cpu
* Datasets: 4.4.1
* Tokenizers: 0.22.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```