reaperdoesntknow committed
Commit c0e43ab · verified · 1 Parent(s): 217979c

Update README.md

Files changed (1)
README.md +43 -15
README.md CHANGED
@@ -6,38 +6,66 @@ tags:
 - sft
 - trl
 licence: license
 ---

- # Model Card for Qemma-sft

- This model is a fine-tuned version of [None](https://huggingface.co/None).
- It has been trained using [TRL](https://github.com/huggingface/trl).

 ## Quick start

 ```python
- from transformers import pipeline

- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="reaperdoesntknow/Qemma-sft", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
 ```

- ## Training procedure
-

 This model was trained with SFT.

 ### Framework versions

- - TRL: 0.25.0
- - Transformers: 4.57.1
- - Pytorch: 2.8.0+cpu
- - Datasets: 4.4.1
- - Tokenizers: 0.22.1

 ## Citations
 
 - sft
 - trl
 licence: license
+ license: osl-3.0
+ datasets:
+ - O1-OPEN/OpenO1-SFT
+ - yahma/alpaca-cleaned
+ language:
+ - en
+ base_model:
+ - google/gemma-3-1b-it
+ - Qwen/Qwen3-0.6B
+ pipeline_tag: text-generation
 ---

+ # Model Card for Qemma

+ **Qemma** is a Hugging Face-native hybrid model that merges **Gemma-3 (1B)** and **Qwen-3 (0.6B)** at the weight level (no adapters).
+ Design: Gemma MLP/body plus Qwen attention/head, projected and aligned to Gemma's hidden size. The merged model is then SFT-tuned for stepwise reasoning.

 ## Quick start

 ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ # Load the merged checkpoint in bfloat16 for inference.
+ model_id = "reaperdoesntknow/Qemma"
+ tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()
+
+ # Format the conversation with the Gemma-3 chat template, then sample.
+ messages = [{"role": "user", "content": "Explain finite-scale discrepancy Δ_r in one paragraph."}]
+ inputs = tok.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
+
+ out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
+ print(tok.decode(out[0], skip_special_tokens=True))
 ```

+ ## What's inside
+
+ * **Architecture:** Gemma-3 backbone (26 layers, hidden size 1152, MLP 6912) with **Qwen-style attention** regrouped to Gemma's 4×256 head layout; a hedged sketch of the transplant follows this list.
+ * **Tokenizer:** Gemma-3 tokenizer and chat template (see `chat_template.jinja`).
+ * **Training:** SFT for instruction following and stepwise reasoning.
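+
+ The merge code itself isn't published in this card, so the following is only an illustrative sketch: `pad_or_trim` is a naive truncate/zero-pad stand-in for whatever projection Qemma actually uses, and layer pairing, head regrouping, and the GQA shape mismatch between the two models are glossed over.
+
+ ```python
+ # Hypothetical weight-level attention transplant (NOT the actual Qemma merge code).
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ gemma = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it", torch_dtype=torch.float32)
+ qwen = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", torch_dtype=torch.float32)
+
+ def pad_or_trim(w, rows, cols):
+     """Crudely project a weight matrix to (rows, cols) by truncating and zero-padding."""
+     out = torch.zeros(rows, cols, dtype=w.dtype)
+     r, c = min(rows, w.shape[0]), min(cols, w.shape[1])
+     out[:r, :c] = w[:r, :c]
+     return out
+
+ # Pair the first min(26, 28) decoder layers and overwrite Gemma's q/k/v/o
+ # projection weights with Qwen weights forced to Gemma's shapes.
+ for gl, ql in zip(gemma.model.layers, qwen.model.layers):
+     for name in ("q_proj", "k_proj", "v_proj", "o_proj"):
+         g_w = getattr(gl.self_attn, name).weight
+         q_w = getattr(ql.self_attn, name).weight
+         with torch.no_grad():
+             g_w.copy_(pad_or_trim(q_w, *g_w.shape))
+
+ gemma.save_pretrained("qemma-merged")  # SFT then starts from this checkpoint
+ ```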
 
+ ## Intended use & limitations
+
+ **Use:** research, instruction following, coding help, analysis, further SFT/RLHF.
+ **Limits:** may hallucinate; not for safety-critical, medical, legal, or financial decisions. Follow the dataset and model licenses.
+
+ ## Training procedure

+ * ~512 warm-start steps on Alpaca-style data
+ * 256 SFT steps on `O1-OPEN/OpenO1-SFT` (sketched below)
+ * An additional 100 top-up SFT steps for reasoning behaviors
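+
+ The exact recipe isn't published; the sketch below shows only what the 256-step stage might look like in TRL. The hyperparameters and the instruction/output column mapping are assumptions, not the author's settings.
+
+ ```python
+ # Illustrative TRL SFT loop for the 256-step stage (hyperparameters are assumptions).
+ from datasets import load_dataset
+ from trl import SFTConfig, SFTTrainer
+
+ dataset = load_dataset("O1-OPEN/OpenO1-SFT", split="train")
+
+ # Map the dataset's (assumed) instruction/output columns to TRL's prompt-completion format.
+ def to_prompt_completion(ex):
+     return {"prompt": ex["instruction"], "completion": ex["output"]}
+
+ dataset = dataset.map(to_prompt_completion, remove_columns=dataset.column_names)
+
+ trainer = SFTTrainer(
+     model="reaperdoesntknow/Qemma",  # in practice, the freshly merged checkpoint
+     args=SFTConfig(output_dir="qemma-sft", max_steps=256, per_device_train_batch_size=4, learning_rate=2e-5),
+     train_dataset=dataset,
+ )
+ trainer.train()
+ ```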
 
 This model was trained with SFT.

 ### Framework versions

+ * TRL: 0.25.0
+ * Transformers: 4.57.1
+ * PyTorch: 2.8.0+cpu
+ * Datasets: 4.4.1
+ * Tokenizers: 0.22.1

 ## Citations