#### [`tsor13/extra12b`](https://huggingface.co/tsor13/extra12b)
The following is a model trained by [...suspense...] that is meant to:
- follow instructions better than pretrained models and be more diverse / less mode-collapsed than instruct models;
- be a really good, approximately Bayesian in-context learner;
- fit a data generation process;
- be calibrated over distributions of possible outputs with respect to a population or epistemic uncertainty.
It is initialized from `google/gemma-3-12b-pt`.

**Description:** From gemma‑3‑12b‑pt with chat token embeddings.  
**Pros:** distinguishes description/input · closer to chat · best generations(?)  
**Cons:** more tokens than *special*

<details><summary>Example w/ inputs</summary>

```text
<start_of_turn>description
DESCRIPTION<end_of_turn>
<start_of_turn>input
INPUT1<end_of_turn>
<start_of_turn>output
OUTPUT1<end_of_turn>
<start_of_turn>input
INPUT2<end_of_turn>
<start_of_turn>output
OUTPUT2<end_of_turn>
```
</details>

<details><summary>Example w/o inputs</summary>

```text
<start_of_turn>description
DESCRIPTION<end_of_turn>
<start_of_turn>output
OUTPUT1<end_of_turn>
<start_of_turn>output
OUTPUT2<end_of_turn>
```
</details>


There are three variants of the model for now:
| **Field** | **special** | **extra** | **chat** |
|-----------|-------------|-----------|----------|
| **Model card** | [`tsor13/special12b`](https://huggingface.co/tsor13/special12b) | [`tsor13/extra12b`](https://huggingface.co/tsor13/extra12b) | [`tsor13/chat12b`](https://huggingface.co/tsor13/chat12b) |
| **Description** | From `gemma-3-12b-pt`, but with chat‑token embeddings copied over | From `gemma-3-12b-pt`, but with chat‑token embeddings copied over | From `gemma-3-12b-it`, trained to preserve & assume chat format |
| **Pros** | • Most token‑efficient (only tags around the output) | • Distinguishes description vs first input<br>• Closer to chat format<br>• Best generations (?) | • Drop‑in for Gemma‑chat template<br>• Works on original chat logs, even OOD |
| **Cons** | • May not tell description from first input<br>• Formatting farther from Gemma chat template | • More tokens than *special* | • Many extra tokens |
| **Example w/ inputs** | ```text\nDESCRIPTION\nINPUT1\n<start_of_turn>OUTPUT1<end_of_turn>\nINPUT2\n<start_of_turn>OUTPUT2<end_of_turn>``` | ```text\n<start_of_turn>description\nDESCRIPTION<end_of_turn>\n<start_of_turn>input\nINPUT1<end_of_turn>\n<start_of_turn>output\nOUTPUT1<end_of_turn>\n<start_of_turn>input\nINPUT2<end_of_turn>\n<start_of_turn>output\nOUTPUT2<end_of_turn>``` | ```text\n<start_of_turn>user\nGenerate …\nDescription: DESCRIPTION\n\nINPUT1<end_of_turn>\n<start_of_turn>model\nOUTPUT1<end_of_turn>\n<start_of_turn>user\nINPUT2<end_of_turn>\n<start_of_turn>model\nOUTPUT2<end_of_turn>``` |
| **Example w/o inputs** | ```text\nDESCRIPTION\n<start_of_turn>OUTPUT1<end_of_turn>\n<start_of_turn>OUTPUT2<end_of_turn>``` | ```text\n<start_of_turn>description\nDESCRIPTION<end_of_turn>\n<start_of_turn>output\nOUTPUT1<end_of_turn>\n<start_of_turn>output\nOUTPUT2<end_of_turn>``` | ```text\n<start_of_turn>user\nGenerate …\nDescription: DESCRIPTION\n\nGenerate.<end_of_turn>\n<start_of_turn>model\nOUTPUT1<end_of_turn>\n<start_of_turn>user\nGenerate.<end_of_turn>\n<start_of_turn>model\nOUTPUT2<end_of_turn>``` |

At the moment, I recommend:
- [special](https://huggingface.co/tsor13/special12b) for most use cases (token-efficient and gets best loss on training data)
- [extra](https://huggingface.co/tsor13/extra12b) for when generation quality is more important than token efficiency
- [chat](https://huggingface.co/tsor13/chat12b) for chat-style data or conversations


This model/repo is a work in progress - expect updates.

Loading model example:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("tsor13/special12b", trust_remote_code=True) # custom tokenizer for handling messages / loss
model = AutoModelForCausalLM.from_pretrained("tsor13/special12b", device_map="auto")
```

It has its own chat-style input messages, with the following roles:
- `description` (optional): A description of the generating process, or some information meant to instantiate a prior.
- `input` (optional): Any variables that the model is not responsible for predicting, but that can be used to condition generation.
- `output`: This is what the model will actually predict / generate.

For example,
```
messages = [
    {"role": "description", "content": "Capitals"},
    {"role": "input", "content": "France"},
    {"role": "output", "content": "Paris"},
    {"role": "input", "content": "Japan"},
]
```

To templatize the messages, you can use the tokenizer:
```
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
print(formatted_prompt) # start_generation adds the <start_of_turn> token to condition the model for generation
```
Output:
```
<start_of_turn>description
Capitals<end_of_turn>
<start_of_turn>input
France<end_of_turn>
<start_of_turn>output
Paris<end_of_turn>
<start_of_turn>input
Japan<end_of_turn>
<start_of_turn>output

```
 
In training, loss is ONLY calculated on the output tokens and the `<end_of_turn>` token. Thus, the model is only designed to generate / predict probabilities after `<start_of_turn>` and until `<end_of_turn>` - everything else is out of distribution for the model and not recommended.

Once you have the formatted text, you can tokenize as normal:
```
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
```

Let's look at what the model does. In this case there is a single correct answer, so we can inspect the model's probabilities for the first token after `<start_of_turn>`:
```
import torch
with torch.no_grad():
    output = model(**inputs)
    logits = output.logits[0, -1, :]
    probs = torch.nn.functional.softmax(logits, dim=-1)
    top_probs, top_indices = torch.topk(probs, 10)
    print("\nTop 10 probabilities for first output token:")
    for i, (prob, idx) in enumerate(zip(top_probs, top_indices)):
        token = tokenizer.decode(idx)
        print(f"{i+1:2d}. '{token}' -> {prob.item():.4f}")
```

Output:
```
Top 10 probabilities for first output token:
 1. 'Tokyo' -> 0.9904
 2. 'Tok' -> 0.0027
 3. 'tok' -> 0.0007
 4. 'To' -> 0.0006
 5. 'Toy' -> 0.0006
 6. '東京' -> 0.0005
 7. 'Ky' -> 0.0005
 8. 'T' -> 0.0002
 9. 'Washington' -> 0.0001
10. 'N' -> 0.0001
```

Great! Almost all of the probability mass is on the correct answer, Tokyo.
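
Because loss is only computed on the output tokens and `<end_of_turn>`, you can also score a full candidate answer by summing log-probabilities over just that span. Below is a minimal sketch, reusing `tokenizer`, `model`, and `messages` from above; it assumes `messages_to_text` can also render a completed transcript (without `start_generation`) and that the prompt's tokenization is a prefix of the full transcript's tokenization, which may not hold exactly for this custom tokenizer.
```
import torch

# Hypothetical scoring sketch: log-probability of a full candidate output.
candidate = "Tokyo"
scored_messages = messages + [{"role": "output", "content": candidate}]

prompt_text = tokenizer.messages_to_text(messages, start_generation=True)
full_text = tokenizer.messages_to_text(scored_messages)  # assumption: renders the completed turn

prompt_len = tokenizer(prompt_text, return_tensors="pt")["input_ids"].shape[1]
full = tokenizer(full_text, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**full).logits

# Log-prob of each token given its prefix, then keep only the candidate's tokens
# (everything after the prompt, including <end_of_turn>).
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = full["input_ids"][0, 1:]
token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
candidate_lp = token_lp[prompt_len - 1:].sum()
print(f"log P({candidate!r} | prompt) = {candidate_lp.item():.3f}")
```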

Let's try an example with many possible reasonable choices / a harder-to-describe distribution. For example, say that I'm interested in modeling "board games that I like". I may be hard-pressed to actually describe what it is that I like about games - but I can provide a few examples pretty easily.

```
messages = [
    {"role": "output", "content": "Dune: Imperium"},
    {"role": "output", "content": "Acquire"},
    {"role": "output", "content": "Catan"},
    {"role": "output", "content": "Tigris and Euphrates"},
    {"role": "output", "content": "Brass: Birmingham"},
]
```

Given these example outputs, the model will try to generate more outputs like them.
```
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=10, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
```

Outputs:
```
Root
Dune: Imperium
Gloomhaven
Spirit Island
```
Not too bad!

You can also specify just the description:
Input:
```
messages = [
    {"role": "description", "content": "Descriptive colors"},
]

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=10, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
    print()
```
Output:
```
light steel blue
A descriptive list of colors
green
Blue
```

By default, the model is trained only to 1) emulate outputs if examples are provided, or 2) generate data based on the description. Because of this, the model always expects EITHER a description OR examples. If you want it to act slightly more like an instruction-following chat model, you can add a description such as the following:

```
messages = [
    {"role": "description", "content": "You are a helpful assistant who outputs the requested content."},
    {"role": "input", "content": "A poem about a shark"},
]
```
To generate:
```
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(f"Generation {i}:")
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
```

Some example generations:
```
Generation 0:
A shark is a fish that lives in the sea
They come in different shapes and sizes, you see
Some are small and cute, some are huge and mean
But all sharks are dangerous,
Generation 1:
A master of the ocean blue,
A silent hunter, strong and true.
With fins that glide and teeth so sharp,
A fearsome creature, dark and carp.

It swims through
Generation 2:
A sleek grey shadow, a silent stride,
The ocean's hunter, full of pride.
With fins that cut the water deep,
And eyes of darkness, secrets keep.

A
Generation 3:
With fins so sharp and eyes so keen,
A shark patrols the ocean scene.
A silent hunter, sleek and gray,
It glides through waters, come what may.

Its teeth
```




Finally, let's look at a synthetic data generation task. For example, maybe we want to generate situations to do social reasoning over, along with whether or not they are awkward. When there are multiple variables to condition on or generate, the model is used to seeing them in JSON format.

Input:
```
import json
messages = [
    {"role": "description", "content": "Situations to do social reasoning over, along with whether or not it is an awkward situation."},
    {"role": "output", "content": json.dumps({
        "situation": "You're at a party and you realize that your shirt is on backwards.",
        "is_awkward": True,
    })},
    {"role": "output", "content": json.dumps({
        "situation": "While at work, your boss commends you on a job well done.",
        "is_awkward": False,
    })},
    {"role": "output", "content": json.dumps({
        "situation": "Realizing you forgot to bring your passport to the airport.",
        "is_awkward": True,
    })},
]

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
```
Output:
```
{"situation": "You spilled coffee on your new shoes.", "is_awkward": true}
{"situation": "While at a restaurant, the waiter asks if you're ready to order.", "is_awkward": false}                                                     
{"situation": "Going out for coffee with a good friend.", "is_awkward": false}                                                                             
{"situation": "Having to tell your mother that you didn't go to college.", "is_awkward": true}  
```
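
Since each generation is (ideally) a JSON string, you can parse the samples back into Python objects and drop anything malformed. A minimal sketch, reusing `outputs`, `inputs`, `n_gens`, and `tokenizer` from the block above:
```
import json

records = []
for i in range(n_gens):
    text = tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True)
    text = text.replace("<end_of_turn>", "").strip()
    try:
        records.append(json.loads(text))  # keep only well-formed JSON generations
    except json.JSONDecodeError:
        pass  # drop malformed samples

awkward = [r["situation"] for r in records if r.get("is_awkward")]
print(f"Parsed {len(records)}/{n_gens} generations; {len(awkward)} are awkward.")
```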

A few tips and tricks:
- Do not expect the model to do multi-turn chats. It is designed to be stateless and to treat each data point as "exchangeable" (roughly iid).
- If all you want is one reasonable answer, a chat model is likely a better fit. However, if you want to generate many reasonable answers / diverse examples, this model is the better choice.
- The model is quite good at perspective taking / steering if you provide many examples.
- When it is unsure, the model is reasonably good at expressing epistemic uncertainty: sample the same prompt several times and look at the spread of outputs (see the sketch below).
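
One simple way to read off that uncertainty is to sample the same prompt many times and tally the results. A minimal sketch, reusing `tokenizer` and `model` from above (the description, `n_samples`, and `do_sample=True` are illustrative choices, not prescriptions from this card):
```
from collections import Counter

messages = [
    {"role": "description", "content": "Flips of a fair coin"},
]
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)

n_samples = 32  # more samples give smoother frequency estimates
inputs = tokenizer([formatted_prompt] * n_samples, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=True,
                         stop_strings=["<end_of_turn>"], tokenizer=tokenizer)

samples = [
    tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True).strip()
    for i in range(n_samples)
]
for value, count in Counter(samples).most_common():
    print(f"{value!r}: {count / n_samples:.2f}")
```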