tsor13 commited on
Commit
65fe647
·
verified ·
1 Parent(s): e29aaf3

Initial upload of fine‑tuned Gemma + custom tokenizer

Browse files
Files changed (1)
  1. README.md +176 -0
README.md ADDED
@@ -0,0 +1,176 @@
### tsor13/Special12b
The following is a model trained by [...suspense...] that is meant to:
- follow instructions better than pretrained models while being more diverse / less mode-collapsed than instruct models;
- be a strong, approximately Bayesian in-context learner;
- fit a data-generation process;
- be calibrated over distributions of possible outputs with respect to a population or epistemic uncertainty.
It is initialized from the `gemma-3-1b-pt` checkpoint.

Example of loading the model:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("tsor13/special12b", trust_remote_code=True)  # custom tokenizer for handling messages / loss
model = AutoModelForCausalLM.from_pretrained("tsor13/special12b", device_map="auto")
```

The model uses its own chat-style message format, with the following roles:
- `description` (optional): a description of the generating process, or some information meant to instantiate a prior;
- `input` (optional): any variables that the model is not responsible for predicting, but which can be used to condition generation;
- `output`: what the model will actually predict / generate.

For example,
```
messages = [
    {"role": "description", "content": "Capitals"},
    {"role": "input", "content": "France"},
    {"role": "output", "content": "Paris"},
    {"role": "input", "content": "Japan"},
]
```

To apply the template to the messages, use the tokenizer:
```
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
print(formatted_prompt)  # start_generation adds the <start_of_turn> token to condition the model for generation
```
Output:
```
Capitals
France
<start_of_turn>Paris<end_of_turn>
Japan
<start_of_turn>
```
The data for the model to emulate / generate is wrapped in `<start_of_turn>` / `<end_of_turn>` tokens.
Descriptions and inputs are not wrapped in anything, so do not expect the model to generate them.
Messages are separated by newlines.
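For reference, the same string can be built by hand. The sketch below assumes only the formatting rules just described; the tokenizer's `messages_to_text` remains the source of truth.
```
# Minimal sketch of the template described above. Assumption: this mirrors
# tokenizer.messages_to_text for the three roles; use the tokenizer in practice.
def naive_messages_to_text(messages, start_generation=True):
    parts = []
    for message in messages:
        if message["role"] == "output":
            # outputs are wrapped in the turn tokens
            parts.append(f"<start_of_turn>{message['content']}<end_of_turn>")
        else:
            # "description" and "input" are left unwrapped
            parts.append(message["content"])
    text = "\n".join(parts)
    if start_generation:
        # condition the model to start generating an output
        text += "\n<start_of_turn>"
    return text
```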

In training, loss is calculated ONLY on the output tokens and the `<end_of_turn>` token. Thus, the model is designed to generate / predict probabilities only after `<start_of_turn>` and until `<end_of_turn>`; everything else is out of distribution for the model and not recommended.
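The custom tokenizer is noted above as handling loss, so the following is only an illustration of that masking idea, not the repository's implementation: labels keep the output tokens and `<end_of_turn>`, and every other position is set to `-100` so the language-modeling loss ignores it.
```
import torch

# Sketch of the loss mask described above (assumption: the released tokenizer may
# already provide this; shown only to illustrate which positions receive loss).
def build_labels(input_ids, tokenizer):
    # input_ids: 1-D tensor of token ids for one formatted example
    start_id = tokenizer.convert_tokens_to_ids("<start_of_turn>")
    end_id = tokenizer.convert_tokens_to_ids("<end_of_turn>")
    labels = torch.full_like(input_ids, -100)  # -100 = ignored by the LM loss
    in_output = False
    for i, token_id in enumerate(input_ids.tolist()):
        if token_id == start_id:
            in_output = True          # loss starts after <start_of_turn>
        elif token_id == end_id:
            labels[i] = token_id      # keep loss on <end_of_turn> itself
            in_output = False
        elif in_output:
            labels[i] = token_id      # keep loss on output tokens
    return labels
```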

Once you have the formatted text, you can tokenize as normal:
```
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
```

Let's look at what the model does. In this case there is a single correct answer, so we can inspect the model's probabilities for the first token after `<start_of_turn>`:
```
import torch
with torch.no_grad():
    output = model(**inputs)
logits = output.logits[0, -1, :]
probs = torch.nn.functional.softmax(logits, dim=-1)
top_probs, top_indices = torch.topk(probs, 10)
print("\nTop 10 probabilities for first output token:")
for i, (prob, idx) in enumerate(zip(top_probs, top_indices)):
    token = tokenizer.decode(idx)
    print(f"{i+1:2d}. '{token}' -> {prob.item():.4f}")
```

Output:
```
Top 10 probabilities for first output token:
 1. 'Tokyo' -> 0.9792
 2. 'Tok' -> 0.0059
 3. '東京' -> 0.0028
 4. 'Ky' -> 0.0019
 5. 'T' -> 0.0011
 6. ' Tokyo' -> 0.0011
 7. 'Osaka' -> 0.0009
 8. 'To' -> 0.0008
 9. 'Toy' -> 0.0006
10. 'Nag' -> 0.0005
```

Great! Almost all of the probability mass is on the correct answer, Tokyo.
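Since an answer can span several tokens, you may also want the probability of a complete answer rather than just its first token. A rough sketch (assuming, as in training, that the answer is terminated by `<end_of_turn>`, and that `convert_tokens_to_ids` behaves as on standard Hugging Face tokenizers):
```
# Sketch: log-probability of a full candidate answer under the model.
end_id = tokenizer.convert_tokens_to_ids("<end_of_turn>")

def answer_logprob(candidate):
    prompt_ids = tokenizer(formatted_prompt, return_tensors="pt").input_ids.to(model.device)
    answer_ids = tokenizer(candidate, add_special_tokens=False, return_tensors="pt").input_ids.to(model.device)
    answer_ids = torch.cat([answer_ids, torch.tensor([[end_id]], device=model.device)], dim=1)
    full_ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = model(full_ids).logits
    # log-probs of each answer token, conditioned on everything before it
    log_probs = torch.log_softmax(logits[0, prompt_ids.shape[1] - 1:-1], dim=-1)
    token_logprobs = log_probs.gather(1, answer_ids[0].unsqueeze(1)).squeeze(1)
    return token_logprobs.sum().item()

print(answer_logprob("Tokyo"))  # scores the answer shown above, including <end_of_turn>
```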

Let's try an example with many reasonable choices / a harder-to-describe distribution. For example, say that I'm interested in modeling "board games that I like". I may be hard-pressed to describe exactly what it is that I like about games, but I can provide a few examples easily.

```
messages = [
    {"role": "output", "content": "Dune: Imperium"},
    {"role": "output", "content": "Acquire"},
    {"role": "output", "content": "Catan"},
    {"role": "output", "content": "Tigris and Euphrates"},
    {"role": "output", "content": "Brass: Birmingham"},
]
```

Given these example outputs, the model will try to generate more outputs like them.
```
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=10, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
```

Outputs:
```
7 Wonders: Duel
Dominion
Terraforming Mars
Agricola: All Creatures Big and Small
```
Not too bad!
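One note on decoding: diverse generations like these rely on sampling. If your generation config defaults to greedy decoding, the four identical prompts in the batch will all produce the same completion, so it may be safer to pass sampling options explicitly (the temperature of 1.0 here is an assumption meant to keep the model's own distribution, not a tuned recommendation):
```
# Explicit sampling so the batched, identical prompts yield diverse generations.
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=True,    # sample instead of greedy decoding
    temperature=1.0,   # assumption: keep the model's calibrated distribution as-is
    stop_strings=["<end_of_turn>"],
    tokenizer=tokenizer,
)
```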

You can also specify just the description:
Input:
```
messages = [
    {"role": "description", "content": "Metaphors about life."},
]

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=10, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
    print()
```

Finally, let's look at a synthetic data generation task. For example, maybe we want to generate situations to do social reasoning over, along with whether or not they are awkward. When there are multiple variables to condition on or generate, the model is accustomed to JSON format.

Input:
```
import json
messages = [
    {"role": "description", "content": "Situations to do social reasoning over, along with whether or not it is an awkward situation."},
    {"role": "output", "content": json.dumps({
        "situation": "You're at a party and you realize that your shirt is on backwards.",
        "is_awkward": True,
    })},
    {"role": "output", "content": json.dumps({
        "situation": "While at work, your boss commends you on a job well done.",
        "is_awkward": False,
    })},
    {"role": "output", "content": json.dumps({
        "situation": "Realizing you forgot to bring your passport to the airport.",
        "is_awkward": True,
    })},
]

formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
```
Output:
```
{"situation": "Noticing that you've been holding the wrong end of your toothbrush.", "is_awkward": true}

{"situation": "Your teacher gives you extra credit for participating in class.", "is_awkward": false}

{"situation": "Your friend doesn't recognize you because of your new look.", "is_awkward": true}

{"situation": "Your friend comes over and asks you to help them move some furniture, but you have other plans.", "is_awkward": true}
```
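Because each generation is a JSON string, it can be parsed back into a Python dict for downstream filtering. A small sketch that simply skips any generation that is not valid JSON:
```
# Collect generations as structured records, dropping malformed JSON.
records = []
for i in range(n_gens):
    text = tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:],
                            skip_special_tokens=True).strip()
    try:
        records.append(json.loads(text))
    except json.JSONDecodeError:
        pass  # skip generations that are not valid JSON
print(records)
```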