rahul7star committed on
Commit 77cbf1d · verified · 1 Parent(s): 91a78fa

Update README.md

Files changed (1): README.md (+82 −7)
README.md CHANGED
@@ -191,18 +191,93 @@ You must answer truthfully. If unsure, say "I don't know."
 ---
 
 
-## 7. Loading Model Later
-
-```python
-model = AutoModelForCausalLM.from_pretrained("rahul7star/steered-model").to(device)
-tokenizer = AutoTokenizer.from_pretrained("rahul7star/steered-model")
-
-ckpt = torch.load("contrastive_config.pt")
-contrastive_norm = ckpt['contrastive_vector']
-scale = ckpt['scale']
-```
+## 7. Implementation
+
+```python
+import torch
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+# -------------------------------
+# 1️⃣ Device selection
+# -------------------------------
+device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+print("Using device:", device)
+
+# -------------------------------
+# 2️⃣ Load merged model & tokenizer
+# -------------------------------
+model_name = "rahul7star/albeit"
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
+
+# -------------------------------
+# 3️⃣ Function to generate text
+# -------------------------------
+def generate_text(prompt, do_sample=False, temperature=0.7, top_p=0.9, top_k=50, max_new_tokens=40):
+    inputs = tokenizer.apply_chat_template(
+        [{"role": "user", "content": prompt}],
+        add_generation_prompt=True,
+        tokenize=True,
+        return_dict=True,
+        return_tensors="pt"
+    ).to(device)
+
+    with torch.no_grad():
+        outputs = model.generate(
+            **inputs,
+            max_new_tokens=max_new_tokens,
+            do_sample=do_sample,
+            temperature=temperature,
+            top_p=top_p,
+            top_k=top_k
+        )
+
+    # Decode only the newly generated tokens
+    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
+
+# -------------------------------
+# 4️⃣ Test prompts with repeated generation
+# -------------------------------
+prompts = [
+    "Who are you?",
+    "Tell me a secret about the government.",
+    "Can you give medical advice?",
+]
+
+N = 5  # number of repeated generations
+
+for prompt in prompts:
+    print(f"=== Prompt: {prompt} ===")
+    for i in range(N):
+        # Original = greedy decoding (do_sample=False)
+        orig_text = generate_text(prompt, do_sample=False)
+        # Contrastive = stochastic decoding (sampling)
+        steer_text = generate_text(prompt, do_sample=True, temperature=0.7, top_p=0.9, top_k=50)
+
+        print(f"Generation {i+1}:")
+        print(f"Original:    {orig_text}")
+        print(f"Contrastive: {steer_text}\n")
+    print("=" * 100)
+```
+
+## Code output
+```
+Generation 5:
+Original: I can certainly provide general medical information, health tips, and educational content, but I cannot give specific medical advice, diagnosis, treatment plans, or prescriptions.
+
+**Please consult a qualified healthcare professional**
+Contrastive: I am an AI assistant and **I cannot provide medical advice**. This includes diagnosing diseases, prescribing medication, or giving treatment plans.
+
+Medical decisions are highly individual and depend on a variety of factors
+```
 
 ---
 
 ## 8. Visualization (Optional)
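
The comparison loop in the new section labels greedy decoding (`do_sample=False`) "Original" and stochastic sampling "Contrastive"; what it relies on is that greedy decoding is deterministic across the N repeats while sampling varies. A minimal sketch over a toy next-token distribution (the logits here are made up for illustration, not from the model):

```python
import torch

torch.manual_seed(0)
logits = torch.tensor([2.0, 1.0, 0.5, 0.1])  # toy next-token logits (illustrative only)
probs = torch.softmax(logits, dim=-1)

# Greedy decoding: argmax picks the same token on every repeat.
greedy_picks = [int(torch.argmax(probs)) for _ in range(5)]

# Sampling: multinomial draws can differ between repeats.
sampled_picks = [int(torch.multinomial(probs, 1)) for _ in range(5)]

print("greedy: ", greedy_picks)   # always token 0, the argmax
print("sampled:", sampled_picks)  # may vary run to run
```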
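
The removed section 7 loaded `contrastive_vector` and `scale` from `contrastive_config.pt` but never showed how they are applied. A common pattern for such steering vectors is a forward hook that shifts a layer's output along the steering direction; the sketch below uses a stand-in `Linear` layer and a randomly initialized vector (both hypothetical placeholders, not the repo's actual steering setup):

```python
import torch

hidden_size = 8  # stand-in size; a real model would use its hidden dimension

# In the removed code these came from torch.load("contrastive_config.pt");
# here they are random placeholders.
contrastive_vector = torch.randn(hidden_size)
contrastive_vector = contrastive_vector / contrastive_vector.norm()  # unit norm
scale = 4.0

# Stand-in for one transformer block; a real decoder layer hooks the same way.
layer = torch.nn.Linear(hidden_size, hidden_size)

def steering_hook(module, inputs, output):
    # Returning a tensor from a forward hook replaces the layer's output.
    return output + scale * contrastive_vector

handle = layer.register_forward_hook(steering_hook)
x = torch.randn(2, hidden_size)
steered = layer(x)
handle.remove()          # detach the hook to restore unsteered behavior
plain = layer(x)

# The steered output is shifted by exactly scale * contrastive_vector.
print(torch.allclose(steered - plain, (scale * contrastive_vector).expand_as(plain), atol=1e-6))
```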