Notbobjoe committed on
Commit bf3a9de · verified · 1 Parent(s): 2a2fd90

Update README.md

Files changed (1)
  1. README.md +56 -0
README.md CHANGED
@@ -76,6 +76,62 @@ this model likes to speak poetically and sometimes human-like
  - Testing emergent personality in LLMs
  - Interactive storybots or surreal games
 
+ ### Example script
+
+ ~~~python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ def main():
+     model_name = "Notbobjoe/TalkT2-0.1b"
+     print(f"Loading model {model_name}...")
+     tokenizer = AutoTokenizer.from_pretrained(model_name)
+     model = AutoModelForCausalLM.from_pretrained(model_name)
+     model.eval()
+     device = "cuda" if torch.cuda.is_available() else "cpu"
+     model.to(device)
+     print("Model loaded. Start chatting! (type 'exit' to quit)")
+
+     chat_history = ""
+
+     while True:
+         user_input = input("You: ")
+         if user_input.lower() in ["exit", "quit"]:
+             print("Goodbye!")
+             break
+
+         # Add the user's turn to the chat history
+         chat_history += f"You: {user_input}\nTalkT2:"
+
+         # Tokenize the full history as the prompt
+         prompt_tokens = tokenizer.encode(
+             chat_history, return_tensors="pt", truncation=True
+         ).to(device)
+
+         # Generate a response
+         output = model.generate(
+             prompt_tokens,
+             max_new_tokens=128,
+             do_sample=True,
+             temperature=0.4,
+             top_k=50,
+             top_p=0.95,
+             repetition_penalty=1.1,
+             pad_token_id=tokenizer.eos_token_id,
+         )
+
+         # Decode only the newly generated tokens (skip the prompt length)
+         generated_text = tokenizer.decode(
+             output[0][prompt_tokens.shape[-1]:], skip_special_tokens=True
+         )
+
+         print(f"TalkT2: {generated_text.strip()}")
+
+         # Append the model's reply to the chat history
+         chat_history += generated_text + "\n"
+
+ if __name__ == "__main__":
+     main()
+ ~~~
+
  ## ⚠️ Limitations
  - May contradict itself (e.g. “I’m human” → “I’m AI” → “No”)
  - Tends to get cryptic in longer conversations
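
One note on the example script in this commit: `chat_history` grows without bound, so a long session will eventually overflow the model's context window. Below is a minimal sketch of one way to guard against that; it is not part of the commit, and it assumes a GPT-2-style config where the window size is exposed as `model.config.n_positions` (the 1024 fallback is an assumption, not something stated here).

~~~python
# Sketch, not part of the commit: trim the oldest tokens from the running
# history so the prompt plus max_new_tokens fits in the context window.
def trim_history(chat_history, tokenizer, model, max_new_tokens=128):
    # Assumed GPT-2-style attribute; 1024 is a guessed fallback.
    window = getattr(model.config, "n_positions", 1024)
    budget = window - max_new_tokens  # leave room for the reply
    token_ids = tokenizer.encode(chat_history)
    if len(token_ids) <= budget:
        return chat_history
    # Drop the oldest tokens, keeping the most recent turns.
    return tokenizer.decode(token_ids[-budget:], skip_special_tokens=True)
~~~

Inside the loop, calling `chat_history = trim_history(chat_history, tokenizer, model)` just before tokenizing would keep the prompt within budget.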