PingVortex committed on
Commit ac37b9b · verified · 1 Parent(s): ac81859

Update README.md

Files changed (1): README.md (+47 -3)
README.md CHANGED
@@ -1,3 +1,47 @@
----
-license: mit
----
+---
+license: mit
+pipeline_tag: text-generation
+---
+# VLM 1
+
+- First model of the VLM series (**V**ortex **L**anguage **M**odel)
+
+## To talk with it:
+- Open [Google Colab](https://colab.research.google.com/)
+- Create a new notebook
+- Paste this code into a cell:
+```python
+!pip install transformers
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+model_id = "PingVortex/VLM-1"
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id)
+
+print("VLM 1 Chat\nType 'exit' to quit")
+
+while True:
+    user_input = input("You: ")
+    if user_input.strip().lower() == "exit":
+        break
+
+    # Keep only the most recent 1024 tokens so the prompt fits the context window
+    input_ids = tokenizer(user_input, return_tensors="pt").input_ids
+    input_ids = input_ids[:, -1024:]
+
+    with torch.no_grad():
+        output = model.generate(
+            input_ids,
+            max_new_tokens=50,
+            do_sample=True,
+            temperature=0.7,
+            top_p=0.9,
+            pad_token_id=tokenizer.eos_token_id,
+        )
+
+    # generate() returns prompt + continuation; slice off the prompt tokens
+    new_tokens = output[0][input_ids.shape[1]:]
+    response = tokenizer.decode(new_tokens, skip_special_tokens=True)
+
+    print("VLM:", response.strip())
+```
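Two lines in the README's chat loop do the token bookkeeping: `input_ids[:, -1024:]` trims the prompt to the most recent 1024 tokens, and `output[0][input_ids.shape[1]:]` drops the prompt from the generated sequence, since `generate` returns prompt and continuation together. A minimal sketch of both slices using plain Python lists as stand-in token ids (the id values are hypothetical, not real vocabulary entries):

```python
# Stand-in token ids: a prompt longer than the 1024-token window.
prompt_ids = list(range(1, 1501))

# Mirrors input_ids[:, -1024:] — keep only the most recent 1024 tokens.
truncated = prompt_ids[-1024:]
assert len(truncated) == 1024

# generate() returns prompt + continuation in one sequence;
# slicing by the prompt length leaves just the new tokens,
# mirroring output[0][input_ids.shape[1]:] in the chat loop.
continuation = [9001, 9002, 9003]
generated = truncated + continuation
new_tokens = generated[len(truncated):]
print(new_tokens)  # → [9001, 9002, 9003]
```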