PingVortex committed
Commit aa365be · verified · 1 parent: fbe96e9

Update README.md

Files changed (1): README.md (+48, −3)
README.md CHANGED
@@ -1,3 +1,48 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ pipeline_tag: text-generation
+ ---
+ # VLM 1 K3
+
+ - Last model of the VLM series (**V**ortex **L**anguage **M**odel)
+ - K stands for **K**nowledge (higher is better)
+
+ ## Use the model:
+ - Open [Google Colab](https://colab.research.google.com/)
+ - Create a new notebook
+ - Paste this code into a cell:
+ ```python
+ !pip install transformers
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ model_id = "PingVortex/VLM-1-K3"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ print("VLM Chat\nType 'exit' to quit")
+
+ while True:
+     user_input = input("You: ")
+     if user_input.strip().lower() == "exit":
+         break
+
+     # Tokenize the prompt and keep only the last 1024 tokens of context
+     input_ids = tokenizer(user_input, return_tensors="pt").input_ids
+     input_ids = input_ids[:, -1024:]
+
+     with torch.no_grad():
+         output = model.generate(
+             input_ids,
+             max_new_tokens=50,
+             do_sample=True,
+             temperature=0.7,
+             top_p=0.9,
+             pad_token_id=tokenizer.eos_token_id
+         )
+
+     # Decode only the newly generated tokens, not the echoed prompt
+     new_tokens = output[0][input_ids.shape[1]:]
+     response = tokenizer.decode(new_tokens, skip_special_tokens=True)
+
+     print("VLM:", response.strip())
+ ```
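The chat loop in the diff above leans on two slicing tricks: `input_ids[:, -1024:]` truncates the prompt to the most recent 1024 tokens, and `output[0][input_ids.shape[1]:]` drops the echoed prompt so only the completion is decoded. A minimal, model-free sketch of both on plain Python lists (the token ID values here are invented for illustration):

```python
# Sketch of the two slicing tricks used in the chat loop, shown on
# plain Python lists so no model download is needed.

CONTEXT_LIMIT = 1024  # same window as input_ids[:, -1024:]

prompt_ids = list(range(1500))           # pretend token IDs for a long prompt
truncated = prompt_ids[-CONTEXT_LIMIT:]  # keep only the most recent 1024 tokens
assert len(truncated) == 1024
assert truncated[0] == 1500 - 1024       # oldest surviving token ID

# generate() returns prompt + completion; the README slices off the prompt:
completion = [9001, 9002, 9003]          # pretend newly generated IDs
output = truncated + completion          # stands in for output[0]
new_tokens = output[len(truncated):]     # like output[0][input_ids.shape[1]:]
assert new_tokens == completion
```

Decoding only `new_tokens` is what keeps the bot from repeating the user's message back, since causal LMs return the full input sequence plus the continuation.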