thangvip committed on
Commit a50cb31 · verified · 1 Parent(s): 2e1b33b

Update README.md

Files changed (1)
  1. README.md +21 -0
README.md CHANGED
@@ -37,6 +37,27 @@ This is the model card of a 🤗 transformers model that has been pushed on the
  ## Uses

  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the model and tokenizer (device_map="auto" requires `accelerate`)
+ model = AutoModelForCausalLM.from_pretrained("thangvip/vilord-1.8B-instruct", device_map="auto", cache_dir="./cache").eval()
+ tokenizer = AutoTokenizer.from_pretrained("thangvip/vilord-1.8B-instruct", cache_dir="./cache")
+
+ messages = [
+     {"role": "system", "content": "bạn là trợ lý AI hữu ích"},        # "you are a helpful AI assistant"
+     {"role": "user", "content": "Nước nào có diện tích lớn nhất?"},   # "Which country has the largest area?"
+ ]
+ # Render the chat template to a prompt string, then tokenize and move the tensors to the GPU
+ text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ print(text)
+ inputs = tokenizer(text, return_tensors="pt")
+ inputs = {k: v.to("cuda") for k, v in inputs.items()}
+
+ # Low-temperature nucleus sampling; `stop_strings` requires passing the tokenizer to generate()
+ outputs = model.generate(
+     **inputs,
+     tokenizer=tokenizer,
+     max_new_tokens=256,
+     do_sample=True,
+     top_p=0.95,
+     temperature=0.1,
+     repetition_penalty=1.2,
+     eos_token_id=tokenizer.eos_token_id,
+     stop_strings=["<|im_end|>"],
+ )
+ print(tokenizer.decode(outputs[0]))
+ ```

  ### Direct Use
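Note that `tokenizer.decode(outputs[0])` in the added example prints the prompt together with the completion, since a decoder-only model's output sequence begins with the input tokens. A minimal sketch of slicing the prompt off first, using illustrative stand-in token IDs rather than real model output:

```python
# outputs[0] from a decoder-only model is: prompt token IDs followed by newly generated IDs.
# The IDs below are hypothetical stand-ins for a real tokenizer/model.
prompt_ids = [101, 202, 303]            # stand-in for inputs["input_ids"][0]
output_ids = [101, 202, 303, 404, 505]  # stand-in for outputs[0]

# Skip the prompt portion, as one would with
# tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
new_ids = output_ids[len(prompt_ids):]
print(new_ids)  # [404, 505]
```

With real tensors the same idea is `tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)`, which returns only the assistant's reply.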