Safetensors · cohere

Muhammadreza committed (verified)
Commit 11489c3 · Parent(s): 8d13f96

Update README.md

Files changed (1): README.md (+64 −3)
README.md CHANGED
@@ -1,3 +1,64 @@
- ---
- license: mit
- ---
---
license: mit
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
---

# Hormoz 8B

## Introduction

Hormoz 8B is an 8-billion-parameter multilingual language model released under the MIT license.

## How to run (transformers)

### Install transformers

```shell
pip install transformers --upgrade
```

_Note:_ For better performance, you may also want to install the `accelerate` package.
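
Before running inference, it helps to know what `apply_chat_template` (used in the inference example below) produces: a single prompt string built from the list of chat messages, with an open assistant turn appended when `add_generation_prompt=True`. The actual template is model-specific and ships with the tokenizer; the sketch below uses hypothetical tag names purely to illustrate the idea.

```python
# Illustrative only: real chat templates are model-specific Jinja templates
# bundled with the tokenizer. The <|...|> tag names here are hypothetical.
def render_chat(messages, add_generation_prompt=True):
    """Flatten a list of {role, content} dicts into a single prompt string."""
    parts = [f"<|{m['role']}|>{m['content']}<|end|>" for m in messages]
    if add_generation_prompt:
        # Leave an open assistant turn for the model to complete.
        parts.append("<|assistant|>")
    return "".join(parts)

messages = [{"role": "user", "content": "What is the answer to universe, life and everything?"}]
prompt = render_chat(messages)
print(prompt)
```

`tokenize=True, return_tensors="pt"` in the real call additionally tokenizes this string and returns it as a PyTorch tensor ready for `generate`.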

### Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mann-e/Hormoz-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")

# Build the prompt from the chat messages using the model's chat template.
messages = [{"role": "user", "content": "What is the answer to universe, life and everything?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")

# Sample up to 1024 new tokens from the model.
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=1.0,
)

# Decode the full sequence (prompt plus completion) back to text.
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
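
The call above uses `do_sample=True` with `temperature=1.0`: conceptually, the logits for the next token are divided by the temperature, turned into probabilities with a softmax, and one token is drawn from that distribution. The following is a minimal pure-Python sketch of that idea, a simplification rather than the actual transformers implementation:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token id from raw logits after temperature scaling.

    This mirrors what do_sample=True / temperature=... mean conceptually
    in generate(); it is not the real transformers code path.
    """
    scaled = [l / temperature for l in logits]
    # Softmax with max subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token id according to the probabilities.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs

token_id, probs = sample_next_token([2.0, 1.0, 0.1], temperature=1.0)
```

Lower temperatures sharpen the distribution toward the most likely token (approaching greedy decoding), while `temperature=1.0` samples from the unmodified softmax.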