Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +61 -49
README.md CHANGED
@@ -1,50 +1,62 @@
- ---
- library_name: transformers
- tags:
- - LoRA
- license: apache-2.0
- datasets:
- - TIGER-Lab/MathInstruct
- language:
- - en
- base_model:
- - Qwen/Qwen2.5-7B-Instruct
- pipeline_tag: text-generation
- ---
-
- ![Komodo-Logo](Komodo-Logo.jpg)
-
- Komodo is a Qwen 2.5-7B-Instruct-FineTuned model on TIGER-Lab/MathInstruct dataset to increase math performance of the base model.
-
- This model is 4bit-quantized. You should import it 8bit if you want to use 7B parameters!
-
- Suggested Usage:
- ```py
- tokenizer = AutoTokenizer.from_pretrained("suayptalha/Komodo-7B-Instruct")
- model = AutoModelForCausalLM.from_pretrained("suayptalha/Komodo-7B-Instruct")
-
- example_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
-
- ### Instruction:
- {}
-
- ### Input:
- {}
-
- ### Response:
- {}"""
-
- inputs = tokenizer(
- [
- example_prompt.format(
- "", #Your question here
- "", #Given input here
- "", #Output (for training)
- )
- ], return_tensors = "pt").to("cuda")
-
- outputs = model.generate(**inputs, max_new_tokens = 512, use_cache = True)
- tokenizer.batch_decode(outputs)
- ```
-
+ ---
+ library_name: transformers
+ tags:
+ - LoRA
+ license: apache-2.0
+ datasets:
+ - TIGER-Lab/MathInstruct
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ base_model:
+ - Qwen/Qwen2.5-7B-Instruct
+ pipeline_tag: text-generation
+ ---
+
+ ![Komodo-Logo](Komodo-Logo.jpg)
+
+ Komodo is a Qwen2.5-7B-Instruct model fine-tuned on the TIGER-Lab/MathInstruct dataset to improve the base model's math performance.
+
+ This model is 4-bit quantized. Load it in 8-bit if you want to use the full 7B parameters!
+
+ Suggested Usage:
+ ```py
+ tokenizer = AutoTokenizer.from_pretrained("suayptalha/Komodo-7B-Instruct")
+ model = AutoModelForCausalLM.from_pretrained("suayptalha/Komodo-7B-Instruct")
+
+ example_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}"""
+
+ inputs = tokenizer(
+ [
+ example_prompt.format(
+ "", # your question here
+ "", # given input here
+ "", # output (for training)
+ )
+ ], return_tensors="pt").to("cuda")
+
+ outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True)
+ tokenizer.batch_decode(outputs)
+ ```
+
  <a href="https://www.buymeacoffee.com/suayptalha" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
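For reviewers who want to check the prompt format without downloading the checkpoint, the Alpaca-style template from the usage snippet can be exercised on its own. A minimal sketch — the `build_prompt` helper is hypothetical, not part of the model card:

```python
# Alpaca-style template, copied from the model card's usage snippet.
ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""


def build_prompt(instruction: str, context: str = "") -> str:
    # Leave the Response slot empty at inference time so the model completes it.
    return ALPACA_TEMPLATE.format(instruction, context, "")


prompt = build_prompt("Solve for x: 2x + 3 = 11.")
print(prompt)
```

At inference only the Instruction (and optionally Input) slots are filled; the Response slot is populated during training.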
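The 4-bit note above can be made explicit with a quantization config rather than relying on defaults. This is an untested configuration sketch, assuming `bitsandbytes` is installed and a CUDA GPU is available; it is not a command from the model card:

```python
# Sketch: loading the checkpoint with explicit quantization settings.
# Assumes the `transformers` and `bitsandbytes` packages plus a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit as shipped; use BitsAndBytesConfig(load_in_8bit=True) instead
# to follow the card's suggestion of running the full 7B weights in 8-bit.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("suayptalha/Komodo-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "suayptalha/Komodo-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```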