Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +68 -54
README.md CHANGED
@@ -1,55 +1,69 @@
- ---
- base_model:
- - Qwen/Qwen2.5-7B-Instruct
- - Qwen/Qwen2.5-7B-Instruct
- tags:
- - merge
- - mergekit
- - lazymergekit
- - Qwen/Qwen2.5-7B-Instruct
- ---
-
- # Qwen2.5-mini-Instruct
-
- Qwen2.5-mini-Instruct is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
- * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
-
- ## 🧩 Configuration
-
- ```yaml
- dtype: bfloat16
- merge_method: passthrough
- slices:
- - sources:
-   - layer_range: [0, 12]
-     model: Qwen/Qwen2.5-7B-Instruct
- - sources:
-   - layer_range: [18, 28]
-     model: Qwen/Qwen2.5-7B-Instruct
- ```
-
- ## 💻 Usage
-
- ```python
- !pip install -qU transformers accelerate
-
- from transformers import AutoTokenizer
- import transformers
- import torch
-
- model = "win10/Qwen2.5-mini-Instruct"
- messages = [{"role": "user", "content": "What is a large language model?"}]
-
- tokenizer = AutoTokenizer.from_pretrained(model)
- prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- pipeline = transformers.pipeline(
-     "text-generation",
-     model=model,
-     torch_dtype=torch.float16,
-     device_map="auto",
- )
-
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
- print(outputs[0]["generated_text"])
+ ---
+ base_model:
+ - Qwen/Qwen2.5-7B-Instruct
+ - Qwen/Qwen2.5-7B-Instruct
+ tags:
+ - merge
+ - mergekit
+ - lazymergekit
+ - Qwen/Qwen2.5-7B-Instruct
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+
+ # Qwen2.5-mini-Instruct
+
+ Qwen2.5-mini-Instruct is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+ * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
+ * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
+
+ ## 🧩 Configuration
+
+ ```yaml
+ dtype: bfloat16
+ merge_method: passthrough
+ slices:
+ - sources:
+   - layer_range: [0, 12]
+     model: Qwen/Qwen2.5-7B-Instruct
+ - sources:
+   - layer_range: [18, 28]
+     model: Qwen/Qwen2.5-7B-Instruct
+ ```
+
+ ## 💻 Usage
+
+ ```python
+ !pip install -qU transformers accelerate
+
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "win10/Qwen2.5-mini-Instruct"
+ messages = [{"role": "user", "content": "What is a large language model?"}]
+
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
  ```
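
As context for the unchanged mergekit configuration in this diff: a passthrough merge simply stacks the listed layer slices in order, so the merged model's depth follows directly from the YAML ranges. A minimal sketch in plain Python, with the ranges hard-coded from the config above and treated as end-exclusive (how mergekit interprets `layer_range`):

```python
# Layer slices copied from the mergekit config in the README.
# Each range is [start, end), i.e. end-exclusive.
slices = [(0, 12), (18, 28)]

# Passthrough stacks the slices in order, so the merged depth
# is simply the sum of the slice widths.
merged_layers = sum(end - start for start, end in slices)
print(merged_layers)  # 12 + 10 = 22
```

Since Qwen2.5-7B-Instruct has 28 decoder layers, this keeps layers 0–11 and 18–27 and drops the six layers in between, which is what makes the result a "mini" variant of the 7B model.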