---
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-classification
tags:
- pretrained
license: apache-2.0
library_name: sentence-transformers
base_model:
- Qwen/Qwen2.5-7B
---

# Qwen2.5-7B-embed-base

## Model Details
Qwen2.5 is a series of decoder-only language models released in multiple sizes. For each size, we release both the base language model and the aligned chat model. The models are based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, and related improvements, together with an improved tokenizer that adapts to multiple natural languages and code.

## Requirements
The code for Qwen2.5 is included in recent versions of Hugging Face `transformers`; we advise installing `transformers>=4.37.0`, or you may encounter the following error:
```
KeyError: 'Qwen2.5'
```
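
A minimal install sketch covering the libraries used in the examples below (the exact `torch` build you need depends on your environment):

```shell
pip install "transformers>=4.37.0" sentence-transformers torch
```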

## Usage
The `lm_head` layer of this model has been removed, so the model outputs hidden states that can be pooled into embeddings. Out of the box it will not perform well on embedding tasks; it needs further fine-tuning, as demonstrated by [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct).

## Inference
```python
from sentence_transformers import SentenceTransformer
import torch

# 1. Load a pretrained Sentence Transformer model
model = SentenceTransformer("ssmits/Qwen2.5-7B-embed-base")  # use device="cpu" if your GPU has <= 24 GB VRAM

# The sentences to encode
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]

# 2. Calculate embeddings by calling model.encode()
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 3584)

# 3. Calculate the embedding similarities
# model.encode returns a numpy array; convert it to a torch tensor
embeddings_tensor = torch.tensor(embeddings)

# Use torch to compute the pairwise cosine similarity matrix
similarities = torch.nn.functional.cosine_similarity(
    embeddings_tensor.unsqueeze(0), embeddings_tensor.unsqueeze(1), dim=2
)

print(similarities)
# tensor([[1.0000, 0.8608, 0.6609],
#         [0.8608, 1.0000, 0.7046],
#         [0.6609, 0.7046, 1.0000]])
```

Note: in my tests the model used more than 24 GB of VRAM (it did not fit on an RTX 4090), so an A100 or A6000 would be required for GPU inference.
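
The similarity computation above does not depend on the model; the same broadcasting pattern can be checked on toy vectors (illustrative values only):

```python
import torch

# Toy "embeddings": three 4-dimensional vectors standing in for model output
embeddings_tensor = torch.tensor([
    [1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

# Broadcasting shape (1, n, d) against (n, 1, d) yields the full (n, n) matrix
similarities = torch.nn.functional.cosine_similarity(
    embeddings_tensor.unsqueeze(0), embeddings_tensor.unsqueeze(1), dim=2
)

print(similarities)
# tensor([[1.0000, 0.7071, 0.0000],
#         [0.7071, 1.0000, 0.0000],
#         [0.0000, 0.0000, 1.0000]])
```

The diagonal is 1 (each vector compared with itself), and the matrix is symmetric, as expected of a cosine similarity matrix.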

## Inference (HuggingFace Transformers)
Without sentence-transformers, you can use the model as follows: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized token embeddings.
76
+
77
+ ```python
78
+ from transformers import AutoTokenizer, AutoModel
79
+ import torch
80
+
81
+ #Mean Pooling - Take attention mask into account for correct averaging
82
+ def mean_pooling(model_output, attention_mask):
83
+ token_embeddings = model_output[0] #First element of model_output contains all token embeddings
84
+ input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
85
+ return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
86
+
87
+ # Sentences we want sentence embeddings for
88
+ sentences = ['This is an example sentence', 'Each sentence is converted']
89
+
90
+ # Load model from HuggingFace Hub
91
+ tokenizer = AutoTokenizer.from_pretrained('ssmits/Qwen2.5-7B-embed-base')
92
+ model = AutoModel.from_pretrained('ssmits/Qwen2.5-7B-embed-base') # device = "cpu" when <= 24 GB VRAM
93
+
94
+ # Tokenize sentences
95
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
96
+
97
+ # Compute token embeddings
98
+ with torch.no_grad():
99
+ model_output = model(**encoded_input)
100
+
101
+ # Perform pooling. In this case, mean pooling.
102
+ sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
103
+
104
+ print("Sentence embeddings:")
105
+ print(sentence_embeddings)
106
+ ```
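
The `mean_pooling` helper can be sanity-checked without loading the model, using toy token embeddings and an attention mask (values chosen purely for illustration):

```python
import torch

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# One "sentence" of three 2-dimensional tokens; the last token is padding
token_embeddings = torch.tensor([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
attention_mask = torch.tensor([[1, 1, 0]])

pooled = mean_pooling((token_embeddings,), attention_mask)
print(pooled)
# tensor([[2., 3.]]) - the padded token is excluded from the average
```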

### How to enable Multi-GPU
```python
from transformers import AutoModel
from torch.nn import DataParallel

model = AutoModel.from_pretrained("ssmits/Qwen2.5-7B-embed-base")
# Wrap each top-level submodule in DataParallel so its forward pass is replicated across GPUs
for module_key, module in model._modules.items():
    model._modules[module_key] = DataParallel(module)
```
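
The wrapping loop above is model-agnostic; here is a minimal sketch of the same pattern on a toy module (a hypothetical stand-in for the loaded transformer). On a machine without GPUs, `DataParallel` simply falls back to running the wrapped module directly:

```python
import torch
from torch import nn
from torch.nn import DataParallel

# Hypothetical stand-in for a loaded model: a module with named top-level submodules
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Same pattern as above: wrap every top-level submodule in DataParallel
for module_key, module in model._modules.items():
    model._modules[module_key] = DataParallel(module)

out = model(torch.randn(3, 4))
print(out.shape)
# torch.Size([3, 2])
```

Note that this replicates each submodule's forward pass independently; for large models, sharding with `device_map` via `accelerate` is another common option.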