BaoLocTown committed · verified
Commit c2af354 · 1 Parent(s): 32e1a05

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
+ colbert_linear.pt filter=lfs diff=lfs merge=lfs -text
+ sentencepiece.bpe.model filter=lfs diff=lfs merge=lfs -text
+ training_args.bin filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,220 @@
+
+ ---
+ datasets:
+ - GreenNode/GreenNode-Table-Markdown-Retrieval
+ language:
+ - vi
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ widget: []
+ metrics:
+ - InfoNCE
+ license: cc-by-4.0
+ ---
+
+ # SentenceTransformer
+
+ This is a [sentence-transformers](https://www.SBERT.net) model trained on the GreenNode/GreenNode-Table-Markdown-Retrieval dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Maximum Sequence Length:** 8192 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:** GreenNode/GreenNode-Table-Markdown-Retrieval
+ - **Language:** Vietnamese
+ - **License:** cc-by-4.0
+
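+ These properties can be verified from the loaded model (a minimal sketch; the repository id is an assumption taken from the `_name_or_path` field in this upload's `config.json`):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Assumed repository id (from config.json); substitute the actual Hub id if it differs.
+ model = SentenceTransformer("GreenNode/GreenNode-Embedding-Large-VN-Mixed-V1")
+
+ print(model.get_max_seq_length())                # 8192
+ print(model.get_sentence_embedding_dimension())  # 1024
+ ```
+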
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
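+
+ For illustration, the CLS-pooling-plus-normalization pipeline above can be reproduced with plain 🤗 Transformers (a minimal sketch, assuming the weights load as `XLMRobertaModel`; the repository id is an assumption taken from `config.json`):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from transformers import AutoModel, AutoTokenizer
+
+ repo_id = "GreenNode/GreenNode-Embedding-Large-VN-Mixed-V1"  # assumed id from config.json
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModel.from_pretrained(repo_id)
+
+ inputs = tokenizer(["The weather is lovely today."], padding=True, truncation=True,
+                    max_length=8192, return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ cls_embedding = outputs.last_hidden_state[:, 0]     # (1) Pooling: CLS token
+ embedding = F.normalize(cls_embedding, p=2, dim=1)  # (2) Normalize: unit-length vectors
+ print(embedding.shape)  # torch.Size([1, 1024])
+ ```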
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub (replace "sentence_transformers_model_id" with this repository's id)
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+ 'The weather is lovely today.',
+ "It's so sunny outside!",
+ 'He drove to the stadium.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
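+
+ Since the model targets Vietnamese markdown-table retrieval, a typical pattern is to embed a query and a set of candidate table passages and rank the passages by cosine similarity. A minimal sketch (the query, passages, and repository id below are illustrative assumptions, not values from this repository):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("GreenNode/GreenNode-Embedding-Large-VN-Mixed-V1")  # assumed id
+
+ query = "Doanh thu quý 1 của công ty là bao nhiêu?"  # "What was the company's Q1 revenue?"
+ passages = [
+     "| Quý | Doanh thu |\n|-----|-----------|\n| Q1 | 120 tỷ |\n| Q2 | 135 tỷ |",
+     "| Năm | Nhân sự |\n|-----|---------|\n| 2023 | 250 |",
+ ]
+
+ query_emb = model.encode([query])
+ passage_embs = model.encode(passages)
+
+ # Embeddings are L2-normalized by the Normalize() module, so cosine similarity gives the ranking.
+ scores = model.similarity(query_emb, passage_embs)  # shape [1, len(passages)]
+ best = scores[0].argmax().item()
+ print(best, scores[0].tolist())
+ ```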
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ## Evaluation
+
+ ### Table: Performance comparison of various models on GreenNodeTableRetrieval
+ Dataset: [GreenNode/GreenNode-Table-Markdown-Retrieval](https://huggingface.co/datasets/GreenNode/GreenNode-Table-Markdown-Retrieval-VN)
+
+ | Model Name | MAP@5 ↑ | MRR@5 ↑ | NDCG@5 ↑ | Recall@5 ↑ | Mean ↑ |
+ |--------------------------------------------|--------:|--------:|---------:|-----------:|-------:|
+ | **Multilingual Embedding models** | | | | | |
+ | me5_small | 33.75 | 33.75 | 35.68 | 41.49 | 36.17 |
+ | me5_large | 38.16 | 38.16 | 40.27 | 46.62 | 40.80 |
+ | M3-Embedding | 36.52 | 36.52 | 38.60 | 44.84 | 39.12 |
+ | OpenAI-embedding-v3 | 30.61 | 30.61 | 32.57 | 38.46 | 33.06 |
+ | **Vietnamese Embedding models (Prior Work)**| | | | | |
+ | halong-embedding | 32.15 | 32.15 | 34.13 | 40.09 | 34.63 |
+ | sup-SimCSE-VietNamese-phobert_base | 10.90 | 10.90 | 12.03 | 15.41 | 12.31 |
+ | vietnamese-bi-encoder | 13.61 | 13.61 | 14.63 | 17.68 | 14.89 |
+ | **GreenNode-Embedding (Our Work)** | | | | | |
+ | *M3-GN-VN* | _41.85_ | _41.85_ | _44.15_ | _57.05_ | _46.23_ |
+ | **M3-GN-VN-Mixed** | **42.08** | **42.08** | **44.33** | **51.06** | **44.89** |
+
+ ### Table: Performance comparison of various models on ZacLegalTextRetrieval
+ Dataset: [GreenNode/zalo-ai-legal-text-retrieval-vn](https://huggingface.co/datasets/GreenNode/zalo-ai-legal-text-retrieval-vn)
+
+ | Model Name | MAP@5 ↑ | MRR@5 ↑ | NDCG@5 ↑ | Recall@5 ↑ | Mean ↑ |
+ |--------------------------------------------|--------:|--------:|---------:|-----------:|-------:|
+ | **Multilingual Embedding models** | | | | | |
+ | me5_small | 54.68 | 54.37 | 58.32 | 69.16 | 59.13 |
+ | me5_large | 60.14 | 59.62 | 64.17 | 76.02 | 64.99 |
+ | *M3-Embedding* | _69.34_ | _68.96_ | _73.70_ | _86.68_ | _74.67_ |
+ | OpenAI-embedding-v3 | 38.68 | 38.80 | 41.53 | 49.94 | 41.74 |
+ | **Vietnamese Embedding models (Prior Work)**| | | | | |
+ | halong-embedding | 52.57 | 52.28 | 56.64 | 68.72 | 57.55 |
+ | sup-SimCSE-VietNamese-phobert_base | 25.15 | 25.07 | 27.81 | 35.79 | 28.46 |
+ | vietnamese-bi-encoder | 54.88 | 54.47 | 59.10 | 79.51 | 61.99 |
+ | **GreenNode-Embedding (Our Work)** | | | | | |
+ | M3-GN-VN | 65.03 | 64.80 | 69.19 | 81.66 | 70.17 |
+ | **M3-GN-VN-Mixed** | **69.75** | **69.28** | **74.01** | **86.74** | **74.95** |
+
+ ### Table: Performance comparison of various models on VieQuADRetrieval
+ Dataset: [taidng/UIT-ViQuAD2.0](https://huggingface.co/datasets/taidng/UIT-ViQuAD2.0)
+
+ | Model Name | MAP@5 ↑ | MRR@5 ↑ | NDCG@5 ↑ | Recall@5 ↑ | Mean ↑ |
+ |--------------------------------------------|--------:|--------:|---------:|-----------:|-------:|
+ | **Multilingual Embedding models** | | | | | |
+ | me5_small | 40.42 | 69.21 | 50.05 | 50.71 | 52.60 |
+ | me5_large | 44.18 | 67.81 | 53.04 | 55.86 | 55.22 |
+ | *M3-Embedding* | _44.08_ | _72.28_ | _54.07_ | _56.01_ | _56.61_ |
+ | OpenAI-embedding-v3 | 32.39 | 53.97 | 40.48 | 43.02 | 42.47 |
+ | **Vietnamese Embedding models (Prior Work)**| | | | | |
+ | halong-embedding | 39.42 | 62.31 | 48.63 | 52.73 | 50.77 |
+ | sup-SimCSE-VietNamese-phobert_base | 20.45 | 35.99 | 26.73 | 29.59 | 28.19 |
+ | vietnamese-bi-encoder | 31.89 | 54.62 | 40.26 | 42.53 | 42.33 |
+ | **GreenNode-Embedding (Our Work)** | | | | | |
+ | M3-GN-VN | 42.85 | 71.98 | 52.90 | 54.25 | 55.50 |
+ | **M3-GN-VN-Mixed** | **44.20** | **72.64** | **54.30** | **56.30** | **56.86** |
+
+ ### Table: Performance comparison of various models on GreenNodeTableRetrieval (Hit Rate)
+
+ | Model Name | Hit Rate@1 ↑ | Hit Rate@5 ↑ | Hit Rate@10 ↑ | Hit Rate@20 ↑ |
+ |------------------------------------------------|--------------|--------------|---------------|---------------|
+ | **Multilingual Embedding models** | | | | |
+ | me5_small | 38.99 | 53.37 | 59.28 | 65.09 |
+ | me5_large | 43.99 | 59.74 | 65.74 | 71.59 |
+ | bge-m3 | 42.15 | 57.00 | 63.05 | 68.96 |
+ | OpenAI-embedding-v3 | - | - | - | - |
+ | **Vietnamese Embedding models (Prior Work)** | | | | |
+ | halong-embedding | 37.22 | 52.49 | 58.57 | 64.64 |
+ | sup-SimCSE-VietNamese-phobert_base | 14.00 | 24.74 | 30.32 | 36.44 |
+ | vietnamese-bi-encoder | 16.89 | 25.94 | 30.50 | 35.70 |
+ | **GreenNode-Embedding (Our Work)** | | | | |
+ | **M3-GN-VN** | **48.31** | **64.60** | **70.83** | **76.46** |
+ | *M3-GN-VN-Mixed* | _47.94_ | _64.24_ | _70.43_ | _76.14_ |
+
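+ For reference, the ranking metrics reported above can be computed from each query's ranked list of retrieved ids and its set of relevant ids roughly as follows (a minimal sketch of the standard binary-relevance definitions, not the exact evaluation script used for these tables):
+
+ ```python
+ import math
+
+ def recall_at_k(ranked, relevant, k=5):
+     return len(set(ranked[:k]) & relevant) / len(relevant)
+
+ def mrr_at_k(ranked, relevant, k=5):
+     for rank, doc in enumerate(ranked[:k], start=1):
+         if doc in relevant:
+             return 1.0 / rank
+     return 0.0
+
+ def ndcg_at_k(ranked, relevant, k=5):
+     dcg = sum(1.0 / math.log2(r + 1) for r, doc in enumerate(ranked[:k], start=1) if doc in relevant)
+     idcg = sum(1.0 / math.log2(r + 1) for r in range(1, min(len(relevant), k) + 1))
+     return dcg / idcg if idcg > 0 else 0.0
+
+ def hit_rate_at_k(ranked, relevant, k=5):
+     return 1.0 if set(ranked[:k]) & relevant else 0.0
+
+ # Example: a query whose single relevant table is ranked second.
+ print(recall_at_k(["t7", "t3", "t9"], {"t3"}))  # 1.0
+ print(mrr_at_k(["t7", "t3", "t9"], {"t3"}))     # 0.5
+ print(ndcg_at_k(["t7", "t3", "t9"], {"t3"}))    # ~0.63
+ ```
+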
+ ### Framework Versions
+
+ - Python: 3.10.14
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.42.4
+ - PyTorch: 2.3.1
+ - Accelerate: 0.33.0
+ - Datasets: 2.20.0
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "_name_or_path": "GreenNode/GreenNode-Embedding-Large-VN-Mixed-V1",
+ "architectures": [
+ "XLMRobertaModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bos_token_id": 0,
+ "classifier_dropout": null,
+ "eos_token_id": 2,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-05,
+ "max_position_embeddings": 8194,
+ "model_type": "xlm-roberta",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "output_past": true,
+ "pad_token_id": 1,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.42.4",
+ "type_vocab_size": 1,
+ "use_cache": true,
+ "vocab_size": 250002
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.0.1",
+ "transformers": "4.42.4",
+ "pytorch": "2.3.1"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0fc784e75e3d2a5e66751bfd59863638976149a9ba0fe7b9439df21ac5ad4799
+ size 2271064456
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 8192,
+ "do_lower_case": false
+ }
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:249df0778f236f6ece390de0de746838ef25b9d6954b68c2ee71249e0a9d8fd4
+ size 17082799
tokenizer_config.json ADDED
@@ -0,0 +1,55 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "3": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "250001": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "<s>",
+ "eos_token": "</s>",
+ "mask_token": "<mask>",
+ "model_max_length": 8192,
+ "pad_token": "<pad>",
+ "sep_token": "</s>",
+ "sp_model_kwargs": {},
+ "tokenizer_class": "XLMRobertaTokenizer",
+ "unk_token": "<unk>"
+ }