MossaabDev committed
Commit 01a0f1e · verified · 1 parent: 264c407

Upload 11 files

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
+unigram.json filter=lfs diff=lfs merge=lfs -text
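The two new rules mark tokenizer.json and unigram.json as Git LFS objects, so a plain `git checkout` yields pointer files rather than the real contents. A minimal sketch of fetching the resolved files with `huggingface_hub`; the repo id `MossaabDev/<model-repo>` is a hypothetical placeholder, since the commit page does not show the full repository path:

```python
from huggingface_hub import hf_hub_download

# hf_hub_download resolves LFS pointers and returns a local path to the real file.
# NOTE: "MossaabDev/<model-repo>" is a hypothetical placeholder id.
tokenizer_path = hf_hub_download("MossaabDev/<model-repo>", filename="tokenizer.json")
unigram_path = hf_hub_download("MossaabDev/<model-repo>", filename="unigram.json")
```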
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 - generated_from_trainer
 - dataset_size:193
 - loss:CosineSimilarityLoss
-base_model: sentence-transformers/all-MiniLM-L6-v2
+base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
 widget:
 - source_sentence: I saw someone killing a cat in the street, I felt helpless and
     sad
@@ -55,16 +55,16 @@ pipeline_tag: sentence-similarity
 library_name: sentence-transformers
 ---
 
-# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
 
-This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
 
 ## Model Details
 
 ### Model Description
 - **Model Type:** Sentence Transformer
-- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
-- **Maximum Sequence Length:** 256 tokens
+- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 86741b4e3f5cb7765a600d3a3d55a0f6a6cb443d -->
+- **Maximum Sequence Length:** 128 tokens
 - **Output Dimensionality:** 384 dimensions
 - **Similarity Function:** Cosine Similarity
 <!-- - **Training Dataset:** Unknown -->
@@ -81,9 +81,8 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [s
 
 ```
 SentenceTransformer(
-  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
+  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
-  (2): Normalize()
 )
 ```
 
@@ -116,9 +115,9 @@ print(embeddings.shape)
 # Get the similarity scores for the embeddings
 similarities = model.similarity(embeddings, embeddings)
 print(similarities)
-# tensor([[1.0000, 0.9817, 0.9870],
-#         [0.9817, 1.0000, 0.9923],
-#         [0.9870, 0.9923, 1.0000]])
+# tensor([[1.0000, 0.9072, 0.9224],
+#         [0.9072, 1.0000, 0.9847],
+#         [0.9224, 0.9847, 1.0000]])
 ```
 
 <!--
@@ -169,7 +168,7 @@ You can finetune this model on your own dataset.
 |         | sentence_0 | sentence_1 | label |
 |:--------|:-----------|:-----------|:------|
 | type    | string     | string     | float |
-| details | <ul><li>min: 5 tokens</li><li>mean: 11.27 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 35.92 tokens</li><li>max: 121 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.9</li><li>max: 1.0</li></ul> |
+| details | <ul><li>min: 5 tokens</li><li>mean: 12.27 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 39.33 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.9</li><li>max: 1.0</li></ul> |
 * Samples:
 | sentence_0 | sentence_1 | label |
 |:-----------|:-----------|:------|
@@ -186,7 +185,7 @@ You can finetune this model on your own dataset.
 ### Training Hyperparameters
 #### Non-Default Hyperparameters
 
-- `num_train_epochs`: 20
+- `num_train_epochs`: 10
 - `multi_dataset_batch_sampler`: round_robin
 
 #### All Hyperparameters
@@ -209,7 +208,7 @@ You can finetune this model on your own dataset.
 - `adam_beta2`: 0.999
 - `adam_epsilon`: 1e-08
 - `max_grad_norm`: 1
-- `num_train_epochs`: 20
+- `num_train_epochs`: 10
 - `max_steps`: -1
 - `lr_scheduler_type`: linear
 - `lr_scheduler_kwargs`: {}
@@ -314,12 +313,6 @@ You can finetune this model on your own dataset.
 
 </details>
 
-### Training Logs
-| Epoch | Step | Training Loss |
-|:-----:|:----:|:-------------:|
-| 20.0  | 500  | 0.0455        |
-
-
 ### Framework Versions
 - Python: 3.12.7
 - Sentence Transformers: 5.1.1
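Every README change follows from one edit: the base model was swapped from the English-only all-MiniLM-L6-v2 to the multilingual paraphrase-multilingual-MiniLM-L12-v2 (same 384-dim output, shorter 128-token window, no final Normalize module). A minimal usage sketch mirroring the card's own example; the repo id is a hypothetical placeholder, since the commit page does not show the full model path:

```python
from sentence_transformers import SentenceTransformer

# Hypothetical placeholder id -- substitute the actual repository path.
model = SentenceTransformer("MossaabDev/<model-repo>")

sentences = [
    "I saw someone killing a cat in the street, I felt helpless and sad",
    "Witnessing cruelty to an animal left me feeling powerless",  # illustrative paraphrase
]
embeddings = model.encode(sentences)
print(embeddings.shape)                          # (2, 384)
print(model.similarity(embeddings, embeddings))  # cosine-similarity matrix
```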
config.json CHANGED
@@ -15,11 +15,11 @@
   "max_position_embeddings": 512,
   "model_type": "bert",
   "num_attention_heads": 12,
-  "num_hidden_layers": 6,
+  "num_hidden_layers": 12,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "transformers_version": "4.57.1",
   "type_vocab_size": 2,
   "use_cache": true,
-  "vocab_size": 30522
+  "vocab_size": 250037
 }
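The config edits match the new base architecture: 12 hidden layers instead of 6, and the 250,037-entry multilingual vocabulary instead of BERT's 30,522. A quick sanity check, with the same hypothetical placeholder repo id:

```python
from transformers import AutoConfig

# Hypothetical placeholder id.
cfg = AutoConfig.from_pretrained("MossaabDev/<model-repo>")
assert cfg.num_hidden_layers == 12   # was 6 in all-MiniLM-L6-v2
assert cfg.vocab_size == 250037      # was 30522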
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:54909b627c8aa3dc3e717e8a772b491f32716f6e655aa81ea45192fc35bd5717
-size 90864192
+oid sha256:256726c15cf1c3568a8a75d936a5c205b017384461e1768d5bb26a4721ac1c38
+size 470637416
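The size jump is consistent with the architecture change. Assuming fp32 weights (4 bytes per parameter), the byte counts divide out to the known parameter counts of the two base models:

```python
# Rough sanity check, assuming fp32 storage (4 bytes per parameter).
print(90_864_192 / 4)    # 22,716,048  ~= 22.7M params (all-MiniLM-L6-v2)
print(470_637_416 / 4)   # 117,659,354 ~= 117.7M params (paraphrase-multilingual-MiniLM-L12-v2,
                         # dominated by the 250037 x 384 embedding table)
```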
modules.json CHANGED
@@ -10,11 +10,5 @@
     "name": "1",
     "path": "1_Pooling",
     "type": "sentence_transformers.models.Pooling"
-  },
-  {
-    "idx": 2,
-    "name": "2",
-    "path": "2_Normalize",
-    "type": "sentence_transformers.models.Normalize"
   }
 ]
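Dropping the `2_Normalize` module means `encode()` no longer returns unit-length vectors. Cosine similarity is unaffected, but downstream code that relied on normalized embeddings (e.g. dot-product retrieval) should now normalize explicitly. A sketch with the same hypothetical placeholder id:

```python
from sentence_transformers import SentenceTransformer

# Hypothetical placeholder id.
model = SentenceTransformer("MossaabDev/<model-repo>")

# Restores unit-length outputs now that the Normalize module is gone.
embeddings = model.encode(["example sentence"], normalize_embeddings=True)
```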
sentence_bert_config.json CHANGED
@@ -1,4 +1,4 @@
 {
-  "max_seq_length": 256,
+  "max_seq_length": 128,
   "do_lower_case": false
 }
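`max_seq_length` drops from 256 to 128, the default window of the new base model. The limit is readable on the loaded model (hypothetical placeholder id again):

```python
from sentence_transformers import SentenceTransformer

# Hypothetical placeholder id.
model = SentenceTransformer("MossaabDev/<model-repo>")

# Inputs longer than this are truncated at encode time.
print(model.max_seq_length)  # 128 after this commit (was 256)
```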
special_tokens_map.json CHANGED
@@ -1,34 +1,48 @@
 {
+  "bos_token": {
+    "content": "<s>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
   "cls_token": {
-    "content": "[CLS]",
+    "content": "<s>",
     "lstrip": false,
     "normalized": false,
     "rstrip": false,
     "single_word": false
   },
+  "eos_token": {
+    "content": "</s>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
   "mask_token": {
-    "content": "[MASK]",
-    "lstrip": false,
+    "content": "<mask>",
+    "lstrip": true,
     "normalized": false,
     "rstrip": false,
     "single_word": false
   },
   "pad_token": {
-    "content": "[PAD]",
+    "content": "<pad>",
     "lstrip": false,
     "normalized": false,
     "rstrip": false,
     "single_word": false
   },
   "sep_token": {
-    "content": "[SEP]",
+    "content": "</s>",
     "lstrip": false,
     "normalized": false,
     "rstrip": false,
     "single_word": false
   },
   "unk_token": {
-    "content": "[UNK]",
+    "content": "<unk>",
     "lstrip": false,
     "normalized": false,
     "rstrip": false,
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -1,65 +1,65 @@
 {
   "added_tokens_decoder": {
     "0": {
-      "content": "[PAD]",
+      "content": "<s>",
       "lstrip": false,
       "normalized": false,
       "rstrip": false,
       "single_word": false,
       "special": true
     },
-    "100": {
-      "content": "[UNK]",
+    "1": {
+      "content": "<pad>",
       "lstrip": false,
       "normalized": false,
       "rstrip": false,
       "single_word": false,
       "special": true
     },
-    "101": {
-      "content": "[CLS]",
+    "2": {
+      "content": "</s>",
       "lstrip": false,
       "normalized": false,
       "rstrip": false,
       "single_word": false,
       "special": true
     },
-    "102": {
-      "content": "[SEP]",
+    "3": {
+      "content": "<unk>",
       "lstrip": false,
       "normalized": false,
       "rstrip": false,
       "single_word": false,
       "special": true
     },
-    "103": {
-      "content": "[MASK]",
-      "lstrip": false,
+    "250001": {
+      "content": "<mask>",
+      "lstrip": true,
       "normalized": false,
       "rstrip": false,
       "single_word": false,
       "special": true
     }
   },
+  "bos_token": "<s>",
   "clean_up_tokenization_spaces": false,
-  "cls_token": "[CLS]",
-  "do_basic_tokenize": true,
+  "cls_token": "<s>",
   "do_lower_case": true,
+  "eos_token": "</s>",
   "extra_special_tokens": {},
-  "mask_token": "[MASK]",
+  "mask_token": "<mask>",
   "max_length": 128,
-  "model_max_length": 256,
-  "never_split": null,
+  "model_max_length": 128,
   "pad_to_multiple_of": null,
-  "pad_token": "[PAD]",
+  "pad_token": "<pad>",
   "pad_token_type_id": 0,
   "padding_side": "right",
-  "sep_token": "[SEP]",
+  "sep_token": "</s>",
   "stride": 0,
   "strip_accents": null,
   "tokenize_chinese_chars": true,
   "tokenizer_class": "BertTokenizer",
   "truncation_side": "right",
   "truncation_strategy": "longest_first",
-  "unk_token": "[UNK]"
+  "unk_token": "<unk>"
 }
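The `added_tokens_decoder` is remapped from BERT's ids (0, 100-103) to the new layout (0-3, plus 250001 for `<mask>`), and `model_max_length` falls to 128 in line with sentence_bert_config.json. A sketch verifying the id layout (hypothetical placeholder id):

```python
from transformers import AutoTokenizer

# Hypothetical placeholder id.
tok = AutoTokenizer.from_pretrained("MossaabDev/<model-repo>")
for token in ["<s>", "<pad>", "</s>", "<unk>", "<mask>"]:
    print(token, tok.convert_tokens_to_ids(token))
# Expected ids per this commit: 0, 1, 2, 3, 250001
```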
unigram.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da145b5e7700ae40f16691ec32a0b1fdc1ee3298db22a31ea55f57a966c4a65d
+size 14763260