radoslavralev committed
Commit 5d04c75 · verified · 1 Parent(s): 787854f

Add new SentenceTransformer model

Files changed (1)
  1. README.md +82 -66
README.md CHANGED
@@ -9,32 +9,36 @@ tags:
   - loss:MultipleNegativesRankingLoss
   base_model: prajjwal1/bert-small
   widget:
- - source_sentence: How do I polish my English skills?
    sentences:
-   - How can we polish English skills?
-   - Why should I move to Israel as a Jew?
-   - What are vitamins responsible for?
- - source_sentence: Can I use the Kozuka Gothic Pro font as a font-face on my web site?
    sentences:
-   - Can I use the Kozuka Gothic Pro font as a font-face on my web site?
-   - Why are Google, Facebook, YouTube and other social networking sites banned in
-     China?
-   - What font is used in Bloomberg Terminal?
- - source_sentence: Is Quora the best Q&A site?
    sentences:
-   - What was the best Quora question ever?
-   - Is Quora the best inquiry site?
-   - Where do I buy Oway hair products online?
- - source_sentence: How can I customize my walking speed on Google Maps?
    sentences:
-   - How do I bring back Google maps icon in my home screen?
-   - How many pages are there in all the Harry Potter books combined?
-   - How can I customize my walking speed on Google Maps?
- - source_sentence: DId something exist before the Big Bang?
    sentences:
-   - How can I improve my memory problem?
-   - Where can I buy Fairy Tail Manga?
-   - Is there a scientific name for what existed before the Big Bang?
   pipeline_tag: sentence-similarity
   library_name: sentence-transformers
   ---
@@ -85,12 +89,12 @@ Then you can load this model and run inference.
   from sentence_transformers import SentenceTransformer
 
   # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")
   # Run inference
   sentences = [
-     'DId something exist before the Big Bang?',
-     'Is there a scientific name for what existed before the Big Bang?',
-     'Where can I buy Fairy Tail Manga?',
   ]
   embeddings = model.encode(sentences)
   print(embeddings.shape)
@@ -99,9 +103,9 @@ print(embeddings.shape)
   # Get the similarity scores for the embeddings
   similarities = model.similarity(embeddings, embeddings)
   print(similarities)
- # tensor([[ 1.0000,  0.7596, -0.0398],
- #         [ 0.7596,  1.0000, -0.0308],
- #         [-0.0398, -0.0308,  1.0000]])
   ```
 
   <!--
@@ -147,18 +151,18 @@ You can finetune this model on your own dataset.
   #### Unnamed Dataset
 
   * Size: 100,000 training samples
- * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
   * Approximate statistics based on the first 1000 samples:
-   |         | sentence_0 | sentence_1 | sentence_2 |
-   |:--------|:-----------|:-----------|:-----------|
-   | type    | string | string | string |
-   | details | <ul><li>min: 3 tokens</li><li>mean: 15.53 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 15.5 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.87 tokens</li><li>max: 128 tokens</li></ul> |
   * Samples:
-   | sentence_0 | sentence_1 | sentence_2 |
-   |:-----------|:-----------|:-----------|
-   | <code>Is there visitor entry facility in Jaipur airport. How much is the ticket?</code> | <code>Is there visitor entry facility in Jaipur airport. How much is the ticket?</code> | <code>How much is the airport tax in bogota?</code> |
-   | <code>Which concept is more important: good planning or hard work?</code> | <code>Which concept is more important: good planning or hard work?</code> | <code>What is important in life: luck or hard work?</code> |
-   | <code>What is the most efficient way to make money?</code> | <code>How can I make my money make money?</code> | <code>What can one learn about Quantum Mechanics in 10 minutes?</code> |
   * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
   ```json
   {
@@ -171,10 +175,20 @@ You can finetune this model on your own dataset.
   ### Training Hyperparameters
   #### Non-Default Hyperparameters
 
- - `per_device_train_batch_size`: 64
- - `per_device_eval_batch_size`: 64
   - `fp16`: True
- - `multi_dataset_batch_sampler`: round_robin
 
   #### All Hyperparameters
   <details><summary>Click to expand</summary>
@@ -183,24 +197,24 @@ You can finetune this model on your own dataset.
   - `do_predict`: False
   - `eval_strategy`: no
   - `prediction_loss_only`: True
- - `per_device_train_batch_size`: 64
- - `per_device_eval_batch_size`: 64
   - `per_gpu_train_batch_size`: None
   - `per_gpu_eval_batch_size`: None
   - `gradient_accumulation_steps`: 1
   - `eval_accumulation_steps`: None
   - `torch_empty_cache_steps`: None
- - `learning_rate`: 5e-05
- - `weight_decay`: 0.0
   - `adam_beta1`: 0.9
   - `adam_beta2`: 0.999
   - `adam_epsilon`: 1e-08
- - `max_grad_norm`: 1
- - `num_train_epochs`: 3
- - `max_steps`: -1
   - `lr_scheduler_type`: linear
   - `lr_scheduler_kwargs`: {}
- - `warmup_ratio`: 0.0
   - `warmup_steps`: 0
   - `log_level`: passive
   - `log_level_replica`: warning
@@ -228,9 +242,9 @@ You can finetune this model on your own dataset.
   - `tpu_num_cores`: None
   - `tpu_metrics_debug`: False
   - `debug`: []
- - `dataloader_drop_last`: False
- - `dataloader_num_workers`: 0
- - `dataloader_prefetch_factor`: None
   - `past_index`: -1
   - `disable_tqdm`: False
   - `remove_unused_columns`: True
@@ -245,23 +259,23 @@ You can finetune this model on your own dataset.
   - `parallelism_config`: None
   - `deepspeed`: None
   - `label_smoothing_factor`: 0.0
- - `optim`: adamw_torch_fused
   - `optim_args`: None
   - `adafactor`: False
   - `group_by_length`: False
   - `length_column_name`: length
   - `project`: huggingface
   - `trackio_space_id`: trackio
- - `ddp_find_unused_parameters`: None
   - `ddp_bucket_cap_mb`: None
   - `ddp_broadcast_buffers`: False
   - `dataloader_pin_memory`: True
   - `dataloader_persistent_workers`: False
   - `skip_memory_metrics`: True
   - `use_legacy_prediction_loop`: False
- - `push_to_hub`: False
   - `resume_from_checkpoint`: None
- - `hub_model_id`: None
   - `hub_strategy`: every_save
   - `hub_private_repo`: None
   - `hub_always_push`: False
@@ -295,7 +309,7 @@ You can finetune this model on your own dataset.
   - `average_tokens_across_devices`: True
   - `prompts`: None
   - `batch_sampler`: batch_sampler
- - `multi_dataset_batch_sampler`: round_robin
   - `router_mapping`: {}
   - `learning_rate_mapping`: {}
 
@@ -304,15 +318,17 @@ You can finetune this model on your own dataset.
   ### Training Logs
   | Epoch | Step | Training Loss |
   |:------:|:----:|:-------------:|
- | 0.3199 | 500 | 0.2284 |
- | 0.6398 | 1000 | 0.0571 |
- | 0.9597 | 1500 | 0.0486 |
- | 1.2796 | 2000 | 0.0378 |
- | 1.5995 | 2500 | 0.0367 |
- | 1.9194 | 3000 | 0.0338 |
- | 2.2393 | 3500 | 0.0327 |
- | 2.5592 | 4000 | 0.0285 |
- | 2.8791 | 4500 | 0.0285 |
 
 
   ### Framework Versions
 
  - loss:MultipleNegativesRankingLoss
  base_model: prajjwal1/bert-small
  widget:
+ - source_sentence: How would it effect our economy if we ban Chinese products in our
+     country.?
    sentences:
+   - How would it effect our economy if we ban Chinese products in our country.?
+   - Which cities in India is suitable for part time teaching job where one can prepare
+     for civil services exam?
+   - What are the best one-liners?
+ - source_sentence: Why do we need Java programming?
    sentences:
+   - What is Java? What do I need it for?
+   - Why are Instagram filters free?
+   - Can I still get funding to study in UCL (Computer Vision Masters), if I graduated
+     with a BSc in Computer Science with a 2:1 and I have 3 years or web developent
+     expirience?
+ - source_sentence: How can I get a job in Dubai if I am living in U.S?
    sentences:
+   - My Xiaomi Redmi 2 all of a sudden got heated up and then turned off. Now its not
+     charging. What should I do?
+   - How can one get a job in Dubai?
+   - Which is the best series to watch after FRIENDS ?
+ - source_sentence: What is the myth behind Mona Lisa smile?
    sentences:
+   - Do people understand the message in Mona Lisa Smile?
+   - How does the $9 computer work?
+   - What is the myth behind Mona Lisa smile?
+ - source_sentence: What is the font used in the desktop version of Instagram for comments?
    sentences:
+   - What is capital of china?
+   - What font is used for “Turon”?
+   - What is the font used in the desktop version of Instagram for comments?
  pipeline_tag: sentence-similarity
  library_name: sentence-transformers
  ---
 
  from sentence_transformers import SentenceTransformer
 
  # Download from the 🤗 Hub
+ model = SentenceTransformer("redis/model-a-baseline")
  # Run inference
  sentences = [
+     'What is the font used in the desktop version of Instagram for comments?',
+     'What is the font used in the desktop version of Instagram for comments?',
+     'What font is used for “Turon”?',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
 
  # Get the similarity scores for the embeddings
  similarities = model.similarity(embeddings, embeddings)
  print(similarities)
+ # tensor([[1.0000, 1.0000, 0.3513],
+ #         [1.0000, 1.0000, 0.3513],
+ #         [0.3513, 0.3513, 1.0000]])
  ```
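For Sentence Transformers models, `model.similarity` defaults to cosine similarity, which is why the matrix above has a unit diagonal and why the two identical example sentences also score 1.0 against each other. A minimal numpy sketch of that computation, with random stand-in vectors in place of real `model.encode` output (the 512-dim width matches bert-small, but any width works):

```python
import numpy as np

# Stand-in embeddings: 3 "sentences" x 512 dims; real values would come
# from model.encode(sentences).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 512))

# Cosine similarity: normalize each row to unit length, then take all
# pairwise dot products.
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
unit = embeddings / norms
similarities = unit @ unit.T

# Like the tensor above: symmetric, with ones on the diagonal.
assert np.allclose(np.diag(similarities), 1.0)
assert np.allclose(similarities, similarities.T)
```

Identical input rows produce identical unit rows, hence the off-diagonal 1.0000 entries in the printed tensor.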
 
  <!--
 
  #### Unnamed Dataset
 
  * Size: 100,000 training samples
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
  * Approximate statistics based on the first 1000 samples:
+   |         | anchor | positive | negative |
+   |:--------|:-------|:---------|:---------|
+   | type    | string | string | string |
+   | details | <ul><li>min: 6 tokens</li><li>mean: 15.51 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.34 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 17.07 tokens</li><li>max: 128 tokens</li></ul> |
  * Samples:
+   | anchor | positive | negative |
+   |:-------|:---------|:---------|
+   | <code>How do you trace a fake phone number?</code> | <code>How do you trace a fake phone number?</code> | <code>How do I trace an internet connection phone number from middle East?</code> |
+   | <code>How do I draw cartoon monsters?</code> | <code>How do I draw cartoon monsters?</code> | <code>How do you draw cartoons?</code> |
+   | <code>Do you believe in an afterlife? If so, why?</code> | <code>Do you believe that there's an afterlife?</code> | <code>When a guy looks at you with dreamy, hooded eyes, does he realize he's doing this?</code> |
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
 
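The loss parameters JSON is truncated in this hunk; in the library the defaults are cosine similarity with a scale of 20, which is what the sketch below assumes. `MultipleNegativesRankingLoss` treats each (anchor, positive) pair's similarity as the correct class among all positives in the batch, so every other in-batch positive acts as a negative. A numpy sketch of the idea (the real implementation runs in torch):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives loss: anchor i's true pair is positive i (the
    diagonal); all other positives in the batch serve as negatives."""
    # Cosine similarity between every anchor and every positive, scaled.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)  # shape (batch, batch)
    # Cross-entropy with labels 0..batch-1, i.e. the diagonal entries.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Anchors paired with nearby positives score far lower loss than random pairs.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 512))
positives = anchors + 0.1 * rng.normal(size=(8, 512))
assert mnr_loss(anchors, positives) < mnr_loss(anchors, rng.normal(size=(8, 512)))
```

This is also why the dataset's explicit `negative` column helps: it adds one hard negative per row on top of the in-batch ones.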
  ### Training Hyperparameters
  #### Non-Default Hyperparameters
 
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 256
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.001
+ - `max_steps`: 1170
+ - `warmup_ratio`: 0.1
  - `fp16`: True
+ - `dataloader_drop_last`: True
+ - `dataloader_num_workers`: 1
+ - `dataloader_prefetch_factor`: 1
+ - `optim`: adamw_torch
+ - `ddp_find_unused_parameters`: False
+ - `push_to_hub`: True
+ - `hub_model_id`: redis/model-a-baseline
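Assuming the usual trainer behavior where `warmup_steps: 0` defers to `warmup_ratio` (0.1 × 1170 ≈ 117 steps), the `linear` scheduler with these settings ramps up to the 2e-05 peak and then decays linearly to zero at `max_steps`. A minimal sketch, not the library's implementation:

```python
def lr_at_step(step, peak_lr=2e-05, max_steps=1170, warmup_ratio=0.1):
    """Linear warmup then linear decay to zero, using the hyperparameters
    listed above. Warmup length is assumed to be warmup_ratio * max_steps
    (117 steps here) since warmup_steps is 0."""
    warmup_steps = round(warmup_ratio * max_steps)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * ((max_steps - step) / (max_steps - warmup_steps))

assert lr_at_step(0) == 0.0
assert lr_at_step(117) == 2e-05   # peak right after warmup
assert lr_at_step(1170) == 0.0    # fully decayed at max_steps
```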
 
  #### All Hyperparameters
  <details><summary>Click to expand</summary>
 
  - `do_predict`: False
  - `eval_strategy`: no
  - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 256
  - `per_gpu_train_batch_size`: None
  - `per_gpu_eval_batch_size`: None
  - `gradient_accumulation_steps`: 1
  - `eval_accumulation_steps`: None
  - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.001
  - `adam_beta1`: 0.9
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 3.0
+ - `max_steps`: 1170
  - `lr_scheduler_type`: linear
  - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
  - `warmup_steps`: 0
  - `log_level`: passive
  - `log_level_replica`: warning
 
  - `tpu_num_cores`: None
  - `tpu_metrics_debug`: False
  - `debug`: []
+ - `dataloader_drop_last`: True
+ - `dataloader_num_workers`: 1
+ - `dataloader_prefetch_factor`: 1
  - `past_index`: -1
  - `disable_tqdm`: False
  - `remove_unused_columns`: True
 
  - `parallelism_config`: None
  - `deepspeed`: None
  - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
  - `optim_args`: None
  - `adafactor`: False
  - `group_by_length`: False
  - `length_column_name`: length
  - `project`: huggingface
  - `trackio_space_id`: trackio
+ - `ddp_find_unused_parameters`: False
  - `ddp_bucket_cap_mb`: None
  - `ddp_broadcast_buffers`: False
  - `dataloader_pin_memory`: True
  - `dataloader_persistent_workers`: False
  - `skip_memory_metrics`: True
  - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: True
  - `resume_from_checkpoint`: None
+ - `hub_model_id`: redis/model-a-baseline
  - `hub_strategy`: every_save
  - `hub_private_repo`: None
  - `hub_always_push`: False
 
  - `average_tokens_across_devices`: True
  - `prompts`: None
  - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
  - `router_mapping`: {}
  - `learning_rate_mapping`: {}
 
 
  ### Training Logs
  | Epoch | Step | Training Loss |
  |:------:|:----:|:-------------:|
+ | 0.2564 | 100 | 0.8076 |
+ | 0.5128 | 200 | 0.1403 |
+ | 0.7692 | 300 | 0.1169 |
+ | 1.0256 | 400 | 0.1085 |
+ | 1.2821 | 500 | 0.0938 |
+ | 1.5385 | 600 | 0.09 |
+ | 1.7949 | 700 | 0.0898 |
+ | 2.0513 | 800 | 0.0826 |
+ | 2.3077 | 900 | 0.0797 |
+ | 2.5641 | 1000 | 0.081 |
+ | 2.8205 | 1100 | 0.0803 |
 
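The epoch column in the log is consistent with the batch settings in this commit: 100,000 samples at batch size 256 with `dataloader_drop_last: True` gives 390 steps per epoch, and `max_steps: 1170` is exactly three such epochs. A quick arithmetic check:

```python
samples, batch_size = 100_000, 256

# drop_last discards the final partial batch of 160 samples
steps_per_epoch = samples // batch_size
assert steps_per_epoch == 390

# Logged epoch = step / steps_per_epoch
assert round(100 / steps_per_epoch, 4) == 0.2564   # first log row
assert round(1100 / steps_per_epoch, 4) == 2.8205  # last log row

# max_steps: 1170 covers num_train_epochs (3.0) full epochs
assert 3 * steps_per_epoch == 1170
```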
  ### Framework Versions