radoslavralev committed (verified)
Commit 31b8286 · 1 parent: 212ab83

Training in progress, step 1053
README.md CHANGED
@@ -9,38 +9,32 @@ tags:
  - loss:MultipleNegativesRankingLoss
  base_model: prajjwal1/bert-small
  widget:
- - source_sentence: How would it effect our economy if we ban Chinese products in our
- country.?
  sentences:
- - How would it effect our economy if we ban Chinese products in our country.?
- - Which cities in India is suitable for part time teaching job where one can prepare
- for civil services exam?
- - What is the font used in the desktop version of Instagram for comments?
- - source_sentence: Why do we need Java programming?
  sentences:
- - What is Java? What do I need it for?
- - Which is the best website to prepare for the Infosys written test?
- - Can I still get funding to study in UCL (Computer Vision Masters), if I graduated
- with a BSc in Computer Science with a 2:1 and I have 3 years or web developent
- expirience?
- - source_sentence: What is capital of china?
  sentences:
- - How many businesses does Donald Trump own?
- - What is capital of china?
- - Where is the capital of China?
- - source_sentence: My Xiaomi Redmi 2 all of a sudden got heated up and then turned
- off. Now its not charging. What should I do?
  sentences:
- - My Xiaomi Redmi 2 all of a sudden got heated up and then turned off. Now its not
- charging. I should Whatdo?
- - How does the $9 computer work?
- - My Xiaomi Redmi 2 all of a sudden got heated up and then turned off. Now its not
- charging. What should I do?
- - source_sentence: How can I get a job in Dubai if I am living in U.S?
  sentences:
- - What is the myth behind Mona Lisa smile?
- - Which is the best series to watch after FRIENDS ?
- - How can one get a job in Dubai?
  pipeline_tag: sentence-similarity
  library_name: sentence-transformers
  ---
@@ -91,12 +85,12 @@ Then you can load this model and run inference.
  from sentence_transformers import SentenceTransformer
 
  # Download from the 🤗 Hub
- model = SentenceTransformer("redis/model-b-structured")
  # Run inference
  sentences = [
- 'How can I get a job in Dubai if I am living in U.S?',
- 'How can one get a job in Dubai?',
- 'Which is the best series to watch after FRIENDS ?',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
@@ -105,9 +99,9 @@ print(embeddings.shape)
  # Get the similarity scores for the embeddings
  similarities = model.similarity(embeddings, embeddings)
  print(similarities)
- # tensor([[ 1.0000, 0.8904, -0.0302],
- # [ 0.8904, 1.0000, 0.0224],
- # [-0.0302, 0.0224, 1.0000]])
  ```
 
  <!--
@@ -153,18 +147,18 @@ You can finetune this model on your own dataset.
  #### Unnamed Dataset
 
  * Size: 100,000 training samples
- * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
  * Approximate statistics based on the first 1000 samples:
- | | anchor | positive | negative |
- |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
- | type | string | string | string |
- | details | <ul><li>min: 6 tokens</li><li>mean: 15.51 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.34 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.63 tokens</li><li>max: 128 tokens</li></ul> |
  * Samples:
- | anchor | positive | negative |
- |:---------------------------------------------------------|:-------------------------------------------------------|:----------------------------------------------------------------------------------|
- | <code>How do you trace a fake phone number?</code> | <code>How do you trace a fake phone number?</code> | <code>How do I trace an internet connection phone number from middle East?</code> |
- | <code>How do I draw cartoon monsters?</code> | <code>How do I draw cartoon monsters?</code> | <code>How do cartoon monsters draw I?</code> |
- | <code>Do you believe in an afterlife? If so, why?</code> | <code>Do you believe that there's an afterlife?</code> | <code>Do you believe not in an afterlife ? If so , why ?</code> |
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
@@ -177,20 +171,10 @@ You can finetune this model on your own dataset.
  ### Training Hyperparameters
  #### Non-Default Hyperparameters
 
- - `per_device_train_batch_size`: 256
- - `per_device_eval_batch_size`: 256
- - `learning_rate`: 2e-05
- - `weight_decay`: 0.001
- - `max_steps`: 1170
- - `warmup_ratio`: 0.1
  - `fp16`: True
- - `dataloader_drop_last`: True
- - `dataloader_num_workers`: 1
- - `dataloader_prefetch_factor`: 1
- - `optim`: adamw_torch
- - `ddp_find_unused_parameters`: False
- - `push_to_hub`: True
- - `hub_model_id`: redis/model-b-structured
 
  #### All Hyperparameters
  <details><summary>Click to expand</summary>
@@ -199,24 +183,24 @@ You can finetune this model on your own dataset.
  - `do_predict`: False
  - `eval_strategy`: no
  - `prediction_loss_only`: True
- - `per_device_train_batch_size`: 256
- - `per_device_eval_batch_size`: 256
  - `per_gpu_train_batch_size`: None
  - `per_gpu_eval_batch_size`: None
  - `gradient_accumulation_steps`: 1
  - `eval_accumulation_steps`: None
  - `torch_empty_cache_steps`: None
- - `learning_rate`: 2e-05
- - `weight_decay`: 0.001
  - `adam_beta1`: 0.9
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
- - `max_grad_norm`: 1.0
- - `num_train_epochs`: 3.0
- - `max_steps`: 1170
  - `lr_scheduler_type`: linear
  - `lr_scheduler_kwargs`: {}
- - `warmup_ratio`: 0.1
  - `warmup_steps`: 0
  - `log_level`: passive
  - `log_level_replica`: warning
@@ -244,9 +228,9 @@ You can finetune this model on your own dataset.
  - `tpu_num_cores`: None
  - `tpu_metrics_debug`: False
  - `debug`: []
- - `dataloader_drop_last`: True
- - `dataloader_num_workers`: 1
- - `dataloader_prefetch_factor`: 1
  - `past_index`: -1
  - `disable_tqdm`: False
  - `remove_unused_columns`: True
@@ -261,23 +245,23 @@ You can finetune this model on your own dataset.
  - `parallelism_config`: None
  - `deepspeed`: None
  - `label_smoothing_factor`: 0.0
- - `optim`: adamw_torch
  - `optim_args`: None
  - `adafactor`: False
  - `group_by_length`: False
  - `length_column_name`: length
  - `project`: huggingface
  - `trackio_space_id`: trackio
- - `ddp_find_unused_parameters`: False
  - `ddp_bucket_cap_mb`: None
  - `ddp_broadcast_buffers`: False
  - `dataloader_pin_memory`: True
  - `dataloader_persistent_workers`: False
  - `skip_memory_metrics`: True
  - `use_legacy_prediction_loop`: False
- - `push_to_hub`: True
  - `resume_from_checkpoint`: None
- - `hub_model_id`: redis/model-b-structured
  - `hub_strategy`: every_save
  - `hub_private_repo`: None
  - `hub_always_push`: False
@@ -311,7 +295,7 @@ You can finetune this model on your own dataset.
  - `average_tokens_across_devices`: True
  - `prompts`: None
  - `batch_sampler`: batch_sampler
- - `multi_dataset_batch_sampler`: proportional
  - `router_mapping`: {}
  - `learning_rate_mapping`: {}
 
@@ -320,17 +304,15 @@ You can finetune this model on your own dataset.
  ### Training Logs
  | Epoch | Step | Training Loss |
  |:------:|:----:|:-------------:|
- | 0.2564 | 100 | 1.0792 |
- | 0.5128 | 200 | 0.2584 |
- | 0.7692 | 300 | 0.1967 |
- | 1.0256 | 400 | 0.1808 |
- | 1.2821 | 500 | 0.1528 |
- | 1.5385 | 600 | 0.1471 |
- | 1.7949 | 700 | 0.1416 |
- | 2.0513 | 800 | 0.1363 |
- | 2.3077 | 900 | 0.1259 |
- | 2.5641 | 1000 | 0.1219 |
- | 2.8205 | 1100 | 0.1212 |
 
 
  ### Framework Versions
 
  - loss:MultipleNegativesRankingLoss
  base_model: prajjwal1/bert-small
  widget:
+ - source_sentence: How do I calculate IQ?
  sentences:
+ - What is the easiest way to know my IQ?
+ - How do I calculate not IQ ?
+ - What are some creative and innovative business ideas with less investment in India?
+ - source_sentence: How can I learn martial arts in my home?
  sentences:
+ - How can I learn martial arts by myself?
+ - What are the advantages and disadvantages of investing in gold?
+ - Can people see that I have looked at their pictures on instagram if I am not following
+ them?
+ - source_sentence: When Enterprise picks you up do you have to take them back?
  sentences:
+ - Are there any software Training institute in Tuticorin?
+ - When Enterprise picks you up do you have to take them back?
+ - When Enterprise picks you up do them have to take youback?
+ - source_sentence: What are some non-capital goods?
  sentences:
+ - What are capital goods?
+ - How is the value of [math]\pi[/math] calculated?
+ - What are some non-capital goods?
+ - source_sentence: What is the QuickBooks technical support phone number in New York?
  sentences:
+ - What caused the Great Depression?
+ - Can I apply for PR in Canada?
+ - Which is the best QuickBooks Hosting Support Number in New York?
  pipeline_tag: sentence-similarity
  library_name: sentence-transformers
  ---
 
  from sentence_transformers import SentenceTransformer
 
  # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
  # Run inference
  sentences = [
+ 'What is the QuickBooks technical support phone number in New York?',
+ 'Which is the best QuickBooks Hosting Support Number in New York?',
+ 'Can I apply for PR in Canada?',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
 
  # Get the similarity scores for the embeddings
  similarities = model.similarity(embeddings, embeddings)
  print(similarities)
+ # tensor([[1.0000, 0.8563, 0.0594],
+ # [0.8563, 1.0000, 0.1245],
+ # [0.0594, 0.1245, 1.0000]])
  ```
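The printed tensor above is the similarity matrix over the three embeddings: symmetric, with ones on the diagonal, and the near-duplicate question pair scoring highest. As a hedged, stdlib-only illustration of what a cosine-similarity matrix like this looks like (assuming the default cosine similarity; the toy vectors below are made up for illustration, not actual model embeddings):

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 2-D "embeddings": two similar vectors and one orthogonal one.
vecs = [[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]]
matrix = [[cosine(u, v) for v in vecs] for u in vecs]
print(round(matrix[0][0], 2), round(matrix[0][1], 2))  # 1.0 0.8
```

The same shape holds for the real matrix: diagonal entries are 1.0, and off-diagonal entries rank the pairs by semantic closeness.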
 
  <!--
147
  #### Unnamed Dataset
148
 
149
  * Size: 100,000 training samples
150
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
151
  * Approximate statistics based on the first 1000 samples:
152
+ | | sentence_0 | sentence_1 | sentence_2 |
153
+ |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
154
+ | type | string | string | string |
155
+ | details | <ul><li>min: 6 tokens</li><li>mean: 15.79 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.68 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.37 tokens</li><li>max: 67 tokens</li></ul> |
156
  * Samples:
157
+ | sentence_0 | sentence_1 | sentence_2 |
158
+ |:-----------------------------------------------------------------|:-----------------------------------------------------------------|:----------------------------------------------------------------------------------|
159
+ | <code>Is masturbating bad for boys?</code> | <code>Is masturbating bad for boys?</code> | <code>How harmful or unhealthy is masturbation?</code> |
160
+ | <code>Does a train engine move in reverse?</code> | <code>Does a train engine move in reverse?</code> | <code>Time moves forward, not in reverse. Doesn't that make time a vector?</code> |
161
+ | <code>What is the most badass thing anyone has ever done?</code> | <code>What is the most badass thing anyone has ever done?</code> | <code>anyone is the most badass thing Whathas ever done?</code> |
162
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
163
  ```json
164
  {
 
  ### Training Hyperparameters
  #### Non-Default Hyperparameters
 
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
  - `fp16`: True
+ - `multi_dataset_batch_sampler`: round_robin
 
  #### All Hyperparameters
  <details><summary>Click to expand</summary>
 
  - `do_predict`: False
  - `eval_strategy`: no
  - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
  - `per_gpu_train_batch_size`: None
  - `per_gpu_eval_batch_size`: None
  - `gradient_accumulation_steps`: 1
  - `eval_accumulation_steps`: None
  - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
  - `adam_beta1`: 0.9
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 3
+ - `max_steps`: -1
  - `lr_scheduler_type`: linear
  - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
  - `warmup_steps`: 0
  - `log_level`: passive
  - `log_level_replica`: warning
 
  - `tpu_num_cores`: None
  - `tpu_metrics_debug`: False
  - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
  - `past_index`: -1
  - `disable_tqdm`: False
  - `remove_unused_columns`: True
 
  - `parallelism_config`: None
  - `deepspeed`: None
  - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
  - `optim_args`: None
  - `adafactor`: False
  - `group_by_length`: False
  - `length_column_name`: length
  - `project`: huggingface
  - `trackio_space_id`: trackio
+ - `ddp_find_unused_parameters`: None
  - `ddp_bucket_cap_mb`: None
  - `ddp_broadcast_buffers`: False
  - `dataloader_pin_memory`: True
  - `dataloader_persistent_workers`: False
  - `skip_memory_metrics`: True
  - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
  - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
  - `hub_strategy`: every_save
  - `hub_private_repo`: None
  - `hub_always_push`: False
 
  - `average_tokens_across_devices`: True
  - `prompts`: None
  - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
  - `router_mapping`: {}
  - `learning_rate_mapping`: {}
 
 
  ### Training Logs
  | Epoch | Step | Training Loss |
  |:------:|:----:|:-------------:|
+ | 0.3199 | 500 | 0.4294 |
+ | 0.6398 | 1000 | 0.1268 |
+ | 0.9597 | 1500 | 0.1 |
+ | 1.2796 | 2000 | 0.0792 |
+ | 1.5995 | 2500 | 0.0706 |
+ | 1.9194 | 3000 | 0.0687 |
+ | 2.2393 | 3500 | 0.0584 |
+ | 2.5592 | 4000 | 0.057 |
+ | 2.8791 | 4500 | 0.0581 |
 
 
  ### Framework Versions
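The Epoch column in the new training log can be cross-checked against figures stated elsewhere in this card: 100,000 training samples, `per_device_train_batch_size` 64, and `dataloader_drop_last` False. A minimal sketch, assuming a single device (so the effective batch size equals the per-device batch size):

```python
import math

num_samples = 100_000  # "* Size: 100,000 training samples"
batch_size = 64        # `per_device_train_batch_size`: 64

# drop_last is False, so the final partial batch still counts as a step.
steps_per_epoch = math.ceil(num_samples / batch_size)  # 1563

for step in (500, 4500):
    print(step, round(step / steps_per_epoch, 4))
# 500 0.3199
# 4500 2.8791
```

Both values match the first and last rows of the log table, which supports the single-device assumption.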
eval/Information-Retrieval_evaluation_val_results.csv ADDED
@@ -0,0 +1,4 @@
+ epoch,steps,cosine-Accuracy@1,cosine-Accuracy@3,cosine-Accuracy@5,cosine-Precision@1,cosine-Recall@1,cosine-Precision@3,cosine-Recall@3,cosine-Precision@5,cosine-Recall@5,cosine-MRR@1,cosine-MRR@5,cosine-MRR@10,cosine-NDCG@10,cosine-MAP@100
+ 0,0,0.7522,0.8716,0.8976,0.7522,0.7522,0.29053333333333337,0.8716,0.17951999999999999,0.8976,0.7522,0.8141766666666673,0.8179282539682551,0.8443897738734513,0.8201287535897707
+ 1.4245014245014245,500,0.8984,0.9626,0.9796,0.8984,0.8984,0.3208666666666667,0.9626,0.19591999999999998,0.9796,0.8984,0.9308266666666667,0.9325202380952388,0.9471587549365493,0.9330120361154303
+ 2.849002849002849,1000,0.903,0.9652,0.9802,0.903,0.903,0.32173333333333337,0.9652,0.19603999999999996,0.9802,0.903,0.93429,0.93595873015873,0.9497950442756341,0.9364845314523799
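The added evaluation CSV can be inspected with the standard library alone. A minimal sketch, copying a subset of the columns above verbatim into a literal string so the example is self-contained:

```python
import csv
import io

# Subset of the columns from eval/Information-Retrieval_evaluation_val_results.csv.
CSV_TEXT = """epoch,steps,cosine-Accuracy@1,cosine-NDCG@10
0,0,0.7522,0.8443897738734513
1.4245014245014245,500,0.8984,0.9471587549365493
2.849002849002849,1000,0.903,0.9497950442756341
"""

rows = list(csv.DictReader(io.StringIO(CSV_TEXT)))
# Pick the checkpoint with the highest NDCG@10.
best = max(rows, key=lambda r: float(r["cosine-NDCG@10"]))
print(best["steps"], best["cosine-Accuracy@1"])  # 1000 0.903
```

The metrics improve monotonically across the three evaluation points, with the step-1000 checkpoint best on both Accuracy@1 and NDCG@10.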
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3bda190751c2a2c839baccca3468f63db04cc240fdaac7d0f5db8395b7af1ac9
  size 114011616
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:4cc9a88efd8d3822ebf670b0c7aef92d3a665b705509767b6b8654e668f60314
  size 114011616
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a092c87251b9ea1b35ca53910ab2e812e560538928b7756b1aaf26fcba3cf013
  size 6161
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:e7eb099fd92e0bda303c54813aea8d9078b59fb57bebde2e5cbe081c0de10f11
  size 6161