LamaDiab committed (verified)
Commit a26a984 · 1 Parent(s): 6b349d5

Training in progress, epoch 5, checkpoint

checkpoint-27515/1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
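
This pooling config enables mean pooling only (the CLS, max, last-token, and weighted-mean modes are all off). As a rough sketch of what mean pooling computes, not the library's internal implementation:

```python
import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average the token embeddings, masking out padding positions.
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts  # shape (batch, 384) for this model
```
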
checkpoint-27515/README.md ADDED
@@ -0,0 +1,439 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - dense
+ - generated_from_trainer
+ - dataset_size:704308
+ - loss:MultipleNegativesSymmetricRankingLoss
+ base_model: sentence-transformers/all-MiniLM-L6-v2
+ widget:
+ - source_sentence: must kindergarten backpack mermazing 2 cases
+   sentences:
+   - wide leg popline pants b22
+   - ' kindergarten mermazing backpack '
+   - bag
+ - source_sentence: derby cap toe shoes - brown
+   sentences:
+   - natural leather shoes
+   - shoe
+   - 925 sterling silver heart ear studs with genuine european crystals
+ - source_sentence: rembrandt's eyes
+   sentences:
+   - art book
+   - ' rembrandt''s eyes book'
+   - canvas frame 100% cotton 350 gsm 20 cm triangle m e5303t
+ - source_sentence: essence multi task concealer 15 natural nude
+   sentences:
+   - face make-up
+   - ' essence concealer'
+   - rowntrees fruit pastilles
+ - source_sentence: parker ingenuity ct black lacquer so959210
+   sentences:
+   - lagu-family barber shop toy
+   - ' pen'
+   - pen
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy
+ model-index:
+ - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+   results:
+   - task:
+       type: triplet
+       name: Triplet
+     dataset:
+       name: Unknown
+       type: unknown
+     metrics:
+     - type: cosine_accuracy
+       value: 0.9601430296897888
+       name: Cosine Accuracy
+ ---
+
+ # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
+ - **Maximum Sequence Length:** 256 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
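+
+ The same three-module stack can be assembled by hand with the `models` submodule. A minimal sketch (normally unnecessary, since `SentenceTransformer(...)` reads `modules.json` and rebuilds this pipeline automatically):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ # Transformer encoder -> mean pooling -> L2 normalization
+ word = models.Transformer("sentence-transformers/all-MiniLM-L6-v2", max_seq_length=256)
+ pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
+ model = SentenceTransformer(modules=[word, pooling, models.Normalize()])
+ ```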
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("LamaDiab/NewMiniLM-V15Data-128BATCH-SemanticEngine")
+ # Run inference
+ sentences = [
+     'parker ingenuity ct black lacquer so959210',
+     ' pen',
+     'lagu-family barber shop toy',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities)
+ # tensor([[ 1.0000,  0.2524, -0.0132],
+ #         [ 0.2524,  1.0000,  0.1220],
+ #         [-0.0132,  0.1220,  1.0000]])
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Triplet
+
+ * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | **cosine_accuracy** | **0.9601** |
+
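+ As a hedged sketch of how this number is produced: the evaluator checks, per triplet, that the anchor is closer to its positive than to its negative (the example strings below are taken from the evaluation samples listed further down):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import TripletEvaluator
+
+ model = SentenceTransformer("LamaDiab/NewMiniLM-V15Data-128BATCH-SemanticEngine")
+ evaluator = TripletEvaluator(
+     anchors=["pilot mechanical pencil progrex h-127 - 0.7 mm"],
+     positives=["0.7 mm pencil"],
+     negatives=["tracing sketch a3 70 gr 50 sheets"],
+ )
+ # cosine_accuracy = fraction of triplets with sim(anchor, positive) > sim(anchor, negative)
+ print(evaluator(model))
+ ```
+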
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 704,308 training samples
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>itemCategory</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | anchor | positive | itemCategory |
+   |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
+   | type    | string | string | string |
+   | details | <ul><li>min: 3 tokens</li><li>mean: 8.06 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.35 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.93 tokens</li><li>max: 9 tokens</li></ul> |
+ * Samples:
+   | anchor | positive | itemCategory |
+   |:--------------------------------------------------------------|:---------------------------------------------------|:--------------------------------------|
+   | <code>rilastil sunlaude comfort dye fluid spf50 50 ml</code> | <code>spf50 sunscreen</code> | <code>sunscreen</code> |
+   | <code>lemon and powder leather slippers</code> | <code>genuine cow leather</code> | <code>slipper</code> |
+   | <code>erastapex trio</code> | <code>erastapex trio olmesartan medoxomil</code> | <code>blood disorder medicine</code> |
+ * Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim",
+       "gather_across_devices": false
+   }
+   ```
+
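+ A rough sketch of wiring this loss up in training code (standard `sentence_transformers.losses` API; the exact training script is not part of this checkpoint):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, losses, util
+
+ model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
+ # Symmetric variant: in-batch negatives are scored in both the
+ # anchor -> positive and positive -> anchor directions.
+ loss = losses.MultipleNegativesSymmetricRankingLoss(
+     model=model, scale=20.0, similarity_fct=util.cos_sim
+ )
+ ```
+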
+ ### Evaluation Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 9,509 evaluation samples
+ * Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, and <code>itemCategory</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | anchor | positive | negative | itemCategory |
+   |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
+   | type    | string | string | string | string |
+   | details | <ul><li>min: 3 tokens</li><li>mean: 9.63 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 6.17 tokens</li><li>max: 150 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.79 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.88 tokens</li><li>max: 10 tokens</li></ul> |
+ * Samples:
+   | anchor | positive | negative | itemCategory |
+   |:----------------------------------------------------------------------|:-----------------------------------|:-----------------------------------------------------------|:-------------------------------------|
+   | <code>pilot mechanical pencil progrex h-127 - 0.7 mm</code> | <code>0.7 mm pencil</code> | <code>tracing sketch a3 70 gr 50 sheets</code> | <code>pencil</code> |
+   | <code>superior drawing marker -pen - set of 12 colors - 2 nib</code> | <code> marker pen set </code> | <code>wunder chocolate strawberry ganache & coulis</code> | <code>marker</code> |
+   | <code>first person singular author: haruki murakami</code> | <code>haruki murakami book</code> | <code>dark hot chocolate sugar free</code> | <code>literature and fiction</code> |
+ * Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim",
+       "gather_across_devices": false
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 128
+ - `per_device_eval_batch_size`: 128
+ - `weight_decay`: 0.001
+ - `num_train_epochs`: 5
+ - `warmup_ratio`: 0.2
+ - `fp16`: True
+ - `dataloader_num_workers`: 1
+ - `dataloader_prefetch_factor`: 2
+ - `dataloader_persistent_workers`: True
+ - `push_to_hub`: True
+ - `hub_model_id`: LamaDiab/NewMiniLM-V15Data-128BATCH-SemanticEngine
+ - `hub_strategy`: all_checkpoints
+
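+ A hedged sketch of how the non-default values above map onto `SentenceTransformerTrainingArguments` (the `output_dir` is illustrative; the original training script is not included here):
+
+ ```python
+ from sentence_transformers import SentenceTransformerTrainingArguments
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="NewMiniLM-V15Data-128BATCH-SemanticEngine",  # illustrative
+     eval_strategy="steps",
+     per_device_train_batch_size=128,
+     per_device_eval_batch_size=128,
+     weight_decay=0.001,
+     num_train_epochs=5,
+     warmup_ratio=0.2,
+     fp16=True,
+     dataloader_num_workers=1,
+     dataloader_prefetch_factor=2,
+     dataloader_persistent_workers=True,
+     push_to_hub=True,
+     hub_model_id="LamaDiab/NewMiniLM-V15Data-128BATCH-SemanticEngine",
+     hub_strategy="all_checkpoints",
+ )
+ ```
+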
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 128
+ - `per_device_eval_batch_size`: 128
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.001
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 5
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.2
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 1
+ - `dataloader_prefetch_factor`: 2
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: True
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: True
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: LamaDiab/NewMiniLM-V15Data-128BATCH-SemanticEngine
+ - `hub_strategy`: all_checkpoints
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `hub_revision`: None
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `liger_kernel_config`: None
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+ - `router_mapping`: {}
+ - `learning_rate_mapping`: {}
+
+ </details>
+
+ ### Training Logs
+ | Epoch  | Step  | Training Loss | Validation Loss | cosine_accuracy |
+ |:------:|:-----:|:-------------:|:---------------:|:---------------:|
+ | 0.0002 | 1     | 3.1229        | -               | -               |
+ | 0.1817 | 1000  | 2.6857        | 1.6310          | 0.9441          |
+ | 0.3634 | 2000  | 2.0541        | 1.5448          | 0.9472          |
+ | 0.5452 | 3000  | 1.7335        | 1.5236          | 0.9485          |
+ | 0.7269 | 4000  | 1.2495        | 1.5552          | 0.9433          |
+ | 0.9086 | 5000  | 0.813         | 1.5794          | 0.9472          |
+ | 1.0903 | 6000  | 1.0512        | 1.4544          | 0.9567          |
+ | 1.2720 | 7000  | 1.2912        | 1.4492          | 0.9563          |
+ | 1.4538 | 8000  | 1.1994        | 1.4519          | 0.9568          |
+ | 1.6355 | 9000  | 1.0662        | 1.4635          | 0.9545          |
+ | 1.8172 | 10000 | 0.6724        | 1.5717          | 0.9454          |
+ | 1.9989 | 11000 | 0.4761        | 1.5509          | 0.9503          |
+ | 2.1806 | 12000 | 1.0468        | 1.4510          | 0.9591          |
+ | 2.3623 | 13000 | 0.9871        | 1.4625          | 0.9608          |
+ | 2.5441 | 14000 | 0.9596        | 1.4531          | 0.9606          |
+ | 2.7258 | 15000 | 0.7272        | 1.4685          | 0.9589          |
+ | 2.9075 | 16000 | 0.4716        | 1.5063          | 0.9549          |
+ | 3.0892 | 17000 | 0.6495        | 1.4401          | 0.9626          |
+ | 3.2709 | 18000 | 0.8911        | 1.4418          | 0.9642          |
+ | 3.4527 | 19000 | 0.871         | 1.4658          | 0.9635          |
+ | 3.6344 | 20000 | 0.8008        | 1.4879          | 0.9594          |
+ | 3.8161 | 21000 | 0.5084        | 1.4949          | 0.9579          |
+ | 3.9978 | 22000 | 0.3552        | 1.5567          | 0.9568          |
+ | 4.1795 | 23000 | 0.8254        | 1.4609          | 0.9651          |
+ | 4.3613 | 24000 | 0.8164        | 1.4704          | 0.9641          |
+ | 4.5430 | 25000 | 0.8078        | 1.4598          | 0.9635          |
+ | 4.7247 | 26000 | 0.6181        | 1.4891          | 0.9602          |
+ | 4.9064 | 27000 | 0.3932        | 1.4990          | 0.9601          |
+
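+ Because `hub_strategy` is `all_checkpoints`, every saved checkpoint (this one, `checkpoint-27515`, is the end of epoch 5) is committed to the repository's history, with its files under the `checkpoint-27515/` subfolder. A sketch of pinning a load to a specific commit:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Pin the repository to this commit; the repo root holds whatever
+ # the trainer had pushed most recently as of that revision.
+ model = SentenceTransformer(
+     "LamaDiab/NewMiniLM-V15Data-128BATCH-SemanticEngine",
+     revision="a26a984",
+ )
+ ```
+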
+
+ ### Framework Versions
+ - Python: 3.11.13
+ - Sentence Transformers: 5.1.2
+ - Transformers: 4.53.3
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.9.0
+ - Datasets: 4.4.1
+ - Tokenizers: 0.21.2
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
checkpoint-27515/config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.53.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
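
This is the stock all-MiniLM-L6-v2 backbone: a 6-layer BERT with hidden size 384 and a 30,522-token WordPiece vocabulary. A quick sketch of verifying that with plain Transformers (the path is illustrative and assumes a local copy of this checkpoint):

```python
from transformers import AutoConfig

# Hypothetical local path to this checkpoint folder
config = AutoConfig.from_pretrained("checkpoint-27515")
print(config.model_type, config.num_hidden_layers, config.hidden_size)
# bert 6 384
```
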
checkpoint-27515/config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "__version__": {
+     "sentence_transformers": "5.1.2",
+     "transformers": "4.53.3",
+     "pytorch": "2.6.0+cu124"
+   },
+   "model_type": "SentenceTransformer",
+   "prompts": {
+     "query": "",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
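
Both the `query` and `document` prompts are empty strings, so prompt-aware encoding changes nothing for this model. For illustration only, this is how a prompt name would be selected at encode time (standard Sentence Transformers API):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LamaDiab/NewMiniLM-V15Data-128BATCH-SemanticEngine")
# Looks up the "query" prompt in config_sentence_transformers.json;
# here it is "", so the text is encoded unchanged.
embedding = model.encode("derby cap toe shoes - brown", prompt_name="query")
```
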
checkpoint-27515/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab46837b6aa48118c28b7d88672c402453181d5305a631fc1c39b695eb3d4dcf
+ size 90864192
checkpoint-27515/modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
checkpoint-27515/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:271505e83f40635949a6b9419e074d430492bbfb2ef2935ef89a7d7cb52e2fa8
+ size 180607738
checkpoint-27515/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f26cd6c84529e824af823dad6fea142b045513b36d087a08f524eec725af67e
+ size 14244
checkpoint-27515/scaler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbcb34e3fed1b71caa3386a7f85c59cb348148a1604463bf40b6df54642477c4
+ size 988
checkpoint-27515/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1eee312c3c0687fe4d8d88cb599f956ac76941593685e2745e60e19b7b8dde45
+ size 1064
checkpoint-27515/sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 256,
+   "do_lower_case": false
+ }
checkpoint-27515/special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
checkpoint-27515/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-27515/tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 128,
+   "model_max_length": 256,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
checkpoint-27515/trainer_state.json ADDED
@@ -0,0 +1,473 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 5.0,
+   "eval_steps": 1000,
+   "global_step": 27515,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.00018171906232963838,
+       "grad_norm": 7.594760894775391,
+       "learning_rate": 0.0,
+       "loss": 3.1229,
+       "step": 1
+     },
+     {
+       "epoch": 0.1817190623296384,
+       "grad_norm": 6.193614959716797,
+       "learning_rate": 9.076867163365438e-06,
+       "loss": 2.6857,
+       "step": 1000
+     },
+     {
+       "epoch": 0.1817190623296384,
+       "eval_cosine_accuracy": 0.9440529942512512,
+       "eval_loss": 1.6310006380081177,
+       "eval_runtime": 21.3914,
+       "eval_samples_per_second": 444.525,
+       "eval_steps_per_second": 3.506,
+       "step": 1000
+     },
+     {
+       "epoch": 0.3634381246592768,
+       "grad_norm": 5.20527458190918,
+       "learning_rate": 1.8153734326730876e-05,
+       "loss": 2.0541,
+       "step": 2000
+     },
+     {
+       "epoch": 0.3634381246592768,
+       "eval_cosine_accuracy": 0.9472079277038574,
+       "eval_loss": 1.544768214225769,
+       "eval_runtime": 21.6963,
+       "eval_samples_per_second": 438.277,
+       "eval_steps_per_second": 3.457,
+       "step": 2000
+     },
+     {
+       "epoch": 0.5451571869889151,
+       "grad_norm": 4.887779712677002,
+       "learning_rate": 2.7239687443212796e-05,
+       "loss": 1.7335,
+       "step": 3000
+     },
+     {
+       "epoch": 0.5451571869889151,
+       "eval_cosine_accuracy": 0.948469877243042,
+       "eval_loss": 1.5235735177993774,
+       "eval_runtime": 21.262,
+       "eval_samples_per_second": 447.229,
+       "eval_steps_per_second": 3.527,
+       "step": 3000
+     },
+     {
+       "epoch": 0.7268762493185535,
+       "grad_norm": 6.290886878967285,
+       "learning_rate": 3.632564055969471e-05,
+       "loss": 1.2495,
+       "step": 4000
+     },
+     {
+       "epoch": 0.7268762493185535,
+       "eval_cosine_accuracy": 0.9433168768882751,
+       "eval_loss": 1.5551835298538208,
+       "eval_runtime": 21.377,
+       "eval_samples_per_second": 444.824,
+       "eval_steps_per_second": 3.508,
+       "step": 4000
+     },
+     {
+       "epoch": 0.9085953116481919,
+       "grad_norm": 8.564718246459961,
+       "learning_rate": 4.5411593676176635e-05,
+       "loss": 0.813,
+       "step": 5000
+     },
+     {
+       "epoch": 0.9085953116481919,
+       "eval_cosine_accuracy": 0.9472079277038574,
+       "eval_loss": 1.5794354677200317,
+       "eval_runtime": 21.6842,
+       "eval_samples_per_second": 438.522,
+       "eval_steps_per_second": 3.459,
+       "step": 5000
+     },
+     {
+       "epoch": 1.0903143739778303,
+       "grad_norm": 5.288447856903076,
+       "learning_rate": 4.8875613301835365e-05,
+       "loss": 1.0512,
+       "step": 6000
+     },
+     {
+       "epoch": 1.0903143739778303,
+       "eval_cosine_accuracy": 0.9566726088523865,
+       "eval_loss": 1.4543931484222412,
+       "eval_runtime": 23.0837,
+       "eval_samples_per_second": 411.936,
+       "eval_steps_per_second": 3.249,
+       "step": 6000
+     },
+     {
+       "epoch": 1.2720334363074688,
+       "grad_norm": 4.507662296295166,
+       "learning_rate": 4.660412502271488e-05,
+       "loss": 1.2912,
+       "step": 7000
+     },
+     {
+       "epoch": 1.2720334363074688,
+       "eval_cosine_accuracy": 0.9562519788742065,
+       "eval_loss": 1.4492090940475464,
+       "eval_runtime": 21.5059,
+       "eval_samples_per_second": 442.159,
+       "eval_steps_per_second": 3.487,
+       "step": 7000
+     },
+     {
+       "epoch": 1.453752498637107,
+       "grad_norm": 4.300741672515869,
+       "learning_rate": 4.433263674359441e-05,
+       "loss": 1.1994,
+       "step": 8000
+     },
+     {
+       "epoch": 1.453752498637107,
+       "eval_cosine_accuracy": 0.956777811050415,
+       "eval_loss": 1.4519122838974,
+       "eval_runtime": 21.4819,
+       "eval_samples_per_second": 442.651,
+       "eval_steps_per_second": 3.491,
+       "step": 8000
+     },
+     {
+       "epoch": 1.6354715609667454,
+       "grad_norm": 4.133264064788818,
+       "learning_rate": 4.206569144103217e-05,
+       "loss": 1.0662,
+       "step": 9000
+     },
+     {
+       "epoch": 1.6354715609667454,
+       "eval_cosine_accuracy": 0.9544641971588135,
+       "eval_loss": 1.4635158777236938,
+       "eval_runtime": 21.6809,
+       "eval_samples_per_second": 438.588,
+       "eval_steps_per_second": 3.459,
+       "step": 9000
+     },
+     {
+       "epoch": 1.817190623296384,
+       "grad_norm": 5.683165073394775,
+       "learning_rate": 3.9794203161911684e-05,
+       "loss": 0.6724,
+       "step": 10000
+     },
+     {
+       "epoch": 1.817190623296384,
+       "eval_cosine_accuracy": 0.9454201459884644,
+       "eval_loss": 1.571725845336914,
+       "eval_runtime": 21.3873,
+       "eval_samples_per_second": 444.61,
+       "eval_steps_per_second": 3.507,
+       "step": 10000
+     },
+     {
+       "epoch": 1.998909685626022,
+       "grad_norm": 7.759120464324951,
+       "learning_rate": 3.752498637107033e-05,
+       "loss": 0.4761,
+       "step": 11000
+     },
+     {
+       "epoch": 1.998909685626022,
+       "eval_cosine_accuracy": 0.9502576589584351,
+       "eval_loss": 1.550882339477539,
+       "eval_runtime": 22.2267,
+       "eval_samples_per_second": 427.818,
+       "eval_steps_per_second": 3.374,
+       "step": 11000
+     },
+     {
+       "epoch": 2.1806287479556605,
+       "grad_norm": 4.08748197555542,
+       "learning_rate": 3.525349809194985e-05,
+       "loss": 1.0468,
+       "step": 12000
+     },
+     {
+       "epoch": 2.1806287479556605,
+       "eval_cosine_accuracy": 0.9590913653373718,
+       "eval_loss": 1.4509832859039307,
+       "eval_runtime": 21.4477,
+       "eval_samples_per_second": 443.358,
+       "eval_steps_per_second": 3.497,
+       "step": 12000
+     },
+     {
+       "epoch": 2.362347810285299,
+       "grad_norm": 4.058324813842773,
+       "learning_rate": 3.2982009812829365e-05,
+       "loss": 0.9871,
+       "step": 13000
+     },
+     {
+       "epoch": 2.362347810285299,
+       "eval_cosine_accuracy": 0.9607740044593811,
+       "eval_loss": 1.4624947309494019,
+       "eval_runtime": 21.4174,
+       "eval_samples_per_second": 443.985,
+       "eval_steps_per_second": 3.502,
+       "step": 13000
+     },
+     {
+       "epoch": 2.5440668726149376,
+       "grad_norm": 4.1556572914123535,
+       "learning_rate": 3.071279302198801e-05,
+       "loss": 0.9596,
+       "step": 14000
+     },
+     {
+       "epoch": 2.5440668726149376,
+       "eval_cosine_accuracy": 0.9605636596679688,
+       "eval_loss": 1.4531124830245972,
+       "eval_runtime": 21.3445,
+       "eval_samples_per_second": 445.501,
+       "eval_steps_per_second": 3.514,
+       "step": 14000
+     },
+     {
+       "epoch": 2.7257859349445757,
+       "grad_norm": 6.094749450683594,
+       "learning_rate": 2.8443576231146652e-05,
+       "loss": 0.7272,
+       "step": 15000
+     },
+     {
+       "epoch": 2.7257859349445757,
+       "eval_cosine_accuracy": 0.9588810801506042,
+       "eval_loss": 1.4685230255126953,
+       "eval_runtime": 21.4386,
+       "eval_samples_per_second": 443.546,
+       "eval_steps_per_second": 3.498,
+       "step": 15000
+     },
+     {
+       "epoch": 2.907504997274214,
+       "grad_norm": 4.291028022766113,
+       "learning_rate": 2.6172087952026168e-05,
+       "loss": 0.4716,
+       "step": 16000
+     },
+     {
+       "epoch": 2.907504997274214,
+       "eval_cosine_accuracy": 0.9548848271369934,
+       "eval_loss": 1.5063105821609497,
+       "eval_runtime": 21.5765,
+       "eval_samples_per_second": 440.711,
+       "eval_steps_per_second": 3.476,
+       "step": 16000
+     },
+     {
+       "epoch": 3.0892240596038523,
+       "grad_norm": 4.386570453643799,
+       "learning_rate": 2.3900599672905687e-05,
+       "loss": 0.6495,
+       "step": 17000
+     },
+     {
+       "epoch": 3.0892240596038523,
+       "eval_cosine_accuracy": 0.9625617861747742,
+       "eval_loss": 1.4400604963302612,
+       "eval_runtime": 23.2722,
+       "eval_samples_per_second": 408.599,
+       "eval_steps_per_second": 3.223,
+       "step": 17000
+     },
+     {
+       "epoch": 3.270943121933491,
+       "grad_norm": 4.523783206939697,
+       "learning_rate": 2.163138288206433e-05,
+       "loss": 0.8911,
+       "step": 18000
+     },
+     {
+       "epoch": 3.270943121933491,
+       "eval_cosine_accuracy": 0.9642444252967834,
+       "eval_loss": 1.4418244361877441,
+       "eval_runtime": 21.9789,
+       "eval_samples_per_second": 432.643,
+       "eval_steps_per_second": 3.412,
+       "step": 18000
+     },
+     {
+       "epoch": 3.4526621842631293,
+       "grad_norm": 4.639398097991943,
+       "learning_rate": 1.935989460294385e-05,
+       "loss": 0.871,
+       "step": 19000
+     },
+     {
+       "epoch": 3.4526621842631293,
+       "eval_cosine_accuracy": 0.9635082483291626,
+       "eval_loss": 1.4658271074295044,
+       "eval_runtime": 21.6688,
+       "eval_samples_per_second": 438.835,
+       "eval_steps_per_second": 3.461,
+       "step": 19000
+     },
+     {
+       "epoch": 3.634381246592768,
+       "grad_norm": 5.234416484832764,
+       "learning_rate": 1.708840632382337e-05,
+       "loss": 0.8008,
+       "step": 20000
+     },
+     {
+       "epoch": 3.634381246592768,
+       "eval_cosine_accuracy": 0.959406852722168,
+       "eval_loss": 1.4878684282302856,
+       "eval_runtime": 21.9193,
+       "eval_samples_per_second": 433.818,
+       "eval_steps_per_second": 3.422,
+       "step": 20000
+     },
+     {
+       "epoch": 3.816100308922406,
+       "grad_norm": 6.247635841369629,
+       "learning_rate": 1.481691804470289e-05,
+       "loss": 0.5084,
+       "step": 21000
+     },
+     {
+       "epoch": 3.816100308922406,
+       "eval_cosine_accuracy": 0.9579346179962158,
+       "eval_loss": 1.4949491024017334,
+       "eval_runtime": 21.7468,
+       "eval_samples_per_second": 437.26,
+       "eval_steps_per_second": 3.449,
+       "step": 21000
+     },
+     {
+       "epoch": 3.9978193712520445,
+       "grad_norm": 9.596100807189941,
+       "learning_rate": 1.2545429765582411e-05,
+       "loss": 0.3552,
+       "step": 22000
+     },
+     {
+       "epoch": 3.9978193712520445,
+       "eval_cosine_accuracy": 0.956777811050415,
+       "eval_loss": 1.5567395687103271,
+       "eval_runtime": 21.5594,
+       "eval_samples_per_second": 441.06,
+       "eval_steps_per_second": 3.479,
+       "step": 22000
+     },
+     {
+       "epoch": 4.1795384335816825,
+       "grad_norm": 4.127788543701172,
+       "learning_rate": 1.0276212974741051e-05,
+       "loss": 0.8254,
+       "step": 23000
+     },
+     {
+       "epoch": 4.1795384335816825,
+       "eval_cosine_accuracy": 0.9650856852531433,
+       "eval_loss": 1.4608709812164307,
+       "eval_runtime": 21.9371,
+       "eval_samples_per_second": 433.468,
+       "eval_steps_per_second": 3.419,
+       "step": 23000
+     },
+     {
+       "epoch": 4.361257495911321,
+       "grad_norm": 3.8654837608337402,
+       "learning_rate": 8.00472469562057e-06,
+       "loss": 0.8164,
+       "step": 24000
+     },
+     {
+       "epoch": 4.361257495911321,
+       "eval_cosine_accuracy": 0.9641392230987549,
+       "eval_loss": 1.4704478979110718,
+       "eval_runtime": 21.88,
+       "eval_samples_per_second": 434.597,
+       "eval_steps_per_second": 3.428,
+       "step": 24000
+     },
+     {
+       "epoch": 4.54297655824096,
+       "grad_norm": 3.9021806716918945,
+       "learning_rate": 5.733236416500091e-06,
+       "loss": 0.8078,
+       "step": 25000
+     },
+     {
+       "epoch": 4.54297655824096,
+       "eval_cosine_accuracy": 0.9635082483291626,
+       "eval_loss": 1.4598076343536377,
+       "eval_runtime": 21.8521,
+       "eval_samples_per_second": 435.152,
+       "eval_steps_per_second": 3.432,
+       "step": 25000
+     },
+     {
+       "epoch": 4.724695620570598,
+       "grad_norm": 5.6821770668029785,
+       "learning_rate": 3.4617481373796112e-06,
+       "loss": 0.6181,
+       "step": 26000
+     },
+     {
+       "epoch": 4.724695620570598,
+       "eval_cosine_accuracy": 0.9602481722831726,
+       "eval_loss": 1.4891104698181152,
+       "eval_runtime": 21.4818,
+       "eval_samples_per_second": 442.653,
+       "eval_steps_per_second": 3.491,
+       "step": 26000
+     },
+     {
+       "epoch": 4.906414682900236,
+       "grad_norm": 5.397206783294678,
+       "learning_rate": 1.1948028348173725e-06,
+       "loss": 0.3932,
+       "step": 27000
+     },
+     {
+       "epoch": 4.906414682900236,
+       "eval_cosine_accuracy": 0.9601430296897888,
+       "eval_loss": 1.4990485906600952,
+       "eval_runtime": 21.8259,
+       "eval_samples_per_second": 435.675,
+       "eval_steps_per_second": 3.436,
+       "step": 27000
+     }
+   ],
+   "logging_steps": 1000,
+   "max_steps": 27515,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 5,
+   "save_steps": 500,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": true
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 0.0,
+   "train_batch_size": 128,
+   "trial_name": null,
+   "trial_params": null
+ }
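
Note that `best_model_checkpoint` is null (`load_best_model_at_end` was off), so the strongest eval point has to be read out of `log_history`. A small sketch:

```python
import json

with open("checkpoint-27515/trainer_state.json") as f:
    state = json.load(f)

# Keep only the evaluation entries and rank them by cosine accuracy
evals = [e for e in state["log_history"] if "eval_cosine_accuracy" in e]
best = max(evals, key=lambda e: e["eval_cosine_accuracy"])
print(best["step"], best["eval_cosine_accuracy"])
# 23000 0.9650856852531433
```
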
checkpoint-27515/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a02e63b4d31c0aa3508c3cf9011e9ebaa0b41b9da92d5e690573acf95e805b18
+ size 5752
checkpoint-27515/vocab.txt ADDED
The diff for this file is too large to render. See raw diff