rsajja committed · Commit ee03e18 · verified · 1 Parent(s): 09d8fa4

Update README.md

Files changed (1): README.md (+386 −380)
---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4828
- loss:MultipleNegativesRankingLoss
- loss:CosineSimilarityLoss
- Education
- Retrieval
- Syllabus
widget:
- source_sentence: What is the credit hour load?
  sentences:
  - 'Received: 4 credit hours.'
  - 'Part-time: typically 6–8 credits per term, this class is 2.'
  - 'Credit hour load: 1.5 hours.'
- source_sentence: Who has authority for final grade requests?
  sentences:
  - What is the attendance policy?
  - 'Audit code: R, always equivalent to 0 credit, but fees may still apply.'
  - >-
    Grade authority: Prof. Sun-Jung Kim, Dr. Avery McGregor, Dr. Fahad Jameel
    (jameel@northarea.edu). All appeals must be emailed.
- source_sentence: Who is the secondary TA?
  sentences:
  - 'Secondary TAs: Lena Müller and Chayan Biswas.'
  - 'Supervising faculty: Prof. Taylor Moya, Dr. Seung Ahn.'
  - 'Credits per class: 3.'
- source_sentence: Who is the grading TA?
  sentences:
  - 'Weekly pollsters: Dr. Anna Massoud, Prof. Darius Okoye.'
  - 'Course assistants this term: Ximena Michel and Seth Downey.'
  - 'Grading TAs: Miho Saito and Victor Ortiz.'
- source_sentence: Who will support main lecture slide posting as TA?
  sentences:
  - 'Slide uploads: Brittany Stewart and Daniel Simmel.'
  - 'This course''s teachers: Dr. Paula Gillespie and Dr. Nikola Petrovic.'
  - >-
    Canvas communications are handled equally by Ewan McLean (ewanm@coll.edu),
    Maryse Rodrigues, and Soonjin Ko (skoon@coll.edu). Use '@TA' to get a quick
    response.
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
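
Because the trailing `Normalize()` module rescales every embedding to unit L2 norm, cosine similarity between outputs reduces to a plain dot product. A minimal numpy sketch with random stand-in vectors (not real model outputs, which would require downloading the model):

```python
import numpy as np

# Toy stand-ins for model outputs: the final Normalize() module scales each
# embedding to unit L2 norm, so these rows mimic what model.encode() returns.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 384))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# For unit-norm vectors, cosine similarity is just a dot product, so the
# full pairwise similarity matrix is a single matrix multiplication.
similarities = embeddings @ embeddings.T

# Each vector is maximally similar to itself.
assert np.allclose(np.diag(similarities), 1.0)
print(similarities.shape)  # (3, 3)
```

This is why the similarity matrix in the usage example below can be computed as one matrix product over the encoded outputs.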

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("rsajja/Fine-tuned-Educational-Model-Dual-Loss")
# Run inference
sentences = [
    'Who will support main lecture slide posting as TA?',
    'Slide uploads: Brittany Stewart and Daniel Simmel.',
    "This course's teachers: Dr. Paula Gillespie and Dr. Nikola Petrovic.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Datasets

#### Unnamed Dataset

* Size: 2,193 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
  | details | <ul><li>min: 3 tokens</li><li>mean: 9.79 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 17.04 tokens</li><li>max: 95 tokens</li></ul> |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>Who teaches this section?</code> | <code>Section teachers: Dr. Sandy Rivera and Dr. Ludovic Tremblay.</code> |
  | <code>How many hours of academic credit?</code> | <code>This course grants 2 academic hours.</code> |
  | <code>Who guides this class?</code> | <code>Class guides: Dr. Marina Sokolov, Dr. Andre Williams.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
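
For intuition, MultipleNegativesRankingLoss treats every other positive in the batch as a negative: it scales the anchor–positive cosine-similarity matrix (here by 20.0) and applies cross-entropy with the diagonal as the correct class. A toy numpy sketch of that computation (an illustration of the idea, not the library's implementation):

```python
import numpy as np

def mnrl(anchors, positives, scale=20.0):
    """Cross-entropy over scaled cosine similarities; each anchor's own
    positive (the diagonal) must outscore every other positive in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                   # (batch, batch) similarity matrix
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

queries = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
# Matched pairs give a near-zero loss; mismatched pairs are penalized heavily.
assert mnrl(queries, queries) < mnrl(queries, queries[::-1])
```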

#### Unnamed Dataset

* Size: 2,635 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
  | details | <ul><li>min: 3 tokens</li><li>mean: 9.72 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 16.62 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.84</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | <code>Which TAs are supporting the class?</code> | <code>Supporting TAs: Steve Bernstein, Rina Nanji.</code> | <code>1.0</code> |
  | <code>For how many credit hours?</code> | <code>The course is for 6 credit hours.</code> | <code>1.0</code> |
  | <code>Who are the teaching assistants for this course?</code> | <code>This course's teaching assistants include Troy Kim and Deborah Lee.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
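
CosineSimilarityLoss, by contrast, is regression-style: with `MSELoss` as the `loss_fct`, it pushes the cosine similarity of each `(sentence_0, sentence_1)` pair toward its float `label`. A toy numpy sketch (illustrative only, not the library code):

```python
import numpy as np

def cosine_similarity_loss(emb_a, emb_b, labels):
    """Mean squared error between each pair's cosine similarity and its label."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cos = np.sum(a * b, axis=1)      # per-pair cosine similarity
    return np.mean((cos - labels) ** 2)

first  = np.array([[1.0, 0.0], [1.0, 0.0]])
second = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([1.0, 0.0])        # 1.0 = similar pair, 0.0 = unrelated pair
assert cosine_similarity_loss(first, second, labels) == 0.0
```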

### Training Hyperparameters
#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 25
- `multi_dataset_batch_sampler`: round_robin
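
With `multi_dataset_batch_sampler: round_robin`, training alternates one batch from each dataset and ends the epoch once the smaller dataset is exhausted (so with 2,193 vs. 2,635 samples, part of the larger set is skipped each epoch). A toy sketch of that scheduling behavior (an analogy, not the actual sampler implementation):

```python
def round_robin(*batch_lists):
    """Yield one batch from each source in turn; stop as soon as any source
    runs out, mirroring the round_robin multi-dataset batch sampler."""
    iterators = [iter(b) for b in batch_lists]
    while True:
        this_round = []
        for it in iterators:
            batch = next(it, None)
            if batch is None:        # smallest dataset exhausted: epoch ends
                return
            this_round.append(batch)
        yield from this_round

mnrl_batches = ["mnrl_0", "mnrl_1", "mnrl_2"]   # ranking-loss dataset
cosine_batches = ["cos_0", "cos_1"]             # cosine-loss dataset
schedule = list(round_robin(mnrl_batches, cosine_batches))
print(schedule)  # ['mnrl_0', 'cos_0', 'mnrl_1', 'cos_1']
```

Note that `mnrl_2` is never scheduled: once the cosine dataset runs out, the round ends.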

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 25
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch   | Step | Training Loss |
|:-------:|:----:|:-------------:|
| 7.1429  | 500  | 0.3755        |
| 14.2857 | 1000 | 0.169         |
| 21.4286 | 1500 | 0.1504        |


### Framework Versions
- Python: 3.9.13
- Sentence Transformers: 4.1.0
- Transformers: 4.45.1
- PyTorch: 2.0.1+cpu
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->