Culture-and-Morality-Lab committed b50e937 (verified; parent 0b022d3): Upload 11 files

README.md ADDED
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:11180
- loss:CosineSimilarityLoss
widget:
- source_sentence: So when are they leaving? I saw the protest had smaller amounts of protestors today as opposed to Friday and Saturday
  sentences:
  - Hillary's great, but center-leftists seem to do better with younger candidates. Bill, Blair, Obama, Trudeau, and now Macron. She'll be 72 in 2020.
  - . I felt very sorry for you during your meltdown on He drove you insane but, of course, Piers is a lot smarter than you
  - As a kid, my friends and I all believed that Gymkata was the most violent, bloody movie ever made. I'm not sure who started that rumor. It was probably born out of the frustration of 10 year olds who weren't allowed to see it for one reason or other. Years after Gymkata was released, it became a perennial late night cable movie, and as a result, I've been able to make up for lost time. I must have seen scenes from this dreadful excuse for a film over a dozen times, and I can always spot it from 1-2 seconds of screen time. However, aside from the forced coupling of gymnastics and martial arts, the bad dubbing, the stiff dialog, and the outrageously difficult story-line, the film has some things going for it. With all that's bad about the movie visually, the sound is actually pretty entertaining. Never before has a punch or kick landed with so little force and so much volume! The canned kung-fu sounds are cheeky, but the slowed and pitched-down music, and the nearly 5 minute slow motion scene are truly weird. The chase through the city of demented, blood-thirsty villagers isn't really tense as much as it is irritating, and there are enough bad wigs and extras who all but look into the camera and wave to make this train-wreck a little fun. Could it be headed for cult-classic status? Where is MST3K when we need it?
- source_sentence: Seriously, 3 things that really get my blood boiling is hearing about child, animal and/or elder abuse. There aint much worse than worthless fucks who prey on those in our society who cannot defend themselves. Worthless people deserve nothing more than a rope around the neck or life in prison at the very least.
  sentences:
  - As elsewhere, we see polling places and campaign offices getting attacked by& checks notes again& oh yeah, also republicans. Yeah, that checks out.
  - Biometric ID System Said to Delay Venezuela Recall By CHRISTOPHER TOOTHAKER CARACAS, Venezuela (AP) -- The high-tech thumbprint devices were meant to keep people from voting more than once in the recall ballot against President Hugo Chavez. Instead, they often wound up working fitfully - even when Chavez himself voted - contributing to huge delays in Sunday's historic referendum...
  - Assuming a republican controlled senate (likely), he will replace Alito and Thomas. It won't necessarily move *further* to the right, but we'll be stuck with at least 5 conservative justices for the next 40 years.
- source_sentence: i should love this movie . the acting is very good and Barbara Stanwyck is great but the the movie has always seemed very trite to me . the movie makes working class people look low and cheap .the fact that the daughter is ashamed of her mother and that the daughter does not rise above it has always made me a bit uneasy . Barbara Stanwyck as the mother worships the daughter but the daughter forgoes a mothers love to find happiness with her well to do fathers family . i wonder how many others who have seen this film feel this way about it.again the acting was very very good and worth watching . i really don't like the story line . just a personal preference .thank you
  sentences:
  - god ..it takes me back...rolling skating at roller gardens,,,,,you cant top old school...the beats back then were so much better...
  - 'I''m glad I love my military and the 2nd amendment. #2A #Marines #tlot #USA'
  - We definitely need someone better than Trump in 2024, but for now he's all we got..
- source_sentence: That's just because his right arm is on the inside. Trump knows there's nothing he can do to win this round, and he's okay with that. Trump is well versed in handshake game strategy, as is Macron clearly.
  sentences:
  - "By Theo Burman - Live News Reporter: \n\nFormer President Donald Trump and Vice President Kamala Harris are in the final sprint to the finishing line in their race to the White House. There are 12 days until Election Day and both campaigns are working flat-out to win over voters in what is shaping up to be one of the closest presidential elections in modern history.\n\nAfter events in North Carolina and Georgia earlier this week, Trump is continuing his focus on the Sun Belt by heading to Arizona today, while Harris is hosting a rally alongside former President Barack Obama and Bruce Springsteen in Georgia.\n\nRead more: ["
  - "\"😂😂😂😂😂😂😂😂😂😂\nGay niggas couldn't wait to act like bitches tonight\""
  - When the regressive figure has tremendous power (such as a head of state) it's usually not worth risking diplomatic friendship to refuse a rather small thing such as wearing a hijab. Le Pen is making wearing a hijab a big thing to appeal to both anti-Islam and feminism emotions, in other words, risking diplomatic friendship to boost her own popularity. Nice move overall.
- source_sentence: Unfortunately, the angry masses demand what's not in their best interest because of brown people
  sentences:
  - "If Le Pen is perceived to be a US-puppet, wouldn't that rub a lot of patriotic/nationalistic voters the wrong way?\n\nIt doesn't seem to be a problem for Trumpists that acknowledge his close ties (sic) with Putin."
  - 'I made it 22 years. #metoo'
  - "Secondly, every major support has been leaving the boat during the campaign to be in Macron team, thus leaving Hamon alone in an already very fragile party.\n\nSounds like they made quite a ripple"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: similarity
      type: similarity
    metrics:
    - type: pearson_cosine
      value: 0.3952284283585713
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.41014481263817126
      name: Spearman Cosine
---

# SentenceTransformer

This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
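
The `Pooling` module above builds the sentence embedding by mean-pooling the token embeddings (`pooling_mode_mean_tokens: True`), averaging only over non-padding positions. A minimal pure-Python sketch of that operation (illustrative only; the real module operates on batched PyTorch tensors):

```python
# Sketch of masked mean pooling, as performed by the Pooling module above.
# Pure Python for clarity; not the library implementation.

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, counting only positions where the mask is 1."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            count += 1
            for i, v in enumerate(vec):
                sums[i] += v
    return [s / max(count, 1) for s in sums]

# Two real tokens and one padding token: the padded vector is ignored.
tokens = [[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]]
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 3.0]
```

With `pooling_mode_cls_token` the CLS vector would be taken instead; this model averages all non-padding tokens.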

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    "Unfortunately, the angry masses demand what's not in their best interest because of brown people",
    'I made it 22 years. #metoo',
    "If Le Pen is perceived to be a US-puppet, wouldn't that rub a lot of patriotic/nationalistic voters the wrong way?\n\nIt doesn't seem to be a problem for Trumpists that acknowledge his close ties (sic) with Putin.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6501, 0.5940],
#         [0.6501, 1.0000, 0.5664],
#         [0.5940, 0.5664, 1.0000]])
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Semantic Similarity

* Dataset: `similarity`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.3952     |
| **spearman_cosine** | **0.4101** |
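
`spearman_cosine` is the Spearman rank correlation between the model's cosine similarities and the gold labels: it rewards getting the *ordering* of pairs right rather than their exact values. A hedged pure-Python sketch of the computation (no tie handling; the evaluator itself uses `scipy.stats`):

```python
# Sketch of spearman_cosine: rank both series, then take the Pearson
# correlation of the ranks. Assumes no tied values, for simplicity.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def spearman(a, b):
    return pearson(ranks(a), ranks(b))

# A perfectly monotonic relationship scores 1.0 even when it is nonlinear.
print(spearman([0.1, 0.4, 0.9], [1.0, 2.0, 100.0]))  # 1.0
```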

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 11,180 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
  | details | <ul><li>min: 5 tokens</li><li>mean: 102.41 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 111.27 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.53</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | <code>We love peace, but not peace at any price.</code> | <code>That's totally not corrupt whatsoever. Also why the hell is a state attorney general meddling in federal government?</code> | <code>0.7071067811865475</code> |
  | <code>Am not from America, I usually watch this show on AXN channel, I don't know why this respected channel air such sucking program in prime time slot. Creation of Hollywood's Money Bank Jerry Bruckheimer, this time he is spending a big load of cash in the small screen. In each episode a bunch of peoples having two team members travels from on country to another for a great sum of money; where the camera crews shoot their travels. I don't know who the hell gave this stupid idea for the show. It has nothing to watch for, in all episodes we see people ran like beggars, some times shouting, crying, beeping, jerky camera works..huh it's harmful to both eyes and ears. The most disgusting part in the race is the viewers finally knows each of the team members can't enjoy their race/traveling experience. Even though, to add up the ratings the producers came up with the ideas of including Gays in one shows, sucking American reality show.It's nothing to watch for, better switch to another channels.T...</code> | <code>Background: Last year my [41F] brother, Gabe [36M] came to visit around my bday. There is a nice restaurant my family goes to for special occasions, and since Gabe is a chef, I was excited to take him. I made a rez for me, my SO, my kids [23NB, 21F], Gabe, and my sister, Ronnie [35F]. We had a great time. It was "adults only," so my nephews [15, 13] did not come. Since I invited them, we paid; the bill was about $400.<br><br>Gabe came to visit again in Sept, only stopping for a few days (arrived Sun eve, leaving early Wed am), on his way back home across the country. Asking if he wanted to do anything while in town, he said he'd like to go to that restaurant again. When we saw Ronnie (Sunday), I told her we were going "and you are coming with us."<br><br>Monday, I took the day off to hang out with Gabe, my sis had to work, but she didn't come over when she got off at 7pm.<br><br>Tuesday she came over with my nephews around 11am, with dinner rez for 6 ppl (same as last time) at 8pm. We hung out and as th...</code> | <code>0.3535533905932737</code> |
  | <code>I (M29) am trans. My girlfriend (F28, GF) is totally cool with it, always has been, we've been dating since college, 8 years in March. <br><br>GF's dad was abusive, so she left home at 18 and had to leave her baby sister behind.<br><br>2015, we're 24/23, in grad school, living together. GF gets some news: her dad died and, long story short, nobody can take her sister in.<br><br>We hire a lawyer to try for custody. I quit school to work fulltime so we can afford it. It takes a lot of time and work, but we get to take her home.<br><br>Fast forward to now. Kid (12, S) has school in person on Tu/Th, virtual learning the rest. Friday the 11th, while S was out walking the dog, I grabbed the hamper out of their room to do the laundry. The pocket of the hoodie they just wore to school was bunched up weird, so I checked it and pulled out a binder. <br><br>For those who don't know, a binder is usually used by trans people to flatten their chests so they can pass easier. The only other reason I could think of for someone to o...</code> | <code>Scores plan to leave Mormon church over its policy on same-sex couples - Gay Star News</code> | <code>0.4082482904638631</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
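
With an MSE `loss_fct`, `CosineSimilarityLoss` penalizes the squared gap between the cosine similarity of the two sentence embeddings and the gold label. A minimal sketch of the objective for a single pair (illustrative; not the library implementation, which works on batched tensors):

```python
# Sketch of CosineSimilarityLoss with an MSE loss_fct: the model is trained
# so that cos(u, v) matches the gold similarity label.
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def cosine_similarity_mse_loss(u, v, label):
    return (cosine(u, v) - label) ** 2

# Identical embeddings with gold label 1.0 incur zero loss.
print(cosine_similarity_mse_loss([1.0, 0.0], [1.0, 0.0], 1.0))  # 0.0
```

Since cosine similarity is bounded in [-1, 1], labels in that range (here 0.0 to 1.0) are always attainable.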

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch  | Step | Training Loss | similarity_spearman_cosine |
|:------:|:----:|:-------------:|:--------------------------:|
| 0.0286 | 10   | -             | 0.0535                     |
| 0.0571 | 20   | -             | 0.0570                     |
| 0.0857 | 30   | -             | 0.0681                     |
| 0.1143 | 40   | -             | 0.0739                     |
| 0.1429 | 50   | -             | 0.0572                     |
| 0.1714 | 60   | -             | 0.0250                     |
| 0.2    | 70   | -             | 0.0230                     |
| 0.2286 | 80   | -             | 0.0726                     |
| 0.2571 | 90   | -             | 0.0548                     |
| 0.2857 | 100  | -             | 0.0451                     |
| 0.3143 | 110  | -             | 0.0067                     |
| 0.3429 | 120  | -             | 0.0425                     |
| 0.3714 | 130  | -             | 0.0920                     |
| 0.4    | 140  | -             | 0.0823                     |
| 0.4286 | 150  | -             | 0.1165                     |
| 0.4571 | 160  | -             | 0.1405                     |
| 0.4857 | 170  | -             | 0.1661                     |
| 0.5143 | 180  | -             | 0.1657                     |
| 0.5429 | 190  | -             | 0.1832                     |
| 0.5714 | 200  | -             | 0.0056                     |
| 0.6    | 210  | -             | 0.1209                     |
| 0.6286 | 220  | -             | 0.1280                     |
| 0.6571 | 230  | -             | 0.1902                     |
| 0.6857 | 240  | -             | 0.2111                     |
| 0.7143 | 250  | -             | 0.2717                     |
| 0.7429 | 260  | -             | 0.2716                     |
| 0.7714 | 270  | -             | 0.2629                     |
| 0.8    | 280  | -             | 0.2171                     |
| 0.8286 | 290  | -             | 0.2742                     |
| 0.8571 | 300  | -             | 0.2913                     |
| 0.8857 | 310  | -             | 0.2813                     |
| 0.9143 | 320  | -             | 0.2863                     |
| 0.9429 | 330  | -             | 0.2918                     |
| 0.9714 | 340  | -             | 0.2951                     |
| 1.0    | 350  | -             | 0.3198                     |
| 1.0286 | 360  | -             | 0.3145                     |
| 1.0571 | 370  | -             | 0.3148                     |
| 1.0857 | 380  | -             | 0.2907                     |
| 1.1143 | 390  | -             | 0.3267                     |
| 1.1429 | 400  | -             | 0.3246                     |
| 1.1714 | 410  | -             | 0.3351                     |
| 1.2    | 420  | -             | 0.3463                     |
| 1.2286 | 430  | -             | 0.3531                     |
| 1.2571 | 440  | -             | 0.3398                     |
| 1.2857 | 450  | -             | 0.3169                     |
| 1.3143 | 460  | -             | 0.3304                     |
| 1.3429 | 470  | -             | 0.3315                     |
| 1.3714 | 480  | -             | 0.3684                     |
| 1.4    | 490  | -             | 0.3499                     |
| 1.4286 | 500  | 0.1429        | 0.3438                     |
| 1.4571 | 510  | -             | 0.3362                     |
| 1.4857 | 520  | -             | 0.3130                     |
| 1.5143 | 530  | -             | 0.3445                     |
| 1.5429 | 540  | -             | 0.3464                     |
| 1.5714 | 550  | -             | 0.3499                     |
| 1.6    | 560  | -             | 0.3626                     |
| 1.6286 | 570  | -             | 0.3743                     |
| 1.6571 | 580  | -             | 0.3714                     |
| 1.6857 | 590  | -             | 0.3774                     |
| 1.7143 | 600  | -             | 0.3624                     |
| 1.7429 | 610  | -             | 0.3861                     |
| 1.7714 | 620  | -             | 0.3925                     |
| 1.8    | 630  | -             | 0.3763                     |
| 1.8286 | 640  | -             | 0.3906                     |
| 1.8571 | 650  | -             | 0.4034                     |
| 1.8857 | 660  | -             | 0.3887                     |
| 1.9143 | 670  | -             | 0.3970                     |
| 1.9429 | 680  | -             | 0.3787                     |
| 1.9714 | 690  | -             | 0.3958                     |
| 2.0    | 700  | -             | 0.3812                     |
| 2.0286 | 710  | -             | 0.3951                     |
| 2.0571 | 720  | -             | 0.4066                     |
| 2.0857 | 730  | -             | 0.4030                     |
| 2.1143 | 740  | -             | 0.4029                     |
| 2.1429 | 750  | -             | 0.3899                     |
| 2.1714 | 760  | -             | 0.3898                     |
| 2.2    | 770  | -             | 0.3987                     |
| 2.2286 | 780  | -             | 0.4007                     |
| 2.2571 | 790  | -             | 0.4040                     |
| 2.2857 | 800  | -             | 0.4101                     |

### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 5.1.0
- Transformers: 4.53.3
- PyTorch: 2.5.1
- Accelerate: 1.10.0
- Datasets: 2.14.4
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "architectures": [
    "RobertaModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.53.3",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}
config_sentence_transformers.json ADDED
{
  "model_type": "SentenceTransformer",
  "__version__": {
    "sentence_transformers": "5.1.0",
    "transformers": "4.53.3",
    "pytorch": "2.5.1"
  },
  "prompts": {
    "query": "",
    "document": ""
  },
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b10019cfc5f6104c62914a36b8dcca56975dd6cb1efdb34f50ed1bc0702e31b1
size 1421483904
modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
{
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "unk_token": "<unk>"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50264": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "errors": "replace",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "model_max_length": 512,
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "tokenizer_class": "RobertaTokenizer",
  "trim_offsets": true,
  "unk_token": "<unk>"
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff