---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:65698
- loss:ContrastiveLoss
base_model: B0ketto/tmp_trainer
widget:
- source_sentence: Enforcement of minor traffic offenses leads to the discovery of
    more serious crimes.
  sentences:
  - Western culture has created independent women who are strong on their own and
    do not need the protection or support of their husband. This reduces the subjugation
    of women.
  - Philando Castile, stopped for a broken tailight, was shot seven times and killed
    trying to comply with the officer's request for identification.
  - The children will have several older / more mature stepmothers.
- source_sentence: Women and men can always file for divorce.
  sentences:
  - A partner having multiple partners is taken care of enough. There is probably
    less need to find even more partners. This is also a matter of free time, when
    having multiple partners free time is probably rare.
  - The power relations in polygamous marriages should be even more favorable to female
    sponsored divorce as it is more likely that women can keep their children while
    at the same time the man becomes less dependent on one woman emotionally.
  - People close to the individual who commits suicide may feel that they could and
    should have done more to prevent it, thus leaving them with intense feelings of
    guilt.
- source_sentence: 'It''s okay that specific groups of people are not allowed to vote.
    For example: children aren''t usually allowed to vote, because they are considered
    too young - too inexperienced. The same kind of logic could be used to "filter
    out" people who have very little knowledge of the world or terrible analytical
    capabilities.'
  sentences:
  - Those who have a medically diagnosed incapacity for voting should not be allowed
    to vote, because they may be far more easily swayed to vote one way or another.
    However, this must be regulated to medically diagnosed conditions on a mental
    level.
  - Representation is foundational to the American DNA. "No taxation without representation"
    is one of our oldest grievance slogans. Removing the ability of any group to vote
    reinstates this 400-year old injustice.
  - Retailers would supposedly be able to sell the discarded bottles on, thereby making
    a profit after the initial investment into the necessary infrastructure.
- source_sentence: 'It''s okay that specific groups of people are not allowed to vote.
    For example: children aren''t usually allowed to vote, because they are considered
    too young - too inexperienced. The same kind of logic could be used to "filter
    out" people who have very little knowledge of the world or terrible analytical
    capabilities.'
  sentences:
  - Planned Parenthood is not only offering abortions but a host of other services,
    such as clinical breast examination.
  - Some budgetary problems for local law enforcement would be alleviated by removing
    proactive policing duties from the officer's mission.
  - The benefit is to keep those who you do not wish to vote, unable to pass the test.
    This can lead to education suppression, as an example. There are vast amounts
    of education imbalance which can be furthered to suppress votes from those who
    wish to change the system-- ergo, suppressing those who would wrest power from
    those who wish to maintain it through unfair means.
- source_sentence: For children, it is bad to grow up in a polygamous family.
  sentences:
  - Polygamous families tend to have more children.
  - The right of adults to marry should not be precluded by a person's distaste for
    their marital structure. The same argument is used against same-sex marriage,
    and it is invariably irrelevant.
  - This threatens the idea of true democracy.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on B0ketto/tmp_trainer

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [B0ketto/tmp_trainer](https://huggingface.co/B0ketto/tmp_trainer). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [B0ketto/tmp_trainer](https://huggingface.co/B0ketto/tmp_trainer) <!-- at revision 3ac152b5b7c2227049ce77084d6de8c3b57acc4a -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

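As a quick sanity check, the figures above (384-token window, 768-dimensional output, cosine similarity) can be read back from the loaded model. This is a minimal sketch; `sentence_transformers_model_id` is the same placeholder used in the usage example below and should be replaced with this model's actual Hub repository ID.

```python
from sentence_transformers import SentenceTransformer

# Placeholder repository ID -- substitute this model's actual Hub ID
model = SentenceTransformer("sentence_transformers_model_id")

print(model.max_seq_length)                      # 384
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # cosine
```
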
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'For children, it is bad to grow up in a polygamous family.',
    'Polygamous families tend to have more children.',
    'This threatens the idea of true democracy.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
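
Beyond pairwise similarity, the same embeddings can back a small semantic-search setup. This is a minimal sketch, not part of the generated card: the corpus sentences are invented for illustration, and `sentence_transformers_model_id` is again a placeholder repository ID.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder repo ID

# Hypothetical corpus of claims to search over
corpus = [
    "Polygamous families tend to have more children.",
    "This threatens the idea of true democracy.",
    "Retailers could resell returned bottles at a profit.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "For children, it is bad to grow up in a polygamous family."
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 most similar corpus entries for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```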

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 65,698 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | label |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | int   |
  | details | <ul><li>min: 7 tokens</li><li>mean: 25.0 tokens</li><li>max: 130 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 31.05 tokens</li><li>max: 130 tokens</li></ul> | <ul><li>0: ~55.50%</li><li>1: ~44.50%</li></ul> |
* Samples:
  | sentence1 | sentence2 | label |
  |:----------|:----------|:------|
  | <code>Public opinion favors euthanasia which suggests some support for a right to die.</code> | <code>Europeans generally support euthanasia. For example, more than 70% of citizens of Spain, Germany, France and Britain are in favor.</code> | <code>1</code> |
  | <code>Public opinion favors euthanasia which suggests some support for a right to die.</code> | <code>In the US, support for assisted suicide has risen to 69% acceptance rate in the last few decades.</code> | <code>1</code> |
  | <code>Public opinion favors euthanasia which suggests some support for a right to die.</code> | <code>The young and healthy that are asked in polls cannot imagine a situation of disability. This, so the criticism goes, blurs their image of euthanasia.</code> | <code>0</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
  ```json
  {
      "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
      "margin": 0.5,
      "size_average": true
  }
  ```
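
For reference, the parameters above correspond to instantiating the loss roughly as follows. This is a sketch rather than the original training code; `B0ketto/tmp_trainer` appears here only because it is the base model named above.

```python
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.losses import SiameseDistanceMetric

model = SentenceTransformer("B0ketto/tmp_trainer")  # base model this card was finetuned from

# ContrastiveLoss configured with the parameters listed above
loss = losses.ContrastiveLoss(
    model=model,
    distance_metric=SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)
```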

### Training Hyperparameters

#### All Hyperparameters
A short sketch showing how the key values map onto the trainer API follows this list.
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
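
Read together with the loss configuration above, the key values (per-device batch size 8, learning rate 5e-05, 3 epochs, linear schedule, seed 42) suggest a training setup along the following lines. This is a reconstruction sketched from the logged arguments, not the original training script; the tiny `Dataset.from_dict` stand-in and the `output_dir` are hypothetical.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.losses import SiameseDistanceMetric

model = SentenceTransformer("B0ketto/tmp_trainer")

# Hypothetical stand-in for the 65,698-pair dataset (columns: sentence1, sentence2, label)
train_dataset = Dataset.from_dict({
    "sentence1": ["Public opinion favors euthanasia."],
    "sentence2": ["Europeans generally support euthanasia."],
    "label": [1],
})

# Loss configured as shown in the Training Dataset section
loss = losses.ContrastiveLoss(model, distance_metric=SiameseDistanceMetric.COSINE_DISTANCE, margin=0.5)

args = SentenceTransformerTrainingArguments(
    output_dir="tmp_trainer",          # assumed output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```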

### Training Logs
| Epoch  | Step  | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0609 | 500   | 0.0256        |
| 0.1218 | 1000  | 0.0257        |
| 0.1826 | 1500  | 0.0263        |
| 0.2435 | 2000  | 0.0291        |
| 0.3044 | 2500  | 0.0276        |
| 0.3653 | 3000  | 0.0304        |
| 0.4262 | 3500  | 0.0297        |
| 0.4870 | 4000  | 0.0332        |
| 0.5479 | 4500  | 0.033         |
| 0.6088 | 5000  | 0.0328        |
| 0.6697 | 5500  | 0.0328        |
| 0.7305 | 6000  | 0.0331        |
| 0.7914 | 6500  | 0.0321        |
| 0.8523 | 7000  | 0.0326        |
| 0.9132 | 7500  | 0.0329        |
| 0.9741 | 8000  | 0.0318        |
| 1.0349 | 8500  | 0.0323        |
| 1.0958 | 9000  | 0.0321        |
| 1.1567 | 9500  | 0.0321        |
| 1.2176 | 10000 | 0.0322        |
| 1.2785 | 10500 | 0.0321        |
| 1.3393 | 11000 | 0.0317        |
| 1.4002 | 11500 | 0.0317        |
| 1.4611 | 12000 | 0.0315        |
| 1.5220 | 12500 | 0.0318        |
| 1.5829 | 13000 | 0.0319        |
| 1.6437 | 13500 | 0.0315        |
| 1.7046 | 14000 | 0.0313        |
| 1.7655 | 14500 | 0.0294        |
| 1.8264 | 15000 | 0.0292        |
| 1.8873 | 15500 | 0.0278        |
| 1.9481 | 16000 | 0.0286        |
| 2.0090 | 16500 | 0.0274        |
| 2.0699 | 17000 | 0.0273        |
| 2.1308 | 17500 | 0.027         |
| 2.1916 | 18000 | 0.0271        |
| 2.2525 | 18500 | 0.0265        |
| 2.3134 | 19000 | 0.0262        |
| 2.3743 | 19500 | 0.0254        |
| 2.4352 | 20000 | 0.0255        |
| 2.4960 | 20500 | 0.0256        |
| 2.5569 | 21000 | 0.0252        |
| 2.6178 | 21500 | 0.0246        |
| 2.6787 | 22000 | 0.0251        |
| 2.7396 | 22500 | 0.0238        |
| 2.8004 | 23000 | 0.025         |
| 2.8613 | 23500 | 0.0247        |
| 2.9222 | 24000 | 0.0252        |
| 2.9831 | 24500 | 0.0237        |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0

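To approximate this environment, the versions above can be pinned at install time. This is a sketch assuming the standard PyPI package names; the CUDA 12.4 build of PyTorch (2.5.1+cu124) may need to be installed from PyTorch's own index rather than via the plain pin below.

```bash
pip install "sentence-transformers==3.4.1" "transformers==4.48.3" \
    "accelerate==1.3.0" "datasets==3.3.1" "tokenizers==0.21.0" "torch==2.5.1"
```
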
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
    author={Hadsell, R. and Chopra, S. and LeCun, Y.},
    booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
    title={Dimensionality Reduction by Learning an Invariant Mapping},
    year={2006},
    volume={2},
    number={},
    pages={1735-1742},
    doi={10.1109/CVPR.2006.100}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->