SentenceTransformer

This is a sentence-transformers model built on a RoBERTa-large backbone. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~0.4B parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
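
For reference, a model with this exact module layout can be assembled from Sentence Transformers building blocks. A minimal sketch, assuming a roberta-large backbone (the checkpoint name is inferred from the architecture and model ID, not stated explicitly on this card):

from sentence_transformers import SentenceTransformer, models

# Transformer module: RoBERTa encoder, inputs truncated at 512 tokens
word_embedding_model = models.Transformer("roberta-large", max_seq_length=512)

# Pooling module: mean over token embeddings (pooling_mode_mean_tokens above)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 1024
    pooling_mode="mean",
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])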

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Culture-and-Morality-Lab/psyembedding-roberta-large")
# Run inference
sentences = [
    "Unfortunately, the angry masses demand what's not in their best interest because of brown people",
    'I made it 22 years. #metoo',
    "If Le Pen is perceived to be a US-puppet, wouldn't that rub a lot of patriotic/nationalistic voters the wrong way?\n\nIt doesn't seem to be a problem for Trumpists that acknowledge his close ties (sic) with Putin.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6501, 0.5940],
#         [0.6501, 1.0000, 0.5664],
#         [0.5940, 0.5664, 1.0000]])
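
Beyond pairwise similarity, the same embeddings support the other tasks listed above, such as semantic search. A minimal sketch using the library's util.semantic_search helper (the corpus and query strings are illustrative, not taken from the training data):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Culture-and-Morality-Lab/psyembedding-roberta-large")

corpus = [
    "The cat sits on the mat.",
    "Stock markets fell sharply today.",
    "A kitten rests on a rug.",
]
query = "A cat is lying on a rug."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the top-2 corpus entries by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], hit["score"])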

Evaluation

Metrics

Semantic Similarity

Metric Value
pearson_cosine 0.3952
spearman_cosine 0.4101
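
These scores measure how well cosine similarities between embedding pairs correlate with gold similarity labels. A minimal sketch of computing such metrics with the library's EmbeddingSimilarityEvaluator (the sentence pairs and gold scores below are placeholders):

from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Culture-and-Morality-Lab/psyembedding-roberta-large")

# Placeholder pairs with gold similarity scores in [0, 1]
sentences1 = ["A man is eating food.", "A plane is taking off."]
sentences2 = ["A man is eating a meal.", "A bird is flying."]
gold_scores = [0.9, 0.2]

evaluator = EmbeddingSimilarityEvaluator(
    sentences1, sentences2, gold_scores,
    main_similarity=SimilarityFunction.COSINE,
)
print(evaluator(model))  # includes pearson_cosine and spearman_cosine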

Training Details

Training Dataset

Unnamed Dataset

  • Size: 11,180 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 5 tokens, mean 102.41 tokens, max 512 tokens
    • sentence_1 (string): min 5 tokens, mean 111.27 tokens, max 512 tokens
    • label (float): min 0.0, mean 0.53, max 1.0
  • Samples:

    Sample 1 (label: 0.7071067811865475)
    • sentence_0: We love peace, but not peace at any price.
    • sentence_1: That's totally not corrupt whatsoever. Also why the hell is a state attorney general meddling in federal government?

    Sample 2 (label: 0.3535533905932737)
    • sentence_0: Am not from America, I usually watch this show on AXN channel, I don't know why this respected channel air such sucking program in prime time slot. Creation of Hollywood's Money Bank Jerry Bruckheimer, this time he is spending a big load of cash in the small screen. In each episode a bunch of peoples having two team members travels from on country to another for a great sum of money; where the camera crews shoot their travels. I don't know who the hell gave this stupid idea for the show. It has nothing to watch for, in all episodes we see people ran like beggars, some times shouting, crying, beeping, jerky camera works..huh it's harmful to both eyes and ears. The most disgusting part in the race is the viewers finally knows each of the team members can't enjoy their race/traveling experience. Even though, to add up the ratings the producers came up with the ideas of including Gays in one shows, sucking American reality show.It's nothing to watch for, better switch to another channels.T...
    • sentence_1: Background: Last year my [41F] brother, Gabe [36M] came to visit around my bday. There is a nice restaurant my family goes to for special occasions, and since Gabe is a chef, I was excited to take him. I made a rez for me, my SO, my kids [23NB, 21F], Gabe, and my sister, Ronnie [35F]. We had a great time. It was "adults only," so my nephews [15, 13] did not come. Since I invited them, we paid; the bill was about $400.

      Gabe came to visit again in Sept, only stopping for a few days (arrived Sun eve, leaving early Wed am), on his way back home across the country. Asking if he wanted to do anything while in town, he said he'd like to go to that restaurant again. When we saw Ronnie (Sunday), I told her we were going "and you are coming with us."

      Monday, I took the day off to hang out with Gabe, my sis had to work, but she didn't come over when she got off at 7pm.

      Tuesday she came over with my nephews around 11am, with dinner rez for 6 ppl (same as last time) at 8pm. We hung out and as th...

    Sample 3 (label: 0.4082482904638631)
    • sentence_0: I (M29) am trans. My girlfriend (F28, GF) is totally cool with it, always has been, we've been dating since college, 8 years in March.

      GF's dad was abusive, so she left home at 18 and had to leave her baby sister behind.

      2015, we're 24/23, in grad school, living together. GF gets some news: her dad died and, long story short, nobody can take her sister in.

      We hire a lawyer to try for custody. I quit school to work fulltime so we can afford it. It takes a lot of time and work, but we get to take her home.

      Fast forward to now. Kid (12, S) has school in person on Tu/Th, virtual learning the rest. Friday the 11th, while S was out walking the dog, I grabbed the hamper out of their room to do the laundry. The pocket of the hoodie they just wore to school was bunched up weird, so I checked it and pulled out a binder.

      For those who don't know, a binder is usually used by trans people to flatten their chests so they can pass easier. The only other reason I could think of for someone to o...
    • sentence_1: Scores plan to leave Mormon church over its policy on same-sex couples - Gay Star News
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
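
For reference, CosineSimilarityLoss regresses the cosine similarity of the two sentence embeddings onto the label via MSELoss. A minimal training sketch under these assumptions (the dataset rows are placeholders; the actual training data is not published here, and the backbone name is assumed as above):

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("roberta-large")  # assumed backbone, see above

# Placeholder rows in the (sentence_0, sentence_1, label) format described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["We love peace, but not peace at any price."],
    "sentence_1": ["That's totally not corrupt whatsoever."],
    "label": [0.7071067811865475],
})

# cosine(u, v) is fit to the label with MSE
train_loss = losses.CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=train_loss,
)
trainer.train()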
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
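
A minimal sketch of how these non-default values map onto SentenceTransformerTrainingArguments (output_dir is a placeholder; num_train_epochs: 3 comes from the full list below):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)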

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss similarity_spearman_cosine
0.0286 10 - 0.0535
0.0571 20 - 0.0570
0.0857 30 - 0.0681
0.1143 40 - 0.0739
0.1429 50 - 0.0572
0.1714 60 - 0.0250
0.2 70 - 0.0230
0.2286 80 - 0.0726
0.2571 90 - 0.0548
0.2857 100 - 0.0451
0.3143 110 - 0.0067
0.3429 120 - 0.0425
0.3714 130 - 0.0920
0.4 140 - 0.0823
0.4286 150 - 0.1165
0.4571 160 - 0.1405
0.4857 170 - 0.1661
0.5143 180 - 0.1657
0.5429 190 - 0.1832
0.5714 200 - 0.0056
0.6 210 - 0.1209
0.6286 220 - 0.1280
0.6571 230 - 0.1902
0.6857 240 - 0.2111
0.7143 250 - 0.2717
0.7429 260 - 0.2716
0.7714 270 - 0.2629
0.8 280 - 0.2171
0.8286 290 - 0.2742
0.8571 300 - 0.2913
0.8857 310 - 0.2813
0.9143 320 - 0.2863
0.9429 330 - 0.2918
0.9714 340 - 0.2951
1.0 350 - 0.3198
1.0286 360 - 0.3145
1.0571 370 - 0.3148
1.0857 380 - 0.2907
1.1143 390 - 0.3267
1.1429 400 - 0.3246
1.1714 410 - 0.3351
1.2 420 - 0.3463
1.2286 430 - 0.3531
1.2571 440 - 0.3398
1.2857 450 - 0.3169
1.3143 460 - 0.3304
1.3429 470 - 0.3315
1.3714 480 - 0.3684
1.4 490 - 0.3499
1.4286 500 0.1429 0.3438
1.4571 510 - 0.3362
1.4857 520 - 0.3130
1.5143 530 - 0.3445
1.5429 540 - 0.3464
1.5714 550 - 0.3499
1.6 560 - 0.3626
1.6286 570 - 0.3743
1.6571 580 - 0.3714
1.6857 590 - 0.3774
1.7143 600 - 0.3624
1.7429 610 - 0.3861
1.7714 620 - 0.3925
1.8 630 - 0.3763
1.8286 640 - 0.3906
1.8571 650 - 0.4034
1.8857 660 - 0.3887
1.9143 670 - 0.3970
1.9429 680 - 0.3787
1.9714 690 - 0.3958
2.0 700 - 0.3812
2.0286 710 - 0.3951
2.0571 720 - 0.4066
2.0857 730 - 0.4030
2.1143 740 - 0.4029
2.1429 750 - 0.3899
2.1714 760 - 0.3898
2.2 770 - 0.3987
2.2286 780 - 0.4007
2.2571 790 - 0.4040
2.2857 800 - 0.4101

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 5.1.0
  • Transformers: 4.53.3
  • PyTorch: 2.5.1
  • Accelerate: 1.10.0
  • Datasets: 2.14.4
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}