SentenceTransformer based on PaDaS-Lab/xlm-roberta-base-msmarco

This is a sentence-transformers model finetuned from PaDaS-Lab/xlm-roberta-base-msmarco. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: PaDaS-Lab/xlm-roberta-base-msmarco
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
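The Pooling module above produces one 768-dimensional vector per input by averaging the token embeddings, ignoring padding positions (pooling_mode_mean_tokens: True). A minimal sketch of that operation in plain PyTorch; the `mean_pool` helper and dummy tensors are illustrative, not part of the library API:

```python
import torch

# Illustrative mean pooling, as configured by pooling_mode_mean_tokens=True:
# average each sequence's token embeddings, skipping padding positions.
def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # real tokens per sequence
    return summed / counts                         # (batch, 768)

token_embeddings = torch.randn(2, 5, 768)          # dummy transformer output
attention_mask = torch.tensor([[1, 1, 1, 0, 0],    # first sequence has 2 pad tokens
                               [1, 1, 1, 1, 1]])
print(mean_pool(token_embeddings, attention_mask).shape)  # torch.Size([2, 768])
```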

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("IrvinTopi/mnrl-hardnegatives14")
# Run inference
sentences = [
    'Dove è già attivo il 5G in Italia?',
    'La connettività 5G è già arrivata in Italia: secondo i dati dell’Osservatorio 5G della Commissione europea, la copertura nel nostro Paese è del 99,7%.\nIl 5G è attivo e già funzionante in 38 paesi dell’Europa tra cui l’Italia e proposto da circa 50 operatori. Si prevede che entro il 2025 coprirà un terzo dell’Europa con 232 milioni di connessioni.\nLa connettività 5G risolve molti problemi consentendo di connettere molto più dispositivi e nel mondo ce ne sono già più di 5 miliardi. I vantaggi sono dati da una minore latenza e dall’ampliamento della larghezza di banda.\nLa latenza o tempo di risposta riguarda quel tempo che intercorre tra l’invio e la ricezione dei dati tra un dispositivo e l’altro: più è minore più avremo un servizio veloce.\nLa larghezza di banda ampliata invece, ci darà la possibilità di usufruire di una velocità di download e di upload molto elevata, quindi potremo vedere video o trasmissioni in streaming con una migliore qualità video.\nIl 5G attualmente è attivo in Italia e copre oramai quasi tutto il territorio nazionale escludendo alcune zone alpine e lungo l’Appennino da Nord a Sud.',
    'LiberoGioco possiede licenza ADM, Agenzia Dogane Monopoli ex AAMS, dunque è perfettamente legale e attivo regolarmente in Italia.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6971, 0.4033],
#         [0.6971, 1.0000, 0.2877],
#         [0.4033, 0.2877, 1.0000]])
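Because the model maps every text into the same vector space, semantic search reduces to comparing a query embedding against precomputed document embeddings. A hedged sketch of that ranking step, with random vectors standing in for real `model.encode()` output:

```python
import torch
import torch.nn.functional as F

# Stand-ins for model.encode(query) and model.encode(documents);
# real usage would produce these with the model loaded above.
torch.manual_seed(0)
query_emb = torch.randn(1, 768)
doc_embs = torch.randn(4, 768)

# Cosine similarity, matching the model's declared similarity function.
scores = F.normalize(query_emb, dim=-1) @ F.normalize(doc_embs, dim=-1).T  # (1, 4)

# Document indices, best match first.
ranking = scores.argsort(dim=-1, descending=True)
print(ranking.tolist())
```

With real embeddings, `model.similarity(query_emb, doc_embs)` returns the same cosine-similarity matrix in a single call.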

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,275,683 training samples
  • Columns: sentence_0, sentence_1, sentence_2, sentence_3, sentence_4, and sentence_5
  • Approximate statistics based on the first 1000 samples (token counts per column):
    • sentence_0 (string): min 6, mean 14.96, max 115
    • sentence_1 (string): min 10, mean 75.86, max 512
    • sentence_2 (string): min 12, mean 99.18, max 512
    • sentence_3 (string): min 12, mean 100.11, max 512
    • sentence_4 (string): min 11, mean 96.4, max 512
    • sentence_5 (string): min 9, mean 99.25, max 512
  • Samples:
    sentence_0 sentence_1 sentence_2 sentence_3 sentence_4 sentence_5
    Czy mus nie pozostawia tłustej warstwy na skórze? Nasz mus do ciała - Len-Konopie - skomponowany jest w oparciu o oleje i masła roślinne - otula on skórę natłuszczającą warstwą ochronną, która potrzebuje czasu, aby się wchłonąć. Aplikowanie musu na nieco wilgotną (np. po kąpieli/prysznicu) skórę sprawi, że mus wchłonie się szybciej, pozostawiając skórę nawilżoną i miękką w dotyku. Masła do ciała mają gęstą i cięższą konsystencję. Zawierają zazwyczaj sporą ilość naturalnych składników odżywczych (olejków i ekstraktów) oraz głęboko wnikają w skórę, odżywiając ją i nawilżając. To dobry wybór, gdy Twoje ciało potrzebuje dogłębnej regeneracji. Krem do ciała ma z kolei najlżejszą konsystencję: nawilża, a jednocześnie szybko się wchłania i nie pozostawia na skórze tłustej warstwy. Najlżejszą konsystencję ze wszystkich kosmetyków ma mleczko do ciała. Tak! Dzięki zawartości oleju z konopi, który zawiera ok. 75% niezbędnych nienasyconych kwasów tłuszczowych mus wyróżnia się właściwościami kojącymi i łagodzącymi podrażnienia dla skór suchych, szorstkich czy atopowych właśnie :) Z uwagi na zawartość olejków eterycznych w składzie, tym z Państwa, którzy borykają się z atopią, zalecałybyśmy wcześniejsze skonsultowanie składu z lekarzem dermatologiem. Wiotkość skóry to utrata elastyczności i jędrności, wynikająca z naturalnego procesu starzenia, ekspozycji na słońce czy nieodpowiedniej pielęgnacji. Skóra staje się cienka, mniej sprężysta i opada, zwłaszcza na twarzy, szyi oraz dekolcie. Laser frakcyjny CO2 działa na głębsze warstwy skóry, stymulując produkcję kolagenu, co skutecznie ujędrnia skórę i przywraca jej młody wygląd. Gazetka LIDL 2023 to świetny sposób na sprawdzenie, czy musy truskawkowe LIDL są aktualnie dostępne oraz czy musy truskawkowe z Lidla są w promocji, czy też nie. Jeżeli nie wiesz, jakie ma opinie mus truskawkowy LIDL, przeczytaj co napisali inni użytkownicy.
    É precisa de se cadastrar e-mail em Eletro Angeloni? Sim, quando fazam comprars em Eletro Angeloni pode se registrar na página de venda.Eletro Angeloni queria oferecer aos clientes uma melhor experiência de compra e serviços, lançou benefícios de associação especialmente. Para obter benefícios específicos para membros, você pode se registrar como um membro Eletro Angeloni através do seguinte endereço de e-mail. Sim, é importante se cadastrar por e-mail quando fazam comprars em Stocklots24. Stocklots24 tem um sistema exclusivo de descontos para membros. Você pode se tornar um membro registrando um e-mail em stocklots24.fr. Depois de se tornar um membro, você pode aproveitar diferentes benefícios de Stocklots24. Você deve ter pelo menos 18 anos de idade para se cadastrar na Sportsbet.io. Além disso, é necessário fornecer informações precisas, como nome, data de nascimento, e-mail e criar uma senha segura. Infelizamente Mario Eletro não suporta o uso de cupons em pilha. O uso de Cupom Mario Eletro é claramente estipulado e Mario Eletro não pode se sobrepor a Cupom de Desconto Mario Eletro. Mas, para obter mais descontos, os clientes podem preferir usar Cupom Desconto Mario Eletro com o maior desconto. Sim, você precisa. Os clientes podem se cadastrar em PneuStore account por e-mail. Desta forma, os clientes podem entender o último PneuStore código do cupom. A entrada de registro normalmente está localizada na parte inferior da página inicial de PneuStore, e você pode cancelar a inscrição PneuStore deste serviço a qualquer momento.
    Does anyone at Squlpt speak Spanish? Yes, we have Spanish-speaking team members who will be more than happy to communicate with you in Spanish or English. Portuguese and Spanish are both Romance languages and share many similarities. If you already speak Spanish or have knowledge of Spanish, it can make learning Portuguese easier due to the similarities in vocabulary and grammar. The majority of locals in Cuba only speak Spanish, but areas that are popular with tourists are likely to have some people who do speak English well. Learning some basic Spanish phrases and words before you travel is a good idea in case you need help at any point, and also to better immerse yourself in Cuban culture. Yes, we have opticians and eye doctors who speak Spanish. Please contact a store for an appointment so we can make sure a Spanish-speaking staff member is available. Language barriers in the workplace can have a direct impact on the effectiveness of safety programs. The first step to an effective safety program that accommodates Spanish-speaking workers is to add bilingual employees to the safety committee. By having Spanish-speaking individuals on the committee, you can ensure Hispanic employees are comfortable with talking to the safety committee, asking them questions,
  • Loss: CachedMultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "mini_batch_size": 32,
        "gather_across_devices": false
    }
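With six columns per sample, each anchor (sentence_0) is trained to score its positive (sentence_1) above its hard negatives (sentence_2 through sentence_5) and all other in-batch positives; the caching in CachedMultipleNegativesRankingLoss reduces memory use but does not change the objective. A simplified sketch of that objective in plain PyTorch, without the gradient caching; `mnrl_loss` is an illustrative helper, not the library implementation:

```python
import torch
import torch.nn.functional as F

# Simplified multiple-negatives ranking objective (no gradient caching).
# anchors: (B, D); positives: (B, D); hard_negatives: (N, D).
def mnrl_loss(anchors, positives, hard_negatives, scale=20.0):
    a = F.normalize(anchors, dim=-1)
    c = F.normalize(torch.cat([positives, hard_negatives]), dim=-1)
    scores = scale * (a @ c.T)          # scaled cosine similarities, (B, B + N)
    labels = torch.arange(a.size(0))    # anchor i's positive sits in column i
    return F.cross_entropy(scores, labels)

torch.manual_seed(0)
anchors = torch.randn(4, 64)
loss = mnrl_loss(anchors, anchors.clone(), torch.randn(8, 64))
print(loss.item())  # near zero: each anchor matches its positive exactly
```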
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 1
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
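Expressed as training arguments, the non-default settings above would look roughly like this (a configuration sketch: `output_dir` is a placeholder, and all remaining arguments keep their defaults):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/mnrl-hardnegatives",  # placeholder path
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=1,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)
```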

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.0502 500 1.4085
0.1003 1000 0.3267
0.1505 1500 0.2822
0.2007 2000 0.2655
0.2508 2500 0.2463
0.3010 3000 0.2391
0.3512 3500 0.2322
0.4013 4000 0.2252
0.4515 4500 0.2172
0.5017 5000 0.2162
0.5518 5500 0.2101
0.6020 6000 0.2099
0.6522 6500 0.2007
0.7023 7000 0.2043
0.7525 7500 0.1987
0.8026 8000 0.1985
0.8528 8500 0.1948
0.9030 9000 0.1973
0.9531 9500 0.1978
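As a sanity check, the log lines up with the dataset and batch size above: 1,275,683 samples at a batch size of 128 give roughly 9,967 steps per epoch, so step 9,500 falls at epoch 0.9531, matching the final row (assuming a single device, which the card does not state explicitly):

```python
import math

# Steps per epoch implied by the dataset size and batch size above
# (assumes a single device; the card does not state the GPU count).
steps_per_epoch = math.ceil(1_275_683 / 128)
print(steps_per_epoch)                    # 9967
print(round(9500 / steps_per_epoch, 4))   # 0.9531
```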

Framework Versions

  • Python: 3.10.4
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 2.21.0
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CachedMultipleNegativesRankingLoss

@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}