---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:29248
- loss:CosineSimilarityLoss
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
widget:
- source_sentence: Cómo prevenir hackeos en redes sociales
  sentences:
  - Me preocupa que UTE me cobre de más, estafas posibles?
  - ¿Cuán seguro es almacenarlo todo en la nube?
  - Cómo prevenir hackeos en redes sociales
- source_sentence: ¿Qué es un ataque de spoofing?
  sentences:
  - Me hackearon Instagram, qué hago ahora
  - ¿Qué tanto extracto de datos hace una app común?
  - ¿Qué es un ataque de spoofing?
- source_sentence: ¿Qué cuidados debo tener al realizar compras online?
  sentences:
  - ¿Qué cuidados debo tener al realizar compras online?
  - ¿Qué beneficios ofrece el yoga para mejorar la flexibilidad?
  - ¿Cómo proteger mis datos personales en el DGI?
- source_sentence: >-
    - ¿En qué sectores es fundamental el comercio exterior en la economía
    uruguaya?
  sentences:
  - ¿Qué tanto afecta el malware a las empresas?
  - >-
    - ¿En qué sectores es fundamental el comercio exterior en la economía
    uruguaya?
  - '- ¿Qué cautiva a los turistas del Parque Nacional Santa Teresa?'
- source_sentence: ¿Qué implicancias tiene un ataque DDoS en una empresa uruguaya?
  sentences:
  - '- ¿Cuáles son los secretos para un flan con dulce de leche perfecto?'
  - ¿Uso un segundo factor de autenticación?
  - ¿Qué implicancias tiene un ataque DDoS en una empresa uruguaya?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-multilingual-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://www.sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
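The Pooling module performs mean pooling over the token embeddings produced by the XLM-RoBERTa transformer. As a rough, non-canonical sketch of what that means, the same 768-dimensional embedding can be reproduced with the plain `transformers` library by averaging the last hidden states under the attention mask (the model id below is the same placeholder used in the usage snippet further down):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# "sentence_transformers_model_id" is a placeholder; replace it with the
# actual repository id of this model on the Hub.
model_id = "sentence_transformers_model_id"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["¿Qué es un ataque de spoofing?", "Me hackearon Instagram, qué hago ahora"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token embeddings, ignoring padded positions via the attention mask.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # torch.Size([2, 768])
```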
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    '¿Qué implicancias tiene un ataque DDoS en una empresa uruguaya?',
    '¿Qué implicancias tiene un ataque DDoS en una empresa uruguaya?',
    '¿Uso un segundo factor de autenticación?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 1.0000, 0.9875],
#         [1.0000, 1.0000, 0.9875],
#         [0.9875, 0.9875, 1.0000]])
```
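Beyond scoring a fixed list against itself, a common use is ranking a small corpus against a new query. A minimal sketch, reusing the placeholder model id and a few of the widget sentences above as an illustrative corpus (the query string is invented for the example):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")

# Illustrative corpus taken from the widget examples above; the query is invented.
corpus = [
    "¿Qué cuidados debo tener al realizar compras online?",
    "¿Qué beneficios ofrece el yoga para mejorar la flexibilidad?",
    "¿Cómo proteger mis datos personales en el DGI?",
]
query = "Consejos de seguridad para comprar por internet"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity between the query and every corpus sentence, shape (1, len(corpus)).
scores = model.similarity(query_embedding, corpus_embeddings)
best = int(scores[0].argmax())
print(corpus[best], float(scores[0][best]))
```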
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 29,248 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:
  | | sentence_0 | sentence_1 | label |
  |---|---|---|---|
  | type | string | string | float |
  | details | min: 3 tokens<br>mean: 16.39 tokens<br>max: 128 tokens | min: 3 tokens<br>mean: 16.39 tokens<br>max: 128 tokens | min: 0.0<br>mean: 0.67<br>max: 1.0 |
- Samples:

  | sentence_0 | sentence_1 | label |
  |---|---|---|
  | Cuántos routers tienen hoy en casa? | Cuántos routers tienen hoy en casa? | 1.0 |
  | ¿Cómo verificar si un archivo es seguro antes de abrirlo? | ¿Cómo verificar si un archivo es seguro antes de abrirlo? | 1.0 |
  | Me apareció un banner de la UTE en un sitio web, ¿es legítimo? | Me apareció un banner de la UTE en un sitio web, ¿es legítimo? | 1.0 |

- Loss: `CosineSimilarityLoss` with these parameters:

  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
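`CosineSimilarityLoss` embeds both sentences of a pair, takes the cosine similarity of the two embeddings, and regresses that score against the float label using the configured `loss_fct` (here `MSELoss`). A minimal sketch of what a single training pair contributes, using one of the sample rows above (an illustration only, not the actual training loop):

```python
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# One illustrative (sentence_0, sentence_1, label) row, as in the samples table above.
sentence_0 = "Me apareció un banner de la UTE en un sitio web, ¿es legítimo?"
sentence_1 = "Me apareció un banner de la UTE en un sitio web, ¿es legítimo?"
label = 1.0

u, v = model.encode([sentence_0, sentence_1], convert_to_tensor=True)
cos_sim = torch.nn.functional.cosine_similarity(u, v, dim=0)

# The pair's loss is the squared error between the predicted cosine similarity and the label.
loss = torch.nn.functional.mse_loss(cos_sim, torch.tensor(label, device=cos_sim.device))
print(float(cos_sim), float(loss))
```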
### Training Hyperparameters

#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
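For reference, the non-default values above translate into a trainer setup along the following lines. This is a sketch only: the output directory and the two illustrative training rows are assumptions, not values stated in this card.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# Tiny illustrative dataset; the real one has 29,248 (sentence_0, sentence_1, label) rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Cuántos routers tienen hoy en casa?", "¿Qué es un ataque de spoofing?"],
    "sentence_1": ["Cuántos routers tienen hoy en casa?", "¿Qué beneficios ofrece el yoga para mejorar la flexibilidad?"],
    "label": [1.0, 0.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output",                       # assumption: the card does not state the output directory
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=CosineSimilarityLoss(model),
)
trainer.train()
```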
### Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 0.2735 | 500 | 0.2642 |
| 0.5470 | 1000 | 0.077 |
| 0.8206 | 1500 | 0.0135 |
| 1.0941 | 2000 | 0.0103 |
| 1.3676 | 2500 | 0.009 |
| 1.6411 | 3000 | 0.0091 |
| 1.9147 | 3500 | 0.0084 |
| 2.1882 | 4000 | 0.0058 |
| 2.4617 | 4500 | 0.0046 |
| 2.7352 | 5000 | 0.0051 |
| 3.0088 | 5500 | 0.0054 |
| 3.2823 | 6000 | 0.0037 |
| 3.5558 | 6500 | 0.003 |
| 3.8293 | 7000 | 0.0024 |
### Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.1.2
- Transformers: 4.57.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.11.0
- Datasets: 4.0.0
- Tokenizers: 0.22.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```