---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:290
  - loss:OnlineContrastiveLoss
base_model: intfloat/multilingual-e5-large
widget:
  - source_sentence: Antes se coge al mentiroso que al cojo
    sentences:
      - A escudero pobre, taza de plata y cántaro de cobre
      - En río revuelto, pesca abundante
      - Se ayuda primero al necesitado que al engañador.
  - source_sentence: Asno de muchos, lobos lo comen
    sentences:
      - Sabio entre sabios, amigos lo respetan.
      - El que mucho madruga más hace que el que Dios ayuda.
      - Se pilla antes a un mentiroso que a un cojo
  - source_sentence: Al buey por el asta, y al hombre por la palabra
    sentences:
      - Si no quieres arroz con leche, toma tres tazas
      - Al hombre por la palabra, y al buey por el cuerno ata
      - >-
        Ese no es tu amigo, sino alguien que siempre busca estar rodeado de
        bullicio y actividad.
  - source_sentence: Al médico, confesor y letrado, hablarles claro
    sentences:
      - Al médico, confesor y letrado, no le hayas engañado
      - Más vale a quien Dios ayuda que quien mucho madruga
      - Al que anda entre la miel, algo se le pega
  - source_sentence: A muertos y a idos, no hay amigos
    sentences:
      - Al buen callar llaman santo
      - A los vivos y presentes, siempre hay amigos.
      - Al que de prestado se viste, en la calle lo desnudan
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
model-index:
  - name: SentenceTransformer based on intfloat/multilingual-e5-large
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: pearson_cosine
            value: 0.8334934833047165
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.8261353280714282
            name: Spearman Cosine
---

SentenceTransformer based on intfloat/multilingual-e5-large

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large on the csv dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-large
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • csv

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
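
Loading the model reproduces this module stack; as a quick sanity check (a minimal sketch, using the same placeholder model id as the Usage section below):

from sentence_transformers import SentenceTransformer

# Load the fine-tuned model (placeholder id, as in the Usage section)
model = SentenceTransformer("sentence_transformers_model_id")

print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024
print(model)                                     # prints the module stack shown above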

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'A muertos y a idos, no hay amigos',
    'A los vivos y presentes, siempre hay amigos.',
    'Al buen callar llaman santo',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
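
The same embeddings also support a small semantic search. A minimal sketch, reusing sentences from the widget examples above; the corpus, query, and ranking logic are illustrative, not part of this card:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")

# Small illustrative corpus of proverbs (taken from the widget examples)
corpus = [
    "A muertos y a idos, no hay amigos",
    "Al buen callar llaman santo",
    "Al que de prestado se viste, en la calle lo desnudan",
]
query = "A los vivos y presentes, siempre hay amigos."

corpus_embeddings = model.encode(corpus)  # shape [3, 1024]
query_embedding = model.encode(query)     # shape [1024]

# model.similarity uses this model's similarity function (cosine);
# the result is a [1, len(corpus)] score matrix.
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(corpus[best], float(scores[0, best]))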

Evaluation

Metrics

Semantic Similarity

Metric           Value
pearson_cosine   0.8335
spearman_cosine  0.8261
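
These correspond to the pearson_cosine and spearman_cosine outputs of sentence-transformers' EmbeddingSimilarityEvaluator. A hedged sketch of that evaluation setup, using three illustrative pairs from the dataset samples in this card; the reported numbers come from the full evaluation split:

from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence_transformers_model_id")

# Binary labels (0/1) serve as the gold similarity scores
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        "¿Adónde irá el buey que no are?",
        "Dime de qué presumes y te diré de qué careces.",
        "Gota a gota, la mar se agota.",
    ],
    sentences2=[
        "¿A dó irá el buey que no are?",
        "Dime de qué careces y te diré de qué dispones.",
        "Con el pasar del tiempo se llega a alcanzar cualquier meta.",
    ],
    scores=[1.0, 0.0, 1.0],
    main_similarity=SimilarityFunction.COSINE,
)
results = evaluator(model)  # dict with pearson_cosine, spearman_cosine, ...
print(results)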

Training Details

Training Dataset

csv

  • Dataset: csv
  • Size: 290 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 290 samples:
      sentence1 (string): min: 7 tokens, mean: 11.68 tokens, max: 22 tokens
      sentence2 (string): min: 7 tokens, mean: 17.01 tokens, max: 44 tokens
      label (int): 0: ~50.00%, 1: ~50.00%
  • Samples:
      • sentence1: Gota a gota, la mar se agota.
        sentence2: Con el pasar del tiempo se llega a alcanzar cualquier meta.
        label: 1
      • sentence1: Dime de qué presumes y te diré de qué careces.
        sentence2: Dime de qué careces y te diré de qué dispones.
        label: 0
      • sentence1: Cómo se vive, se muere.
        sentence2: De aquella forma que hemos vivido nuestra vida será de la forma en la que moriremos.
        label: 1
  • Loss: OnlineContrastiveLoss
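
OnlineContrastiveLoss is a contrastive loss that, within each batch, computes the loss only over the hard positive and hard negative pairs. Constructing it is a one-liner (a minimal sketch; 0.5 is the library's default margin):

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("intfloat/multilingual-e5-large")

# Expects (sentence1, sentence2, label) rows with binary labels,
# matching the dataset columns listed above
loss = losses.OnlineContrastiveLoss(model, margin=0.5)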

Evaluation Dataset

Unnamed Dataset

  • Size: 1,006 evaluation samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:
      sentence1 (string): min: 7 tokens, mean: 12.51 tokens, max: 25 tokens
      sentence2 (string): min: 6 tokens, mean: 14.82 tokens, max: 38 tokens
      label (int): 0: ~49.70%, 1: ~50.30%
  • Samples:
      • sentence1: ¿Adónde irá el buey que no are?
        sentence2: ¿A dó irá el buey que no are?
        label: 1
      • sentence1: ¿Adónde irá el buey que no are?
        sentence2: ¿Adónde irá el buey que no are ni la mula que no cargue?
        label: 1
      • sentence1: ¿Adónde irá el buey que no are?
        sentence2: ¿Adónde irá el buey que no are, sino al matadero?
        label: 1
  • Loss: OnlineContrastiveLoss

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • learning_rate: 1e-05
  • num_train_epochs: 4
  • lr_scheduler_type: constant
  • load_best_model_at_end: True
  • eval_on_start: True
  • batch_sampler: no_duplicates
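
A minimal end-to-end training sketch wiring the non-default values above into the sentence-transformers 3.x trainer; the pairs.csv path and the 80/20 split are illustrative assumptions, since the card only names a csv dataset with sentence1, sentence2, and label columns:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-large")

# Hypothetical path; the card only says the data came from a "csv" dataset
dataset = load_dataset("csv", data_files="pairs.csv", split="train")
split = dataset.train_test_split(test_size=0.2, seed=42)

loss = losses.OnlineContrastiveLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    num_train_epochs=4,
    learning_rate=1e-5,
    lr_scheduler_type="constant",
    eval_strategy="steps",
    load_best_model_at_end=True,
    eval_on_start=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    loss=loss,
)
trainer.train()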

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: constant
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: True
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  Validation Loss  spearman_cosine
0       0     -              0.1095           0.7843
0.1351  5     0.6784         0.0765           0.8123
0.2703  10    0.5088         0.0533           0.8303
0.4054  15    0.4364         0.0475           0.8339
0.5405  20    0.3456         0.0435           0.8345
0.6757  25    0.1423         0.0424           0.8324
0.8108  30    0.2852         0.0443           0.8271
0.9459  35    0.2616         0.0514           0.8262
1.0811  40    0.1451         0.0521           0.8232
1.2162  45    0.2046         0.0496           0.8221
1.3514  50    0.055          0.0516           0.8197
1.4865  55    0.0956         0.0545           0.8190
1.6216  60    0.1213         0.0533           0.8213
1.7568  65    0.2378         0.0464           0.8253
1.8919  70    0.2723         0.0458           0.8249
2.0270  75    0.0603         0.0467           0.8226
2.1622  80    0.1089         0.0415           0.8263
2.2973  85    0.0813         0.0417           0.8270
2.4324  90    0.0            0.0437           0.8250
2.5676  95    0.0436         0.0467           0.8242
2.7027  100   0.0            0.0451           0.8242
2.8378  105   0.0            0.0451           0.8243
2.9730  110   0.0271         0.0433           0.8243
3.1081  115   0.007          0.0502           0.8195
3.2432  120   0.1025         0.0523           0.8195
3.3784  125   0.1244         0.0527           0.8251
3.5135  130   0.0            0.0534           0.8262
3.6486  135   0.0259         0.0571           0.8262
3.7838  140   0.0939         0.0526           0.8273
3.9189  145   0.1038         0.0527           0.8261

Framework Versions

  • Python: 3.12.9
  • Sentence Transformers: 3.4.1
  • Transformers: 4.50.0
  • PyTorch: 2.6.0+cpu
  • Accelerate: 1.6.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}