---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:14356
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/paraphrase-MiniLM-L6-v2
widget:
  - source_sentence: >-
      Pear trees are usually productive for 50 to 75 years though some still
      produce fruit after 100 years .
    sentences:
      - In the late 1950s , he studied cinema in France .
      - >-
        Pear trees are usually productive for 50 to 75 years though some still
        produce fruit after 100 years .
      - >-
        A recording medium is a physical material that holds data expressed in
        any of the existing recording formats .
  - source_sentence: On poor , dry soils there are tropical heathlands .
    sentences:
      - On poor , dry soils there are tropical heathlands .
      - There are plans to build a new library at my school .
      - >-
        These are forest birds that tend to feed on insects at or near the
        ground .
  - source_sentence: >-
      According to Statistics Canada , the county has a total area of 2004.44
      km2 .
    sentences:
      - >-
        In 2018 , there are eleven senators holding ministerial positions and
        the head of state , the First mayor .
      - >-
        According to Statistics Canada , the county has a total area of 2004.44
        km2 .
      - >-
        There are some common ways used to stretch piercings , of different
        origins and useful for different people .
  - source_sentence: Oll , who was married , fell into severe depressions after he divorced .
    sentences:
      - >-
        Tide pools are a home for hardy organisms such as sea stars , mussels
        and clams .
      - >-
        Endgames can be studied according to the types of pieces that remain on
        board .
      - Oll , who was married , fell into severe depressions after he divorced .
  - source_sentence: She often shared her boots with her sister .
    sentences:
      - She often shared her boots with her sister .
      - >-
        Many of the Greek city -states also had a god or goddess associated with
        that city .
      - Until 1 April 2010 the Departments were as follows .
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
model-index:
  - name: SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L6-v2
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: dev
          type: dev
        metrics:
          - type: pearson_cosine
            value: 0.17226242076011888
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.15567680488974325
            name: Spearman Cosine
---

# SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2)
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
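For readers not using the sentence-transformers library, the two modules above can be reproduced with plain `transformers`: run the `BertModel`, then mean-pool the token embeddings under the attention mask. This is a minimal sketch, assuming the repository also exposes the underlying transformer weights at its root (as sentence-transformers checkpoints normally do):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the underlying BertModel and tokenizer (module (0) above).
tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Eval_02_Final_5")
bert = AutoModel.from_pretrained("Mr-FineTuner/Eval_02_Final_5")

batch = tokenizer(
    ["She often shared her boots with her sister ."],
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state  # [1, seq_len, 384]

# Mean pooling (module (1) above): average token vectors, ignoring padding.
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embedding.shape)  # torch.Size([1, 384])
```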

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mr-FineTuner/Eval_02_Final_5")
# Run inference
sentences = [
    'She often shared her boots with her sister .',
    'She often shared her boots with her sister .',
    'Until 1 April 2010 the Departments were as follows .',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
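Beyond pairwise similarity, the same embeddings support semantic search. Here is a small sketch reusing `model` from the snippet above and the library's `util.semantic_search`; the corpus and query are made up for illustration:

```python
from sentence_transformers import util

# Hypothetical corpus and query, just to illustrate retrieval.
corpus = [
    "Tide pools are a home for hardy organisms such as sea stars , mussels and clams .",
    "There are plans to build a new library at my school .",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("Where can I borrow books ?", convert_to_tensor=True)

# For each query, returns the top_k closest corpus entries by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits)  # e.g. [[{'corpus_id': 1, 'score': ...}]]
```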

## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `dev`

| Metric          | Value  |
|:----------------|:-------|
| pearson_cosine  | 0.1723 |
| spearman_cosine | 0.1557 |
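Metric names of this form are most likely produced by the library's `EmbeddingSimilarityEvaluator`, which correlates predicted cosine similarities with gold scores. A sketch, reusing `model` from the usage snippet; the actual dev pairs are not published with this card, so the data below is placeholder:

```python
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Placeholder dev pairs and gold similarity scores; the real dev split
# behind the table above is not included in this card.
dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["She often shared her boots with her sister .",
                "On poor , dry soils there are tropical heathlands ."],
    sentences2=["She often shared her boots with her sister .",
                "In the late 1950s , he studied cinema in France ."],
    scores=[1.0, 0.0],
    name="dev",
)
results = dev_evaluator(model)
print(results)  # {'dev_pearson_cosine': ..., 'dev_spearman_cosine': ...}
```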

## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 14,356 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:

  |         | sentence_0                                        | sentence_1                                        | label                          |
  |:--------|:--------------------------------------------------|:--------------------------------------------------|:-------------------------------|
  | type    | string                                            | string                                            | float                          |
  | details | min: 7 tokens, mean: 18.64 tokens, max: 42 tokens | min: 7 tokens, mean: 18.64 tokens, max: 42 tokens | min: 1.0, mean: 3.32, max: 6.0 |

- Samples:

  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | Construction of the temple complex started in approximately 1264 BC and lasted for about 20 years , until 1244 BC . | Construction of the temple complex started in approximately 1264 BC and lasted for about 20 years , until 1244 BC . | 3.0 |
  | He knew which bag to buy for his older sister 's birthday . | He knew which bag to buy for his older sister 's birthday . | 3.0 |
  | The precise origin of absinthe is unclear . | The precise origin of absinthe is unclear . | 4.0 |

- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
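For reference, a minimal sketch of instantiating this loss with the parameters above. With `MultipleNegativesRankingLoss`, every other pair in a batch serves as an in-batch negative for a given anchor:

```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")
# scale=20.0 and cosine similarity match the parameter dump above.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```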

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
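Below is a sketch of a trainer setup matching these non-default values, reusing `model` and `loss` from the sketch above. The tiny inline dataset and `output_dir` are placeholders, not part of the card, and `multi_dataset_batch_sampler` is omitted because it only matters when training on several datasets at once:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

# Placeholder two-column pair dataset shaped like the one described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["The precise origin of absinthe is unclear ."],
    "sentence_1": ["The precise origin of absinthe is unclear ."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    # eval_strategy="steps" was also set, paired with a dev evaluator.
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```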

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs

| Epoch  | Step | Training Loss | dev cosine similarity loss | dev_spearman_cosine |
|:-------|:-----|:--------------|:---------------------------|:--------------------|
| 0.5568 | 500  | 0.0017        | 0.4277                     | 0.1245              |
| 1.0    | 898  | -             | 0.4394                     | 0.1433              |
| 1.1136 | 1000 | 0.001         | 0.4388                     | 0.1568              |
| 1.6704 | 1500 | 0.0006        | 0.4518                     | 0.1503              |
| 2.0    | 1796 | -             | 0.4394                     | 0.1487              |
| 2.2272 | 2000 | 0.0008        | 0.4454                     | 0.1471              |
| 2.7840 | 2500 | 0.0009        | 0.4532                     | 0.1542              |
| 3.0    | 2694 | -             | 0.4534                     | 0.1557              |
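The step counts line up with the dataset and batch size: ⌈14,356 / 16⌉ = 898 optimizer steps per epoch, so epochs 1.0, 2.0, and 3.0 fall at steps 898, 1796, and 2694.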

### Framework Versions

- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```