SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • csv

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
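The modules above run in sequence: BERT produces per-token embeddings, Pooling keeps only the [CLS] token (pooling_mode_cls_token: True), and Normalize rescales each sentence embedding to unit length. A minimal numpy sketch of the pooling and normalization stages, with random arrays standing in for real BERT activations:

```python
import numpy as np

# Stand-in for BertModel output: (batch, seq_len, hidden) token embeddings.
rng = np.random.default_rng(42)
token_embeddings = rng.normal(size=(3, 16, 768))

# Pooling with pooling_mode_cls_token=True: keep the first ([CLS]) token.
sentence_embeddings = token_embeddings[:, 0, :]  # (3, 768)

# Normalize(): scale each embedding to unit L2 norm, so cosine similarity
# downstream reduces to a plain dot product.
norms = np.linalg.norm(sentence_embeddings, axis=1, keepdims=True)
sentence_embeddings = sentence_embeddings / norms

print(sentence_embeddings.shape)  # (3, 768)
```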

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Gurveer05/bge-base-eedi-2024")
# Run inference
sentences = [
    'Question: Solve quadratic equations using the quadratic formula where the coefficient of x² is 1. Vera wants to solve this equation using the quadratic formula.\n(\nh^2+4=5 h\n)\n\nWhat should replace the triangle? The image shows the structure of the quadratic formula. It says plus or minus the square root, and the triangle is the first thing after the square root sign, with a minus sign after it.\n\nOptions:\nA. 8\nB. -10\nC. 16\nD. 25\n\nAnswer: -10',
    'Mixes up squaring and multiplying by 2 or doubling',
    'Does not realise a quadratic must be in the form ax^2+bx+c=0 to find the values for the quadratic formula',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
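Because the embeddings come back L2-normalized, the similarity matrix above is just a matrix product, and picking the best-matching misconception for a question is an argsort over one row. A minimal numpy sketch, with random placeholder vectors standing in for model.encode output:

```python
import numpy as np

# Placeholder embeddings standing in for model.encode(sentences);
# the real model returns unit-length 768-dimensional vectors.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 768))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# For unit vectors, cosine similarity is a plain matrix product,
# mirroring model.similarity(embeddings, embeddings).
similarities = embeddings @ embeddings.T  # shape (3, 3)

# Row 0 is the question; rows 1 and 2 are candidate misconceptions.
# Rank the candidates by similarity to the question, best first.
candidate_scores = similarities[0, 1:]
ranking = np.argsort(-candidate_scores)
print(similarities.shape, ranking)
```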

Training Details

Training Dataset

csv

  • Dataset: csv
  • Size: 2,940 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
             anchor          positive        negative
    type     string          string          string
    min      33 tokens       4 tokens        7 tokens
    mean     90.0 tokens     14.71 tokens    16.48 tokens
    max      512 tokens      38 tokens       39 tokens
  • Samples:
    Sample 1
      anchor:
        Question: Add algebraic fractions where the denominators are single terms and are not multiples of each other. Express the following as a single fraction, writing your answer as simply as possible: (t / s)+(2 s / t).

        Options:
        A. (t^2+4 s^2 / s t)
        B. (t+2 s / s+t)
        C. (2 s t / s+t)
        D. (t^2+2 s^2 / s t)

        Answer: (2 s t / s+t)
      positive: When adding/subtracting fractions, adds/subtracts the denominators and multiplies the numerators
      negative: Thinks can combine the numerator and denominator after simplifying an algebraic fraction

    Sample 2
      anchor:
        Question: Calculate the volume of a cone where the dimensions are all given in the same units. STEP 2

        Jessica is trying to work out the volume of this cone. A cone with the slant height labelled 9cm, the perpendicular height labelled h and half the cone's base (forming a right angled triangle with the slant and perpendicular heights) is labelled 6cm. First she needs the perpendicular height.

        Which of the following equations is true?

        Options:
        A. h^2=9^2+6^2
        B. h^2=9^2-6^2
        C. h^2=12^2+9^2
        D. h^2=12^2-9^2

        Answer: h^2=12^2-9^2
      positive: When using Pythagoras to find the height of an isosceles triangle, uses the whole base instead of half
      negative: Has multiplied base by slant height and perpendicular height to find area of a triangle

    Sample 3
      anchor:
        Question: Convert from hours to minutes. 3 hours is the same as ___________ minutes.

        Options:
        A. 180
        B. 90
        C. 30
        D. 300

        Answer: 90
      positive: Thinks there are 30 minutes in a hour
      negative: Answers as if there are 100 minutes in an hour when changing from hours to minutes
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
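For each anchor in a batch, MultipleNegativesRankingLoss scores it against every positive in the batch (its own positive is the correct "class"; the others, plus any explicit negatives, act as negatives), scales the cosine scores by 20.0, and applies cross-entropy. A simplified numpy sketch of that computation, using in-batch negatives only and random unit vectors in place of real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(x):
    # L2-normalize rows so the dot product below is cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

batch_size, dim, scale = 4, 768, 20.0
anchors = unit(rng.normal(size=(batch_size, dim)))
positives = unit(rng.normal(size=(batch_size, dim)))

# Score every anchor against every positive: scale * cos_sim.
scores = scale * (anchors @ positives.T)  # (batch, batch)

# Cross-entropy with the diagonal as labels: anchor i should rank
# its own positive i above all other in-batch positives.
log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(float(loss))
```

In the real loss, each explicit negative adds extra columns to the score matrix; the label for each anchor is still its own positive.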
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 24
  • per_device_eval_batch_size: 24
  • learning_rate: 2e-05
  • weight_decay: 0.01
  • num_train_epochs: 20
  • lr_scheduler_type: cosine_with_restarts
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
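A sketch of how these values might map onto SentenceTransformerTrainingArguments in the Sentence Transformers 3.x training API (illustrative only; the output path is a placeholder, not the author's actual script):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Hypothetical reconstruction of the non-default hyperparameters above.
args = SentenceTransformerTrainingArguments(
    output_dir="output/bge-base-eedi-2024",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=20,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```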

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 24
  • per_device_eval_batch_size: 24
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
0.2581 16 3.3202
0.5 31 -
0.5161 32 2.9432
0.7742 48 2.6014
1.0 62 -
1.0323 64 2.1029
1.1613 80 1.5757
1.3710 93 -
1.4194 96 2.0139
1.6774 112 1.8208
1.8710 124 -
1.9355 128 1.6599
2.0645 144 0.7017
2.2419 155 -
2.3226 160 1.4833
2.5806 176 1.3274
2.7419 186 -
2.8387 192 1.1951
3.0968 208 0.5799
3.1129 217 -
3.2258 224 0.9517
3.4839 240 1.0177
3.6129 248 -
3.7419 256 0.8864
4.0 272 0.7591
4.1129 279 -
4.1290 288 0.4319
4.3871 304 0.7878
4.4839 310 -
4.6452 320 0.7483
4.9032 336 0.6432
4.9839 341 -
5.0323 352 0.2496
5.2903 368 0.6689
5.3548 372 -
5.5484 384 0.628
5.8065 400 0.4981
5.8548 403 -
6.0645 416 0.3208
6.1935 432 0.4169
6.2258 434 -
6.4516 448 0.5049
6.7097 464 0.4402
6.7258 465 -
6.9677 480 0.3819
7.0968 496 0.1854
7.3548 512 0.4292
7.5968 527 -
7.6129 528 0.4171
7.8710 544 0.318
8.0968 558 -
8.1290 560 0.1318
8.2581 576 0.3829
8.4677 589 -
8.5161 592 0.4097
8.7742 608 0.2676
8.9677 620 -
  • The bold row in the original card (formatting not preserved here) denotes the saved checkpoint, which is reloaded at the end of training via load_best_model_at_end.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0
  • Accelerate: 0.33.0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}