BGE Small English v1.5

This is a sentence-transformers model fine-tuned from BAAI/bge-small-en-v1.5. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0
  • Model Size: 33.4M parameters (F32)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
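The architecture's last two modules determine how sentence embeddings are produced: the Pooling module takes the [CLS] (first) token's embedding (pooling_mode_cls_token: True), and Normalize() L2-normalizes it. A minimal pure-Python sketch of those two stages, using toy 4-dimensional token embeddings with made-up values:

```python
import math

def cls_pool_and_normalize(token_embeddings):
    """Mirror of the final two modules above: CLS pooling takes the first
    token's embedding; Normalize() then L2-normalizes it."""
    cls = token_embeddings[0]
    norm = math.sqrt(sum(x * x for x in cls))
    return [x / norm for x in cls]

# Toy sequence of three 4-dimensional token embeddings (made-up values)
tokens = [[3.0, 4.0, 0.0, 0.0],
          [1.0, 1.0, 1.0, 1.0],
          [0.5, 0.5, 0.5, 0.5]]
vec = cls_pool_and_normalize(tokens)
assert abs(sum(x * x for x in vec) - 1.0) < 1e-9   # unit length
```

Because the output is unit-length, cosine similarity and dot product rank candidates identically.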

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("MistyDragon/bge-small-finetuned")
# Run inference
sentences = [
    'search_document: 386\u2003 ◾\u2003 Production and Operations Management Systems\n10.8  Location Decisions Using the Transportation \nModel\nTransportation costs are a primary concern for a new start-up company or division. \nThis also applies to an existing company that intends to relocate. Finally, it should \nbe common practice to reevaluate the current location of an ongoing business so \nthat the impact of changing conditions and new opportunities are not overlooked. \nWhen shipping costs are critical for the location decision, the transportation model \n(TM) can determine minimum cost or maximum profit solutions that specify opti-\nmal shipping patterns between many locations.\nTransportation costs include the combined costs of moving raw materials to \nthe plant and of transporting finished goods from the plant to one or more ware -\nhouses. It is easier to explain the TM with the following numerical example than \nwith abstract math equations. A doll manufacturer has decided to build a fac -\ntory in the center of the United States. More specifically, Missouri and Ohio are \nidentified as the potential states. Several sites in the two regions have been identi -\nfied. Two cities have been chosen as candidates. These are St Louis, Missouri, and \nColumbus, Ohio. Real-estate costs are about equal in both. The problem is to \nselect one of the two cities. The decision will be based on the shipping (transporta -\ntion) costs.\n10.8.1 Shipping (Transportation or Distribution) Costs\nThe average cost of shipping (also known as the cost of distribution or cost of trans-\nportation) the components that the company uses to the Columbus, Ohio, location \nis $6 per production unit. Shipping costs average only $3 per unit to St Louis, \nMissouri. In TM terminology, shippers (suppliers, in this case) are called sources or \norigins. Those receiving shipments (producers, in this case) are called destinations.\nThe average cost of shipping from the Columbus, Ohio, location to the \n market—distributor’s warehouse is $2 per unit. The average cost of shipping from \nSt Louis, Missouri, to the market—distributor’s warehouse is $4 per unit. The same \nterminology applies. The shipper is the producer (source or origin) and the receivers \nare the distributors or customers (destinations). The configuration of origins and \ndestinations are shown in Figure 10.1.\nTotal transportation costs to and from the Columbus, Ohio, plant are \n$6 + $2 = $8 per unit; for St Louis, Missouri, they are $3 + $4 = $7. Other things \nbeing equal, the company should choose St Louis, Missouri. However, the real \nworld is not as simple as this.\nThe problem becomes more complex when there are a number of origins com -\npeting for shipments to a number of destinations. We will illustrate the com -\nplexity of the problem and its solution using the example of Rukna Auto Parts \nManufacturing Company.',
    'search_query: In the context of the Transportation Model (TM), what are the primary considerations for a company when deciding on a new location for its operations?',
    'search_query: What is the primary objective of loading in the production scheduling process?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7613, 0.4329],
#         [0.7613, 1.0000, 0.4239],
#         [0.4329, 0.4239, 1.0000]])
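Since the embeddings are L2-normalized, the similarity call above is plain cosine similarity. A self-contained sketch of the same computation, with toy 3-dimensional vectors and made-up values standing in for the model's 384-dimensional embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up toy embeddings, 3-dimensional instead of 384-dimensional
doc = [0.8, 0.1, 0.1]   # a "search_document: ..." embedding
q1  = [0.7, 0.2, 0.1]   # an on-topic "search_query: ..." embedding
q2  = [0.1, 0.1, 0.9]   # an off-topic query embedding

# The on-topic query scores higher against the document
assert cosine(doc, q1) > cosine(doc, q2)
```

Note the "search_query:" / "search_document:" prefixes in the example sentences: queries and documents were prefixed this way during training, so the same convention should be used at inference time.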

Evaluation

Metrics

Information Retrieval

Metric                 Value
cosine_accuracy@1      0.6894
cosine_accuracy@3      0.803
cosine_accuracy@5      0.8485
cosine_accuracy@10     0.8864
cosine_precision@1     0.6894
cosine_precision@3     0.2677
cosine_precision@5     0.1697
cosine_precision@10    0.0886
cosine_recall@1        0.6894
cosine_recall@3        0.803
cosine_recall@5        0.8485
cosine_recall@10       0.8864
cosine_ndcg@10         0.7854
cosine_mrr@10          0.7531
cosine_map@100         0.7569
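Accuracy@k and recall@k coincide at every cutoff in this table, which happens when each query has exactly one relevant document. A small sketch of how accuracy@k is scored per query under that assumption (document IDs are hypothetical):

```python
def accuracy_at_k(ranked_ids, relevant_id, k):
    """1.0 if the single relevant document appears in the top-k results,
    else 0.0; averaging this over all queries gives accuracy@k. With one
    relevant document per query, recall@k computes the same number."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

# Hypothetical ranking for one query whose relevant document is 'd7'
ranking = ['d3', 'd7', 'd1', 'd9']
assert accuracy_at_k(ranking, 'd7', 1) == 0.0   # missed at cutoff 1
assert accuracy_at_k(ranking, 'd7', 3) == 1.0   # found within top 3
```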

Training Details

Training Dataset

Unnamed Dataset

  • Size: 525 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 525 samples:
    • positive: string; min 11 tokens, mean 432.37 tokens, max 512 tokens
    • anchor: string; min 7 tokens, mean 30.53 tokens, max 103 tokens
  • Samples:
    positive anchor
    search_document: 9192 0.9207 0.9222 0.9236 0.9215 0.9265 0.9279 0.9292 0.9306 0.9319
    1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9492 0.9441
    1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
    1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
    search_query: What is the value of the function at x = 1.5?
    search_document: 72  •  Quality   Management:   Theory   and   Applicatio n
    secondary school, or gymnasium. Tertiary education normally includes
    undergraduate and postgraduate education, as well as vocational educa -
    tion and training. Colleges and universities are the main institutions that
    provide tertiary education. Tertiary education generally results in the
    receipt of certificates, diplomas, or academic degrees.
    Higher education includes the teaching, research, and social services
    activities of universities, and within the realm of teaching, it includes
    both

    the undergraduate level (sometimes referred to as tertiary education)
    and the graduate (or postgraduate) level (sometimes referred to as gradu-
    ate school). Higher education in the United States and Canada generally
    involves work toward a degree-level or foundation degree qualification.
    In most developed countries, a high proportion of the population (up to
    50

    percent) now enters higher education at some time in t...
    search_query: What is the primary difference between tertiary and higher education as described in the document?
    search_document: 273
    Chapter 8
    Quality Management
    Readers’ Choice—“Quality means doing it
    right when no one is looking.”—Henry Ford
    Apte, U.M., and Reynolds, C.C., Quality Management at
    Kentucky Fried Chicken, Interfaces, 25(3), 1995, p. 6. The pro-
    gram developed by Kentucky Fried Chicken (KFC) Corp. to
    improve service quality is used as a benchmark for continuous
    process improvement by all KFC stores. The reduced service
    time as a result of this program is one of the measurements of
    quality.
    Crosby, P.B., Quality is Free (The Art of Making Quality
    Certain). McGraw-Hill, 1979. Crosby (1979) demanded a zero-
    defects goal which treats any failures as intolerable.
    Harris, C.R., and Yit, W., Successfully Implementing Statistical
    Process Control in Integrated Steel Companies, Interfaces, 24(5),
    1994, p. 49. Implementation processes of statistical process con-
    trol (SPC) projects were analyzed at 12 integrated steel compa-
    nies to identify key success (and failure) factors.
    Hossein...
    search_query: In the context of the document, which company developed a program to improve service quality that is used as a benchmark for continuous process improvement by all KFC stores?
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
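MultipleNegativesRankingLoss treats, for each anchor, its paired document as the positive and every other in-batch document as a negative, then applies cross-entropy over the similarities multiplied by the scale of 20. A minimal pure-Python sketch of that computation, assuming the cosine-similarity matrix has already been computed:

```python
import math

def mnr_loss(sim_matrix, scale=20.0):
    """Sketch of MultipleNegativesRankingLoss: for row i, column i is the
    positive pair and every other column is an in-batch negative.
    Cross-entropy over scale * similarity, averaged over the batch."""
    losses = []
    for i, row in enumerate(sim_matrix):
        logits = [scale * s for s in row]
        m = max(logits)                       # stabilized log-sum-exp
        log_z = m + math.log(sum(math.exp(x - m) for x in logits))
        losses.append(log_z - logits[i])      # -log softmax at the positive
    return sum(losses) / len(losses)

# Toy 2x2 cosine-similarity matrices (anchor i vs. document j, made-up values)
good = [[0.9, 0.1],
        [0.2, 0.8]]   # positives (diagonal) score highest: small loss
bad  = [[0.1, 0.9],
        [0.9, 0.1]]   # positives ranked below negatives: large loss
assert mnr_loss(good) < mnr_loss(bad)
```

This is why the no_duplicates batch sampler matters: a duplicate positive inside a batch would be treated as a negative for the other copy's anchor.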
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 8
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • push_to_hub: True
  • hub_model_id: MistyDragon/bge-small-finetuned
  • push_to_hub_model_id: bge-small-finetuned
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: MistyDragon/bge-small-finetuned
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: bge-small-finetuned
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch   Step  Training Loss  dim_384_cosine_ndcg@10
-1      -1    -              0.7432
1.0     9     -              0.7747
1.1212  10    0.5749         -
2.0     18    -              0.7759
2.2424  20    0.3087         -
3.0     27    -              0.7814
3.3636  30    0.2328         -
4.0     36    -              0.7854

  • The row for epoch 4.0 (step 36), with the best ndcg@10 of 0.7854, denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 5.0.0
  • Transformers: 4.53.1
  • PyTorch: 2.7.1+cu126
  • Accelerate: 1.8.1
  • Datasets: 3.6.0
  • Tokenizers: 0.21.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}