BGE base Financial Matryoshka

This is a sentence-transformers model fine-tuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
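The configuration above amounts to CLS-token pooling followed by L2 normalization. As an illustrative sketch (not the library's actual code), assuming token embeddings have already been produced by the BERT encoder, the two post-transformer modules reduce to:

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """Mimic Pooling (pooling_mode_cls_token=True) + Normalize().

    token_embeddings: (seq_len, 768) array of per-token BERT outputs
    (a hypothetical input used here only for illustration).
    """
    cls = token_embeddings[0]          # CLS pooling: take the first token
    return cls / np.linalg.norm(cls)   # Normalize(): scale to unit length

# Toy example with random stand-in token embeddings
rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 768))
vec = cls_pool_and_normalize(tokens)
print(vec.shape)  # (768,)
```

Because of the final Normalize() step, dot product and cosine similarity coincide for this model's embeddings.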

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("saeoshi/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'The total lease payments of our operating lease liabilities as of December 31, 2023, were $1,392.1 million, from which $284.8 million of imputed interest was subtracted, resulting in operating lease liabilities amounting to $1,107.4 million.',
    'How much did the operating lease liabilities for 2023 amount to after subtracting imputed interest?',
    'What are the environmental compliance requirements faced by FedEx Express regarding noise and emissions?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
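Because the model was trained with MatryoshkaLoss, the 768-dimensional embeddings can be truncated to 512, 256, 128, or 64 dimensions with only a modest quality drop (see the evaluation tables below); recent sentence-transformers releases also accept a `truncate_dim` argument when loading the model. The operation itself is just a slice plus re-normalization, sketched here with numpy on stand-in embeddings:

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    # Matryoshka truncation: keep the leading `dim` components,
    # then re-normalize so cosine similarities stay comparable.
    cut = embeddings[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)  # stand-in for model output

small = truncate_embeddings(full, 256)
print(small.shape)  # (3, 256)
```

Smaller dimensions trade a little retrieval quality for lower storage and faster similarity search.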

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7314
cosine_accuracy@3 0.86
cosine_accuracy@5 0.8929
cosine_accuracy@10 0.9314
cosine_precision@1 0.7314
cosine_precision@3 0.2867
cosine_precision@5 0.1786
cosine_precision@10 0.0931
cosine_recall@1 0.7314
cosine_recall@3 0.86
cosine_recall@5 0.8929
cosine_recall@10 0.9314
cosine_ndcg@10 0.8334
cosine_mrr@10 0.8018
cosine_map@100 0.8044
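A consistency check on the table above: this benchmark has exactly one relevant document per query, so recall@k equals accuracy@k and precision@k is accuracy@k divided by k. The relationship can be verified directly:

```python
accuracy = {1: 0.7314, 3: 0.86, 5: 0.8929, 10: 0.9314}

# With exactly one relevant document per query:
recall = dict(accuracy)                                   # recall@k == accuracy@k
precision = {k: round(v / k, 4) for k, v in accuracy.items()}
print(precision)  # {1: 0.7314, 3: 0.2867, 5: 0.1786, 10: 0.0931}
```

These reproduce the precision and recall rows of the table exactly.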

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7286
cosine_accuracy@3 0.8586
cosine_accuracy@5 0.8857
cosine_accuracy@10 0.9229
cosine_precision@1 0.7286
cosine_precision@3 0.2862
cosine_precision@5 0.1771
cosine_precision@10 0.0923
cosine_recall@1 0.7286
cosine_recall@3 0.8586
cosine_recall@5 0.8857
cosine_recall@10 0.9229
cosine_ndcg@10 0.829
cosine_mrr@10 0.7985
cosine_map@100 0.8019

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.72
cosine_accuracy@3 0.8486
cosine_accuracy@5 0.8814
cosine_accuracy@10 0.9186
cosine_precision@1 0.72
cosine_precision@3 0.2829
cosine_precision@5 0.1763
cosine_precision@10 0.0919
cosine_recall@1 0.72
cosine_recall@3 0.8486
cosine_recall@5 0.8814
cosine_recall@10 0.9186
cosine_ndcg@10 0.8214
cosine_mrr@10 0.79
cosine_map@100 0.7933

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7186
cosine_accuracy@3 0.8357
cosine_accuracy@5 0.8686
cosine_accuracy@10 0.9086
cosine_precision@1 0.7186
cosine_precision@3 0.2786
cosine_precision@5 0.1737
cosine_precision@10 0.0909
cosine_recall@1 0.7186
cosine_recall@3 0.8357
cosine_recall@5 0.8686
cosine_recall@10 0.9086
cosine_ndcg@10 0.8138
cosine_mrr@10 0.7835
cosine_map@100 0.7871

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6814
cosine_accuracy@3 0.8029
cosine_accuracy@5 0.8386
cosine_accuracy@10 0.8857
cosine_precision@1 0.6814
cosine_precision@3 0.2676
cosine_precision@5 0.1677
cosine_precision@10 0.0886
cosine_recall@1 0.6814
cosine_recall@3 0.8029
cosine_recall@5 0.8386
cosine_recall@10 0.8857
cosine_ndcg@10 0.7832
cosine_mrr@10 0.7505
cosine_map@100 0.7546

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 8 tokens, mean 45.94 tokens, max 512 tokens
    • anchor: string; min 9 tokens, mean 20.39 tokens, max 51 tokens
  • Samples:
    • positive: The VOBA is amortized as a component of Policy acquisition costs in the financial statements in relation to the profit emergence of the underlying contracts, which is generally in proportion to premium revenue recognized based on the same assumptions used at the time of the acquisition.
      anchor: How is the VOBA asset amortized in relation to policy costs?
    • positive: Item 8 is titled Financial Statements and Supplementary Data in the financial document.
      anchor: What is the title of Item 8 in the financial document?
    • positive: For a discussion of legal and other proceedings in which the entity is involved, see Note 13 - Commitments and Contingencies in the Notes to Consolidated Financial Statements in Part II, Item 8 of this Annual Report on Form 10-K.
      anchor: Where can more detailed information regarding the legal proceedings be found?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
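Conceptually, MatryoshkaLoss evaluates the inner loss (here MultipleNegativesRankingLoss) on truncated copies of the embeddings at each listed dimension and sums the weighted results. A simplified numpy sketch of that idea (the actual library implementation works on torch tensors inside the training loop):

```python
import numpy as np

def mnrl(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    # MultipleNegativesRankingLoss: cross-entropy over in-batch
    # similarities; positives of other anchors serve as negatives.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                       # (batch, batch) scores
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # diagonal = true pairs

def matryoshka_loss(anchors, positives, dims=(768, 512, 256, 128, 64),
                    weights=(1, 1, 1, 1, 1)):
    # Truncate to each Matryoshka dimension and sum the weighted losses.
    return sum(w * mnrl(anchors[:, :d], positives[:, :d])
               for d, w in zip(dims, weights))
```

With equal weights of 1, every dimension contributes equally, which is what pushes useful information into the leading components of the embedding.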
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
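These settings imply an effective batch size of 32 × 16 = 512, which lines up with the training log: ceil(6300 / 512) = 13 optimizer steps per epoch and 52 steps over 4 epochs. A quick arithmetic check:

```python
import math

samples = 6_300           # training set size (from Training Dataset above)
per_device_batch = 32
grad_accum = 16
epochs = 4
warmup_ratio = 0.1

effective_batch = per_device_batch * grad_accum          # 512
steps_per_epoch = math.ceil(samples / effective_batch)   # 13
total_steps = steps_per_epoch * epochs                   # 52
approx_warmup = warmup_ratio * total_steps               # ~5 steps; exact
                                                         # rounding is the
                                                         # trainer's choice
print(effective_batch, steps_per_epoch, total_steps)     # 512 13 52
```

The large effective batch matters for MultipleNegativesRankingLoss, since every other example in the batch acts as a negative.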

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  dim_768  dim_512  dim_256  dim_128  dim_64
0.8122  10    25.4662        -        -        -        -        -
1.0     13    -              0.8204   0.8194   0.8108   0.7975   0.7577
1.5685  20    9.9283         -        -        -        -        -
2.0     26    -              0.8288   0.8254   0.8209   0.8113   0.7743
2.3249  30    7.5236         -        -        -        -        -
3.0     39    -              0.8326   0.8278   0.8221   0.8122   0.7822
3.0812  40    6.8831         -        -        -        -        -
3.8934  50    6.7696         -        -        -        -        -
4.0     52    -              0.8334   0.8290   0.8214   0.8138   0.7832

(Metric columns are cosine_ndcg@10 at each Matryoshka dimension.)

  • The epoch 4.0 row (step 52) denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 2.19.1
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}