This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Full model architecture:

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
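These three modules are the standard Sentence Transformers building blocks: a BERT encoder, CLS-token pooling, and L2 normalization. As a minimal sketch (assuming the `sentence_transformers.models` API; variable names below are illustrative), an equivalent untrained pipeline can be assembled from the base checkpoint like this:

```python
from sentence_transformers import SentenceTransformer, models

# (0) Transformer: BERT encoder from the base checkpoint, 512-token window, lowercased input
transformer = models.Transformer(
    "BAAI/bge-base-en-v1.5",
    max_seq_length=512,
    do_lower_case=True,
)

# (1) Pooling: use the [CLS] token embedding (768 dims) as the sentence representation
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),
    pooling_mode="cls",
)

# (2) Normalize: L2-normalize so dot product equals cosine similarity
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
print(model)  # should mirror the architecture printed above
```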
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("aired/bge-base-financial-matryoshka")

# Run inference
sentences = [
    'Structural costs typically do not have a directly proportionate relationship to production volume and include costs such as manufacturing, engineering, and administrative expenses. These costs can be adjusted over time in response to external factors.',
    'How does Ford Motor Company handle its structural costs in relation to production volume changes?',
    'What were the total future minimum lease payments under all non-cancelable operating leases for the company as of December 31, 2023?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
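Because the model was trained with a Matryoshka objective, the leading dimensions of each embedding are usable on their own. A minimal sketch of loading the model at a smaller dimensionality, assuming the `truncate_dim` argument available in recent sentence-transformers releases:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 Matryoshka dimensions of every embedding
model_256 = SentenceTransformer("aired/bge-base-financial-matryoshka", truncate_dim=256)

embeddings = model_256.encode([
    "What are the primary marketing methods used by GEICO?",
])
print(embeddings.shape)
# (1, 256)
```

Smaller truncations trade a little retrieval quality (see the table below) for faster search and a smaller index.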
Evaluated with InformationRetrievalEvaluator on the dim_768, dim_512, dim_256, dim_128 and dim_64 datasets.

| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|---|---|---|---|---|---|
| cosine_accuracy@1 | 0.72 | 0.7157 | 0.7029 | 0.6786 | 0.6486 |
| cosine_accuracy@3 | 0.8257 | 0.8243 | 0.8171 | 0.8029 | 0.77 |
| cosine_accuracy@5 | 0.8586 | 0.8643 | 0.8543 | 0.8543 | 0.8143 |
| cosine_accuracy@10 | 0.8943 | 0.8914 | 0.8814 | 0.8814 | 0.8657 |
| cosine_precision@1 | 0.72 | 0.7157 | 0.7029 | 0.6786 | 0.6486 |
| cosine_precision@3 | 0.2752 | 0.2748 | 0.2724 | 0.2676 | 0.2567 |
| cosine_precision@5 | 0.1717 | 0.1729 | 0.1709 | 0.1709 | 0.1629 |
| cosine_precision@10 | 0.0894 | 0.0891 | 0.0881 | 0.0881 | 0.0866 |
| cosine_recall@1 | 0.72 | 0.7157 | 0.7029 | 0.6786 | 0.6486 |
| cosine_recall@3 | 0.8257 | 0.8243 | 0.8171 | 0.8029 | 0.77 |
| cosine_recall@5 | 0.8586 | 0.8643 | 0.8543 | 0.8543 | 0.8143 |
| cosine_recall@10 | 0.8943 | 0.8914 | 0.8814 | 0.8814 | 0.8657 |
| cosine_ndcg@10 | 0.8078 | 0.8053 | 0.7946 | 0.7829 | 0.7555 |
| cosine_mrr@10 | 0.78 | 0.7774 | 0.7664 | 0.751 | 0.7204 |
| cosine_map@100 | 0.7838 | 0.7813 | 0.771 | 0.7549 | 0.7248 |
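The per-dimension scores above come from running the retrieval evaluator once per truncation size. A rough sketch of that loop, assuming `InformationRetrievalEvaluator` and its `truncate_dim` argument from recent sentence-transformers releases, with a toy corpus built from the sample pairs shown later in this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("aired/bge-base-financial-matryoshka")

# Toy evaluation split; in practice these dictionaries hold the full eval set
queries = {"q1": "What are the primary marketing methods used by GEICO?"}
corpus = {
    "d1": "GEICO markets its policies mainly by direct response methods where most customers apply for coverage directly to the company via the Internet or over the telephone.",
    "d2": "Fair values of indefinite-lived intangible assets are determined based on the income approach.",
}
relevant_docs = {"q1": {"d1"}}  # which corpus ids answer each query

for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        truncate_dim=dim,
        name=f"dim_{dim}",
    )
    print(evaluator(model))
```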
The training dataset has two columns, positive and anchor; both are strings.

Samples:

| positive | anchor |
|---|---|
| GEICO markets its policies mainly by direct response methods where most customers apply for coverage directly to the company via the Internet or over the telephone. | What are the primary marketing methods used by GEICO? |
| In addition, most group health plans and issuers of group or individual health insurance coverage are required to disclose personalized pricing information to their participants, beneficiaries, and enrollees through an online consumer tool, by phone, or in paper form, upon request. Cost estimates must be provided in real-time based on cost-sharing information that is accurate at the time of the request. | What are the requirements for health insurers and group health plans in providing cost estimates to consumers? |
| Fair values of indefinite-lived intangible assets are determined based on the income approach. | What method is used to determine the fair value of indefinite-lived intangible assets? |
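Pairs like these are typically loaded as a Hugging Face dataset whose column names match the loss inputs. A minimal sketch, assuming a hypothetical local `train.json` file with `anchor` and `positive` fields:

```python
from datasets import load_dataset

# "train.json" is a placeholder path to a JSON/JSONL file of
# {"anchor": <question>, "positive": <passage>} records
dataset = load_dataset("json", data_files="train.json", split="train")

print(dataset.column_names)  # e.g. ['anchor', 'positive']
print(dataset[0])
```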
MatryoshkaLoss with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [
        768,
        512,
        256,
        128,
        64
    ],
    "matryoshka_weights": [
        1,
        1,
        1,
        1,
        1
    ],
    "n_dims_per_step": -1
}
```
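In sentence-transformers this corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss, so the same contrastive objective is applied at every truncated embedding size. A minimal sketch, assuming the base checkpoint named above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Contrastive loss over (anchor, positive) pairs, using in-batch negatives
base_loss = MultipleNegativesRankingLoss(model)

# Apply the same loss at each truncated embedding size so prefixes stay useful
loss = MatryoshkaLoss(
    model=model,
    loss=base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```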
Non-default hyperparameters:

- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates

All hyperparameters (including defaults):

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional

Training logs:

| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|---|---|---|---|---|---|---|---|
| 0.8122 | 10 | 1.6045 | - | - | - | - | - |
| 0.9746 | 12 | - | 0.7895 | 0.7895 | 0.7764 | 0.7680 | 0.7277 |
| 1.6244 | 20 | 0.6975 | - | - | - | - | - |
| 1.9492 | 24 | - | 0.8044 | 0.8026 | 0.7924 | 0.7819 | 0.7515 |
| 2.4365 | 30 | 0.4732 | - | - | - | - | - |
| 2.9239 | 36 | - | 0.8064 | 0.8060 | 0.7944 | 0.7825 | 0.7549 |
| 3.2487 | 40 | 0.4182 | - | - | - | - | - |
| 3.8985 | 48 | - | 0.8078 | 0.8053 | 0.7946 | 0.7829 | 0.7555 |
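These logs map onto the sentence-transformers v3 training API. A minimal sketch of the trainer setup under the non-default hyperparameters listed above, reusing the `model`, `dataset`, `loss`, and `evaluator` objects from the earlier sketches (the output directory and save strategy are assumptions, not taken from the card):

```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder output path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed, so best-checkpoint loading matches the epoch eval cadence
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate anchors within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,          # base model wrapped by the Matryoshka loss above
    args=args,
    train_dataset=dataset,  # (anchor, positive) pairs
    loss=loss,
    evaluator=evaluator,
)
trainer.train()
```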
BibTeX:

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```