Paper: Matryoshka Representation Learning (arXiv:2205.13147)
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
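
For reference, an equivalent module stack can be assembled by hand from the base checkpoint (a minimal sketch only; in practice the finetuned checkpoint below should be loaded directly):

```python
from sentence_transformers import SentenceTransformer, models

# CLS-pooled BERT encoder followed by L2 normalization, mirroring the printout above.
word_embedding = models.Transformer(
    "BAAI/bge-base-en-v1.5", max_seq_length=512, do_lower_case=True
)
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 768
    pooling_mode="cls",
)
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])
```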
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("chcho/bge-base-financial-matryoshka")
# Run inference
sentences = [
'MERS database revenues contain multiple performance obligations related to each new loan registration and future transfers, and the revenues are primarily recorded at the point in time of each transaction.',
'How are revenues from MERS database recognized?',
"How many active sellers and buyers did Etsy's marketplaces connect as of December 31, 2023?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
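
Because this is a Matryoshka model, embeddings can also be truncated to a smaller dimension at load time with only a modest drop in quality (a sketch; `truncate_dim` assumes a recent sentence-transformers release):

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of each embedding.
model = SentenceTransformer("chcho/bge-base-financial-matryoshka", truncate_dim=256)
sentences = ["How are revenues from MERS database recognized?"]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [1, 256]
```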
Information Retrieval evaluation on the dim_768, dim_512, dim_256, dim_128 and dim_64 datasets with InformationRetrievalEvaluator:

| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|---|---|---|---|---|---|
| cosine_accuracy@1 | 0.7229 | 0.7157 | 0.71 | 0.6871 | 0.6629 |
| cosine_accuracy@3 | 0.8529 | 0.8471 | 0.8429 | 0.8186 | 0.7843 |
| cosine_accuracy@5 | 0.89 | 0.8857 | 0.8757 | 0.8629 | 0.8371 |
| cosine_accuracy@10 | 0.9186 | 0.9186 | 0.9129 | 0.8971 | 0.8771 |
| cosine_precision@1 | 0.7229 | 0.7157 | 0.71 | 0.6871 | 0.6629 |
| cosine_precision@3 | 0.2843 | 0.2824 | 0.281 | 0.2729 | 0.2614 |
| cosine_precision@5 | 0.178 | 0.1771 | 0.1751 | 0.1726 | 0.1674 |
| cosine_precision@10 | 0.0919 | 0.0919 | 0.0913 | 0.0897 | 0.0877 |
| cosine_recall@1 | 0.7229 | 0.7157 | 0.71 | 0.6871 | 0.6629 |
| cosine_recall@3 | 0.8529 | 0.8471 | 0.8429 | 0.8186 | 0.7843 |
| cosine_recall@5 | 0.89 | 0.8857 | 0.8757 | 0.8629 | 0.8371 |
| cosine_recall@10 | 0.9186 | 0.9186 | 0.9129 | 0.8971 | 0.8771 |
| cosine_ndcg@10 | 0.8244 | 0.8209 | 0.8139 | 0.7953 | 0.7701 |
| cosine_mrr@10 | 0.7937 | 0.7891 | 0.7818 | 0.7622 | 0.7358 |
| cosine_map@100 | 0.7972 | 0.7926 | 0.7856 | 0.7667 | 0.7411 |
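
The per-dimension scores above come from evaluating the model at each truncated dimension. A sketch of the same setup with placeholder data (the actual evaluation split is not shipped with this card, so `queries`, `corpus` and `relevant_docs` below are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: {query_id: text}, {doc_id: text}, {query_id: {relevant doc_ids}}.
queries = {"q1": "What was the net interest income for the first quarter of 2023?"}
corpus = {"d1": "The net interest income for the first quarter of 2023 was $14,448 million."}
relevant_docs = {"q1": {"d1"}}

model = SentenceTransformer("chcho/bge-base-financial-matryoshka", truncate_dim=256)
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_256")
print(evaluator(model))  # dict of cosine accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```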
Training dataset columns: positive and anchor

| | positive | anchor |
|---|---|---|
| type | string | string |

Samples:

| positive | anchor |
|---|---|
| We use a variety of methodologies to determine the fair value of these assets, including discounted cash flow models, which include assumptions we believe are consistent with those a market participant would use. | How is the fair value of intangible assets determined within a company? |
| We continue to own a 35% minority ownership in Gentiva Hospice operations after it was restructured into a new stand-alone company. | What percentage minority ownership does the company retain in Gentiva Hospice after the restructuring? |
| The net interest income for the first quarter of 2023 was $14,448 million. | What was the net interest income for the first quarter of 2023? |
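
Pairs in this (anchor, positive) schema can be loaded with the datasets library (a hypothetical sketch; the file name is illustrative, the card only states that a local "json" dataset was used):

```python
from datasets import load_dataset

# Hypothetical file; any JSON/JSONL file with "anchor" and "positive" fields works.
train_dataset = load_dataset("json", data_files="train_dataset.json", split="train")
print(train_dataset.column_names)  # expected: ['anchor', 'positive']
```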
Loss: MatryoshkaLoss with these parameters:

```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
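
The loss configuration above can be constructed as follows (a minimal sketch, not the exact training script):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# MultipleNegativesRankingLoss over in-batch negatives, applied at each Matryoshka dimension.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # use every dimension at every step
)
```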
Non-default training hyperparameters:

- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates

A code sketch of these non-default arguments follows the training logs below.

All hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional

Training logs:

| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|---|---|---|---|---|---|---|---|
| 0.8122 | 10 | 1.5791 | - | - | - | - | - |
| 0.9746 | 12 | - | 0.8089 | 0.8028 | 0.7958 | 0.7714 | 0.7428 |
| 1.6244 | 20 | 0.6637 | - | - | - | - | - |
| 1.9492 | 24 | - | 0.8209 | 0.8166 | 0.8109 | 0.7913 | 0.7615 |
| 2.4365 | 30 | 0.5072 | - | - | - | - | - |
| 2.9239 | 36 | - | 0.8229 | 0.82 | 0.8133 | 0.7959 | 0.7704 |
| 3.2487 | 40 | 0.394 | - | - | - | - | - |
| 3.8985 | 48 | - | 0.8244 | 0.8209 | 0.8139 | 0.7953 | 0.7701 |
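
The non-default hyperparameters listed above correspond roughly to the following training arguments (a sketch; `output_dir` and `save_strategy` are assumptions, the latter added so `load_best_model_at_end` has matching save and eval strategies):

```python
from sentence_transformers.training_args import SentenceTransformerTrainingArguments, BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # assumption: not stated in the card
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```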
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Base model: BAAI/bge-base-en-v1.5