Paper: Matryoshka Representation Learning (arXiv:2205.13147)
This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Full model architecture:

SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
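The pipeline therefore runs text through ModernBERT (up to 8192 tokens), mean-pools the token embeddings into a single 768-dimensional vector, and L2-normalizes it. For illustration, here is a rough manual equivalent using the plain transformers library, assuming the Hub repo exposes the underlying ModernBERT weights at its root (as sentence-transformers repos usually do); the SentenceTransformer usage below remains the supported path:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sorour/modernbert-financial-matryoshka")
model = AutoModel.from_pretrained("Sorour/modernbert-financial-matryoshka")

batch = tokenizer(["An example sentence."], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token vectors, ignoring padding positions
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# Normalize(): scale each vector to unit L2 norm
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)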
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Sorour/modernbert-financial-matryoshka")
# Run inference
sentences = [
"The company's financial report indicates that the pre-tax amounts of gains (losses) from foreign currency forward exchange contracts designated as cash flow hedges were gains of $82 million in 2021, gains of $103 million in 2022, and losses of $2 million in 2023.",
'What were the pre-tax amounts of (gains) losses from foreign currency forward exchange contracts designated as cash flow hedges for the years ended December 31 from 2021 to 2023?',
'What is the projected change in income before income taxes if the 2023 discount rate for the U.S. defined benefit pension and retiree health benefit plans changes by a quarter percentage point?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
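Because the model was trained with Matryoshka dimensions of 768, 256, and 64, embeddings can be truncated to the smaller sizes with only a modest quality drop (see the evaluation table below). A minimal sketch using the truncate_dim argument (available in sentence-transformers v2.7+):

from sentence_transformers import SentenceTransformer

# Keep only the first 256 of the 768 Matryoshka dimensions
model = SentenceTransformer("Sorour/modernbert-financial-matryoshka", truncate_dim=256)

embeddings = model.encode(sentences)  # the same sentences list as above
print(embeddings.shape)
# [3, 256]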
The model was evaluated with sentence-transformers' InformationRetrievalEvaluator at the dim_768, dim_256, and dim_64 Matryoshka dimensions:

| Metric | dim_768 | dim_256 | dim_64 |
|---|---|---|---|
| cosine_accuracy@1 | 0.6914 | 0.6643 | 0.62 |
| cosine_accuracy@3 | 0.8171 | 0.81 | 0.7671 |
| cosine_accuracy@5 | 0.87 | 0.8557 | 0.8171 |
| cosine_accuracy@10 | 0.9129 | 0.8971 | 0.8743 |
| cosine_precision@1 | 0.6914 | 0.6643 | 0.62 |
| cosine_precision@3 | 0.2724 | 0.27 | 0.2557 |
| cosine_precision@5 | 0.174 | 0.1711 | 0.1634 |
| cosine_precision@10 | 0.0913 | 0.0897 | 0.0874 |
| cosine_recall@1 | 0.6914 | 0.6643 | 0.62 |
| cosine_recall@3 | 0.8171 | 0.81 | 0.7671 |
| cosine_recall@5 | 0.87 | 0.8557 | 0.8171 |
| cosine_recall@10 | 0.9129 | 0.8971 | 0.8743 |
| cosine_ndcg@10 | 0.8015 | 0.7834 | 0.7453 |
| cosine_mrr@10 | 0.7659 | 0.7468 | 0.7043 |
| cosine_map@100 | 0.7695 | 0.7515 | 0.7091 |
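A sketch of how such an evaluation can be reproduced; the queries, corpus, and relevance judgments below are hypothetical placeholders drawn from the sample pairs later in this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

queries = {"q1": "How does HP recognize revenue from the sale of equipment under sales-type leases?"}
corpus = {"d1": "HP records revenue from the sale of equipment under sales-type leases as revenue at the commencement of the lease."}
relevant_docs = {"q1": {"d1"}}  # corpus ids that answer each query

# Truncate embeddings to score a smaller Matryoshka dimension
model = SentenceTransformer("Sorour/modernbert-financial-matryoshka", truncate_dim=256)
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_256")
results = evaluator(model)  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100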
The training set consists of positive and anchor text pairs:

| | positive | anchor |
|---|---|---|
| type | string | string |

Sample pairs:

| positive | anchor |
|---|---|
| Item 8 includes Financial Statements and Supplementary Data. | What type of data is found in Item 8 of detailed financial documentation? |
| HP records revenue from the sale of equipment under sales-type leases as revenue at the commencement of the lease. This method is applied unless certain conditions such as customer acceptance remain uncertain or significant obligations to the customer remain unfulfilled. | How does HP recognize revenue from the sale of equipment under sales-type leases? |
| The company maintains insurance coverage for general liability, property, business interruption, terrorism, and other risks with respect to their business for all of their owned and leased hotels. | What types of risks are usually covered by the company's insurance policies? |
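Pairs like these can be loaded with the datasets library; the file name below is a placeholder, since the card only identifies the data as a local json dataset:

from datasets import load_dataset

# "train.json" is a hypothetical file name
dataset = load_dataset("json", data_files="train.json", split="train")
print(dataset.column_names)
# e.g. ['anchor', 'positive']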
The model was trained with MatryoshkaLoss using these parameters:

{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
256,
64
],
"matryoshka_weights": [
1,
1,
1
],
"n_dims_per_step": -1
}
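In sentence-transformers, this corresponds to wrapping MultipleNegativesRankingLoss in a MatryoshkaLoss, so the same in-batch-negatives objective is applied to the embedding prefixes of length 768, 256, and 64 with equal weight. A minimal sketch:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 256, 64],
    matryoshka_weights=[1, 1, 1],  # all dimensions weighted equally
)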
The following non-default hyperparameters were used (see the sketch after the training logs for how they map to code):

- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- tf32: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates

All hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: True
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional

Training logs:

| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|---|---|---|---|---|---|
| 0.8122 | 10 | 9.4544 | - | - | - |
| 1.0 | 13 | - | 0.7799 | 0.7650 | 0.7097 |
| 0.8122 | 10 | 3.1908 | - | - | - |
| 1.0 | 13 | - | 0.7952 | 0.7769 | 0.7259 |
| 1.5685 | 20 | 1.8807 | - | - | - |
| 2.0 | 26 | - | 0.8001 | 0.7833 | 0.7409 |
| 2.3249 | 30 | 1.7141 | - | - | - |
| 3.0 | 39 | - | 0.8023 | 0.7819 | 0.7460 |
| 3.0812 | 40 | 1.3672 | - | - | - |
| 3.731 | 48 | - | 0.8015 | 0.7834 | 0.7453 |
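The non-default hyperparameters listed above map onto a SentenceTransformerTrainingArguments configuration along these lines (a sketch; the output directory name is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-financial-matryoshka",  # placeholder
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)

These arguments would be passed to a SentenceTransformerTrainer together with the dataset and MatryoshkaLoss sketched earlier.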
BibTeX:

@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Base model: answerdotai/ModernBERT-base