---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:3570
- loss:MultipleNegativesRankingLoss
- loss:CosineSimilarityLoss
base_model: jinaai/jina-embedding-b-en-v1
widget:
- source_sentence: How do I change my stocks to mutual funds?
  sentences:
  - How can I swap my stocks for mutual funds?
  - Show my stocks
  - What are the profits I have gained in my portfolio
- source_sentence: What percentage of my investments are in large cap?
  sentences:
  - Show some of my best performing holdings
  - Suggest recommendations for me
  - Can you show what percentage of my portfolio consists of large cap
- source_sentence: >-
    Is now a good time to buy energy stocks considering the war in the Middle
    East and rising fuel prices?
  sentences:
  - Am I investing in the small cap market more?
  - >-
    I saw in the news that there is a war going on in the Middle East and
    fuel will be more costly now, should I buy energy sector stocks?
  - Are my ETFs giving better returns compare to my mutual funds?
- source_sentence: How do I change my risk profile?
  sentences:
  - What can I do to bring down the volatility in my portfolio?
  - I want to change my risk profile
  - What is the total value of my portfolio
- source_sentence: Look for funds that fit my stock holdings
  sentences:
  - Can you tell me if my investments will grow well in the long run?
  - Do I have any stocks in my portfolio?
  - Explore funds that match my stock portfolio
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on jinaai/jina-embedding-b-en-v1
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: test eval
      type: test-eval
    metrics:
    - type: cosine_accuracy@1
      value: 0.8659217877094972
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9916201117318436
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9972067039106145
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8659217877094972
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.33054003724394787
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1994413407821229
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09999999999999999
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8659217877094972
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9916201117318436
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9972067039106145
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9460695277624867
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9273743016759775
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9273743016759777
      name: Cosine Map@100
---
SentenceTransformer based on jinaai/jina-embedding-b-en-v1
This is a sentence-transformers model finetuned from jinaai/jina-embedding-b-en-v1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: jinaai/jina-embedding-b-en-v1
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: T5EncoderModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
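To make the two modules concrete, here is a minimal sketch of the same computation done by hand with `transformers`, assuming the base checkpoint loads as a `T5EncoderModel` (as the architecture above indicates); it illustrates the mechanics only and does not reproduce this model's fine-tuned weights:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Illustrative only: the pipeline above is a T5 encoder followed by mean pooling.
tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embedding-b-en-v1")
encoder = T5EncoderModel.from_pretrained("jinaai/jina-embedding-b-en-v1")

batch = tokenizer(["Show my stocks"], return_tensors="pt",
                  padding=True, truncation=True, max_length=512)
with torch.no_grad():
    token_states = encoder(**batch).last_hidden_state  # (1, seq_len, 768)

# Mean pooling: average the token embeddings, ignoring padding positions.
mask = batch["attention_mask"].unsqueeze(-1)
embedding = (token_states * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```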
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'Look for funds that fit my stock holdings',
    'Explore funds that match my stock portfolio',
    'Can you tell me if my investments will grow well in the long run?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
Evaluation
Metrics
Information Retrieval
- Dataset: `test-eval`
- Evaluated with `InformationRetrievalEvaluator` (a usage sketch follows the metrics table below)
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.8659 |
| cosine_accuracy@3 | 0.9916 |
| cosine_accuracy@5 | 0.9972 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8659 |
| cosine_precision@3 | 0.3305 |
| cosine_precision@5 | 0.1994 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8659 |
| cosine_recall@3 | 0.9916 |
| cosine_recall@5 | 0.9972 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9461 |
| cosine_mrr@10 | 0.9274 |
| cosine_map@100 | 0.9274 |
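These numbers come from `InformationRetrievalEvaluator`; a minimal sketch of running the same evaluation on your own held-out split (the queries, corpus, and IDs below are made up):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sentence_transformers_model_id")

# Toy stand-ins for the eval split; IDs and texts are illustrative.
queries = {"q1": "How do I change my risk profile?"}
corpus = {
    "d1": "I want to change my risk profile",
    "d2": "Show my stocks",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs,
                                          name="test-eval")
results = evaluator(model)
print(results["test-eval_cosine_ndcg@10"])
```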
Training Details
Training Datasets
Unnamed Dataset
- Size: 1,785 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:

| | sentence_0 | sentence_1 | label |
|:---|:---|:---|:---|
| type | string | string | float |
| details | min: 4 tokens, mean: 11.4 tokens, max: 26 tokens | min: 4 tokens, mean: 10.11 tokens, max: 33 tokens | min: 1.0, mean: 1.0, max: 1.0 |
- Samples:

| sentence_0 | sentence_1 | label |
|:---|:---|:---|
| How can I lower the risk in my investments? | How to reduce my risk | 1.0 |
| How is my asset allocation divided? | What is my asset allocation breakdown? | 1.0 |
| Any specific swap recommendations for better returns? | What are the specific swap suggestions to improve my returns? | 1.0 |

- Loss: `MultipleNegativesRankingLoss` with these parameters (a usage sketch follows below):

```json
{
  "scale": 20.0,
  "similarity_fct": "cos_sim"
}
```
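A minimal sketch of instantiating this loss with the parameters above; the batch itself supplies the negatives, since every other positive in the batch serves as a negative for a given anchor:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("jinaai/jina-embedding-b-en-v1")

# scale=20.0 and cosine similarity mirror the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)
```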
Unnamed Dataset
- Size: 1,785 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:

| | sentence_0 | sentence_1 | label |
|:---|:---|:---|:---|
| type | string | string | float |
| details | min: 4 tokens, mean: 11.28 tokens, max: 26 tokens | min: 4 tokens, mean: 9.98 tokens, max: 33 tokens | min: 1.0, mean: 1.0, max: 1.0 |
- Samples:

| sentence_0 | sentence_1 | label |
|:---|:---|:---|
| What should I do to improve my investment returns? | How can I improve my returns? | 1.0 |
| Can you give me an overview of my portfolio? | Do you have any insights on my portfolio | 1.0 |
| Reveal my stock assets | Show my stocks | 1.0 |

- Loss: `CosineSimilarityLoss` with these parameters (a usage sketch follows below):

```json
{
  "loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
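And a matching sketch for the second loss, which regresses the cosine similarity of each pair toward its float label (all 1.0 in this dataset) under MSE:

```python
import torch
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("jinaai/jina-embedding-b-en-v1")

# torch.nn.MSELoss matches the loss_fct listed above.
loss = losses.CosineSimilarityLoss(model, loss_fct=torch.nn.MSELoss())
```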
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 20
- `multi_dataset_batch_sampler`: round_robin
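A sketch of expressing these non-defaults as training arguments (`output_dir` is a hypothetical path):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    MultiDatasetBatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output/jina-finetune",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=20,
    # Round-robin alternates batches between the two training datasets.
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```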
All Hyperparameters
Click to expand
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
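Putting the pieces together, a hedged end-to-end sketch of the two-dataset, two-loss setup (single-row datasets and the dataset keys stand in for the real 1,785-pair splits; `output_dir` is hypothetical):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import MultiDatasetBatchSamplers

model = SentenceTransformer("jinaai/jina-embedding-b-en-v1")

# One-row stand-in for each 1,785-sample pair dataset described above.
pairs = Dataset.from_dict({
    "sentence_0": ["How can I lower the risk in my investments?"],
    "sentence_1": ["How to reduce my risk"],
    "label": [1.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output/jina-finetune",  # hypothetical path
    num_train_epochs=20,
    per_device_train_batch_size=32,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

# With dicts of datasets and losses, each dataset is paired with its loss by
# key; the round-robin sampler draws batches from each dataset in turn.
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset={"mnrl": pairs, "cosine": pairs},
    loss={
        "mnrl": losses.MultipleNegativesRankingLoss(model),
        "cosine": losses.CosineSimilarityLoss(model),
    },
)
trainer.train()
```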
Training Logs
| Epoch | Step | Training Loss | test-eval_cosine_ndcg@10 |
|---|---|---|---|
| 1.0 | 112 | - | 0.9013 |
| 2.0 | 224 | - | 0.9112 |
| 3.0 | 336 | - | 0.9250 |
| 4.0 | 448 | - | 0.9307 |
| 4.4643 | 500 | 0.1949 | 0.9337 |
| 5.0 | 560 | - | 0.9342 |
| 6.0 | 672 | - | 0.9381 |
| 7.0 | 784 | - | 0.9423 |
| 8.0 | 896 | - | 0.9426 |
| 8.9286 | 1000 | 0.1347 | 0.9452 |
| 9.0 | 1008 | - | 0.9442 |
| 10.0 | 1120 | - | 0.9461 |
| 11.0 | 1232 | - | 0.9461 |
| 12.0 | 1344 | - | 0.9461 |
| 13.0 | 1456 | - | 0.9461 |
| 13.3929 | 1500 | 0.1193 | 0.9461 |
| 14.0 | 1568 | - | 0.9461 |
| 15.0 | 1680 | - | 0.9461 |
| 16.0 | 1792 | - | 0.9461 |
| 17.0 | 1904 | - | 0.9461 |
| 17.8571 | 2000 | 0.117 | 0.9461 |
| 18.0 | 2016 | - | 0.9461 |
| 19.0 | 2128 | - | 0.9461 |
Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```