---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2640
- loss:MultipleNegativesRankingLoss
- loss:CosineSimilarityLoss
base_model: jinaai/jina-embedding-b-en-v1
widget:
- source_sentence: Can you tell me how my portfolio did last week?
  sentences:
  - Suggest recommendations for me
  - Do you have any insights on my portfolio
  - How did my portfolio perform last week ?
- source_sentence: What are my most risky holdings?
  sentences:
  - View my holdings
  - Show my market cap breakdown
  - Show my riskiest holdings
- source_sentence: What profits do I have in my portfolio?
  sentences:
  - How can I swap my stocks for mutual funds?
  - Show me the cash in my portfolio?
  - What are the profits I have gained in my portfolio
- source_sentence: >-
    I'm curious, which investments have the highest volatility in my
    portfolio?
  sentences:
  - Which sector do I invest most in?
  - is there anything wrong with my investments?
  - Which of my investments have the highest volatility?
- source_sentence: Sort my investment portfolio by ESG rating, please.
  sentences:
  - What stock makes up the largest percentage of my portfolio?
  - Can you show my worst performing holdings
  - Show my investments sorted by ESG rating.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on jinaai/jina-embedding-b-en-v1
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: test eval
      type: test-eval
    metrics:
    - type: cosine_accuracy@1
      value: 0.8636363636363636
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9924242424242424
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8636363636363636
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3308080808080807
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.19999999999999998
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09999999999999999
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8636363636363636
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9924242424242424
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9436916551342168
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9242424242424244
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9242424242424242
      name: Cosine Map@100
---
# SentenceTransformer based on jinaai/jina-embedding-b-en-v1

This is a [sentence-transformers](https://www.sbert.net) model finetuned from [jinaai/jina-embedding-b-en-v1](https://huggingface.co/jinaai/jina-embedding-b-en-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: jinaai/jina-embedding-b-en-v1
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
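Concretely, cosine similarity is the dot product of two embeddings divided by the product of their norms, which is what `model.similarity` computes pairwise in the Usage example below. A minimal NumPy sketch (the helper is illustrative, not a library function):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Dot product of the two vectors, normalized by their lengths."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```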
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: T5EncoderModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
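The `Pooling` module is configured for mean pooling (`pooling_mode_mean_tokens: True`): the T5 encoder's token embeddings are averaged into one 768-dimensional sentence vector, with padding excluded via the attention mask. A minimal PyTorch sketch of that step, as an illustration rather than the library's exact implementation:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum of non-padding token vectors
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sentence
    return summed / counts                         # (batch, 768) sentence embeddings
```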
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'Sort my investment portfolio by ESG rating, please.',
    'Show my investments sorted by ESG rating.',
    'Can you show my worst performing holdings',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
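The same embeddings can also back a lightweight semantic-search or intent-routing step, which is what the portfolio-query paraphrases in the training data suggest this model is for. A hedged sketch using `sentence_transformers.util.semantic_search`; the intent catalogue below is invented for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder ID from above

# Hypothetical intent catalogue for illustration
intents = [
    "Show my riskiest holdings",
    "Show my investments sorted by ESG rating.",
    "How did my portfolio perform last week ?",
]
intent_embeddings = model.encode(intents, convert_to_tensor=True)

query_embedding = model.encode("Which of my stocks is the most dangerous?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, intent_embeddings, top_k=1)
print(intents[hits[0][0]["corpus_id"]])  # expected: "Show my riskiest holdings"
```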
## Evaluation

### Metrics

#### Information Retrieval

- Dataset: `test-eval`
- Evaluated with `InformationRetrievalEvaluator`
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.8636 |
| cosine_accuracy@3 | 0.9924 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8636 |
| cosine_precision@3 | 0.3308 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8636 |
| cosine_recall@3 | 0.9924 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9437 |
| cosine_mrr@10 | 0.9242 |
| cosine_map@100 | 0.9242 |
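Note that with one relevant document per query, `cosine_precision@5` is capped at 1/5 = 0.2 and `cosine_precision@10` at 0.1, so the values above correspond to perfect recall at those cutoffs. The evaluator takes query and corpus dictionaries plus a relevance mapping; a minimal sketch with placeholder IDs and texts:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: each query has exactly one relevant corpus entry
queries = {"q1": "What are my most risky holdings?"}
corpus = {"d1": "Show my riskiest holdings", "d2": "View my holdings"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="test-eval")
results = evaluator(model)  # assumes `model` is the loaded SentenceTransformer
```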
## Training Details

### Training Datasets

#### Unnamed Dataset

- Size: 1,320 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:

  |         | sentence_0                                        | sentence_1                                       | label                         |
  |:--------|:--------------------------------------------------|:--------------------------------------------------|:------------------------------|
  | type    | string                                            | string                                           | float                         |
  | details | min: 4 tokens, mean: 10.62 tokens, max: 20 tokens | min: 4 tokens, mean: 9.06 tokens, max: 17 tokens | min: 1.0, mean: 1.0, max: 1.0 |

- Samples:

  | sentence_0                                | sentence_1                            | label |
  |:-------------------------------------------|:---------------------------------------|:------|
  | How does my portfolio score look?         | What is my portfolio score?           | 1.0   |
  | Show me the risk profile of my portfolio. | Details on my portfolio risk          | 1.0   |
  | Which of my shares are the most erratic?  | Which of my stocks are most volatile? | 1.0   |

- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
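`MultipleNegativesRankingLoss` treats each `(sentence_0, sentence_1)` row as a positive pair and uses the other `sentence_1` entries in the same batch as negatives, which is why every label here is 1.0. A hedged sketch of attaching this loss, assuming `model` is the `SentenceTransformer` being finetuned:

```python
from sentence_transformers import losses

# scale=20.0 multiplies the cosine similarities before the softmax
# over in-batch candidates, matching the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```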
#### Unnamed Dataset

- Size: 1,320 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:

  |         | sentence_0                                        | sentence_1                                       | label                         |
  |:--------|:--------------------------------------------------|:--------------------------------------------------|:------------------------------|
  | type    | string                                            | string                                           | float                         |
  | details | min: 4 tokens, mean: 10.62 tokens, max: 22 tokens | min: 4 tokens, mean: 9.05 tokens, max: 17 tokens | min: 1.0, mean: 1.0, max: 1.0 |

- Samples:

  | sentence_0                                           | sentence_1                                         | label |
  |:-------------------------------------------------------|:----------------------------------------------------|:------|
  | What holdings carry the least risk in my portfolio?   | What are the least risky holdings in my portfolio? | 1.0   |
  | How have my investments fared over the previous year? | How has my portfolio performed over the last year? | 1.0   |
  | How well is my portfolio performing?                  | How is my portfolio performing                     | 1.0   |

- Loss: `CosineSimilarityLoss` with these parameters:

  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
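`CosineSimilarityLoss` instead regresses the cosine similarity of each pair toward its float label (here always 1.0) with an MSE objective. A hedged sketch of training on both datasets with their respective losses via `SentenceTransformerTrainer`; the single-row dataset and the dataset names are stand-ins for the real 1,320-sample data:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("jinaai/jina-embedding-b-en-v1")

# Stand-in for the two 1,320-sample datasets of (sentence_0, sentence_1, label) rows
pairs = Dataset.from_dict({
    "sentence_0": ["How does my portfolio score look?"],
    "sentence_1": ["What is my portfolio score?"],
    "label": [1.0],
})

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset={"ranking": pairs, "similarity": pairs},
    loss={
        "ranking": losses.MultipleNegativesRankingLoss(model, scale=20.0),
        "similarity": losses.CosineSimilarityLoss(model),  # MSE on cos(u, v) vs. label
    },
)
trainer.train()
```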
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 15
- `multi_dataset_batch_sampler`: round_robin
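These map directly onto `SentenceTransformerTrainingArguments`, which could be passed as `args=` to the trainer sketch above; a minimal sketch with a placeholder output path:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=15,
    # round-robin alternates batches between the two training datasets
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```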
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
### Training Logs
| Epoch | Step | Training Loss | test-eval_cosine_ndcg@10 |
|---|---|---|---|
| 1.0 | 84 | - | 0.8877 |
| 2.0 | 168 | - | 0.8944 |
| 3.0 | 252 | - | 0.9042 |
| 4.0 | 336 | - | 0.9123 |
| 5.0 | 420 | - | 0.9241 |
| 5.9524 | 500 | 0.2478 | 0.9209 |
| 6.0 | 504 | - | 0.9209 |
| 7.0 | 588 | - | 0.9261 |
| 8.0 | 672 | - | 0.9327 |
| 9.0 | 756 | - | 0.9364 |
| 10.0 | 840 | - | 0.9370 |
| 11.0 | 924 | - | 0.9437 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```