SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-mpnet-base-v2
- Maximum Sequence Length: 384 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
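Because the model's final module L2-normalizes its embeddings (see the architecture below), cosine similarity between two outputs reduces to a plain dot product. A minimal illustrative sketch of the score computation (for understanding only; `model.similarity` handles this for you):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # General form: dot product divided by the product of the vector norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# For this model's unit-length embeddings the norms are 1,
# so cosine_similarity(a, b) == np.dot(a, b).
```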
Model Sources
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
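In plain terms: the MPNet encoder produces one 768-dimensional vector per token, the Pooling module averages those vectors over the non-padding tokens, and Normalize scales the result to unit length. A rough sketch of the last two steps, assuming standard Hugging Face-style tensors (the names `token_embeddings` and `attention_mask` are illustrative):

```python
import torch
import torch.nn.functional as F

def mean_pool_and_normalize(token_embeddings: torch.Tensor,
                            attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()        # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)      # sum over real (non-padding) tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)           # number of real tokens per sentence
    sentence_embeddings = summed / counts              # mean pooling
    return F.normalize(sentence_embeddings, p=2, dim=1)  # unit-length output
```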
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("Prashasst/anime-recommendation-model")

# Run inference
sentences = [
    'I want anime like onepiece.',
    'Pirates',
    'Action',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores between all pairs of embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
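For a recommendation-style lookup, the same similarity call can rank a set of candidate descriptions against a free-text query. A small illustrative sketch (the candidate strings below are made up for the example, not taken from the training data):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Prashasst/anime-recommendation-model")

query = "I want anime like onepiece."
candidates = ["Pirates", "Action", "Romance", "Slice of life"]  # hypothetical candidates

query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)

# similarity returns a (1, len(candidates)) matrix of cosine scores
scores = model.similarity(query_embedding, candidate_embeddings)[0]
ranked = sorted(zip(candidates, scores.tolist()), key=lambda item: item[1], reverse=True)
print(ranked)  # candidates sorted from most to least similar to the query
```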
Evaluation
Metrics
Semantic Similarity
| Metric           | anime-recommendation-dev | anime-recommendation-test |
|:-----------------|:-------------------------|:--------------------------|
| pearson_cosine   | 0.6145                   | 0.6536                    |
| spearman_cosine  | 0.6215                   | 0.6394                    |
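pearson_cosine and spearman_cosine are the Pearson and Spearman rank correlations between the model's cosine similarities and the gold similarity labels on each split. A sketch of the equivalent computation, using invented evaluation pairs and gold scores purely for illustration:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Prashasst/anime-recommendation-model")

# Hypothetical (sentence1, sentence2, gold score) evaluation triples
pairs = [
    ("I want anime like onepiece.", "Pirates", 0.9),
    ("I want anime like onepiece.", "Romance", 0.2),
    ("I want anime like onepiece.", "Action", 0.7),
]

emb1 = model.encode([s1 for s1, _, _ in pairs])
emb2 = model.encode([s2 for _, s2, _ in pairs])
gold = [score for _, _, score in pairs]

# Embeddings are unit-normalized, so the row-wise dot product is the cosine similarity
cosine_scores = np.sum(emb1 * emb2, axis=1)

print("pearson_cosine: ", pearsonr(cosine_scores, gold)[0])
print("spearman_cosine:", spearmanr(cosine_scores, gold)[0])
```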
Training Details
Training Dataset
Unnamed Dataset
Evaluation Dataset
Unnamed Dataset
Training Hyperparameters
Non-Default Hyperparameters
eval_strategy: steps
per_device_train_batch_size: 16
learning_rate: 2e-05
num_train_epochs: 1
warmup_ratio: 0.1
fp16: True
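The card does not record the loss function or the structure of the training data, so the following is only a rough sketch of how a fine-tuning run with these hyperparameters could look, assuming (sentence1, sentence2, score) pairs and CosineSimilarityLoss; both of those are assumptions, not details taken from this card:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Hypothetical (sentence1, sentence2, score) data; the real dataset is not published here
train_dataset = Dataset.from_dict({
    "sentence1": ["I want anime like onepiece.", "I want a romance anime."],
    "sentence2": ["Pirates", "Romance"],
    "score": [0.9, 0.95],
})
eval_dataset = Dataset.from_dict({
    "sentence1": ["I want anime like onepiece."],
    "sentence2": ["Romance"],
    "score": [0.2],
})

args = SentenceTransformerTrainingArguments(
    output_dir="anime-recommendation-model",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,  # requires a CUDA device
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=CosineSimilarityLoss(model),
)
trainer.train()
```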
All Hyperparameters
Click to expand
overwrite_output_dir: False
do_predict: False
eval_strategy: steps
prediction_loss_only: True
per_device_train_batch_size: 16
per_device_eval_batch_size: 8
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 1
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 2e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 1
max_steps: -1
lr_scheduler_type: linear
lr_scheduler_kwargs: {}
warmup_ratio: 0.1
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
use_ipex: False
bf16: False
fp16: True
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: False
hub_always_push: False
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters:
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
dispatch_batches: None
split_batches: None
include_tokens_per_second: False
include_num_input_tokens_seen: False
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
eval_use_gather_object: False
prompts: None
batch_sampler: batch_sampler
multi_dataset_batch_sampler: proportional
Training Logs
Click to expand
| Epoch  | Step | Training Loss | Validation Loss | anime-recommendation-dev_spearman_cosine | anime-recommendation-test_spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|:----------------------------------------:|:-----------------------------------------:|
| 0.0068 | 1    | 0.3882        | -               | -                                        | -                                         |
| 0.0135 | 2    | 0.2697        | -               | -                                        | -                                         |
| 0.0203 | 3    | 0.2648        | -               | -                                        | -                                         |
| 0.0270 | 4    | 0.3022        | -               | -                                        | -                                         |
| 0.0338 | 5    | 0.2665        | -               | -                                        | -                                         |
| 0.0405 | 6    | 0.2923        | -               | -                                        | -                                         |
| 0.0473 | 7    | 0.3165        | -               | -                                        | -                                         |
| 0.0541 | 8    | 0.2069        | -               | -                                        | -                                         |
| 0.0608 | 9    | 0.271         | -               | -                                        | -                                         |
| 0.0676 | 10   | 0.1974        | -               | -                                        | -                                         |
| 0.0743 | 11   | 0.156         | -               | -                                        | -                                         |
| 0.0811 | 12   | 0.1035        | -               | -                                        | -                                         |
| 0.0878 | 13   | 0.1046        | -               | -                                        | -                                         |
| 0.0946 | 14   | 0.0579        | -               | -                                        | -                                         |
| 0.1014 | 15   | 0.0904        | -               | -                                        | -                                         |
| 0.1081 | 16   | 0.0734        | -               | -                                        | -                                         |
| 0.1149 | 17   | 0.0396        | -               | -                                        | -                                         |
| 0.1216 | 18   | 0.0219        | -               | -                                        | -                                         |
| 0.1284 | 19   | 0.0672        | -               | -                                        | -                                         |
| 0.1351 | 20   | 0.0567        | -               | -                                        | -                                         |
| 0.1419 | 21   | 0.0969        | -               | -                                        | -                                         |
| 0.1486 | 22   | 0.0258        | -               | -                                        | -                                         |
| 0.1554 | 23   | 0.1174        | -               | -                                        | -                                         |
| 0.1622 | 24   | 0.0334        | -               | -                                        | -                                         |
| 0.1689 | 25   | 0.0661        | -               | -                                        | -                                         |
| 0.1757 | 26   | 0.0365        | -               | -                                        | -                                         |
| 0.1824 | 27   | 0.049         | -               | -                                        | -                                         |
| 0.1892 | 28   | 0.0889        | -               | -                                        | -                                         |
| 0.1959 | 29   | 0.0179        | -               | -                                        | -                                         |
| 0.2027 | 30   | 0.0255        | -               | -                                        | -                                         |
| 0.2095 | 31   | 0.0312        | -               | -                                        | -                                         |
| 0.2162 | 32   | 0.0312        | -               | -                                        | -                                         |
| 0.2230 | 33   | 0.0619        | -               | -                                        | -                                         |
| 0.2297 | 34   | 0.0358        | -               | -                                        | -                                         |
| 0.2365 | 35   | 0.0468        | -               | -                                        | -                                         |
| 0.2432 | 36   | 0.0601        | -               | -                                        | -                                         |
| 0.25   | 37   | 0.0546        | -               | -                                        | -                                         |
| 0.2568 | 38   | 0.0411        | -               | -                                        | -                                         |
| 0.2635 | 39   | 0.0332        | -               | -                                        | -                                         |
| 0.2703 | 40   | 0.0479        | -               | -                                        | -                                         |
| 0.2770 | 41   | 0.0657        | -               | -                                        | -                                         |
| 0.2838 | 42   | 0.0161        | -               | -                                        | -                                         |
| 0.2905 | 43   | 0.0323        | -               | -                                        | -                                         |
| 0.2973 | 44   | 0.0794        | -               | -                                        | -                                         |
| 0.3041 | 45   | 0.0264        | -               | -                                        | -                                         |
| 0.3108 | 46   | 0.0391        | -               | -                                        | -                                         |
| 0.3176 | 47   | 0.0514        | -               | -                                        | -                                         |
| 0.3243 | 48   | 0.0276        | -               | -                                        | -                                         |
| 0.3311 | 49   | 0.0653        | -               | -                                        | -                                         |
| 0.3378 | 50   | 0.0343        | -               | -                                        | -                                         |
| 0.3446 | 51   | 0.0369        | -               | -                                        | -                                         |
| 0.3514 | 52   | 0.0336        | -               | -                                        | -                                         |
| 0.3581 | 53   | 0.0368        | -               | -                                        | -                                         |
| 0.3649 | 54   | 0.0477        | -               | -                                        | -                                         |
| 0.3716 | 55   | 0.0358        | -               | -                                        | -                                         |
| 0.3784 | 56   | 0.0312        | -               | -                                        | -                                         |
| 0.3851 | 57   | 0.0388        | -               | -                                        | -                                         |
| 0.3919 | 58   | 0.0415        | -               | -                                        | -                                         |
| 0.3986 | 59   | 0.02          | -               | -                                        | -                                         |
| 0.4054 | 60   | 0.0459        | -               | -                                        | -                                         |
| 0.4122 | 61   | 0.0302        | -               | -                                        | -                                         |
| 0.4189 | 62   | 0.0519        | -               | -                                        | -                                         |
| 0.4257 | 63   | 0.0283        | -               | -                                        | -                                         |
| 0.4324 | 64   | 0.04          | -               | -                                        | -                                         |
| 0.4392 | 65   | 0.0146        | -               | -                                        | -                                         |
| 0.4459 | 66   | 0.033         | -               | -                                        | -                                         |
| 0.4527 | 67   | 0.0365        | -               | -                                        | -                                         |
| 0.4595 | 68   | 0.0579        | -               | -                                        | -                                         |
| 0.4662 | 69   | 0.0253        | -               | -                                        | -                                         |
| 0.4730 | 70   | 0.033         | -               | -                                        | -                                         |
| 0.4797 | 71   | 0.0258        | -               | -                                        | -                                         |
| 0.4865 | 72   | 0.0181        | -               | -                                        | -                                         |
| 0.4932 | 73   | 0.0334        | -               | -                                        | -                                         |
| 0.5    | 74   | 0.0415        | -               | -                                        | -                                         |
| 0.5068 | 75   | 0.0258        | -               | -                                        | -                                         |
| 0.5135 | 76   | 0.0304        | -               | -                                        | -                                         |
| 0.5203 | 77   | 0.0211        | -               | -                                        | -                                         |
| 0.5270 | 78   | 0.0334        | -               | -                                        | -                                         |
| 0.5338 | 79   | 0.0278        | -               | -                                        | -                                         |
| 0.5405 | 80   | 0.0209        | -               | -                                        | -                                         |
| 0.5473 | 81   | 0.0391        | -               | -                                        | -                                         |
| 0.5541 | 82   | 0.0274        | -               | -                                        | -                                         |
| 0.5608 | 83   | 0.0213        | -               | -                                        | -                                         |
| 0.5676 | 84   | 0.0293        | -               | -                                        | -                                         |
| 0.5743 | 85   | 0.0205        | -               | -                                        | -                                         |
| 0.5811 | 86   | 0.0258        | -               | -                                        | -                                         |
| 0.5878 | 87   | 0.0262        | -               | -                                        | -                                         |
| 0.5946 | 88   | 0.0109        | -               | -                                        | -                                         |
| 0.6014 | 89   | 0.0268        | -               | -                                        | -                                         |
| 0.6081 | 90   | 0.0304        | -               | -                                        | -                                         |
| 0.6149 | 91   | 0.0328        | -               | -                                        | -                                         |
| 0.6216 | 92   | 0.0173        | -               | -                                        | -                                         |
| 0.6284 | 93   | 0.0253        | -               | -                                        | -                                         |
| 0.6351 | 94   | 0.0245        | -               | -                                        | -                                         |
| 0.6419 | 95   | 0.0232        | -               | -                                        | -                                         |
| 0.6486 | 96   | 0.0309        | -               | -                                        | -                                         |
| 0.6554 | 97   | 0.0209        | -               | -                                        | -                                         |
| 0.6622 | 98   | 0.0169        | -               | -                                        | -                                         |
| 0.6689 | 99   | 0.024         | -               | -                                        | -                                         |
| 0.6757 | 100  | 0.0166        | 0.0284          | 0.6215                                   | -                                         |
| 0.6824 | 101  | 0.0202        | -               | -                                        | -                                         |
| 0.6892 | 102  | 0.0181        | -               | -                                        | -                                         |
| 0.6959 | 103  | 0.0413        | -               | -                                        | -                                         |
| 0.7027 | 104  | 0.0537        | -               | -                                        | -                                         |
| 0.7095 | 105  | 0.0241        | -               | -                                        | -                                         |
| 0.7162 | 106  | 0.0199        | -               | -                                        | -                                         |
| 0.7230 | 107  | 0.0227        | -               | -                                        | -                                         |
| 0.7297 | 108  | 0.0283        | -               | -                                        | -                                         |
| 0.7365 | 109  | 0.0372        | -               | -                                        | -                                         |
| 0.7432 | 110  | 0.0193        | -               | -                                        | -                                         |
| 0.75   | 111  | 0.0147        | -               | -                                        | -                                         |
| 0.7568 | 112  | 0.0594        | -               | -                                        | -                                         |
| 0.7635 | 113  | 0.0185        | -               | -                                        | -                                         |
| 0.7703 | 114  | 0.0674        | -               | -                                        | -                                         |
| 0.7770 | 115  | 0.0212        | -               | -                                        | -                                         |
| 0.7838 | 116  | 0.0268        | -               | -                                        | -                                         |
| 0.7905 | 117  | 0.0233        | -               | -                                        | -                                         |
| 0.7973 | 118  | 0.0276        | -               | -                                        | -                                         |
| 0.8041 | 119  | 0.0242        | -               | -                                        | -                                         |
| 0.8108 | 120  | 0.034         | -               | -                                        | -                                         |
| 0.8176 | 121  | 0.0231        | -               | -                                        | -                                         |
| 0.8243 | 122  | 0.0252        | -               | -                                        | -                                         |
| 0.8311 | 123  | 0.0294        | -               | -                                        | -                                         |
| 0.8378 | 124  | 0.0205        | -               | -                                        | -                                         |
| 0.8446 | 125  | 0.0302        | -               | -                                        | -                                         |
| 0.8514 | 126  | 0.0468        | -               | -                                        | -                                         |
| 0.8581 | 127  | 0.0311        | -               | -                                        | -                                         |
| 0.8649 | 128  | 0.0365        | -               | -                                        | -                                         |
| 0.8716 | 129  | 0.0257        | -               | -                                        | -                                         |
| 0.8784 | 130  | 0.0339        | -               | -                                        | -                                         |
| 0.8851 | 131  | 0.0359        | -               | -                                        | -                                         |
| 0.8919 | 132  | 0.0404        | -               | -                                        | -                                         |
| 0.8986 | 133  | 0.0223        | -               | -                                        | -                                         |
| 0.9054 | 134  | 0.0232        | -               | -                                        | -                                         |
| 0.9122 | 135  | 0.0295        | -               | -                                        | -                                         |
| 0.9189 | 136  | 0.0244        | -               | -                                        | -                                         |
| 0.9257 | 137  | 0.0168        | -               | -                                        | -                                         |
| 0.9324 | 138  | 0.0319        | -               | -                                        | -                                         |
| 0.9392 | 139  | 0.0328        | -               | -                                        | -                                         |
| 0.9459 | 140  | 0.0295        | -               | -                                        | -                                         |
| 0.9527 | 141  | 0.0262        | -               | -                                        | -                                         |
| 0.9595 | 142  | 0.0238        | -               | -                                        | -                                         |
| 0.9662 | 143  | 0.0181        | -               | -                                        | -                                         |
| 0.9730 | 144  | 0.017         | -               | -                                        | -                                         |
| 0.9797 | 145  | 0.0244        | -               | -                                        | -                                         |
| 0.9865 | 146  | 0.0264        | -               | -                                        | -                                         |
| 0.9932 | 147  | 0.0194        | -               | -                                        | -                                         |
| 1.0    | 148  | 0.0028        | -               | -                                        | 0.6394                                    |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.2.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}