Evaluation results

All scores are self-reported MTEB results.

| Metric | Dataset | Split | Value |
|---|---|---|---|
| cos_sim_pearson | MTEB AFQMC | validation | 15.582 |
| cos_sim_spearman | MTEB AFQMC | validation | 15.695 |
| euclidean_pearson | MTEB AFQMC | validation | 16.249 |
| euclidean_spearman | MTEB AFQMC | validation | 15.695 |
| manhattan_pearson | MTEB AFQMC | validation | 16.110 |
| manhattan_spearman | MTEB AFQMC | validation | 15.600 |
| cos_sim_pearson | MTEB ATEC | test | 19.078 |
| cos_sim_spearman | MTEB ATEC | test | 20.267 |
| euclidean_pearson | MTEB ATEC | test | 21.460 |
| euclidean_spearman | MTEB ATEC | test | 20.267 |
| manhattan_pearson | MTEB ATEC | test | 21.211 |
| manhattan_spearman | MTEB ATEC | test | 20.036 |
| accuracy | MTEB AmazonReviewsClassification (zh) | test | 38.248 |
| f1 | MTEB AmazonReviewsClassification (zh) | test | 36.519 |
| cos_sim_pearson | MTEB BQ | test | 36.036 |
| cos_sim_spearman | MTEB BQ | test | 36.329 |
| euclidean_pearson | MTEB BQ | test | 37.020 |
| euclidean_spearman | MTEB BQ | test | 36.329 |
| manhattan_pearson | MTEB BQ | test | 36.826 |
| manhattan_spearman | MTEB BQ | test | 36.145 |
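As a rough illustration of what these metrics measure: for STS-style tasks, MTEB scores an embedding model by correlating a pairwise similarity score (cosine, or negated Euclidean/Manhattan distance) against gold similarity labels, using Pearson and Spearman correlation. The sketch below uses randomly generated toy embeddings and labels purely as placeholders, not real model outputs or MTEB data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Toy stand-ins (hypothetical): embeddings for 8 sentence pairs, plus gold labels.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(8, 16))
emb_b = rng.normal(size=(8, 16))
gold = rng.uniform(0.0, 5.0, size=8)  # gold similarity scores per pair

# Cosine similarity per pair.
cos_scores = (emb_a * emb_b).sum(axis=1) / (
    np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1)
)
# Distances are negated so that "more similar" sorts the same way as the labels.
euclid_scores = -np.linalg.norm(emb_a - emb_b, axis=1)
manhattan_scores = -np.abs(emb_a - emb_b).sum(axis=1)

cos_sim_pearson = pearsonr(gold, cos_scores)[0]
cos_sim_spearman = spearmanr(gold, cos_scores)[0]
euclidean_spearman = spearmanr(gold, euclid_scores)[0]
manhattan_spearman = spearmanr(gold, manhattan_scores)[0]

print(cos_sim_pearson, cos_sim_spearman, euclidean_spearman, manhattan_spearman)
```

The reported values appear to be these correlations scaled to 0–100. Note that Spearman correlation depends only on rank order, which is why the euclidean_spearman and cos_sim_spearman rows can coincide when the two scores rank pairs identically.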