---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:685672
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-base
widget:
- source_sentence: كيف يمكنني اختراق أو التجسس على محادثة WhatsApp لشخص ما عن بعد؟
sentences:
- كيف يمكنني اختراق حساب واتس اب لشخص ما؟
- ما هو معنى الحياة؟
- ولاقت الاتفاقية ترحيب أعضاء آخرين من عصبة الأمم.
- source_sentence: لماذا يتم استيراد الحلوى المالحة في أستراليا؟
sentences:
- ما هي بعض ألعاب حفلات العشاء الممتعة؟
- شخص نائم
- لماذا يتم استيراد الحلوى المالحة في إيطاليا؟
- source_sentence: كثير من الناس يسيرون في الخارج وهم يحملون لافتات احتجاج، أبرز لافتة
تقول "أنقذوا مدارسنا".
sentences:
- رجل نائم يجلس على متن طائرة ويرتدي سماعات رأس
- الناس يحتجون
- «ولو أنا أهلكناهم بعذاب من قبله» قبل محمد الرسول «لقالوا» يوم القيامة «ربنا لولا»
هلا «أرسلت إلينا رسولاً فنتبع آياتك» المرسل بها «من قبل أن نذل» في القيامة «ونخزى»
في جهنم.
- source_sentence: ما هي أفضل دولة للهجرة؟
sentences:
- المدرب يتحدث مع لاعب خلفه خلال فترة التوقف
- أطفال مقيدين بالسيارات يتدربون على الكاراتيه
- ما هي أفضل البلدان للمهاجرين؟
- source_sentence: كيف يمكنني الترويج لموقعك الإلكتروني؟
sentences:
- امرأة ترقص
- كيف يمكنك معرفة كم من الوقت كان شخصان صديقان على الفيسبوك؟
- ما هي أفضل طريقة للترويج لموقعك الإلكتروني؟
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 arabic
type: sts17-arabic
metrics:
- type: pearson_cosine
value: 0.8012327818255766
name: Pearson Cosine
- type: spearman_cosine
value: 0.803049645424906
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts17 arabic final
type: sts17-arabic-final
metrics:
- type: pearson_cosine
value: 0.8012367697375709
name: Pearson Cosine
- type: spearman_cosine
value: 0.8030869428918248
name: Spearman Cosine
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on roughly 686K Arabic sentence pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision 835193815a3936a24a0ee7dc9e3d48c1fbb19c55 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("AhmedZaky1/arabic-e5-multilingual-finetuned-20250530")
# Run inference
sentences = [
'كيف يمكنني الترويج لموقعك الإلكتروني؟',
'ما هي أفضل طريقة للترويج لموقعك الإلكتروني؟',
'امرأة ترقص',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
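### Direct Usage (Transformers)
Because the pipeline above is just `XLMRobertaModel` → attention-masked mean pooling → L2 normalization (see the Full Model Architecture section), the model can also be run at the Transformers level. The following is an illustrative sketch of that recipe, not part of the original training code:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

repo = "AhmedZaky1/arabic-e5-multilingual-finetuned-20250530"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

sentences = [
    "كيف يمكنني الترويج لموقعك الإلكتروني؟",
    "ما هي أفضل طريقة للترويج لموقعك الإلكتروني؟",
]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens, matching the Pooling module above
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
embeddings = F.normalize(embeddings, p=2, dim=1)  # the Normalize() module

print(embeddings.shape)  # torch.Size([2, 768])
```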
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `sts17-arabic` and `sts17-arabic-final`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | sts17-arabic | sts17-arabic-final |
|:--------------------|:-------------|:-------------------|
| pearson_cosine | 0.8012 | 0.8012 |
| **spearman_cosine** | **0.8030**   | **0.8031**         |
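For reference, such scores can be reproduced with the evaluator directly. A minimal sketch, where the sentence pairs and gold scores are illustrative placeholders rather than the actual STS17 (Arabic) data:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("AhmedZaky1/arabic-e5-multilingual-finetuned-20250530")

# Placeholder pairs with gold similarity scores; the reported numbers used STS17 (Arabic)
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["الناس يحتجون", "امرأة ترقص"],
    sentences2=["كثير من الناس يحملون لافتات احتجاج", "رجل نائم"],
    scores=[0.9, 0.1],
    name="sts17-arabic",
)
metrics = evaluator(model)  # dict of metrics, e.g. {"sts17-arabic_spearman_cosine": ...}
print(metrics)
```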
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 685,672 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 19.49 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 15.81 tokens</li><li>max: 70 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------|:--------------------------------------------------|
| <code>فتاة في قميص أزرق تمشي مع رجل.</code> | <code>الفتاة ترتدي قميصاً أزرق</code> |
| <code>ما هو أفضل ماجستير في إدارة الأعمال أو كاليفورنيا؟</code> | <code>ما هو أفضل CA أو ماجستير في الإدارة؟</code> |
| <code>الناس يبنيون منزلاً</code> | <code>الأفراد يقومون ببناء منزل.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
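As an illustration (not the original training script), this loss can be instantiated with exactly these parameters; each in-batch positive then serves as a negative for every other anchor:
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("intfloat/multilingual-e5-base")

# scale=20.0 and cosine similarity match the parameters listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```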
### Evaluation Dataset
#### Unnamed Dataset
* Size: 15,000 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 19.16 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 15.45 tokens</li><li>max: 85 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| <code>ثلاثة رجال أعمال يسيرون في شارع مزدحم</code> | <code>الناس يتحركون في الشارع</code> |
| <code>أين يمكنني أن أحصل على أفضل نظام رذاذ الحريق في سيدني؟</code> | <code>أين يمكنني الحصول على خدمات رشاشات الحريق ذات الجودة العالية في سيدني؟</code> |
| <code>كم تبلغ مساحة نوفا سكوشا؟</code> | <code>كم تصل المساحة الجغرافية لولاية نوفا سكوشا؟</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `gradient_accumulation_steps`: 4
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_drop_last`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
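A hedged reconstruction of how these non-default values would be passed to the trainer (`output_dir` is a placeholder; the original training script is not included in this card):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="arabic-e5-multilingual-finetuned",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=64,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    dataloader_drop_last=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)
```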
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sts17-arabic_spearman_cosine | sts17-arabic-final_spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|:----------------------------:|:----------------------------------:|
| 0.0747 | 100 | 5.7187 | - | - | - |
| 0.1494 | 200 | 1.199 | - | - | - |
| 0.2240 | 300 | 1.0422 | - | - | - |
| 0.2987 | 400 | 0.9514 | - | - | - |
| 0.3734 | 500 | 0.9002 | 0.0478 | 0.8091 | - |
| 0.4481 | 600 | 0.848 | - | - | - |
| 0.5228 | 700 | 0.8298 | - | - | - |
| 0.5975 | 800 | 0.7915 | - | - | - |
| 0.6721 | 900 | 0.7906 | - | - | - |
| 0.7468 | 1000 | 0.7534 | 0.0375 | 0.7950 | - |
| 0.8215 | 1100 | 0.7384 | - | - | - |
| 0.8962 | 1200 | 0.7252 | - | - | - |
| 0.9709 | 1300 | 0.7311 | - | - | - |
| 1.0456 | 1400 | 0.7006 | - | - | - |
| 1.1202 | 1500 | 0.6611 | 0.0334 | 0.8026 | - |
| 1.1949 | 1600 | 0.6279 | - | - | - |
| 1.2696 | 1700 | 0.6072 | - | - | - |
| 1.3443 | 1800 | 0.596 | - | - | - |
| 1.4190 | 1900 | 0.5614 | - | - | - |
| 1.4937 | 2000 | 0.5721 | 0.0300 | 0.8041 | - |
| 1.5683 | 2100 | 0.5681 | - | - | - |
| 1.6430 | 2200 | 0.5531 | - | - | - |
| 1.7177 | 2300 | 0.5564 | - | - | - |
| 1.7924 | 2400 | 0.564 | - | - | - |
| 1.8671 | 2500 | 0.5395 | 0.0288 | 0.8066 | - |
| 1.9417 | 2600 | 0.5729 | - | - | - |
| 2.0164 | 2700 | 0.5436 | - | - | - |
| 2.0911 | 2800 | 0.5365 | - | - | - |
| 2.1658 | 2900 | 0.5087 | - | - | - |
| 2.2405 | 3000 | 0.4991 | 0.0267 | 0.8009 | - |
| 2.3152 | 3100 | 0.4761 | - | - | - |
| 2.3898 | 3200 | 0.4711 | - | - | - |
| 2.4645 | 3300 | 0.4795 | - | - | - |
| 2.5392 | 3400 | 0.4732 | - | - | - |
| 2.6139 | 3500 | 0.4735 | 0.0264 | 0.8029 | - |
| 2.6886 | 3600 | 0.483 | - | - | - |
| 2.7633 | 3700 | 0.4755 | - | - | - |
| 2.8379 | 3800 | 0.4783 | - | - | - |
| 2.9126 | 3900 | 0.4854 | - | - | - |
| 2.9873 | 4000 | 0.4884 | 0.0260 | 0.8030 | - |
| 3.0 | 4017 | - | - | - | 0.8031 |
### Framework Versions
- Python: 3.12.7
- Sentence Transformers: 3.3.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->