Upload folder using huggingface_hub

- README.md +402 -0
- config.json +40 -0
- model.safetensors +3 -0
- special_tokens_map.json +37 -0
- tokenizer.json +0 -0
- tokenizer_config.json +70 -0
- vocab.txt +0 -0

README.md
ADDED
@@ -0,0 +1,402 @@
---
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:173920
- loss:BinaryCrossEntropyLoss
base_model: MatMulMan/araelectra-base-discriminator-tydi-tafseer-pairs
pipeline_tag: text-ranking
library_name: sentence-transformers
---

# CrossEncoder based on MatMulMan/araelectra-base-discriminator-tydi-tafseer-pairs

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [MatMulMan/araelectra-base-discriminator-tydi-tafseer-pairs](https://huggingface.co/MatMulMan/araelectra-base-discriminator-tydi-tafseer-pairs) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [MatMulMan/araelectra-base-discriminator-tydi-tafseer-pairs](https://huggingface.co/MatMulMan/araelectra-base-discriminator-tydi-tafseer-pairs) <!-- at revision 7085ca8be3d1c45e2ce57f3d5dfb4c918ac1a37b -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("cross_encoder_model_id")
# Get scores for pairs of texts
pairs = [
    ['يعني يا ترى، الموظفين اللي بيشتغلوا في قسم الامتحانات بالجامعة، ليهم كام يوم إجازة للمذاكرة قبل الامتحانات؟ (تركيز على قسم الامتحانات وتحديد الفترة الزمنية)؟', 'القانون حدد 7 أيام فقط من تقديم الاستقالة علشان العامل يقدر يتراجع عنها. لو عدت المدة دي بدون ما يطلب التراجع، بتعتبر استقالته نهائية.'],
    ['ممكن أعرف القانون الجديد بيقول، سنه المعاش في شركات القطاع الخاص بقى كام دلوقتي؟', 'المكافأة هي مبلغ ثابت بياخده العامل عن السنين اللي اشتغلها. أما التعويض، فهو مبلغ إضافي بيتدفع لو حصلت له مشكلة زي فصل تعسفي أو إصابة. الاتنين مختلفين في السبب وطريقة الحساب.'],
    ['أقصى مبلغ ممكن يتخصم من المرتب أد إيه؟ (أد إيه = كم)', 'أقصى حد للخصم من المرتب هو 25% من صافي المرتب، زي ما القانون حدد، إلا إذا في حكم قضائي زي النفقة.'],
    ['ممكن أعرف ماذا الفرق الجوهري بين عقد الدوام اللي فيه تاريخ نهاية وعقد العمل المفتوح اللي ملوش تاريخ نهاية؟', 'أيوه، الأم المرضعة من حقها يوميًا "فترتين رضاعة" كل واحدة نص ساعة، أو تقدر تدمجهم كساعة كاملة. وده بيستمر لمدة 24 شهر من يوم الولادة.'],
    ['بالنسبة للاشتراكات، العامل بيتحمل جزء أد إيه منها وصاحب العمل بيتحمل الجزء الباقي؟ عايزين نعرف توزيع المساهمات بالضبط.', 'أيوه، القانون بيطلب تشكيل لجنة للسلامة والصحة المهنية في المنشآت الكبيرة، خصوصًا اللي فيها أكتر من عدد معين من العمال. اللجنة دي بتتابع تطبيق إجراءات السلامة.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'يعني يا ترى، الموظفين اللي بيشتغلوا في قسم الامتحانات بالجامعة، ليهم كام يوم إجازة للمذاكرة قبل الامتحانات؟ (تركيز على قسم الامتحانات وتحديد الفترة الزمنية)؟',
    [
        'القانون حدد 7 أيام فقط من تقديم الاستقالة علشان العامل يقدر يتراجع عنها. لو عدت المدة دي بدون ما يطلب التراجع، بتعتبر استقالته نهائية.',
        'المكافأة هي مبلغ ثابت بياخده العامل عن السنين اللي اشتغلها. أما التعويض، فهو مبلغ إضافي بيتدفع لو حصلت له مشكلة زي فصل تعسفي أو إصابة. الاتنين مختلفين في السبب وطريقة الحساب.',
        'أقصى حد للخصم من المرتب هو 25% من صافي المرتب، زي ما القانون حدد، إلا إذا في حكم قضائي زي النفقة.',
        'أيوه، الأم المرضعة من حقها يوميًا "فترتين رضاعة" كل واحدة نص ساعة، أو تقدر تدمجهم كساعة كاملة. وده بيستمر لمدة 24 شهر من يوم الولادة.',
        'أيوه، القانون بيطلب تشكيل لجنة للسلامة والصحة المهنية في المنشآت الكبيرة، خصوصًا اللي فيها أكتر من عدد معين من العمال. اللجنة دي بتتابع تطبيق إجراءات السلامة.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```

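The "Direct Usage (Transformers)" placeholder below is left empty by the card generator. For reference only, a minimal sketch with the plain `transformers` API might look like the following; `cross_encoder_model_id` is the same placeholder as above, and the final sigmoid mirrors the `activation_fn` recorded in this repository's `config.json`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Same placeholder id as in the sentence-transformers snippet above.
model_id = "cross_encoder_model_id"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# A question paired with a candidate answer (taken from the pairs above),
# encoded together as one sequence: [CLS] question [SEP] answer [SEP]
features = tokenizer(
    ["أقصى مبلغ ممكن يتخصم من المرتب أد إيه؟ (أد إيه = كم)"],
    ["أقصى حد للخصم من المرتب هو 25% من صافي المرتب، زي ما القانون حدد، إلا إذا في حكم قضائي زي النفقة."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**features).logits.squeeze(-1)
# config.json stores Sigmoid as the inference activation, so apply it
# to map the raw logit to a relevance score in [0, 1].
scores = torch.sigmoid(logits)
print(scores)
```
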
<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 173,920 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
  | details | <ul><li>min: 36 characters</li><li>mean: 116.64 characters</li><li>max: 320 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 142.71 characters</li><li>max: 399 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.26</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | <code>يعني يا ترى، الموظفين اللي بيشتغلوا في قسم الامتحانات بالجامعة، ليهم كام يوم إجازة للمذاكرة قبل الامتحانات؟ (تركيز على قسم الامتحانات وتحديد الفترة الزمنية)؟</code> | <code>القانون حدد 7 أيام فقط من تقديم الاستقالة علشان العامل يقدر يتراجع عنها. لو عدت المدة دي بدون ما يطلب التراجع، بتعتبر استقالته نهائية.</code> | <code>0.0</code> |
  | <code>ممكن أعرف القانون الجديد بيقول، سنه المعاش في شركات القطاع الخاص بقى كام دلوقتي؟</code> | <code>المكافأة هي مبلغ ثابت بياخده العامل عن السنين اللي اشتغلها. أما التعويض، فهو مبلغ إضافي بيتدفع لو حصلت له مشكلة زي فصل تعسفي أو إصابة. الاتنين مختلفين في السبب وطريقة الحساب.</code> | <code>0.0</code> |
  | <code>أقصى مبلغ ممكن يتخصم من المرتب أد إيه؟ (أد إيه = كم)</code> | <code>أقصى حد للخصم من المرتب هو 25% من صافي المرتب، زي ما القانون حدد، إلا إذا في حكم قضائي زي النفقة.</code> | <code>1.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
  ```json
  {
      "activation_fn": "torch.nn.modules.linear.Identity",
      "pos_weight": null
  }
  ```

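The card does not include the training script itself. Under the assumption that the run used the standard sentence-transformers v4 Cross Encoder trainer, a minimal sketch with the non-default hyperparameters from the next section would look like this; the one-row dataset is a stand-in for the real 173,920 pairs:

```python
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Start from the base model named in the card, with a single output label.
model = CrossEncoder(
    "MatMulMan/araelectra-base-discriminator-tydi-tafseer-pairs", num_labels=1
)

# Stand-in for the real dataset; column names follow the layout above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["أقصى مبلغ ممكن يتخصم من المرتب أد إيه؟ (أد إيه = كم)"],
    "sentence_1": ["أقصى حد للخصم من المرتب هو 25% من صافي المرتب، زي ما القانون حدد، إلا إذا في حكم قضائي زي النفقة."],
    "label": [1.0],
})

# Defaults match the JSON above: identity activation on the logit, no pos_weight.
loss = BinaryCrossEntropyLoss(model)

args = CrossEncoderTrainingArguments(
    output_dir="outputs",
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    disable_tqdm=True,
)

trainer = CrossEncoderTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
model.save_pretrained("outputs/final")
```
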
### Training Hyperparameters
#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `disable_tqdm`: True

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: True
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch  | Step  | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0460 | 500   | 0.5364        |
| 0.0920 | 1000  | 0.2314        |
| 0.1380 | 1500  | 0.151         |
| 0.1840 | 2000  | 0.1318        |
| 0.2300 | 2500  | 0.1201        |
| 0.2760 | 3000  | 0.1132        |
| 0.3220 | 3500  | 0.0935        |
| 0.3680 | 4000  | 0.082         |
| 0.4140 | 4500  | 0.0817        |
| 0.4600 | 5000  | 0.0804        |
| 0.5060 | 5500  | 0.0726        |
| 0.5520 | 6000  | 0.0662        |
| 0.5980 | 6500  | 0.0632        |
| 0.6440 | 7000  | 0.0579        |
| 0.6900 | 7500  | 0.0558        |
| 0.7360 | 8000  | 0.0448        |
| 0.7820 | 8500  | 0.0626        |
| 0.8280 | 9000  | 0.0419        |
| 0.8740 | 9500  | 0.0495        |
| 0.9200 | 10000 | 0.047         |
| 0.9660 | 10500 | 0.0447        |
| 1.0120 | 11000 | 0.0376        |
| 1.0580 | 11500 | 0.0342        |
| 1.1040 | 12000 | 0.0404        |
| 1.1500 | 12500 | 0.0364        |
| 1.1960 | 13000 | 0.0329        |
| 1.2420 | 13500 | 0.0373        |
| 1.2879 | 14000 | 0.0407        |
| 1.3339 | 14500 | 0.0298        |
| 1.3799 | 15000 | 0.0319        |
| 1.4259 | 15500 | 0.0361        |
| 1.4719 | 16000 | 0.0423        |
| 1.5179 | 16500 | 0.0349        |
| 1.5639 | 17000 | 0.0304        |
| 1.6099 | 17500 | 0.0291        |
| 1.6559 | 18000 | 0.0277        |
| 1.7019 | 18500 | 0.0288        |
| 1.7479 | 19000 | 0.0285        |
| 1.7939 | 19500 | 0.0288        |
| 1.8399 | 20000 | 0.0268        |
| 1.8859 | 20500 | 0.027         |
| 1.9319 | 21000 | 0.0215        |
| 1.9779 | 21500 | 0.0214        |
| 2.0239 | 22000 | 0.0263        |
| 2.0699 | 22500 | 0.0192        |
| 2.1159 | 23000 | 0.0242        |
| 2.1619 | 23500 | 0.0286        |
| 2.2079 | 24000 | 0.0144        |
| 2.2539 | 24500 | 0.0283        |
| 2.2999 | 25000 | 0.0209        |
| 2.3459 | 25500 | 0.0188        |
| 2.3919 | 26000 | 0.0211        |
| 2.4379 | 26500 | 0.0264        |
| 2.4839 | 27000 | 0.0245        |
| 2.5299 | 27500 | 0.023         |
| 2.5759 | 28000 | 0.0211        |
| 2.6219 | 28500 | 0.0248        |
| 2.6679 | 29000 | 0.0201        |
| 2.7139 | 29500 | 0.0194        |
| 2.7599 | 30000 | 0.0176        |
| 2.8059 | 30500 | 0.0194        |
| 2.8519 | 31000 | 0.0165        |
| 2.8979 | 31500 | 0.0209        |
| 2.9439 | 32000 | 0.0178        |
| 2.9899 | 32500 | 0.0166        |
| 3.0359 | 33000 | 0.0207        |
| 3.0819 | 33500 | 0.0143        |
| 3.1279 | 34000 | 0.0114        |
| 3.1739 | 34500 | 0.0208        |
| 3.2199 | 35000 | 0.0143        |
| 3.2659 | 35500 | 0.0221        |
| 3.3119 | 36000 | 0.0218        |
| 3.3579 | 36500 | 0.0144        |
| 3.4039 | 37000 | 0.0201        |
| 3.4499 | 37500 | 0.0172        |
| 3.4959 | 38000 | 0.0177        |
| 3.5419 | 38500 | 0.0129        |
| 3.5879 | 39000 | 0.013         |
| 3.6339 | 39500 | 0.016         |
| 3.6799 | 40000 | 0.0137        |
| 3.7259 | 40500 | 0.0171        |
| 3.7718 | 41000 | 0.0201        |
| 3.8178 | 41500 | 0.0166        |
| 3.8638 | 42000 | 0.0097        |
| 3.9098 | 42500 | 0.0146        |
| 3.9558 | 43000 | 0.0182        |

### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.54.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json
ADDED
@@ -0,0 +1,40 @@
{
  "architectures": [
    "ElectraForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "embedding_size": 768,
  "generator_hidden_size": 0.33333,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "electra",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid",
    "version": "4.1.0"
  },
  "summary_activation": "gelu",
  "summary_last_dropout": 0.1,
  "summary_type": "first",
  "summary_use_proj": true,
  "torch_dtype": "float32",
  "transformers_version": "4.54.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 64000
}
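As a quick consistency check, the architecture this config describes accounts for the 540,800,596-byte model.safetensors below: roughly 135M float32 parameters at 4 bytes each, plus a small safetensors header. A sketch (fields copied from the JSON above; omitted fields keep the ElectraConfig defaults):

```python
from transformers import ElectraConfig, ElectraForSequenceClassification

# Rebuild the architecture from the config fields listed above.
config = ElectraConfig(
    vocab_size=64000,
    embedding_size=768,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    max_position_embeddings=512,
    num_labels=1,
)
model = ElectraForSequenceClassification(config)
n_params = sum(p.numel() for p in model.parameters())
print(n_params, n_params * 4)  # ~135M parameters, ~540 MB in float32
```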
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe5325a3db0e1623e8d79ba256d9285bbeac0a926575fb93abcfb70001cd0c8d
size 540800596
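These three lines are a Git LFS pointer rather than the weights themselves; per the LFS spec, the oid is the SHA-256 of the actual file. After downloading, the blob can be verified against it (a sketch; the local path is assumed):

```python
import hashlib

# Hypothetical local path to the downloaded weights file.
h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == "fe5325a3db0e1623e8d79ba256d9285bbeac0a926575fb93abcfb70001cd0c8d"
```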
special_tokens_map.json
ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
The diff for this file is too large to render. See raw diff.
tokenizer_config.json
ADDED
@@ -0,0 +1,70 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": false,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_len": 512,
  "max_length": 512,
  "model_max_length": 512,
  "never_split": [
    "[بريد]",
    "[مستخدم]",
    "[رابط]"
  ],
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "ElectraTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
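A small sketch of how these settings behave at inference time: a cross-encoder input is encoded as one sequence, [CLS] question [SEP] answer [SEP], and pairs longer than model_max_length are trimmed with the longest_first strategy (the model id is the same placeholder used earlier):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cross_encoder_model_id")

# Encode a question/answer pair; longest_first trims whichever segment
# is longer until the pair fits in 512 tokens.
enc = tokenizer(
    "أقصى مبلغ ممكن يتخصم من المرتب أد إيه؟",
    "أقصى حد للخصم من المرتب هو 25% من صافي المرتب.",
    truncation="longest_first",
    max_length=512,
)
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
# ['[CLS]', ..., '[SEP]', ..., '[SEP]']
```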
vocab.txt
ADDED
The diff for this file is too large to render. See raw diff.