End of training

Files changed:
- README.md +102 -3
- config.json +18 -0
- model.safetensors +3 -0
- training_args.bin +3 -0

README.md
CHANGED
@@ -1,3 +1,102 @@
---
library_name: transformers
language:
- ar
license: mit
base_model: pyannote/speaker-diarization-3.1
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- igitsml/darija-synthetic-calls
model-index:
- name: speaker-segmentation-fine-tuned-darija
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# speaker-segmentation-fine-tuned-darija

This model is a fine-tuned version of [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/speaker-diarization-3.1) on the igitsml/darija-synthetic-calls dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3338
- Model Preparation Time: 0.0061
- DER: 0.1220
- False Alarm: 0.0235
- Missed Detection: 0.0296
- Confusion: 0.0688

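The checkpoint is meant to serve as the segmentation model inside the pyannote speaker-diarization-3.1 pipeline. Below is a minimal usage sketch following the pattern documented for the diarizers library; the repository id is a placeholder (the exact repo id is not stated in this commit), and `pipeline._segmentation` is a pyannote-internal attribute, so details may differ across versions.

```python
import torch
from pyannote.audio import Pipeline
from diarizers import SegmentationModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the stock pipeline that this checkpoint was fine-tuned from
# (requires accepting the pyannote model terms and a Hugging Face token).
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
pipeline.to(device)

# Load the fine-tuned segmentation weights and convert them to a pyannote model.
# "<org>/speaker-segmentation-fine-tuned-darija" is a placeholder repo id.
segmentation = SegmentationModel.from_pretrained("<org>/speaker-segmentation-fine-tuned-darija")
segmentation = segmentation.to_pyannote_model()

# Swap the pipeline's segmentation model for the fine-tuned one (internal attribute).
pipeline._segmentation.model = segmentation.to(device)

# Diarize a 16 kHz mono recording.
diarization = pipeline("call.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```
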
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

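The training data listed in the metadata is igitsml/darija-synthetic-calls. A minimal sketch for pulling it with the datasets library is below; the split names and column layout are not documented in this commit, so inspect the loaded dataset rather than assuming specific fields.

```python
from datasets import load_dataset

# Load the dataset referenced in the model card metadata.
ds = load_dataset("igitsml/darija-synthetic-calls")

# Print the available splits and columns before relying on any particular field.
print(ds)

# Peek at one example from the first available split.
first_split = next(iter(ds.values()))
print(first_split[0])
```
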
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP

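As a rough reconstruction only, the values above map onto transformers.TrainingArguments as sketched below; output_dir is a placeholder and fp16=True is an assumption standing in for "Native AMP", since the actual training script is not part of this commit.

```python
from transformers import TrainingArguments

# Approximate restatement of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="speaker-segmentation-fine-tuned-darija",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    gradient_accumulation_steps=2,    # effective train batch size: 16 * 2 = 32
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=30,
    fp16=True,                        # assumption: "Native AMP" mixed precision
)
```
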
### Training results

| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | DER | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:-----:|:---------------:|:----------------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.7164 | 1.0 | 683 | 0.8500 | 0.0061 | 0.2406 | 0.0365 | 0.0432 | 0.1609 |
| 0.6075 | 2.0 | 1366 | 0.6868 | 0.0061 | 0.2182 | 0.0361 | 0.0408 | 0.1413 |
| 0.5213 | 3.0 | 2049 | 0.5659 | 0.0061 | 0.1947 | 0.0329 | 0.0395 | 0.1224 |
| 0.4664 | 4.0 | 2732 | 0.5040 | 0.0061 | 0.1821 | 0.0306 | 0.0372 | 0.1143 |
| 0.411 | 5.0 | 3415 | 0.4678 | 0.0061 | 0.1738 | 0.0297 | 0.0355 | 0.1086 |
| 0.4205 | 6.0 | 4098 | 0.4503 | 0.0061 | 0.1682 | 0.0286 | 0.0348 | 0.1048 |
| 0.4133 | 7.0 | 4781 | 0.4330 | 0.0061 | 0.1629 | 0.0285 | 0.0336 | 0.1009 |
| 0.3936 | 8.0 | 5464 | 0.4191 | 0.0061 | 0.1579 | 0.0278 | 0.0329 | 0.0972 |
| 0.3799 | 9.0 | 6147 | 0.4080 | 0.0061 | 0.1529 | 0.0276 | 0.0323 | 0.0931 |
| 0.3557 | 10.0 | 6830 | 0.4007 | 0.0061 | 0.1500 | 0.0269 | 0.0317 | 0.0914 |
| 0.3564 | 11.0 | 7513 | 0.3915 | 0.0061 | 0.1465 | 0.0258 | 0.0319 | 0.0888 |
| 0.3658 | 12.0 | 8196 | 0.3853 | 0.0061 | 0.1433 | 0.0258 | 0.0314 | 0.0861 |
| 0.3606 | 13.0 | 8879 | 0.3784 | 0.0061 | 0.1408 | 0.0255 | 0.0311 | 0.0842 |
| 0.3685 | 14.0 | 9562 | 0.3739 | 0.0061 | 0.1390 | 0.0255 | 0.0308 | 0.0827 |
| 0.3364 | 15.0 | 10245 | 0.3706 | 0.0061 | 0.1378 | 0.0253 | 0.0306 | 0.0818 |
| 0.3436 | 16.0 | 10928 | 0.3698 | 0.0061 | 0.1369 | 0.0248 | 0.0307 | 0.0814 |
| 0.3339 | 17.0 | 11611 | 0.3636 | 0.0061 | 0.1353 | 0.0249 | 0.0304 | 0.0799 |
| 0.3416 | 18.0 | 12294 | 0.3615 | 0.0061 | 0.1343 | 0.0246 | 0.0304 | 0.0792 |
| 0.3396 | 19.0 | 12977 | 0.3593 | 0.0061 | 0.1337 | 0.0243 | 0.0305 | 0.0789 |
| 0.344 | 20.0 | 13660 | 0.3572 | 0.0061 | 0.1330 | 0.0243 | 0.0305 | 0.0782 |
| 0.3372 | 21.0 | 14343 | 0.3541 | 0.0061 | 0.1320 | 0.0245 | 0.0302 | 0.0773 |
| 0.3271 | 22.0 | 15026 | 0.3549 | 0.0061 | 0.1313 | 0.0242 | 0.0302 | 0.0768 |
| 0.3206 | 23.0 | 15709 | 0.3516 | 0.0061 | 0.1310 | 0.0243 | 0.0301 | 0.0766 |
| 0.3359 | 24.0 | 16392 | 0.3524 | 0.0061 | 0.1308 | 0.0242 | 0.0301 | 0.0765 |
| 0.322 | 25.0 | 17075 | 0.3512 | 0.0061 | 0.1304 | 0.0241 | 0.0301 | 0.0762 |
| 0.3169 | 26.0 | 17758 | 0.3507 | 0.0061 | 0.1301 | 0.0243 | 0.0300 | 0.0758 |
| 0.3351 | 27.0 | 18441 | 0.3508 | 0.0061 | 0.1300 | 0.0243 | 0.0299 | 0.0758 |
| 0.3221 | 28.0 | 19124 | 0.3501 | 0.0061 | 0.1300 | 0.0243 | 0.0299 | 0.0758 |
| 0.324 | 29.0 | 19807 | 0.3499 | 0.0061 | 0.1300 | 0.0243 | 0.0299 | 0.0758 |
| 0.3271 | 30.0 | 20490 | 0.3499 | 0.0061 | 0.1300 | 0.0243 | 0.0299 | 0.0758 |

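DER here is the sum of the false alarm, missed detection, and confusion rates; on the final evaluation set, 0.0235 + 0.0296 + 0.0688 ≈ 0.1220, matching the reported DER. The sketch below shows how the same breakdown can be computed with pyannote.metrics; the reference and hypothesis annotations are toy values for illustration only.

```python
from pyannote.core import Annotation, Segment
from pyannote.metrics.diarization import DiarizationErrorRate

# Toy ground truth: two speakers over 20 seconds.
reference = Annotation()
reference[Segment(0.0, 10.0)] = "speaker_A"
reference[Segment(10.0, 20.0)] = "speaker_B"

# Toy system output: 1 s of confusion around the turn change, 1 s missed at the end.
hypothesis = Annotation()
hypothesis[Segment(0.0, 11.0)] = "spk1"
hypothesis[Segment(11.0, 19.0)] = "spk2"

metric = DiarizationErrorRate()
der = metric(reference, hypothesis)
components = metric(reference, hypothesis, detailed=True)

print(f"DER = {der:.3f}")
print("false alarm      :", components["false alarm"])
print("missed detection :", components["missed detection"])
print("confusion        :", components["confusion"])
```
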
### Framework versions

- Transformers 4.57.3
- Pytorch 2.5.1+cu121
- Datasets 4.4.2
- Tokenizers 0.22.2

config.json
ADDED
@@ -0,0 +1,18 @@
{
  "architectures": [
    "SegmentationModel"
  ],
  "chunk_duration": 10.0,
  "dtype": "float32",
  "max_speakers_per_chunk": 3,
  "max_speakers_per_frame": 2,
  "min_duration": null,
  "model_type": "pyannet",
  "sample_rate": 16000,
  "transformers_version": "4.57.3",
  "warm_up": [
    0.0,
    0.0
  ],
  "weigh_by_cardinality": false
}

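The config pins the model to 10-second chunks of 16 kHz audio, with at most 3 speakers per chunk and 2 overlapping speakers per frame. A small sketch for fetching this config and resampling local audio to match its sample rate is below; torchaudio is an assumed dependency and the repository id is again a placeholder.

```python
import json

import torchaudio
from huggingface_hub import hf_hub_download

# Download the published config; "<org>/speaker-segmentation-fine-tuned-darija" is a placeholder repo id.
config_path = hf_hub_download("<org>/speaker-segmentation-fine-tuned-darija", "config.json")
with open(config_path) as f:
    config = json.load(f)

# Resample local audio to the sample rate the model expects (16000 Hz per this config).
waveform, sample_rate = torchaudio.load("call.wav")
if sample_rate != config["sample_rate"]:
    waveform = torchaudio.functional.resample(
        waveform, orig_freq=sample_rate, new_freq=config["sample_rate"]
    )
```
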
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7fe45631ad5f1fbf6a38ccfd9a608a7eee2263334a7aa31d268b806f1b8ae4e1
size 5899124

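model.safetensors is stored through Git LFS, so the committed file is only a pointer: the oid is the SHA-256 of the actual ~5.9 MB weights file and size is its byte count. Assuming the real file has already been downloaded locally, a quick integrity check against the pointer could look like this:

```python
import hashlib
from pathlib import Path

# Values copied from the LFS pointer above.
EXPECTED_SHA256 = "7fe45631ad5f1fbf6a38ccfd9a608a7eee2263334a7aa31d268b806f1b8ae4e1"
EXPECTED_SIZE = 5899124

path = Path("model.safetensors")  # assumed local download location
assert path.stat().st_size == EXPECTED_SIZE, "size does not match the LFS pointer"

digest = hashlib.sha256(path.read_bytes()).hexdigest()
assert digest == EXPECTED_SHA256, "sha256 does not match the LFS pointer"
print("model.safetensors matches its LFS pointer")
```
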
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f2c6015f30f4b8bb1ddd75b49f5e7a4365be08988344e1ecbcd40052627b955
size 5496