---
library_name: transformers
license: mit
base_model: pyannote/segmentation-3.0
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- syvai/synthetic-diarization-mixed-speakers
model-index:
- name: speaker-segmentation-fine-tuned
  results: []
---

# speaker-segmentation-fine-tuned

This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the syvai/synthetic-diarization-mixed-speakers dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2088
- Model Preparation Time: 0.0013
- DER: 0.0754
- False Alarm: 0.0218
- Missed Detection: 0.0180
- Confusion: 0.0356

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | DER    | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.2579        | 1.0   | 1232 | 0.2571          | 0.0013                 | 0.0941 | 0.0253      | 0.0190           | 0.0498    |
| 0.2437        | 2.0   | 2464 | 0.2483          | 0.0013                 | 0.0898 | 0.0230      | 0.0196           | 0.0471    |
| 0.2169        | 3.0   | 3696 | 0.2200          | 0.0013                 | 0.0787 | 0.0232      | 0.0179           | 0.0376    |
| 0.206         | 4.0   | 4928 | 0.2104          | 0.0013                 | 0.0760 | 0.0218      | 0.0181           | 0.0361    |
| 0.1928        | 5.0   | 6160 | 0.2088          | 0.0013                 | 0.0754 | 0.0218      | 0.0180           | 0.0356    |

### Framework versions

- Transformers 4.53.0
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.2
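
## Usage

A minimal sketch of how a fine-tuned segmentation checkpoint like this one can be plugged into a pyannote.audio speaker diarization pipeline. The repo id `syvai/speaker-segmentation-fine-tuned`, the embedding model, and the clustering hyperparameters below are assumptions for illustration, not values taken from this card; adjust them to your setup.

```python
# Sketch only: assumes pyannote.audio >= 3.x is installed and the checkpoint
# is available on the Hugging Face Hub under the hypothetical repo id below.
from pyannote.audio import Model
from pyannote.audio.pipelines import SpeakerDiarization

# Load the fine-tuned segmentation checkpoint.
segmentation = Model.from_pretrained("syvai/speaker-segmentation-fine-tuned")

# Build a diarization pipeline around it. The embedding model and clustering
# method are illustrative defaults, not prescribed by this card.
pipeline = SpeakerDiarization(
    segmentation=segmentation,
    embedding="pyannote/wespeaker-voxceleb-resnet34-LM",
    clustering="AgglomerativeClustering",
)
pipeline.instantiate({
    "segmentation": {"min_duration_off": 0.0},
    "clustering": {"method": "centroid", "min_cluster_size": 12, "threshold": 0.7},
})

# Run diarization on an audio file and print the speaker turns.
diarization = pipeline("audio.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```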