
speaker-segmentation-fine-tuned-hindi

This model is a fine-tuned speaker segmentation model for Hindi; the base model and fine-tuning dataset are not specified in this card. It achieves the following results on the evaluation set (a tentative usage sketch follows the list):

  • Loss: 0.3534
  • Model preparation time: 0.0039
  • DER (diarization error rate): 0.1283
  • False alarm: 0.0212
  • Missed detection: 0.0473
  • Confusion: 0.0598
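The card does not name the base model, but the reported metrics (DER, false alarm, missed detection, confusion) and the ~1.47M parameter count are typical of a pyannote.audio segmentation checkpoint. Under that assumption, the following is a minimal inference sketch; the repository id, token, and audio path are placeholders, not values from this card.

```python
# Hedged sketch: assumes this is a pyannote.audio segmentation checkpoint.
from pyannote.audio import Inference, Model

model = Model.from_pretrained(
    "your-username/speaker-segmentation-fine-tuned-hindi",  # placeholder repo id
    use_auth_token="hf_...",  # the repository is gated, so an access token is needed
)

# Sliding-window inference: the result holds per-frame, per-speaker
# activation scores that can be binarized into speech turns.
inference = Inference(model, step=2.5)
segmentation = inference("audio.wav")  # placeholder audio file
print(segmentation)
```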

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a matching TrainingArguments sketch follows the list:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (fused PyTorch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • num_epochs: 50
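For reference, here is a sketch of the transformers.TrainingArguments that would reproduce the values listed above. The output_dir is a placeholder, and the surrounding training script (model, datasets, Trainer) is not part of this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="speaker-segmentation-fine-tuned-hindi",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch_fused",  # fused AdamW implementation in PyTorch
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=50,
)
```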

Training results

| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | DER | False Alarm | Missed Detection | Confusion |
|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 0.41 | 1.0 | 47 | 0.5364 | 0.0039 | 0.1575 | 0.0245 | 0.0710 | 0.0621 |
| 0.3468 | 2.0 | 94 | 0.4971 | 0.0039 | 0.1530 | 0.0268 | 0.0648 | 0.0614 |
| 0.3272 | 3.0 | 141 | 0.4690 | 0.0039 | 0.1513 | 0.0273 | 0.0620 | 0.0620 |
| 0.3174 | 4.0 | 188 | 0.4478 | 0.0039 | 0.1481 | 0.0272 | 0.0607 | 0.0602 |
| 0.3181 | 5.0 | 235 | 0.4342 | 0.0039 | 0.1473 | 0.0267 | 0.0591 | 0.0615 |
| 0.2946 | 6.0 | 282 | 0.4204 | 0.0039 | 0.1409 | 0.0270 | 0.0566 | 0.0574 |
| 0.2731 | 7.0 | 329 | 0.4103 | 0.0039 | 0.1394 | 0.0266 | 0.0558 | 0.0571 |
| 0.2748 | 8.0 | 376 | 0.4042 | 0.0039 | 0.1378 | 0.0272 | 0.0543 | 0.0564 |
| 0.2717 | 9.0 | 423 | 0.3977 | 0.0039 | 0.1369 | 0.0262 | 0.0542 | 0.0565 |
| 0.2822 | 10.0 | 470 | 0.3902 | 0.0039 | 0.1371 | 0.0257 | 0.0541 | 0.0573 |
| 0.2743 | 11.0 | 517 | 0.3836 | 0.0039 | 0.1360 | 0.0256 | 0.0538 | 0.0566 |
| 0.2692 | 12.0 | 564 | 0.3788 | 0.0039 | 0.1355 | 0.0254 | 0.0532 | 0.0569 |
| 0.2518 | 13.0 | 611 | 0.3763 | 0.0039 | 0.1365 | 0.0246 | 0.0533 | 0.0587 |
| 0.2605 | 14.0 | 658 | 0.3727 | 0.0039 | 0.1366 | 0.0250 | 0.0521 | 0.0594 |
| 0.2477 | 15.0 | 705 | 0.3701 | 0.0039 | 0.1345 | 0.0250 | 0.0512 | 0.0582 |
| 0.2439 | 16.0 | 752 | 0.3668 | 0.0039 | 0.1327 | 0.0251 | 0.0496 | 0.0579 |
| 0.2402 | 17.0 | 799 | 0.3662 | 0.0039 | 0.1317 | 0.0241 | 0.0498 | 0.0577 |
| 0.2476 | 18.0 | 846 | 0.3668 | 0.0039 | 0.1317 | 0.0234 | 0.0500 | 0.0582 |
| 0.2288 | 19.0 | 893 | 0.3660 | 0.0039 | 0.1326 | 0.0236 | 0.0493 | 0.0597 |
| 0.2373 | 20.0 | 940 | 0.3646 | 0.0039 | 0.1321 | 0.0237 | 0.0489 | 0.0595 |
| 0.2279 | 21.0 | 987 | 0.3638 | 0.0039 | 0.1326 | 0.0240 | 0.0488 | 0.0598 |
| 0.2349 | 22.0 | 1034 | 0.3621 | 0.0039 | 0.1318 | 0.0234 | 0.0488 | 0.0596 |
| 0.2348 | 23.0 | 1081 | 0.3608 | 0.0039 | 0.1308 | 0.0227 | 0.0489 | 0.0592 |
| 0.23 | 24.0 | 1128 | 0.3600 | 0.0039 | 0.1305 | 0.0223 | 0.0492 | 0.0589 |
| 0.2293 | 25.0 | 1175 | 0.3603 | 0.0039 | 0.1304 | 0.0225 | 0.0489 | 0.0590 |
| 0.219 | 26.0 | 1222 | 0.3615 | 0.0039 | 0.1308 | 0.0227 | 0.0487 | 0.0594 |
| 0.2235 | 27.0 | 1269 | 0.3603 | 0.0039 | 0.1298 | 0.0224 | 0.0486 | 0.0588 |
| 0.218 | 28.0 | 1316 | 0.3592 | 0.0039 | 0.1288 | 0.0222 | 0.0485 | 0.0581 |
| 0.216 | 29.0 | 1363 | 0.3591 | 0.0039 | 0.1285 | 0.0219 | 0.0487 | 0.0580 |
| 0.2265 | 30.0 | 1410 | 0.3543 | 0.0039 | 0.1285 | 0.0216 | 0.0483 | 0.0587 |
| 0.2199 | 31.0 | 1457 | 0.3551 | 0.0039 | 0.1290 | 0.0218 | 0.0482 | 0.0589 |
| 0.2113 | 32.0 | 1504 | 0.3552 | 0.0039 | 0.1285 | 0.0215 | 0.0483 | 0.0587 |
| 0.2122 | 33.0 | 1551 | 0.3546 | 0.0039 | 0.1285 | 0.0215 | 0.0481 | 0.0590 |
| 0.2232 | 34.0 | 1598 | 0.3542 | 0.0039 | 0.1284 | 0.0216 | 0.0479 | 0.0590 |
| 0.2049 | 35.0 | 1645 | 0.3544 | 0.0039 | 0.1284 | 0.0215 | 0.0479 | 0.0590 |
| 0.2155 | 36.0 | 1692 | 0.3547 | 0.0039 | 0.1284 | 0.0214 | 0.0479 | 0.0591 |
| 0.2056 | 37.0 | 1739 | 0.3549 | 0.0039 | 0.1286 | 0.0214 | 0.0479 | 0.0593 |
| 0.2047 | 38.0 | 1786 | 0.3551 | 0.0039 | 0.1286 | 0.0214 | 0.0477 | 0.0595 |
| 0.2155 | 39.0 | 1833 | 0.3550 | 0.0039 | 0.1285 | 0.0212 | 0.0479 | 0.0594 |
| 0.209 | 40.0 | 1880 | 0.3547 | 0.0039 | 0.1284 | 0.0211 | 0.0478 | 0.0596 |
| 0.2021 | 41.0 | 1927 | 0.3550 | 0.0039 | 0.1285 | 0.0211 | 0.0477 | 0.0596 |
| 0.2085 | 42.0 | 1974 | 0.3545 | 0.0039 | 0.1285 | 0.0212 | 0.0475 | 0.0597 |
| 0.2161 | 43.0 | 2021 | 0.3535 | 0.0039 | 0.1283 | 0.0211 | 0.0474 | 0.0598 |
| 0.215 | 44.0 | 2068 | 0.3543 | 0.0039 | 0.1284 | 0.0212 | 0.0474 | 0.0598 |
| 0.2139 | 45.0 | 2115 | 0.3535 | 0.0039 | 0.1284 | 0.0212 | 0.0474 | 0.0598 |
| 0.2033 | 46.0 | 2162 | 0.3535 | 0.0039 | 0.1283 | 0.0211 | 0.0474 | 0.0598 |
| 0.2019 | 47.0 | 2209 | 0.3535 | 0.0039 | 0.1283 | 0.0212 | 0.0474 | 0.0598 |
| 0.2109 | 48.0 | 2256 | 0.3534 | 0.0039 | 0.1283 | 0.0212 | 0.0473 | 0.0598 |
| 0.2117 | 49.0 | 2303 | 0.3534 | 0.0039 | 0.1283 | 0.0212 | 0.0473 | 0.0598 |
| 0.1957 | 50.0 | 2350 | 0.3534 | 0.0039 | 0.1283 | 0.0212 | 0.0473 | 0.0598 |
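As a quick sanity check on the table, DER is by definition the sum of its three components (false alarm, missed detection, and confusion, each expressed as a fraction of total reference speech time). The final-epoch components indeed sum to the reported DER:

```python
# Values taken from the final (epoch 50) row of the table above.
false_alarm, missed_detection, confusion = 0.0212, 0.0473, 0.0598
der = false_alarm + missed_detection + confusion
print(f"DER = {der:.4f}")  # -> DER = 0.1283, matching the reported value
```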

Framework versions

  • Transformers 4.57.0
  • PyTorch 2.8.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.1
Model size: 1.47M parameters (F32 tensors, Safetensors format)