# syncopation-transformer
This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 0.01
- train_batch_size: 50
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
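
The hyperparameters above map directly onto a `transformers` `TrainingArguments` object. The following is a minimal sketch of that configuration; the `output_dir` path is a hypothetical placeholder, and the model and dataset objects are not documented in this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./syncopation-transformer",  # hypothetical output path
    learning_rate=0.01,
    per_device_train_batch_size=50,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # AdamW; betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",      # matches the per-epoch validation losses reported below
)
# A Trainer(model=..., args=training_args, train_dataset=..., eval_dataset=...)
# would then reproduce this schedule; model and datasets are not described here.
```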
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.3875 | 1.0 | 1 | 193.7451 |
| 176.6205 | 2.0 | 2 | 67.3707 |
| 59.7935 | 3.0 | 3 | 0.7692 |
| 0.6997 | 4.0 | 4 | 1.1924 |
| 1.0481 | 5.0 | 5 | 6.9250 |
| 6.4428 | 6.0 | 6 | 10.4059 |
| 9.5356 | 7.0 | 7 | 3.7911 |
| 3.5411 | 8.0 | 8 | 0.7080 |
| 0.6259 | 9.0 | 9 | 2.0620 |
| 1.8183 | 10.0 | 10 | 0.5511 |
| 0.4832 | 11.0 | 11 | 1.8261 |
| 1.7219 | 12.0 | 12 | 3.0991 |
| 2.9314 | 13.0 | 13 | 1.2140 |
| 1.1776 | 14.0 | 14 | 0.5618 |
| 0.5003 | 15.0 | 15 | 1.2087 |
| 1.019 | 16.0 | 16 | 0.3980 |
| 0.3441 | 17.0 | 17 | 1.0748 |
| 1.0519 | 18.0 | 18 | 1.8093 |
| 1.7163 | 19.0 | 19 | 0.7556 |
| 0.7415 | 20.0 | 20 | 0.4668 |
| 0.4209 | 21.0 | 21 | 0.8939 |
| 0.78 | 22.0 | 22 | 0.3421 |
| 0.3046 | 23.0 | 23 | 0.7928 |
| 0.773 | 24.0 | 24 | 1.3035 |
| 1.2436 | 25.0 | 25 | 0.5759 |
| 0.577 | 26.0 | 26 | 0.4136 |
| 0.3697 | 27.0 | 27 | 0.7350 |
| 0.6573 | 28.0 | 28 | 0.3152 |
| 0.2891 | 29.0 | 29 | 0.6358 |
| 0.6361 | 30.0 | 30 | 1.0179 |
| 0.976 | 31.0 | 31 | 0.4738 |
| 0.4792 | 32.0 | 32 | 0.3808 |
| 0.3367 | 33.0 | 33 | 0.6379 |
| 0.556 | 34.0 | 34 | 0.3007 |
| 0.275 | 35.0 | 35 | 0.5268 |
| 0.5081 | 36.0 | 36 | 0.8208 |
| 0.8002 | 37.0 | 37 | 0.4035 |
| 0.4111 | 38.0 | 38 | 0.3569 |
| 0.3208 | 39.0 | 39 | 0.5670 |
| 0.495 | 40.0 | 40 | 0.2902 |
| 0.2728 | 41.0 | 41 | 0.4459 |
| 0.4394 | 42.0 | 42 | 0.6731 |
| 0.6649 | 43.0 | 43 | 0.3518 |
| 0.3682 | 44.0 | 44 | 0.3366 |
| 0.3202 | 45.0 | 45 | 0.5087 |
| 0.4577 | 46.0 | 46 | 0.2815 |
| 0.2578 | 47.0 | 47 | 0.3818 |
| 0.3896 | 48.0 | 48 | 0.5542 |
| 0.5497 | 49.0 | 49 | 0.3102 |
| 0.3193 | 50.0 | 50 | 0.3202 |
| 0.2954 | 51.0 | 51 | 0.4611 |
| 0.4 | 52.0 | 52 | 0.2750 |
| 0.2485 | 53.0 | 53 | 0.3298 |
| 0.3217 | 54.0 | 54 | 0.4582 |
| 0.4494 | 55.0 | 55 | 0.2784 |
| 0.2772 | 56.0 | 56 | 0.3036 |
| 0.2832 | 57.0 | 57 | 0.4157 |
| 0.37 | 58.0 | 58 | 0.2668 |
| 0.25 | 59.0 | 59 | 0.2911 |
| 0.2939 | 60.0 | 60 | 0.3842 |
| 0.3931 | 61.0 | 61 | 0.2552 |
| 0.2607 | 62.0 | 62 | 0.2865 |
| 0.2761 | 63.0 | 63 | 0.3727 |
| 0.3326 | 64.0 | 64 | 0.2578 |
| 0.2541 | 65.0 | 65 | 0.2620 |
| 0.2678 | 66.0 | 66 | 0.3262 |
| 0.3435 | 67.0 | 67 | 0.2380 |
| 0.2443 | 68.0 | 68 | 0.2699 |
| 0.25 | 69.0 | 69 | 0.3334 |
| 0.3088 | 70.0 | 70 | 0.2487 |
| 0.2376 | 71.0 | 71 | 0.2404 |
| 0.2436 | 72.0 | 72 | 0.2820 |
| 0.2906 | 73.0 | 73 | 0.2257 |
| 0.2263 | 74.0 | 74 | 0.2541 |
| 0.2322 | 75.0 | 75 | 0.2976 |
| 0.2684 | 76.0 | 76 | 0.2391 |
| 0.2353 | 77.0 | 77 | 0.2257 |
| 0.2371 | 78.0 | 78 | 0.2502 |
| 0.2527 | 79.0 | 79 | 0.2178 |
| 0.2257 | 80.0 | 80 | 0.2392 |
| 0.229 | 81.0 | 81 | 0.2661 |
| 0.2632 | 82.0 | 82 | 0.2299 |
| 0.2335 | 83.0 | 83 | 0.2163 |
| 0.2155 | 84.0 | 84 | 0.2283 |
| 0.2309 | 85.0 | 85 | 0.2128 |
| 0.2059 | 86.0 | 86 | 0.2267 |
| 0.2337 | 87.0 | 87 | 0.2408 |
| 0.2267 | 88.0 | 88 | 0.2220 |
| 0.2138 | 89.0 | 89 | 0.2111 |
| 0.2172 | 90.0 | 90 | 0.2151 |
| 0.21 | 91.0 | 91 | 0.2102 |
| 0.2127 | 92.0 | 92 | 0.2173 |
| 0.2245 | 93.0 | 93 | 0.2226 |
| 0.2154 | 94.0 | 94 | 0.2159 |
| 0.202 | 95.0 | 95 | 0.2096 |
| 0.2113 | 96.0 | 96 | 0.2098 |
| 0.2145 | 97.0 | 97 | 0.2097 |
| 0.2109 | 98.0 | 98 | 0.2113 |
| 0.2057 | 99.0 | 99 | 0.2123 |
| 0.2178 | 100.0 | 100 | 0.2122 |
### Framework versions
- Transformers 4.52.4
- PyTorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
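
Since the base architecture is not documented, the generic `AutoModel` class can be used to load the checkpoint. A minimal loading sketch, assuming the checkpoint is published on the Hub under the repository name above (substitute the actual `namespace/repo` id):

```python
from transformers import AutoConfig, AutoModel

repo_id = "syncopation-transformer"  # hypothetical Hub repo id

config = AutoConfig.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
print(config.model_type)  # reveals the undocumented base architecture
```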