Dataset schema (column name, type, and value-length range):

- license: string (2 to 30 characters)
- tags: string (2 to 513 characters)
- is_nc: bool (1 class)
- readme_section: string (201 to 597k characters)
- hash: string (32 characters)
apache-2.0
['image-classification']
false
resnet34

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()

# Variants (d) proposed in "Bag of Tricks for Image Classification with
# Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
```
3f716353386f6de51e5c809c6287af85
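All the variants above share the same core idea from the linked paper: a residual connection y = ReLU(F(x) + x), so each block only has to learn a correction F on top of the identity. A minimal NumPy sketch of a basic residual block (toy linear maps instead of convolutions, hypothetical shapes; not the library's implementation):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def basic_block(x, w1, w2):
    """Simplified residual block: y = ReLU(F(x) + x),
    where F is two linear maps with a ReLU in between."""
    out = relu(x @ w1)    # first transform + activation
    out = out @ w2        # second transform, no activation yet
    return relu(out + x)  # identity shortcut: gradients can bypass F

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # batch of 4 feature vectors
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = basic_block(x, w1, w2)
print(y.shape)  # (4, 8) -- same shape as the input, as the shortcut requires
```

The identity shortcut requires input and output shapes to match; in the real networks a 1x1 convolution on the shortcut handles dimension changes.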
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-qnli-target-glue-sst2

This model is a fine-tuned version of [muhtasham/small-mlm-glue-qnli](https://huggingface.co/muhtasham/small-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4217
- Accuracy: 0.8716
fcfcb9d33beb779a7bab97c7fb9523c5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3912 | 0.24 | 500 | 0.3462 | 0.8406 |
| 0.3049 | 0.48 | 1000 | 0.3246 | 0.8544 |
| 0.2574 | 0.71 | 1500 | 0.3264 | 0.8739 |
| 0.2381 | 0.95 | 2000 | 0.2983 | 0.8807 |
| 0.1836 | 1.19 | 2500 | 0.3447 | 0.8784 |
| 0.1681 | 1.43 | 3000 | 0.3553 | 0.8819 |
| 0.1656 | 1.66 | 3500 | 0.3758 | 0.8784 |
| 0.1701 | 1.9 | 4000 | 0.3134 | 0.8991 |
| 0.1337 | 2.14 | 4500 | 0.5031 | 0.8521 |
| 0.1232 | 2.38 | 5000 | 0.4217 | 0.8716 |
257d44592b7b6f33e307431fdab760a5
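Note that the reported loss of 0.4217 is simply the last row of the table; the lowest validation loss (0.2983) occurred earlier, at step 2000. A generic sketch of picking the best checkpoint from such a log (plain Python on values transcribed from the table; not part of the Trainer API):

```python
# (step, validation_loss) pairs transcribed from the table above
log = [(500, 0.3462), (1000, 0.3246), (1500, 0.3264), (2000, 0.2983),
       (2500, 0.3447), (3000, 0.3553), (3500, 0.3758), (4000, 0.3134),
       (4500, 0.5031), (5000, 0.4217)]

# pick the checkpoint with the lowest validation loss
best_step, best_loss = min(log, key=lambda rec: rec[1])
print(best_step, best_loss)  # 2000 0.2983
```

This is what `load_best_model_at_end` automates when a metric for comparison is configured.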
apache-2.0
['generated_from_trainer']
false
t5-base-sede-txt2sql

This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sede dataset. It achieves the following results on the evaluation set:
- Loss: 1.1577
- Bleu Score: 0.5923
- Parsable Queries Accuracy: 0.0
- Partial Match F1: 0.0
- Partial Match F1 No Values: 0.0
- Partial Match Em: 0.0
- Partial Match No Values Em: 0.0
5fc73e7c936a3e828813b699ccd9b426
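The Bleu Score above measures n-gram overlap between generated and reference SQL. A deliberately simplified, unsmoothed sentence-level sketch of the idea in pure Python (the evaluation that produced 0.5923 presumably used a corpus-level, smoothed implementation; that is an assumption):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: clipped n-gram precisions (n=1..4),
    geometric mean, times a brevity penalty. No smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        if overlap == 0:
            return 0.0  # unsmoothed: any zero precision kills the score
        precisions.append(overlap / total)
    # brevity penalty punishes candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

cand = "select id from users where age > 30".split()
ref = "select id from users where age > 30".split()
print(bleu(cand, ref))  # 1.0 for an exact match
```

The all-zero Parsable Queries / Partial Match metrics indicate the generations overlap with references at the token level without forming valid SQL.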
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu Score | Parsable Queries Accuracy | Partial Match F1 | Partial Match F1 No Values | Partial Match Em | Partial Match No Values Em |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:-------------------------:|:----------------:|:--------------------------:|:----------------:|:--------------------------:|
| No log | 1.0 | 95 | 13.2410 | 0.0069 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 190 | 7.6317 | 0.0134 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 3.0 | 285 | 6.0919 | 0.0058 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 4.0 | 380 | 5.4922 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 5.0 | 475 | 4.7151 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 6.0 | 570 | 4.1412 | 0.0003 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 7.0 | 665 | 3.6398 | 0.0003 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 8.0 | 760 | 3.2643 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 9.0 | 855 | 3.0544 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0698 | 10.0 | 950 | 2.8015 | 0.0043 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 11.0 | 1045 | 2.5552 | 0.0789 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 12.0 | 1140 | 2.3535 | 0.1036 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 13.0 | 1235 | 2.2132 | 0.0050 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 14.0 | 1330 | 2.1084 | 0.1333 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4.696 | 15.0 | 1425 | 2.0117 | 0.2972 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 16.0 | 1520 | 1.9333 | 0.2481 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 17.0 | 1615 | 1.8395 | 0.4149 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 18.0 | 1710 | 1.7661 | 0.5439 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 19.0 | 1805 | 1.7101 | 0.6001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 20.0 | 1900 | 1.6562 | 0.6219 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3.1348 | 21.0 | 1995 | 1.6073 | 0.5865 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 22.0 | 2090 | 1.5773 | 0.5683 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 23.0 | 2185 | 1.5478 | 0.5408 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 24.0 | 2280 | 1.5190 | 0.5749 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 25.0 | 2375 | 1.4927 | 0.5818 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.4276 | 26.0 | 2470 | 1.4671 | 0.5673 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 27.0 | 2565 | 1.4499 | 0.5616 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 28.0 | 2660 | 1.4275 | 0.6041 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 29.0 | 2755 | 1.4096 | 0.5764 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 30.0 | 2850 | 1.3983 | 0.5862 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.076 | 31.0 | 2945 | 1.3812 | 0.5982 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 32.0 | 3040 | 1.3679 | 0.5927 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 33.0 | 3135 | 1.3548 | 0.5916 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 34.0 | 3230 | 1.3461 | 0.5769 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 35.0 | 3325 | 1.3353 | 0.5871 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8828 | 36.0 | 3420 | 1.3293 | 0.5687 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 37.0 | 3515 | 1.3195 | 0.5689 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 38.0 | 3610 | 1.3109 | 0.5949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 39.0 | 3705 | 1.3049 | 0.5619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 40.0 | 3800 | 1.2953 | 0.5872 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 41.0 | 3895 | 1.2907 | 0.6014 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7602 | 42.0 | 3990 | 1.2831 | 0.5917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 43.0 | 4085 | 1.2757 | 0.5718 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 44.0 | 4180 | 1.2692 | 0.5707 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 45.0 | 4275 | 1.2642 | 0.5758 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 46.0 | 4370 | 1.2619 | 0.6012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6652 | 47.0 | 4465 | 1.2527 | 0.5749 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 48.0 | 4560 | 1.2496 | 0.5722 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 49.0 | 4655 | 1.2447 | 0.5633 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 50.0 | 4750 | 1.2411 | 0.5615 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 51.0 | 4845 | 1.2356 | 0.5691 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6009 | 52.0 | 4940 | 1.2322 | 0.5636 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 53.0 | 5035 | 1.2285 | 0.5724 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 54.0 | 5130 | 1.2255 | 0.5771 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 55.0 | 5225 | 1.2201 | 0.5827 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 56.0 | 5320 | 1.2181 | 0.5928 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5481 | 57.0 | 5415 | 1.2152 | 0.5599 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 58.0 | 5510 | 1.2123 | 0.5779 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 59.0 | 5605 | 1.2083 | 0.5609 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 60.0 | 5700 | 1.2070 | 0.5654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 61.0 | 5795 | 1.2036 | 0.5566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 62.0 | 5890 | 1.2011 | 0.5569 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5082 | 63.0 | 5985 | 1.1993 | 0.5567 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 64.0 | 6080 | 1.1958 | 0.5619 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 65.0 | 6175 | 1.1950 | 0.5691 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 66.0 | 6270 | 1.1914 | 0.5572 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 67.0 | 6365 | 1.1879 | 0.5635 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4799 | 68.0 | 6460 | 1.1866 | 0.5654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 69.0 | 6555 | 1.1850 | 0.5575 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 70.0 | 6650 | 1.1833 | 0.5507 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 71.0 | 6745 | 1.1820 | 0.5493 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 72.0 | 6840 | 1.1786 | 0.5525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4475 | 73.0 | 6935 | 1.1789 | 0.5615 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 74.0 | 7030 | 1.1770 | 0.5603 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 75.0 | 7125 | 1.1749 | 0.5699 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 76.0 | 7220 | 1.1754 | 0.5730 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 77.0 | 7315 | 1.1735 | 0.5798 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4233 | 78.0 | 7410 | 1.1716 | 0.5771 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 79.0 | 7505 | 1.1699 | 0.5800 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 80.0 | 7600 | 1.1675 | 0.5736 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 81.0 | 7695 | 1.1661 | 0.5845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 82.0 | 7790 | 1.1659 | 0.5974 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 83.0 | 7885 | 1.1664 | 0.5825 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4101 | 84.0 | 7980 | 1.1647 | 0.5871 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 85.0 | 8075 | 1.1639 | 0.5772 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 86.0 | 8170 | 1.1628 | 0.5826 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 87.0 | 8265 | 1.1615 | 0.5960 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 88.0 | 8360 | 1.1616 | 0.5908 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3965 | 89.0 | 8455 | 1.1613 | 0.5775 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 90.0 | 8550 | 1.1604 | 0.5917 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 91.0 | 8645 | 1.1597 | 0.5732 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 92.0 | 8740 | 1.1594 | 0.5767 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 93.0 | 8835 | 1.1584 | 0.5719 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3835 | 94.0 | 8930 | 1.1581 | 0.5700 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 95.0 | 9025 | 1.1583 | 0.5845 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 96.0 | 9120 | 1.1578 | 0.5808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 97.0 | 9215 | 1.1578 | 0.5889 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 98.0 | 9310 | 1.1577 | 0.5851 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3766 | 99.0 | 9405 | 1.1578 | 0.5923 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3726 | 100.0 | 9500 | 1.1577 | 0.5923 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
6a2ecc36c3abaa44da69ae86ed7c3570
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
hegde-

Dreambooth model trained by broidkhegde with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.

Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)

Sample pictures of this concept:
0e9e5cff08bb9be9bd0a5f0437da2f4c
apache-2.0
['automatic-speech-recognition', '../AI_Light_Dance.py', 'generated_from_trainer']
false
ai-light-dance_singing_ft_wav2vec2-large-lv60-v2

This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-lv60) on the ../AI_LIGHT_DANCE.PY - ONSET-SINGING dataset. It achieves the following results on the evaluation set:
- Loss: 0.4285
- Wer: 0.1858
3f7b6a7df2b638581e6d2eed2c6eb281
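The WER of 0.1858 above is the word-level edit distance between hypothesis and reference divided by the number of reference words. A self-contained sketch of the standard Levenshtein dynamic program (illustrative only, not necessarily the evaluation code used for this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions relative to the reference.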
apache-2.0
['automatic-speech-recognition', '../AI_Light_Dance.py', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
b155dc7151ac0e67b9b6ccfc79c89f97
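The `linear` scheduler with 500 warmup steps ramps the learning rate from 0 up to the peak (3e-05 here) over the warmup, then decays it linearly back to 0 at the last step. A sketch of that shape in plain Python (the total-step count of 11060 is taken from the accompanying results table, and the exact formula mirrors the usual Transformers definition as far as I know; treat both as assumptions):

```python
def linear_schedule_lr(step, peak_lr=3e-5, warmup_steps=500, total_steps=11060):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # warmup ramp
    # linear decay from peak at warmup_steps to 0 at total_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(0))      # 0.0
print(linear_schedule_lr(500))    # peak: 3e-05
print(linear_schedule_lr(11060))  # 0.0
```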
apache-2.0
['automatic-speech-recognition', '../AI_Light_Dance.py', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2775 | 1.0 | 1106 | 0.4372 | 0.2117 |
| 0.2154 | 2.0 | 2212 | 0.4474 | 0.2044 |
| 0.2023 | 3.0 | 3318 | 0.4372 | 0.1920 |
| 0.186 | 4.0 | 4424 | 0.4285 | 0.1858 |
| 0.1856 | 5.0 | 5530 | 0.4589 | 0.1826 |
| 0.1537 | 6.0 | 6636 | 0.4658 | 0.1774 |
| 0.1337 | 7.0 | 7742 | 0.4769 | 0.1744 |
| 0.108 | 8.0 | 8848 | 0.4604 | 0.1724 |
| 0.1593 | 9.0 | 9954 | 0.4731 | 0.1694 |
| 0.0904 | 10.0 | 11060 | 0.4843 | 0.1683 |
7757d117b4820caaff8e618dede7b22a
mit
['generated_from_trainer']
false
TweetEval_roBERTa_5E

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the tweet_eval dataset. It achieves the following results on the evaluation set:
- Loss: 0.2770
- Accuracy: 0.9467
fddbe5a97aadf0efbed7cf7c0ef5352d
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5967 | 0.04 | 50 | 0.4851 | 0.7333 |
| 0.4085 | 0.08 | 100 | 0.2177 | 0.9333 |
| 0.3449 | 0.12 | 150 | 0.2164 | 0.9333 |
| 0.2739 | 0.16 | 200 | 0.2285 | 0.9267 |
| 0.2588 | 0.2 | 250 | 0.2748 | 0.92 |
| 0.3406 | 0.24 | 300 | 0.1956 | 0.9467 |
| 0.2726 | 0.28 | 350 | 0.2285 | 0.92 |
| 0.2645 | 0.32 | 400 | 0.2192 | 0.9267 |
| 0.2549 | 0.37 | 450 | 0.2115 | 0.9333 |
| 0.2387 | 0.41 | 500 | 0.2230 | 0.9333 |
| 0.2415 | 0.45 | 550 | 0.2156 | 0.94 |
| 0.2829 | 0.49 | 600 | 0.2575 | 0.9267 |
| 0.2865 | 0.53 | 650 | 0.1572 | 0.9467 |
| 0.2107 | 0.57 | 700 | 0.1437 | 0.9467 |
| 0.2609 | 0.61 | 750 | 0.1595 | 0.94 |
| 0.2234 | 0.65 | 800 | 0.2611 | 0.9333 |
| 0.266 | 0.69 | 850 | 0.1544 | 0.9467 |
| 0.2407 | 0.73 | 900 | 0.2145 | 0.9333 |
| 0.2529 | 0.77 | 950 | 0.1861 | 0.9333 |
| 0.2083 | 0.81 | 1000 | 0.1448 | 0.9533 |
| 0.2942 | 0.85 | 1050 | 0.1703 | 0.9333 |
| 0.1916 | 0.89 | 1100 | 0.1831 | 0.94 |
| 0.2425 | 0.93 | 1150 | 0.2349 | 0.9333 |
| 0.2521 | 0.97 | 1200 | 0.1268 | 0.94 |
| 0.1742 | 1.01 | 1250 | 0.1782 | 0.9333 |
| 0.172 | 1.06 | 1300 | 0.2636 | 0.9333 |
| 0.1487 | 1.1 | 1350 | 0.1987 | 0.9467 |
| 0.1805 | 1.14 | 1400 | 0.3030 | 0.9333 |
| 0.1295 | 1.18 | 1450 | 0.2229 | 0.94 |
| 0.2114 | 1.22 | 1500 | 0.1441 | 0.9467 |
| 0.1714 | 1.26 | 1550 | 0.2157 | 0.9467 |
| 0.1886 | 1.3 | 1600 | 0.2353 | 0.9267 |
| 0.1666 | 1.34 | 1650 | 0.2572 | 0.94 |
| 0.2254 | 1.38 | 1700 | 0.1569 | 0.9467 |
| 0.1531 | 1.42 | 1750 | 0.2351 | 0.9333 |
| 0.2174 | 1.46 | 1800 | 0.2137 | 0.9267 |
| 0.2015 | 1.5 | 1850 | 0.2234 | 0.94 |
| 0.1785 | 1.54 | 1900 | 0.1944 | 0.9333 |
| 0.1954 | 1.58 | 1950 | 0.2013 | 0.9467 |
| 0.1481 | 1.62 | 2000 | 0.2196 | 0.94 |
| 0.1426 | 1.66 | 2050 | 0.2005 | 0.9467 |
| 0.1951 | 1.7 | 2100 | 0.2281 | 0.9467 |
| 0.1943 | 1.75 | 2150 | 0.1934 | 0.94 |
| 0.2027 | 1.79 | 2200 | 0.1845 | 0.96 |
| 0.2119 | 1.83 | 2250 | 0.1338 | 0.9533 |
| 0.208 | 1.87 | 2300 | 0.1605 | 0.94 |
| 0.1972 | 1.91 | 2350 | 0.1460 | 0.9533 |
| 0.1876 | 1.95 | 2400 | 0.1488 | 0.9467 |
| 0.1923 | 1.99 | 2450 | 0.2055 | 0.9533 |
| 0.1391 | 2.03 | 2500 | 0.2245 | 0.9533 |
| 0.1416 | 2.07 | 2550 | 0.2194 | 0.9533 |
| 0.1521 | 2.11 | 2600 | 0.2234 | 0.9533 |
| 0.0943 | 2.15 | 2650 | 0.2114 | 0.9533 |
| 0.1452 | 2.19 | 2700 | 0.1772 | 0.9467 |
| 0.1148 | 2.23 | 2750 | 0.2541 | 0.9333 |
| 0.1706 | 2.27 | 2800 | 0.2151 | 0.9533 |
| 0.12 | 2.31 | 2850 | 0.2521 | 0.9467 |
| 0.181 | 2.35 | 2900 | 0.2518 | 0.9467 |
| 0.1308 | 2.39 | 2950 | 0.2610 | 0.9533 |
| 0.1482 | 2.44 | 3000 | 0.1789 | 0.9533 |
| 0.1019 | 2.48 | 3050 | 0.2377 | 0.9467 |
| 0.1474 | 2.52 | 3100 | 0.2468 | 0.94 |
| 0.0843 | 2.56 | 3150 | 0.3056 | 0.94 |
| 0.1521 | 2.6 | 3200 | 0.2067 | 0.96 |
| 0.1333 | 2.64 | 3250 | 0.1921 | 0.94 |
| 0.1318 | 2.68 | 3300 | 0.1699 | 0.96 |
| 0.1503 | 2.72 | 3350 | 0.2186 | 0.94 |
| 0.1242 | 2.76 | 3400 | 0.2322 | 0.94 |
| 0.1179 | 2.8 | 3450 | 0.2313 | 0.9467 |
| 0.1247 | 2.84 | 3500 | 0.2298 | 0.9467 |
| 0.1289 | 2.88 | 3550 | 0.2502 | 0.94 |
| 0.1597 | 2.92 | 3600 | 0.1875 | 0.9467 |
| 0.1645 | 2.96 | 3650 | 0.2469 | 0.94 |
| 0.1366 | 3.0 | 3700 | 0.2469 | 0.94 |
| 0.1418 | 3.04 | 3750 | 0.2457 | 0.9467 |
| 0.1146 | 3.08 | 3800 | 0.2188 | 0.9467 |
| 0.091 | 3.12 | 3850 | 0.2476 | 0.94 |
| 0.0972 | 3.17 | 3900 | 0.2791 | 0.94 |
| 0.0976 | 3.21 | 3950 | 0.2933 | 0.9333 |
| 0.0872 | 3.25 | 4000 | 0.2877 | 0.9467 |
| 0.0857 | 3.29 | 4050 | 0.2664 | 0.9467 |
| 0.1368 | 3.33 | 4100 | 0.2533 | 0.9467 |
| 0.0713 | 3.37 | 4150 | 0.2855 | 0.9467 |
| 0.1101 | 3.41 | 4200 | 0.2716 | 0.9533 |
| 0.0871 | 3.45 | 4250 | 0.2654 | 0.9467 |
| 0.1152 | 3.49 | 4300 | 0.2449 | 0.9467 |
| 0.0441 | 3.53 | 4350 | 0.2904 | 0.9467 |
| 0.1503 | 3.57 | 4400 | 0.2784 | 0.9467 |
| 0.0763 | 3.61 | 4450 | 0.2804 | 0.9467 |
| 0.083 | 3.65 | 4500 | 0.3278 | 0.94 |
| 0.1111 | 3.69 | 4550 | 0.2899 | 0.9333 |
| 0.0791 | 3.73 | 4600 | 0.3137 | 0.9333 |
| 0.0837 | 3.77 | 4650 | 0.2799 | 0.9467 |
| 0.1048 | 3.81 | 4700 | 0.2496 | 0.9533 |
| 0.1031 | 3.86 | 4750 | 0.2689 | 0.9533 |
| 0.0837 | 3.9 | 4800 | 0.2753 | 0.9533 |
| 0.0929 | 3.94 | 4850 | 0.2357 | 0.9467 |
| 0.0856 | 3.98 | 4900 | 0.2615 | 0.9467 |
| 0.0619 | 4.02 | 4950 | 0.2983 | 0.9467 |
| 0.0974 | 4.06 | 5000 | 0.2706 | 0.9533 |
| 0.0548 | 4.1 | 5050 | 0.2978 | 0.9467 |
| 0.0425 | 4.14 | 5100 | 0.3217 | 0.9333 |
| 0.0808 | 4.18 | 5150 | 0.3054 | 0.94 |
| 0.0466 | 4.22 | 5200 | 0.3142 | 0.94 |
| 0.0593 | 4.26 | 5250 | 0.3193 | 0.9267 |
| 0.0551 | 4.3 | 5300 | 0.3017 | 0.9333 |
| 0.0493 | 4.34 | 5350 | 0.2954 | 0.94 |
| 0.0897 | 4.38 | 5400 | 0.2912 | 0.9467 |
| 0.0529 | 4.42 | 5450 | 0.2956 | 0.94 |
| 0.0924 | 4.46 | 5500 | 0.2858 | 0.94 |
| 0.1018 | 4.5 | 5550 | 0.2826 | 0.94 |
| 0.1137 | 4.55 | 5600 | 0.2711 | 0.94 |
| 0.0667 | 4.59 | 5650 | 0.2776 | 0.94 |
| 0.0521 | 4.63 | 5700 | 0.2955 | 0.94 |
| 0.0334 | 4.67 | 5750 | 0.2972 | 0.94 |
| 0.0298 | 4.71 | 5800 | 0.3133 | 0.94 |
| 0.1261 | 4.75 | 5850 | 0.2891 | 0.9467 |
| 0.0514 | 4.79 | 5900 | 0.2804 | 0.9467 |
| 0.0416 | 4.83 | 5950 | 0.2809 | 0.94 |
| 0.0745 | 4.87 | 6000 | 0.2774 | 0.9467 |
| 0.1134 | 4.91 | 6050 | 0.2715 | 0.9467 |
| 0.0446 | 4.95 | 6100 | 0.2748 | 0.9467 |
| 0.0581 | 4.99 | 6150 | 0.2770 | 0.9467 |
9239bc11a0eb84a4954c483287bc7a7e
mit
['ja', 'japanese', 'tokenizer']
false
Japanese Dummy Tokenizer

Repository containing a dummy Japanese tokenizer trained on the `snow_simplified_japanese_corpus` dataset. The tokenizer has been trained using Hugging Face datasets in streaming mode.
044fdaeb0d610750d212726ea02f3141
apache-2.0
['bert']
false
Erlangshen-Deberta-97M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). A 97-million-parameter DeBERTa-v2 base model with an encoder-only transformer structure, trained on 180G of Chinese data for 7 days on 24 A100 (40G) GPUs, consuming 1B samples in total.
86f2028fc1a79af892c509c665f2d063
apache-2.0
['bert']
false
Usage

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
import torch

tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese')
text = '生活的真谛是[MASK]。'
fillmask_pipe = FillMaskPipeline(model, tokenizer, device=7)
print(fillmask_pipe(text, top_k=10))
```
76487b756c163847caa1918b030747ad
apache-2.0
['bert']
false
Finetune

We present the dev results on some tasks.

| Model | OCNLI | CMNLI |
| ---------------------------------- | ----- | ------ |
| RoBERTa-base | 0.743 | 0.7973 |
| **Erlangshen-Deberta-97M-Chinese** | 0.752 | 0.807 |
cc6555d1055cbb3adcfe3c1b8dc8534c
apache-2.0
['bert']
false
Citation

If you find this resource useful, please cite the following website in your paper.

```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2022},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
7be58359883a05a37f7dfe76f8d440bf
apache-2.0
['translation']
false
opus-mt-en-sm

* source languages: en
* target languages: sm
* OPUS readme: [en-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.eval.txt)
74a454ee13f66bdd21b61cfac280eed8
mit
['generated_from_trainer']
false
roberta-base.CEBaB_confounding.uniform.sa.5-class.seed_42

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set:
- Loss: 0.6956
- Accuracy: 0.7262
- Macro-f1: 0.7053
- Weighted-macro-f1: 0.7201
234c33eab00749001e339a9cf455518f
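For this 5-class task, macro-F1 averages the per-class F1 scores with equal weight, while weighted macro-F1 weights each class by its support. A small self-contained illustration with toy labels (not the CEBaB data or the card's evaluation code):

```python
from collections import Counter

def f1_per_class(y_true, y_pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1: every class counts equally."""
    classes = sorted(set(y_true))
    return sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)

def weighted_macro_f1(y_true, y_pred):
    """Per-class F1 weighted by class support (number of true examples)."""
    support = Counter(y_true)
    classes = sorted(set(y_true))
    return sum(f1_per_class(y_true, y_pred, c) * support[c] for c in classes) / len(y_true)

y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
print(macro_f1(y_true, y_pred), weighted_macro_f1(y_true, y_pred))
```

The gap between the two (0.7053 vs 0.7201 on this card) reflects class imbalance: the weighted variant is pulled toward the majority classes.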
mit
[]
false
Description

This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022).

We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).

We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:

| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |

This model is `w-m-vote-nonstrict-epoch-4`.
5ddb4eba057ab11cbd5658fa932c5ce2
mit
[]
false
Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = 'w-m-vote-nonstrict-epoch-4'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
texts = [
    'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
    'Es que los judíos controlan el mundo'
]
print(pipe(texts))
```
ff54dcc0002b33307953a34233615710
apache-2.0
['translation']
false
opus-mt-en-bem

* source languages: en
* target languages: bem
* OPUS readme: [en-bem](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bem/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.eval.txt)
c49e67da386311b8d4b3cc1c304eee10
apache-2.0
['audio', 'automatic-speech-recognition', 'es', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Spanish

Added a custom language model to https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)

The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
546bb82be4cb221c44eed7d1446e89ed
apache-2.0
['audio', 'automatic-speech-recognition', 'es', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows...

Using the [ASRecognition](https://github.com/jonatasgrosman/asrecognition) library:

```python
from asrecognition import ASREngine

asr = ASREngine("es", model_path="jonatasgrosman/wav2vec2-large-xlsr-53-spanish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```

Writing your own inference script:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "es"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-spanish"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# The remaining steps follow the standard wav2vec2 greedy-CTC decoding pattern:
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
edeb26d8cccd7265803a4cdf037bfe30
apache-2.0
['audio', 'automatic-speech-recognition', 'es', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Citation

If you want to cite this model you can use this:

```bibtex
@misc{grosman2021wav2vec2-large-xlsr-53-spanish,
  title={XLSR Wav2Vec2 Spanish by Jonatas Grosman},
  author={Grosman, Jonatas},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish}},
  year={2021}
}
```
73af57646337521fa4f5ac898696a94b
mit
['msmarco', 'miniLM', 'pytorch', 'tensorflow', 'pt', 'pt-br']
false
Introduction

mMiniLM-L6-v2-mmarco-v1 is a multilingual miniLM-based model fine-tuned on a multilingual version of the MS MARCO passage dataset. This dataset, named mMARCO, consists of passages in 9 different languages, translated from the English MS MARCO passage collection. In version v1, the datasets were translated using the [Helsinki](https://huggingface.co/Helsinki-NLP) NMT models. Further information about the dataset or the translation method can be found in our paper [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and in the [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
7a47c32fba06df024cf7357e994e5643
mit
['msmarco', 'miniLM', 'pytorch', 'tensorflow', 'pt', 'pt-br']
false
Usage

```python
from transformers import AutoTokenizer, AutoModel

model_name = 'unicamp-dl/mMiniLM-L6-v2-mmarco-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
7a9a248ad4354ed93ab8958ecbc725d2
mit
['msmarco', 'miniLM', 'pytorch', 'tensorflow', 'pt', 'pt-br']
false
Citation

If you use mMiniLM-L6-v2-mmarco-v1, please cite:

```bibtex
@misc{bonifacio2021mmarco,
  title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
  author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
  year={2021},
  eprint={2108.13897},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
540de832d47d4a369bd0326c7b31e726
apache-2.0
['generated_from_trainer']
false
nli-distilroberta-base-finetuned-cola

This model is a fine-tuned version of [cross-encoder/nli-distilroberta-base](https://huggingface.co/cross-encoder/nli-distilroberta-base) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.8280
- Matthews Correlation: 0.4957
5146f0cc4ca552b416e2a26910bf502f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5646 | 1.0 | 535 | 0.6462 | 0.3422 |
| 0.4267 | 2.0 | 1070 | 0.5672 | 0.4422 |
| 0.3354 | 3.0 | 1605 | 0.6441 | 0.4698 |
| 0.2723 | 4.0 | 2140 | 0.7464 | 0.4670 |
| 0.2204 | 5.0 | 2675 | 0.8280 | 0.4957 |
0c09a061569fdb669e0b041cf170868a
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_sst2

This model is a fine-tuned version of [gokuls/mobilebert_sa_pre-training-complete](https://huggingface.co/gokuls/mobilebert_sa_pre-training-complete) on the GLUE SST2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2364
- Accuracy: 0.9266
64e17885aa237d2fdf456346fa0cc0d6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4176 | 1.0 | 527 | 0.2978 | 0.9197 |
| 0.1807 | 2.0 | 1054 | 0.2951 | 0.9174 |
| 0.1163 | 3.0 | 1581 | 0.2749 | 0.9186 |
| 0.0862 | 4.0 | 2108 | 0.2988 | 0.9083 |
| 0.0695 | 5.0 | 2635 | 0.2760 | 0.9174 |
| 0.0598 | 6.0 | 3162 | 0.2695 | 0.9151 |
| 0.0525 | 7.0 | 3689 | 0.2723 | 0.9255 |
| 0.0464 | 8.0 | 4216 | 0.2430 | 0.9243 |
| 0.0422 | 9.0 | 4743 | 0.2814 | 0.9243 |
| 0.0395 | 10.0 | 5270 | 0.2464 | 0.9163 |
| 0.0357 | 11.0 | 5797 | 0.2390 | 0.9197 |
| 0.0341 | 12.0 | 6324 | 0.2713 | 0.9197 |
| 0.0328 | 13.0 | 6851 | 0.2685 | 0.9220 |
| 0.0315 | 14.0 | 7378 | 0.2585 | 0.9186 |
| 0.0296 | 15.0 | 7905 | 0.2367 | 0.9220 |
| 0.0283 | 16.0 | 8432 | 0.2560 | 0.9186 |
| 0.0277 | 17.0 | 8959 | 0.2635 | 0.9174 |
| 0.0269 | 18.0 | 9486 | 0.2364 | 0.9266 |
| 0.026 | 19.0 | 10013 | 0.2749 | 0.9209 |
| 0.0252 | 20.0 | 10540 | 0.2507 | 0.9174 |
| 0.0248 | 21.0 | 11067 | 0.2769 | 0.9163 |
| 0.0248 | 22.0 | 11594 | 0.2543 | 0.9220 |
| 0.024 | 23.0 | 12121 | 0.2677 | 0.9209 |
1236efa40982fdd48d3c185bc48578eb
apache-2.0
['generated_from_trainer']
false
english-filipino-wav2vec2-l-xls-r-test-02

This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.4561
- Wer: 0.2632
9cc07e9cd537709d4806909aaacf0aa3
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP
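A note on the batch-size fields in the hyperparameter list above: `total_train_batch_size: 16` is the per-device `train_batch_size` multiplied by `gradient_accumulation_steps`. A minimal sketch of that relation (the helper name is illustrative, not part of the actual training script):

```python
def effective_batch_size(per_device_batch: int, grad_accum_steps: int, num_devices: int = 1) -> int:
    """Effective (total) train batch size, as the HF Trainer reports it:
    per-device batch size * gradient accumulation steps * number of devices."""
    return per_device_batch * grad_accum_steps * num_devices

# Values from the hyperparameter list above: 8 * 2 = 16
print(effective_batch_size(8, 2))  # -> 16
```

Gradient accumulation trades memory for wall-clock time: each optimizer step sees 16 examples even though only 8 fit on the device at once.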
87b52d5a6adcf2fbe3908401fe10996f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1707 | 2.09 | 400 | 0.8006 | 0.8224 | | 0.4801 | 4.19 | 800 | 0.3363 | 0.4329 | | 0.2541 | 6.28 | 1200 | 0.3365 | 0.3676 | | 0.1851 | 8.38 | 1600 | 0.3485 | 0.3739 | | 0.1408 | 10.47 | 2000 | 0.3628 | 0.3420 | | 0.1098 | 12.57 | 2400 | 0.3979 | 0.3277 | | 0.1019 | 14.66 | 2800 | 0.4031 | 0.2896 | | 0.0887 | 16.75 | 3200 | 0.3977 | 0.3024 | | 0.0798 | 18.85 | 3600 | 0.3959 | 0.3129 | | 0.0671 | 20.94 | 4000 | 0.4489 | 0.3241 | | 0.0633 | 23.04 | 4400 | 0.4455 | 0.3026 | | 0.055 | 25.13 | 4800 | 0.4668 | 0.2910 | | 0.0523 | 27.23 | 5200 | 0.4670 | 0.2960 | | 0.0468 | 29.32 | 5600 | 0.4536 | 0.2781 | | 0.0392 | 31.41 | 6000 | 0.4612 | 0.2860 | | 0.0381 | 33.51 | 6400 | 0.4651 | 0.2841 | | 0.034 | 35.6 | 6800 | 0.4723 | 0.2716 | | 0.0315 | 37.7 | 7200 | 0.4546 | 0.2642 | | 0.0294 | 39.79 | 7600 | 0.4561 | 0.2632 |
ebf7c521f91d4dd7284120eb842e0376
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Nepali This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ne-NP dataset. It achieves the following results on the evaluation set: - Loss: 1.5835 - Wer: 231.7073
6398199e008b7a8f2be8e4f4ab2065b4
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0 | 999.0 | 1000 | 1.5835 | 231.7073 | | 0.0 | 1999.0 | 2000 | 1.9067 | 231.7073 | | 0.0 | 2999.0 | 3000 | 2.1258 | 236.5854 | | 0.0 | 3999.0 | 4000 | 2.3147 | 243.9024 | | 0.0 | 4999.0 | 5000 | 2.3599 | 234.1463 |
22cda70c29f6d10ecfe77afb0975a277
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-stsb-target-glue-qnli This model is a fine-tuned version of [muhtasham/small-mlm-glue-stsb](https://huggingface.co/muhtasham/small-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3477 - Accuracy: 0.8547
8d8c59531175a13a51e4a9123b73df81
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4913 | 0.15 | 500 | 0.3941 | 0.8287 | | 0.4468 | 0.31 | 1000 | 0.3872 | 0.8303 | | 0.4246 | 0.46 | 1500 | 0.3619 | 0.8411 | | 0.4133 | 0.61 | 2000 | 0.3757 | 0.8375 | | 0.4133 | 0.76 | 2500 | 0.3445 | 0.8503 | | 0.3958 | 0.92 | 3000 | 0.3340 | 0.8574 | | 0.3576 | 1.07 | 3500 | 0.3426 | 0.8558 | | 0.318 | 1.22 | 4000 | 0.3568 | 0.8559 | | 0.3166 | 1.37 | 4500 | 0.3477 | 0.8547 |
38fe0f388ced6d29ba6a0c507ea5b968
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Medium Thai - Parinthapat Pengpun This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 and the FLEURS datasets. It achieves the following results on the evaluation set: - eval_loss: 0.1875 - eval_wer: 17.5807 - eval_cer: 8.9942 - eval_runtime: 14734.8594 - eval_samples_per_second: 0.742 - eval_steps_per_second: 0.046 - epoch: 10.02 - step: 11000
5f2e0f5a4fcd6df137707a80361ff3b7
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 15000 - mixed_precision_training: Native AMP
81715c0e99dbf4eedbdad237a335a285
apache-2.0
['generated_from_trainer']
false
roberta-base-ca-finetuned-mnli This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4137 - Accuracy: 0.8778
a3a87fd5236b9af8b951898ea9f571ff
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3699 | 1.0 | 1255 | 0.3712 | 0.8669 | | 0.3082 | 2.0 | 2510 | 0.3401 | 0.8766 | | 0.2375 | 3.0 | 3765 | 0.4137 | 0.8778 | | 0.1889 | 4.0 | 5020 | 0.4671 | 0.8733 | | 0.1486 | 5.0 | 6275 | 0.5205 | 0.8749 |
888be37fcc78c3856b0f8d1cf072842f
mit
[]
false
model by alxdfy This is the Stable Diffusion model fine-tuned on the noggles_glasses_1200 concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of a person wearing sks glasses** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/_DSC3476.jpg) ![image 1](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/292068449_779660049832297_7554632901123311495_n_2875x.jpg) ![image 2](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/Screenshot 2022-09-28 101632.jpg) ![image 3](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/292471692_1200098353866646_8688611891608490893_n_2672x.jpg) ![image 4](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/291437103_575113617405080_4253713068724854490_n_3121x.jpg) ![image 5](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/strip1.jpg) ![image 6](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/Screenshot 2022-09-28 101717.jpg) ![image 7](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/20220910_182800-01.jpg) ![image 8](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/20220910_225712-02.jpg) ![image 9](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/292236552_1477604436022119_7495376372190185135_n_2749x.jpg) ![image 
10](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/293054543_1413890889119491_3885435733085354832_n_1284x.jpg) ![image 11](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/_DSC3613.jpg) ![image 12](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/gossamer-min.jpg) ![image 13](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/kidsnouns.jpg) ![image 14](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/GOPR0023-01.jpg) ![image 15](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/292316029_435557191830626_6362856498470202385_n_3004x.jpg) ![image 16](https://huggingface.co/sd-dreambooth-library/noggles-glasses-1200/resolve/main/concept_images/_DSC3466.jpg)
fee52744aeaed226d12b2a80c1c6e21c
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'hy', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/HY/NOIZY_STUDENT_4/ - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.1693 - Wer: 0.2373 - Cer: 0.0429
5de493b6ec2414525e8d3206935c54b2
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'hy', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 842 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 5000 - mixed_precision_training: Native AMP
045cf0b351b5c7011e20ca4a16f7d917
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'hy', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 1.255 | 7.24 | 500 | 0.2978 | 0.4294 | 0.0758 | | 1.0058 | 14.49 | 1000 | 0.1883 | 0.2838 | 0.0483 | | 0.9371 | 21.73 | 1500 | 0.1813 | 0.2627 | 0.0457 | | 0.8999 | 28.98 | 2000 | 0.1693 | 0.2373 | 0.0429 | | 0.8814 | 36.23 | 2500 | 0.1760 | 0.2420 | 0.0435 | | 0.8364 | 43.47 | 3000 | 0.1765 | 0.2416 | 0.0419 | | 0.8019 | 50.72 | 3500 | 0.1758 | 0.2311 | 0.0398 | | 0.7665 | 57.96 | 4000 | 0.1745 | 0.2240 | 0.0399 | | 0.7376 | 65.22 | 4500 | 0.1717 | 0.2190 | 0.0385 | | 0.716 | 72.46 | 5000 | 0.1700 | 0.2147 | 0.0382 |
046b22af9f1df42e3644fc7db1db91eb
apache-2.0
['generated_from_trainer']
false
tiny-mlm-imdb This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5540
355bd13645f2cc96cab8b18465af3f88
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.2358 | 0.16 | 500 | 3.8225 | | 4.1206 | 0.32 | 1000 | 3.7793 | | 4.0857 | 0.48 | 1500 | 3.7520 | | 4.0699 | 0.64 | 2000 | 3.7277 | | 4.0378 | 0.8 | 2500 | 3.7125 | | 4.0191 | 0.96 | 3000 | 3.7019 | | 3.9747 | 1.12 | 3500 | 3.6871 | | 3.9647 | 1.28 | 4000 | 3.6735 | | 3.956 | 1.44 | 4500 | 3.6773 | | 3.9574 | 1.6 | 5000 | 3.6580 | | 3.9408 | 1.76 | 5500 | 3.6435 | | 3.9421 | 1.92 | 6000 | 3.6419 | | 3.9265 | 2.08 | 6500 | 3.6343 | | 3.9198 | 2.24 | 7000 | 3.6306 | | 3.9205 | 2.4 | 7500 | 3.6198 | | 3.8985 | 2.56 | 8000 | 3.6158 | | 3.9167 | 2.72 | 8500 | 3.6091 | | 3.9111 | 2.88 | 9000 | 3.6073 | | 3.8882 | 3.04 | 9500 | 3.5922 | | 3.8761 | 3.2 | 10000 | 3.5908 | | 3.8603 | 3.36 | 10500 | 3.5841 | | 3.8621 | 3.52 | 11000 | 3.5835 | | 3.8332 | 3.68 | 11500 | 3.5883 | | 3.8523 | 3.84 | 12000 | 3.5798 | | 3.8449 | 4.0 | 12500 | 3.5771 | | 3.8284 | 4.16 | 13000 | 3.5653 | | 3.8253 | 4.32 | 13500 | 3.5701 | | 3.8021 | 4.48 | 14000 | 3.5681 | | 3.8316 | 4.64 | 14500 | 3.5537 | | 3.8318 | 4.8 | 15000 | 3.5609 | | 3.82 | 4.96 | 15500 | 3.5579 | | 3.8094 | 5.12 | 16000 | 3.5540 |
7362d50fd4c33f01c2e9b0ad8ffbf6ce
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-base-squad-qg-no-paragraph` This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). This model is fine-tuned without paragraph information, using only the sentence that contains the answer.
68d5ece1298648cf3ca1a7dad0737e0a
cc-by-4.0
['question generation']
false
model prediction (the `TransformersQG` setup lines, missing from the original snippet, are restored here following the standard `lmqg` usage) ```python from lmqg import TransformersQG model = TransformersQG(model="research-backup/bart-base-squad-qg-no-paragraph") questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "research-backup/bart-base-squad-qg-no-paragraph") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
1d0ea686c5a9c76e02172a887dcb88dc
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-squad-qg-no-paragraph/raw/main/eval/metric.first.sentence.sentence_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 90.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 55.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 39.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 30.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 23.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 25.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 63.85 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 51.43 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
286778d96b7d154b61e90b258beec858
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['sentence_answer'] - output_types: ['question'] - prefix_types: None - model: facebook/bart-base - max_length: 128 - max_length_output: 32 - epoch: 3 - batch: 64 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-squad-qg-no-paragraph/raw/main/trainer_config.json).
524510572cd39d1a8045289745fdef23
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_unispeech-sat_s459 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
53ef484e01ce2eb4920b362a00b74807
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.9728 | 0.19 | 500 | 8.6854 | | 8.7387 | 0.39 | 1000 | 8.7712 | | 8.6739 | 0.58 | 1500 | 8.7362 | | 8.786 | 0.77 | 2000 | 8.7816 | | 8.6918 | 0.97 | 2500 | 8.6802 | | 8.595 | 1.16 | 3000 | 8.7086 | | 8.5342 | 1.36 | 3500 | 8.6558 | | 8.6484 | 1.55 | 4000 | 8.7442 | | 8.5594 | 1.74 | 4500 | 8.7238 | | 8.4791 | 1.94 | 5000 | 8.7073 | | 8.4489 | 2.13 | 5500 | 8.6470 | | 8.42 | 2.32 | 6000 | 8.7016 | | 8.4389 | 2.52 | 6500 | 8.6039 | | 8.5176 | 2.71 | 7000 | 8.6179 | | 8.5392 | 2.9 | 7500 | 8.6394 |
0295613f741e216ea1c1f2885182583f
apache-2.0
['generated_from_keras_callback']
false
tomthekkan/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.1138 - Validation Loss: 3.3816 - Epoch: 7
5076e9b10075d1c8fb9dbd65bd305485
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.9822 | 4.2802 | 0 | | 5.9654 | 3.7811 | 1 | | 5.2343 | 3.6557 | 2 | | 4.8190 | 3.5433 | 3 | | 4.5149 | 3.4695 | 4 | | 4.3105 | 3.4202 | 5 | | 4.1907 | 3.3909 | 6 | | 4.1138 | 3.3816 | 7 |
f1ac31996c8314c1e6bc8f599aa66289
creativeml-openrail-m
['text-to-image']
false
Open Potion Bottle v2 Dreambooth model trained by [piEsposito](https://twitter.com/piesposi_to) with open weights, configs and prompts (as it should be) - Concept: `potionbottle` You can run this concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
6d7bc6b29fc269169a3c0c98282d63d5
creativeml-openrail-m
['text-to-image']
false
Usage examples with `potionbottle` - Prompt: fantasy dragon inside a potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation - CFG Scale: 10 - Scheduler: `diffusers.EulerAncestralDiscreteScheduler` - Steps: 30 <img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/pottionbottle_1.png" width=512/> - Prompt: potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation - CFG Scale: 10 - Scheduler: `diffusers.EulerAncestralDiscreteScheduler` - Steps: 30 <img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/potionbottle_2.png" width=512/> - Prompt: green potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation - CFG Scale: 10 - Scheduler: `diffusers.EulerAncestralDiscreteScheduler` - Steps: 30 <img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/potionbottle_3.png" width=512/> - Prompt: spiral galaxy inside a potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation - CFG Scale: 10 - Scheduler: `diffusers.EulerAncestralDiscreteScheduler` - Steps: 30 <img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/potionbottle_4.png" width=512/> - Prompt: lightning storm inside a potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation - CFG Scale: 10 - Scheduler: `diffusers.EulerAncestralDiscreteScheduler` - Steps: 30 <img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/pottionbottle_5.png" width=512/> - Prompt: pomeranian as a potionbottle, perfectly ornated, intricate details, 3d render vray, uhd, beautiful, trending on artstation - CFG Scale: 10 - Scheduler: `diffusers.EulerAncestralDiscreteScheduler` - Steps: 30 <img 
src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/potionbottle_6.png" width=512/> - Prompt: milkshake as potionbottle, perfectly ornated, intricate details, 3d render vray, beautiful, trending on artstation - CFG Scale: 10 - Scheduler: `diffusers.EulerAncestralDiscreteScheduler` - Steps: 30 <img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/pottionbottle_7.png" width=512/> - Prompt: a square potionbottle full of fire. Art by smoose2. Caustic reflections, shadows - CFG Scale: 10 - Scheduler: `diffusers.EulerAncestralDiscreteScheduler` - Steps: 30 <img src="https://huggingface.co/piEsposito/openpotionbottle-v2/resolve/main/concept_images/pottionbottle_8.png" width=512/>
aeac67eb67c80bfa44765094641712f6
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-sst2-target-glue-cola This model is a fine-tuned version of [muhtasham/small-mlm-glue-sst2](https://huggingface.co/muhtasham/small-mlm-glue-sst2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5598 - Matthews Correlation: 0.3885
48d3b16e50083d934c9ec524c56f6aa5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5397 | 1.87 | 500 | 0.6364 | 0.2396 | | 0.3514 | 3.73 | 1000 | 0.7722 | 0.3110 | | 0.2254 | 5.6 | 1500 | 0.8466 | 0.3528 | | 0.1675 | 7.46 | 2000 | 0.9693 | 0.3824 | | 0.1238 | 9.33 | 2500 | 1.1907 | 0.3798 | | 0.1043 | 11.19 | 3000 | 1.2831 | 0.4028 | | 0.0934 | 13.06 | 3500 | 1.3186 | 0.3478 | | 0.0807 | 14.93 | 4000 | 1.3018 | 0.4120 | | 0.0616 | 16.79 | 4500 | 1.4735 | 0.3913 | | 0.0626 | 18.66 | 5000 | 1.5598 | 0.3885 |
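The Matthews correlation values in this table are computed from the binary confusion counts of the CoLA predictions. A standalone sketch of the metric itself (an illustration, not the evaluation code used for this run):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion counts.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(matthews_corrcoef(50, 50, 0, 0))    # perfect classifier -> 1.0
print(matthews_corrcoef(25, 25, 25, 25))  # chance-level -> 0.0
```

Unlike plain accuracy, MCC stays near 0 for a classifier that ignores the minority class, which is why it is the standard metric for the imbalanced CoLA task.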
6151dec8e706a477d5ee7d6b37c54ec6
apache-2.0
['translation', 'generated_from_trainer']
false
marian-finetuned-kde4-en-to-vi-190322 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on the mt_eng_vietnamese dataset. It achieves the following results on the evaluation set: - Loss: 1.2652 - Bleu: 37.2837
ff2d198eef69d481b7820691fbcbd679
apache-2.0
[]
false
Model description **CAMeLBERT-CA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model. For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
7b311b230fd4442ab233e4d9db0ae678
apache-2.0
[]
false
How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf') >>> text = 'شلونك ؟ شخبارك ؟' >>> pos(text) [{'entity': 'noun', 'score': 0.99572617, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'noun', 'score': 0.9411187, 'index': 2, 'word': '
15e515217a0f8d8723a859105f9c4b6b
apache-2.0
[]
false
ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999661, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.99286526, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9983397, 'index': 5, 'word': '
ff944d6f8bc9300f3446ee85aeafa74e
apache-2.0
[]
false
ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999668, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
7eb509f0bf62ccdf749972b17b2ede94
mit
['generated_from_trainer']
false
amh_xlmr This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1295
66fadbc319d937a305fa5131bd627697
apache-2.0
['generated_from_trainer']
false
wav2vec2_imtiaz This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the cvbn dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1956 - eval_wer: 0.2202 - eval_runtime: 574.912 - eval_samples_per_second: 8.697 - eval_steps_per_second: 0.544 - epoch: 9.41 - step: 22000
a66dae6dcca5b441070eb34dda38887d
apache-2.0
['generated_from_trainer', 'text-generation', 'opt', 'non-commercial', 'dialogue', 'chatbot']
false
pszemraj/opt-peter-1.3B This model is a fine-tuned version of [pszemraj/opt-peter-1.3B-1E](https://huggingface.co/pszemraj/opt-peter-1.3B-1E) on 80k Whatsapp/iMessages (mine). It achieves the following results on the evaluation set, after training for 1 epoch (_on top of the 1E checkpoint linked above_): - eval_loss: 3.4220 - eval_runtime: 954.9678 - eval_samples_per_second: 9.114 - eval_steps_per_second: 2.279 - epoch: 1.0 - step: 1235
09731b72dc5f5a82436cdb8ffbe3eea3
apache-2.0
['generated_from_trainer', 'text-generation', 'opt', 'non-commercial', 'dialogue', 'chatbot']
false
Intended uses & limitations - OPT has a license that does not allow for commercial use, see original for details - **any statements or claims made by this model do not reflect actual claims/statements by me**
9090f042ce852fe25dcacd479268ff14
apache-2.0
['generated_from_trainer', 'text-generation', 'opt', 'non-commercial', 'dialogue', 'chatbot']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 2
541aeef70eb367857bc275a68051fcc4
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/sentence-t5-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model works well for sentence similarity tasks, but doesn't perform that well for semantic search tasks. This model was converted from the Tensorflow model [st5-base-1](https://tfhub.dev/google/sentence-t5/st5-base/1) to PyTorch. When using this model, have a look at the publication: [Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877). The tfhub model and this PyTorch model can produce slightly different embeddings, however, when run on the same benchmarks, they produce identical results. The model uses only the encoder from a T5-base model. The weights are stored in FP16.
c4bd6adb311e8de2453e1e383506352f
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/sentence-t5-base') embeddings = model.encode(sentences) print(embeddings) ``` The model requires sentence-transformers version 2.2.0 or newer.
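Once sentences are encoded, similarity is typically scored with cosine similarity over the returned vectors. A small self-contained sketch, using toy 3-dimensional vectors in place of the model's 768-dimensional embeddings (the function name is illustrative; `sentence-transformers` also ships its own similarity utilities):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for model.encode(...) outputs:
e1 = np.array([1.0, 0.0, 1.0])
e2 = np.array([1.0, 0.0, 1.0])
e3 = np.array([0.0, 1.0, 0.0])
print(cosine_similarity(e1, e2))  # same direction, score near 1.0
print(cosine_similarity(e1, e3))  # orthogonal, score near 0.0
```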
310e4a1ac5f326f987515a55781d509b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/sentence-t5-base)
eb9958cd1606e2fecf20c67a79d77eb7
apache-2.0
['generated_from_trainer']
false
beit-base-patch16-224-pt22k-ft22k This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1433 - Accuracy: 0.3333
f9590bf513eed63f73c952431128dc94
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.67 | 1 | 1.5398 | 0.1667 | | No log | 1.67 | 2 | 1.1394 | 0.5556 | | No log | 2.67 | 3 | 1.1433 | 0.3333 |
14bc42bafedb0c1799a1b6cfe46d3f02
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-base-subjqa-vanilla-books-qg` This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) dataset (dataset_name: books) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
a3cf10c37365f8ce97299f4796788f5e
cc-by-4.0
['question generation']
false
Overview - **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (books) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
74fecb9ed9edb4a67a09fbb9fc1d95e2
cc-by-4.0
['question generation']
false
model prediction (the `TransformersQG` setup lines, missing from the original snippet, are restored here following the standard `lmqg` usage) ```python from lmqg import TransformersQG model = TransformersQG(model="research-backup/bart-base-subjqa-vanilla-books-qg") questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "research-backup/bart-base-subjqa-vanilla-books-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
d726d399a16a04993f4fb5da68596024
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-subjqa-vanilla-books-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:-----------------------------------------------------------------| | BERTScore | 84.11 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 3.75 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 1.84 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 0.52 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 0 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 11.37 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 52.79 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 8.31 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
aaf64a5676a4b4ab5f9d49ec1c9abc72
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: books - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: facebook/bart-base - max_length: 512 - max_length_output: 32 - epoch: 1 - batch: 8 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 16 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-subjqa-vanilla-books-qg/raw/main/trainer_config.json).
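The `label_smoothing: 0.15` entry above means the target distribution is softened: probability mass is moved from the gold token onto the remaining classes. A minimal sketch of one common formulation (the exact implementation inside `lmqg`/`transformers` may differ in detail):

```python
def smooth_one_hot(target_index: int, num_classes: int, smoothing: float):
    """Spread `smoothing` mass uniformly over the non-target classes,
    leaving 1 - smoothing on the gold class."""
    off = smoothing / (num_classes - 1)
    return [1.0 - smoothing if i == target_index else off for i in range(num_classes)]

dist = smooth_one_hot(2, 4, 0.15)
print(dist)       # roughly [0.05, 0.05, 0.85, 0.05]
print(sum(dist))  # still sums to 1.0, so it remains a valid distribution
```

Smoothing discourages the model from becoming over-confident on single tokens, which tends to help generation quality on small fine-tuning sets like SubjQA.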
701f5625c60807a280b4f38003f4ea36
creativeml-openrail-m
['text-to-image']
false
model by kingery This is the Stable Diffusion model fine-tuned on the hyc_01_sdv1-5_2e_6_1500_man_ddim concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of yangguangkechuang man** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/kingery/hyc-01-sdv1-5-2e-6-1500-man-ddim/resolve/main/concept_images/002uNeqWgy1gvporcu8skj60u0140n0s02.jpeg) ![image 1](https://huggingface.co/kingery/hyc-01-sdv1-5-2e-6-1500-man-ddim/resolve/main/concept_images/885543d6gy1h8eb60qmthj219d0u0ak6.jpg) ![image 2](https://huggingface.co/kingery/hyc-01-sdv1-5-2e-6-1500-man-ddim/resolve/main/concept_images/885543d6gy1gydqr9swtgj20u011iad6.jpeg) ![image 3](https://huggingface.co/kingery/hyc-01-sdv1-5-2e-6-1500-man-ddim/resolve/main/concept_images/885543d6gy1gydqr81sdej20u011ijwn.jpeg) ![image 4](https://huggingface.co/kingery/hyc-01-sdv1-5-2e-6-1500-man-ddim/resolve/main/concept_images/885543d6gy1h10eucftkmj20w60u010t.jpeg) ![image 5](https://huggingface.co/kingery/hyc-01-sdv1-5-2e-6-1500-man-ddim/resolve/main/concept_images/885543d6gy1h8h77dm4kvj20u0140k1a.jpg)
81d1197697be5e1acc6a25da9c66b657
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Demo: How to use in ESPnet2

Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already.

```bash
cd espnet
git checkout 8ee35df7260008e9a8a20d9a9b64773a02f706ef
pip install -e .
cd egs2/tedlium2/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_conformer_e15
```

<!-- Generated by scripts/utils/show_asr_result.sh -->
330af2749e6bfa4c1ad15f090c78595b
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
Environments
- date: `Sat Dec 17 04:27:41 CST 2022`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202209`
- pytorch version: `pytorch 1.12.1`
- Git hash: `26f432bc859e5e40cac1a86042d498ba7baffbb0`
- Commit date: `Fri Dec 9 02:16:01 2022 +0000`
225b98f704784bbcb27c20ca0aaf5ee9
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev|466|14671|93.5|4.1|2.5|1.0|7.5|70.0|
|decode_asr_asr_model_valid.acc.ave/test|1155|27500|93.4|4.0|2.6|1.0|7.6|64.2|
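In these tables, Err is (up to rounding) the sum of the Sub, Del, and Ins rates relative to the reference word count Wrd, and S.Err is the fraction of sentences containing at least one error. A rough pure-Python sketch of the underlying word error rate computation via word-level edit distance (illustrative only; ESPnet's actual scoring scripts differ):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with standard Levenshtein dynamic programming over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One insertion against a three-word reference.
wer = word_error_rate("the cat sat", "the cat sat down")
```

The character (CER) and token (TER) variants below apply the same edit-distance idea at character and BPE-token granularity.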
6e4ff7fe0c27f5de106d75e6beecd2c7
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev|466|78259|97.0|0.8|2.1|0.8|3.8|70.0|
|decode_asr_asr_model_valid.acc.ave/test|1155|145066|97.0|0.9|2.2|0.9|4.0|64.2|
703e8f12bb23d182a84f55ee69259731
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
TER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev|466|28296|95.0|2.8|2.2|0.8|5.9|70.0|
|decode_asr_asr_model_valid.acc.ave/test|1155|52113|95.1|2.5|2.4|0.9|5.8|64.2|
5e7f71621e43085f3e09057c162c7ce1
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
ASR config

<details><summary>expand</summary>

```
config: conf/tuning/train_asr_conformer_e15.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_e15_raw_en_bpe500_sp
ngpu: 1
seed: 2022
num_workers: 6
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 59747
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
-   - valid
    - acc
    - max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 50000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
-   - dump/raw/train_sp/wav.scp
    - speech
    - kaldi_ark
-   - dump/raw/train_sp/text
    - text
    - text
valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - s - ▁the - t - ▁a - ▁and - ▁to - d - e - ▁of - '''' - n - ing - ▁in - ▁i - ▁that - i - a - l - p - m - y - o - ▁it - ▁we - c - u - ▁you - ed - ▁ - r - ▁is - re - ▁this - ar - g - ▁so - al - b - ▁s - or - ▁f - ▁c - in - k - f - ▁for - ic - er - le - ▁be - ▁do - ▁re - ve - ▁e - ▁w - ▁was - es - ▁they - ly - h - ▁on - v - ▁are - ri - ▁have - an - ▁what - ▁with - ▁t - w - ur - it - ent - ▁can - ▁he - ▁but - ra - ce - ▁me - ▁b - ▁ma - ▁p - ll - ▁st - ▁one - 'on' - ▁about - th - ▁de - en - ▁all - ▁not - il - ▁g - ch - at - ▁there - ▁mo - ter - ation - tion - ▁at - ▁my - ro - ▁as - te - ▁le - ▁con - ▁like - ▁people - ▁or - ▁an - el - ▁if - ▁from - ver - ▁su - ▁co - ate - ▁these - ol - ci - ▁now - ▁see - ▁out - ▁our - ion - ▁know - ect - ▁just - as - ▁ex - ▁ch - ▁d - ▁when - ▁very - ▁think - ▁who - ▁because - ▁go - ▁up - ▁us - ▁pa - ▁no - ies - ▁di - ▁ho - om - ive - ▁get - id - ▁o - ▁hi - un - ▁how - ▁by - ir - et - ck - ity - ▁po - ul - ▁which - ▁mi - ▁some - z - ▁sp - ▁un - ▁going - ▁pro - ist - ▁se - ▁look - ▁time - ment - de - ▁more - ▁had - ng - ▁would - ge - la - ▁here - ▁really - x - ▁your - ▁them - us - me - ▁en - ▁two - ▁k - ▁li - ▁world - ne - ow - ▁way - ▁want - ▁work - ▁don - ▁lo - ▁fa - ▁were - ▁their - age - vi - ▁ha - ac - der - est - ▁bo - am - ▁other - able - ▁actually - ▁sh - ▁make - ▁ba - ▁la - ine - ▁into - ▁where - ▁could - ▁comp - ting - ▁has - ▁will - ▁ne - j - ical - ally - ▁vi - ▁things - ▁te - igh - ▁say - ▁years - ers - ▁ra - ther - ▁than - ru - ▁ro - op - ▁did - ▁any - ▁new - ound - ig - ▁well - mo - ▁she - ▁na - ▁been - he - ▁thousand - ▁car - ▁take - ▁right - ▁then - ▁need - ▁start - ▁hundred 
- ▁something - ▁over - ▁com - ia - ▁kind - um - if - ▁those - ▁first - ▁pre - ta - ▁said - ize - end - ▁even - ▁thing - one - ▁back - ite - ▁every - ▁little - ry - ▁life - ▁much - ke - ▁also - ▁most - ant - per - ▁three - ▁come - ▁lot - ance - ▁got - ▁talk - ▁per - ▁inter - ▁sa - ▁use - ▁mu - ▁part - ish - ence - ▁happen - ▁bi - ▁mean - ough - ▁qu - ▁bu - ▁day - ▁ga - ▁only - ▁many - ▁different - ▁dr - ▁th - ▁show - ful - ▁down - ated - ▁good - ▁tra - ▁around - ▁idea - ▁human - ous - ▁put - ▁through - ▁five - ▁why - ▁change - ▁real - ff - ible - ▁fact - ▁same - ▁jo - ▁live - ▁year - ▁problem - ▁ph - ▁four - ▁give - ▁big - ▁tell - ▁great - ▁try - ▁va - ▁ru - ▁system - ▁six - ▁plan - ▁place - ▁build - ▁called - ▁again - ▁point - ▁twenty - ▁percent - ▁nine - ▁find - ▁app - ▁after - ▁long - ▁eight - ▁imp - ▁gene - ▁design - ▁today - ▁should - ▁made - ious - ▁came - ▁learn - ▁last - ▁own - way - ▁turn - ▁seven - ▁high - ▁question - ▁person - ▁brain - ▁important - ▁another - ▁thought - ▁trans - ▁create - ness - ▁hu - ▁power - ▁act - land - ▁play - ▁sort - ▁old - ▁before - ▁course - ▁understand - ▁feel - ▁might - ▁each - ▁million - ▁better - ▁together - ▁ago - ▁example - ▁help - ▁story - ▁next - ▁hand - ▁school - ▁water - ▁develop - ▁technology - que - ▁second - ▁grow - ▁still - ▁cell - ▁believe - ▁number - ▁small - ▁between - qui - ▁data - ▁become - ▁america - ▁maybe - ▁space - ▁project - ▁organ - ▁vo - ▁children - ▁book - graph - ▁open - ▁fifty - ▁picture - ▁health - ▁thirty - ▁africa - ▁reason - ▁large - ▁hard - ▁computer - ▁always - ▁sense - ▁money - ▁women - ▁everything - ▁information - ▁country - ▁teach - ▁energy - ▁experience - ▁food - ▁process - qua - ▁interesting - ▁future - ▁science - q - '0' - '5' - '6' - '9' - '3' - '8' - '4' - N - A - '7' - S - G - F - R - L - U - E - T - H - _ - B - D - J - M - ă - ō - ť - '2' - '-' - '1' - C - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null 
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
    n_fft: 512
    win_length: 400
    hop_length: 160
    fs: 16k
specaug: specaug
specaug_conf:
    apply_time_warp: true
    time_warp_window: 5
    time_warp_mode: bicubic
    apply_freq_mask: true
    freq_mask_width_range:
    - 0
    - 27
    num_freq_mask: 2
    apply_time_mask: true
    time_mask_width_ratio_range:
    - 0.0
    - 0.05
    num_time_mask: 5
normalize: global_mvn
normalize_conf:
    stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz
model: espnet
model_conf:
    ctc_weight: 0.3
    lsm_weight: 0.1
    length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
    output_size: 256
    attention_heads: 4
    linear_units: 1024
    num_blocks: 15
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.1
    input_layer: conv2d
    normalize_before: true
    macaron_style: true
    rel_pos_type: latest
    pos_enc_layer_type: rel_pos
    selfattention_layer_type: rel_selfattn
    activation_type: swish
    use_cnn_module: true
    cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
    attention_heads: 4
    linear_units: 2048
    num_blocks: 6
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    self_attention_dropout_rate: 0.1
    src_attention_dropout_rate: 0.1
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: true
```

</details>
f3af97f757f0dfe9d765b342472b55fc
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-1']
false
MultiBERTs Seed 1 Checkpoint 80k (uncased)

Seed 1 intermediate checkpoint at 80k steps: a MultiBERTs (pretrained BERT) model on English language, trained with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
bb198938bb29e0633a9139ea9973be00
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-1']
false
How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-80k')
model = BertModel.from_pretrained("multiberts-seed-1-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
c3c5e98c7e59ae15b4ee12d136676073
mit
['fill-mask', 'alloys', 'metallurgy']
false
Abstract: Alloy property prediction is a task in the subfield of alloy materials science to which machine learning has been applied extensively. It is modeled as a supervised task in which an alloy composition is provided and the model predicts a desired property. Tasks such as *alloy property prediction* and alloy synthesis can additionally be made more efficient with an unsupervised pre-training task. We describe the idea of pre-training on alloy compositions with a language-modelling-style approach. We specifically show that the random masking strategy proposed in prior work is not suitable for modelling alloys, and we propose two masking strategies used to train GlassBERTa to capture the properties of an alloy composition. The results suggest that pre-training is an important direction for further improvement in this field of research.
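The proposed masking strategies themselves are not detailed in this abstract. As a purely hypothetical illustration of why masking granularity matters for alloys, the sketch below masks a whole element–fraction unit of a composition (e.g. `Cu50`) instead of an arbitrary subtoken; the composition format and the `mask_element_unit` helper are assumptions for illustration, not the paper's method:

```python
import random

def mask_element_unit(composition, rng):
    """Hypothetical masking: hide one whole element+fraction unit
    (e.g. 'Cu50') rather than a random character, so the model must
    recover a chemically meaningful token.

    composition: list of (element, atomic_percent) pairs.
    Returns the masked string and the hidden unit as the MLM target.
    """
    idx = rng.randrange(len(composition))
    masked = ["[MASK]" if i == idx else f"{el}{frac}"
              for i, (el, frac) in enumerate(composition)]
    return " ".join(masked), composition[idx]

rng = random.Random(0)
masked, answer = mask_element_unit([("Cu", 50), ("Zr", 40), ("Al", 10)], rng)
```

Character-level random masking, by contrast, could split `Cu50` into fragments that have no chemical meaning on their own.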
8a88c225c0dc98765e251b79bc963b35
mit
['fill-mask', 'alloys', 'metallurgy']
false
Footnote: Work done via the [MLDMM Lab](https://sites.google.com/view/mldmm-lab/home)

![alt text](https://lh4.googleusercontent.com/4L1C4_7ZBScAs9TIlkbyfjlotpnlnA4w22PLJXDWrYzh434Cu8RBhExvfBNdV8roOSb_k3WsM6MQHxv0zErcUhg=w16383 "Machine Learning for Design of Mechanical Materials Lab")
3c5fd6ee7a26930339e3ae02ece22b75
apache-2.0
['generated_from_trainer']
false
wav2vec2-arabic-gpu-colab-similar-to-german-bigger-warm-up

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6370
- Wer: 0.4146
bfe423418e7093dbabd2d8d3aad18352
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 40
- mixed_precision_training: Native AMP
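Note that total_train_batch_size = train_batch_size × gradient_accumulation_steps = 2 × 6 = 12. With a linear scheduler and 5000 warmup steps, the learning rate ramps from 0 up to 1e-4 and then decays linearly back to 0. A rough pure-Python sketch of that schedule (the total step count of ~5600 is an assumption taken from the training log; the trainer's exact implementation may differ at the boundaries):

```python
def linear_warmup_lr(step, base_lr=1e-4, warmup_steps=5000, total_steps=5600):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0, total_steps - step) / (total_steps - warmup_steps)

peak = linear_warmup_lr(5000)  # end of warmup: the full learning rate
```

This long warmup means most of the run happens at an increasing learning rate, which matches the slow WER improvement in the early epochs of the results table.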
7f99cacb65c656e87b0497694323eec0
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.4958        | 2.83  | 400  | 3.4822          | 1.0    |
| 3.2281        | 5.67  | 800  | 2.9404          | 1.0    |
| 2.942         | 8.51  | 1200 | 2.8690          | 1.0    |
| 2.6346        | 11.35 | 1600 | 1.5452          | 0.9994 |
| 1.3472        | 14.18 | 2000 | 0.8261          | 0.6853 |
| 0.8972        | 17.02 | 2400 | 0.6812          | 0.5737 |
| 0.6924        | 19.85 | 2800 | 0.6552          | 0.5291 |
| 0.5687        | 22.69 | 3200 | 0.6108          | 0.4909 |
| 0.4734        | 25.53 | 3600 | 0.5877          | 0.4674 |
| 0.4029        | 28.37 | 4000 | 0.6204          | 0.4662 |
| 0.3483        | 31.2  | 4400 | 0.5932          | 0.4451 |
| 0.307         | 34.04 | 4800 | 0.6445          | 0.4392 |
| 0.2722        | 36.88 | 5200 | 0.6126          | 0.4292 |
| 0.2247        | 39.71 | 5600 | 0.6370          | 0.4146 |
2d5730de9a654dd01bea606f3686e4bd
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/distilbert-base-nli-stsb-quora-ranking

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
a5f2e996e3da0f475441da86f88e3701
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/distilbert-base-nli-stsb-quora-ranking')
embeddings = model.encode(sentences)
print(embeddings)
```
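Since this checkpoint was tuned for Quora duplicate-question ranking, a typical next step is scoring candidate questions against a query by cosine similarity over the embeddings. A plain-Python sketch of that scoring step, using toy vectors in place of `model.encode(...)` output (with sentence-transformers installed, `util.cos_sim` does the same job on real embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" standing in for model.encode(...) output.
query = [0.1, 0.9, 0.2]
candidates = {"dup": [0.1, 0.8, 0.3], "other": [0.9, 0.1, 0.0]}
ranked = sorted(candidates, key=lambda k: cosine(query, candidates[k]),
                reverse=True)  # most similar candidate first
```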
cbb6f1ec24a311bcb0c5499baaccb1e4
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Load model from HuggingFace Hub

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/distilbert-base-nli-stsb-quora-ranking')
model = AutoModel.from_pretrained('sentence-transformers/distilbert-base-nli-stsb-quora-ranking')
```
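When loading the model through the plain transformers API, the per-token outputs still need to be pooled into a single sentence vector; sentence-transformers models of this family use attention-mask-aware mean pooling. The idea in plain Python, on toy lists standing in for tensors (the real card's version operates on torch tensors):

```python
def mean_pooling(token_embeddings, attention_mask):
    """Average token vectors, counting only non-padding positions.

    token_embeddings: list of per-token vectors (lists of floats);
    attention_mask: matching list of 0/1 ints (0 marks padding).
    """
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            total = [t + v for t, v in zip(total, vec)]
            count += 1
    return [t / count for t in total]

# Two real tokens plus one padding token; the pad vector is ignored.
sentence_vec = mean_pooling([[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]], [1, 1, 0])
```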
938f5e4e604916d0a53b916dc13e5ee1
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distilbert-base-nli-stsb-quora-ranking)
3e98b8c5a474f7cc3e20715663a7dde8
mit
[]
false
Retro-Girl on Stable Diffusion

This is the `<retro-girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:
![<retro-girl> 0](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/0.jpeg)
![<retro-girl> 1](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/3.jpeg)
![<retro-girl> 2](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/1.jpeg)
![<retro-girl> 3](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/2.jpeg)
![<retro-girl> 4](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/4.jpeg)
37392625606e6acb5f812a49f7bb78c4
mit
['stable-diffusion', 'text-to-image']
false
Usage

To use this model, download the .ckpt file and drop it into the "\stable-diffusion-webui\models\Stable-diffusion" folder.

To use it in a prompt: ```"Rebecca girl"``` for highest strength, or just "Rebecca".
To increase the strength, put "Rebecca girl" in () brackets.
To decrease the strength, put "Rebecca girl" in [] brackets.

Trained on the Waifu Diffusion base model to 3,500 steps.

Have fun :)
c2efde139b9ccdded48dbf147fce7c27