Dataset schema:
- license: string (2–30 chars)
- tags: string (2–513 chars)
- is_nc: bool (1 class)
- readme_section: string (201–597k chars)
- hash: string (32 chars)
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.6013 | 4.2024 | 0 |
| 5.8556 | 3.7335 | 1 |
| 5.0930 | 3.5494 | 2 |
| 4.6610 | 3.4502 | 3 |
| 4.3874 | 3.4030 | 4 |
| 4.2103 | 3.3568 | 5 |
| 4.0930 | 3.3311 | 6 |
| 4.0061 | 3.3257 | 7 |
b84657b6e8e4959901c97ebc22bf0a98
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-google-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4659
- Wer: 0.3080
20801c327db9adc3cdc1747f21b8ca73
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5787 | 0.87 | 500 | 1.7648 | 1.0305 |
| 0.8692 | 1.73 | 1000 | 0.5136 | 0.5103 |
| 0.4346 | 2.6 | 1500 | 0.4364 | 0.4515 |
| 0.31 | 3.46 | 2000 | 0.3889 | 0.4070 |
| 0.234 | 4.33 | 2500 | 0.4161 | 0.3863 |
| 0.2054 | 5.19 | 3000 | 0.3845 | 0.3722 |
| 0.165 | 6.06 | 3500 | 0.4035 | 0.3643 |
| 0.1436 | 6.92 | 4000 | 0.4090 | 0.3623 |
| 0.1381 | 7.79 | 4500 | 0.4007 | 0.3673 |
| 0.1175 | 8.65 | 5000 | 0.4588 | 0.3632 |
| 0.1052 | 9.52 | 5500 | 0.4441 | 0.3588 |
| 0.0988 | 10.38 | 6000 | 0.4133 | 0.3489 |
| 0.0877 | 11.25 | 6500 | 0.4758 | 0.3510 |
| 0.0856 | 12.11 | 7000 | 0.4454 | 0.3425 |
| 0.0731 | 12.98 | 7500 | 0.4252 | 0.3351 |
| 0.0712 | 13.84 | 8000 | 0.4163 | 0.3370 |
| 0.0711 | 14.71 | 8500 | 0.4166 | 0.3367 |
| 0.06 | 15.57 | 9000 | 0.4195 | 0.3347 |
| 0.0588 | 16.44 | 9500 | 0.4697 | 0.3367 |
| 0.0497 | 17.3 | 10000 | 0.4255 | 0.3314 |
| 0.0523 | 18.17 | 10500 | 0.4676 | 0.3307 |
| 0.0444 | 19.03 | 11000 | 0.4570 | 0.3244 |
| 0.0435 | 19.9 | 11500 | 0.4307 | 0.3243 |
| 0.0348 | 20.76 | 12000 | 0.4763 | 0.3245 |
| 0.036 | 21.63 | 12500 | 0.4635 | 0.3238 |
| 0.0347 | 22.49 | 13000 | 0.4602 | 0.3212 |
| 0.0333 | 23.36 | 13500 | 0.4472 | 0.3195 |
| 0.0311 | 24.22 | 14000 | 0.4449 | 0.3183 |
| 0.0294 | 25.09 | 14500 | 0.4631 | 0.3175 |
| 0.025 | 25.95 | 15000 | 0.4466 | 0.3164 |
| 0.023 | 26.82 | 15500 | 0.4581 | 0.3138 |
| 0.0216 | 27.68 | 16000 | 0.4665 | 0.3114 |
| 0.0198 | 28.55 | 16500 | 0.4590 | 0.3092 |
| 0.0181 | 29.41 | 17000 | 0.4659 | 0.3080 |
2067decaffd4e441ba4236df13d263f3
apache-2.0
['generated_from_trainer']
false
mobilebert_add_GLUE_Experiment_sst2_128

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.4543
- Accuracy: 0.7982
56c1e7132163182d003c396121f97e3c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6677 | 1.0 | 527 | 0.6771 | 0.5757 |
| 0.5966 | 2.0 | 1054 | 0.7135 | 0.5424 |
| 0.5714 | 3.0 | 1581 | 0.7271 | 0.5550 |
| 0.5573 | 4.0 | 2108 | 0.6892 | 0.5619 |
| 0.501 | 5.0 | 2635 | 0.4546 | 0.7798 |
| 0.2856 | 6.0 | 3162 | 0.4613 | 0.8050 |
| 0.2288 | 7.0 | 3689 | 0.4543 | 0.7982 |
| 0.2027 | 8.0 | 4216 | 0.4662 | 0.7993 |
| 0.1883 | 9.0 | 4743 | 0.5168 | 0.8039 |
| 0.1779 | 10.0 | 5270 | 0.5748 | 0.7856 |
| 0.1691 | 11.0 | 5797 | 0.5196 | 0.8028 |
| 0.1596 | 12.0 | 6324 | 0.5943 | 0.7947 |
7993b8a7da6a6305638553e7aae60f92
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__subj__train-8-7

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2766
- Accuracy: 0.8845
c7ef5497f5c218e7e4321b2bbb6e15f5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7044 | 1.0 | 3 | 0.6909 | 0.5 |
| 0.6678 | 2.0 | 6 | 0.6901 | 0.5 |
| 0.6336 | 3.0 | 9 | 0.6807 | 0.5 |
| 0.5926 | 4.0 | 12 | 0.6726 | 0.5 |
| 0.5221 | 5.0 | 15 | 0.6648 | 0.5 |
| 0.4573 | 6.0 | 18 | 0.6470 | 0.5 |
| 0.4177 | 7.0 | 21 | 0.6251 | 0.5 |
| 0.3252 | 8.0 | 24 | 0.5994 | 0.5 |
| 0.2831 | 9.0 | 27 | 0.5529 | 0.5 |
| 0.213 | 10.0 | 30 | 0.5078 | 0.75 |
| 0.1808 | 11.0 | 33 | 0.4521 | 1.0 |
| 0.1355 | 12.0 | 36 | 0.3996 | 1.0 |
| 0.1027 | 13.0 | 39 | 0.3557 | 1.0 |
| 0.0862 | 14.0 | 42 | 0.3121 | 1.0 |
| 0.0682 | 15.0 | 45 | 0.2828 | 1.0 |
| 0.0517 | 16.0 | 48 | 0.2603 | 1.0 |
| 0.0466 | 17.0 | 51 | 0.2412 | 1.0 |
| 0.038 | 18.0 | 54 | 0.2241 | 1.0 |
| 0.0276 | 19.0 | 57 | 0.2096 | 1.0 |
| 0.0246 | 20.0 | 60 | 0.1969 | 1.0 |
| 0.0249 | 21.0 | 63 | 0.1859 | 1.0 |
| 0.0201 | 22.0 | 66 | 0.1770 | 1.0 |
| 0.018 | 23.0 | 69 | 0.1703 | 1.0 |
| 0.0164 | 24.0 | 72 | 0.1670 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.1639 | 1.0 |
| 0.0135 | 26.0 | 78 | 0.1604 | 1.0 |
| 0.014 | 27.0 | 81 | 0.1585 | 1.0 |
| 0.0108 | 28.0 | 84 | 0.1569 | 1.0 |
| 0.0116 | 29.0 | 87 | 0.1549 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.1532 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.1513 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1503 | 1.0 |
| 0.01 | 33.0 | 99 | 0.1490 | 1.0 |
| 0.0079 | 34.0 | 102 | 0.1479 | 1.0 |
| 0.0097 | 35.0 | 105 | 0.1466 | 1.0 |
| 0.0112 | 36.0 | 108 | 0.1458 | 1.0 |
| 0.0091 | 37.0 | 111 | 0.1457 | 1.0 |
| 0.0098 | 38.0 | 114 | 0.1454 | 1.0 |
| 0.0076 | 39.0 | 117 | 0.1451 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1448 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1445 | 1.0 |
| 0.0096 | 42.0 | 126 | 0.1440 | 1.0 |
| 0.0081 | 43.0 | 129 | 0.1430 | 1.0 |
| 0.0083 | 44.0 | 132 | 0.1424 | 1.0 |
| 0.0088 | 45.0 | 135 | 0.1418 | 1.0 |
| 0.0077 | 46.0 | 138 | 0.1414 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1413 | 1.0 |
| 0.0084 | 48.0 | 144 | 0.1412 | 1.0 |
| 0.0072 | 49.0 | 147 | 0.1411 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1411 | 1.0 |
a0aa3d5f0354a55d39f465a057e2a8a2
apache-2.0
['part-of-speech', 'token-classification']
false
XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Greek

This model is part of our paper:

- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages

Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
90ea8f90060e801ed106eea1a0535c9e
apache-2.0
['part-of-speech', 'token-classification']
false
Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-el")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-el")
```
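A minimal inference sketch to go with the snippet above, using the `token-classification` pipeline; the Greek example sentence is hypothetical:

```python
from transformers import pipeline

# Run the fine-tuned tagger end to end via the pipeline API.
tagger = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-el",
    aggregation_strategy="simple",  # merge subword pieces back into whole words
)
print(tagger("Αυτό είναι ένα παράδειγμα."))  # hypothetical example sentence
```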
f2a919e3f6ee372b6983edd22e7c2e48
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
wav2vec2-large-xls-r-300m-br-d2

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset. It achieves the following results on the evaluation set:
- Loss: 1.1257
- Wer: 0.4631
a7ed5d4710e245091b9bed890bdce717
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Evaluation Commands

1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split:

       python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs

2. To evaluate on speech-recognition-community-v2/dev_data: the Breton language isn't available in speech-recognition-community-v2/dev_data.
ec4aa26c838ad0c8ae41cb9cb4bccf4c
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00034
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 50
- mixed_precision_training: Native AMP
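The card does not include the training script itself; a minimal sketch of how the settings above would map onto 🤗 `TrainingArguments`, assuming the standard `Trainer` API:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above;
# the actual script used for this run is not part of the card.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-br-d2",
    learning_rate=3.4e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 * 2 = total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=750,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed-precision training
)
```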
d187ae370ad795e0119b03468fd2e9b9
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.0379 | 0.68 | 100 | 5.6808 | 1.0 |
| 3.9145 | 1.35 | 200 | 3.1970 | 1.0 |
| 3.0293 | 2.03 | 300 | 2.9513 | 1.0 |
| 2.0927 | 2.7 | 400 | 1.4545 | 0.8887 |
| 1.1556 | 3.38 | 500 | 1.0966 | 0.7564 |
| 0.9628 | 4.05 | 600 | 0.9808 | 0.7364 |
| 0.7869 | 4.73 | 700 | 1.0488 | 0.7355 |
| 0.703 | 5.41 | 800 | 0.9500 | 0.6881 |
| 0.6657 | 6.08 | 900 | 0.9309 | 0.6259 |
| 0.5663 | 6.76 | 1000 | 0.9133 | 0.6357 |
| 0.496 | 7.43 | 1100 | 0.9890 | 0.6028 |
| 0.4748 | 8.11 | 1200 | 0.9469 | 0.5894 |
| 0.4135 | 8.78 | 1300 | 0.9270 | 0.6045 |
| 0.3579 | 9.46 | 1400 | 0.8818 | 0.5708 |
| 0.353 | 10.14 | 1500 | 0.9244 | 0.5781 |
| 0.334 | 10.81 | 1600 | 0.9009 | 0.5638 |
| 0.2917 | 11.49 | 1700 | 1.0132 | 0.5828 |
| 0.29 | 12.16 | 1800 | 0.9696 | 0.5668 |
| 0.2691 | 12.84 | 1900 | 0.9811 | 0.5455 |
| 0.25 | 13.51 | 2000 | 0.9951 | 0.5624 |
| 0.2467 | 14.19 | 2100 | 0.9653 | 0.5573 |
| 0.2242 | 14.86 | 2200 | 0.9714 | 0.5378 |
| 0.2066 | 15.54 | 2300 | 0.9829 | 0.5394 |
| 0.2075 | 16.22 | 2400 | 1.0547 | 0.5520 |
| 0.1923 | 16.89 | 2500 | 1.0014 | 0.5397 |
| 0.1919 | 17.57 | 2600 | 0.9978 | 0.5477 |
| 0.1908 | 18.24 | 2700 | 1.1064 | 0.5397 |
| 0.157 | 18.92 | 2800 | 1.0629 | 0.5238 |
| 0.159 | 19.59 | 2900 | 1.0642 | 0.5321 |
| 0.1652 | 20.27 | 3000 | 1.0207 | 0.5328 |
| 0.141 | 20.95 | 3100 | 0.9948 | 0.5312 |
| 0.1417 | 21.62 | 3200 | 1.0338 | 0.5328 |
| 0.1514 | 22.3 | 3300 | 1.0513 | 0.5313 |
| 0.1365 | 22.97 | 3400 | 1.0357 | 0.5291 |
| 0.1319 | 23.65 | 3500 | 1.0587 | 0.5167 |
| 0.1298 | 24.32 | 3600 | 1.0636 | 0.5236 |
| 0.1245 | 25.0 | 3700 | 1.1367 | 0.5280 |
| 0.1114 | 25.68 | 3800 | 1.0633 | 0.5200 |
| 0.1088 | 26.35 | 3900 | 1.0495 | 0.5210 |
| 0.1175 | 27.03 | 4000 | 1.0897 | 0.5095 |
| 0.1043 | 27.7 | 4100 | 1.0580 | 0.5309 |
| 0.0951 | 28.38 | 4200 | 1.0448 | 0.5067 |
| 0.1011 | 29.05 | 4300 | 1.0665 | 0.5137 |
| 0.0889 | 29.73 | 4400 | 1.0579 | 0.5026 |
| 0.0833 | 30.41 | 4500 | 1.0740 | 0.5037 |
| 0.0889 | 31.08 | 4600 | 1.0933 | 0.5083 |
| 0.0784 | 31.76 | 4700 | 1.0715 | 0.5089 |
| 0.0767 | 32.43 | 4800 | 1.0658 | 0.5049 |
| 0.0769 | 33.11 | 4900 | 1.1118 | 0.4979 |
| 0.0722 | 33.78 | 5000 | 1.1413 | 0.4986 |
| 0.0709 | 34.46 | 5100 | 1.0706 | 0.4885 |
| 0.0664 | 35.14 | 5200 | 1.1217 | 0.4884 |
| 0.0648 | 35.81 | 5300 | 1.1298 | 0.4941 |
| 0.0657 | 36.49 | 5400 | 1.1330 | 0.4920 |
| 0.0582 | 37.16 | 5500 | 1.0598 | 0.4835 |
| 0.0602 | 37.84 | 5600 | 1.1097 | 0.4943 |
| 0.0598 | 38.51 | 5700 | 1.0976 | 0.4876 |
| 0.0547 | 39.19 | 5800 | 1.0734 | 0.4825 |
| 0.0561 | 39.86 | 5900 | 1.0926 | 0.4850 |
| 0.0516 | 40.54 | 6000 | 1.1579 | 0.4751 |
| 0.0478 | 41.22 | 6100 | 1.1384 | 0.4706 |
| 0.0396 | 41.89 | 6200 | 1.1462 | 0.4739 |
| 0.0472 | 42.57 | 6300 | 1.1277 | 0.4732 |
| 0.0447 | 43.24 | 6400 | 1.1517 | 0.4752 |
| 0.0423 | 43.92 | 6500 | 1.1219 | 0.4784 |
| 0.0426 | 44.59 | 6600 | 1.1311 | 0.4724 |
| 0.0391 | 45.27 | 6700 | 1.1135 | 0.4692 |
| 0.0362 | 45.95 | 6800 | 1.0878 | 0.4645 |
| 0.0329 | 46.62 | 6900 | 1.1137 | 0.4668 |
| 0.0356 | 47.3 | 7000 | 1.1233 | 0.4687 |
| 0.0328 | 47.97 | 7100 | 1.1238 | 0.4653 |
| 0.0323 | 48.65 | 7200 | 1.1307 | 0.4646 |
| 0.0325 | 49.32 | 7300 | 1.1242 | 0.4645 |
| 0.03 | 50.0 | 7400 | 1.1257 | 0.4631 |
aa29ab3938d5791dac227a14128d45b6
gpl-3.0
[]
false
Pre-trained word embeddings using the text of published biomedical manuscripts. These embeddings use 100 dimensions and were trained using the GloVe algorithm on all published manuscripts found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/

Citation:

```
@article{flamholz2022word,
  title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
  author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
  journal={Journal of Biomedical Informatics},
  volume={125},
  pages={103971},
  year={2022},
  publisher={Elsevier}
}
```
be620bd2c8770dd6b23465cd9c85080f
gpl-3.0
[]
false
Quick start

Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format. First download the files from this archive. Then load the embeddings into Python.

```python
from gensim.models import FastText, Word2Vec, KeyedVectors
```
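A hedged loading sketch to bridge to the similarity examples in the next section; the filename `glove_100d.kv` is a hypothetical placeholder, since the card does not list the archive's contents:

```python
# Load the downloaded vectors; substitute the actual filename from the archive.
# If the file is in word2vec text format instead of gensim's native format,
# use KeyedVectors.load_word2vec_format(...) rather than KeyedVectors.load(...).
model = KeyedVectors.load("glove_100d.kv")  # hypothetical filename

# The loaded object behaves like a standard gensim keyed-vector store:
print(model.most_similar("copd", topn=5))
```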
653d6a4e0cfe50a1e9afa3f189ad2021
gpl-3.0
[]
false
Try out cosine similarity:

```python
model.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.similarity('myocardial_infarction', 'heart_attack')
model.similarity('lymphangioleiomyomatosis', 'lam')
```
f3af6bfc63c651b087f9305e16ca5e75
apache-2.0
['generated_from_trainer']
false
mbert-finetuned-azerbaijani-ner

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset. It achieves the following results on the evaluation set:
- Loss: 0.1385
- Precision: 0.8899
- Recall: 0.9154
- F1: 0.9025
- Accuracy: 0.9669
94ed43b1b511a394b9bde45f679863ea
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2928 | 1.0 | 625 | 0.1415 | 0.8584 | 0.8918 | 0.8748 | 0.9595 |
| 0.1254 | 2.0 | 1250 | 0.1335 | 0.8875 | 0.9119 | 0.8996 | 0.9637 |
| 0.077 | 3.0 | 1875 | 0.1385 | 0.8899 | 0.9154 | 0.9025 | 0.9669 |
d4c9f6eba58063f325c1194980af1669
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-google-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5499
- Wer: 0.3435
236fdd285948e0b7d9b95e208ecd601c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.599 | 1.0 | 500 | 2.1267 | 0.9976 |
| 1.016 | 2.01 | 1000 | 0.6193 | 0.5443 |
| 0.5299 | 3.01 | 1500 | 0.5324 | 0.4889 |
| 0.3626 | 4.02 | 2000 | 0.4525 | 0.4402 |
| 0.2854 | 5.02 | 2500 | 0.4266 | 0.4233 |
| 0.2373 | 6.02 | 3000 | 0.4713 | 0.4082 |
| 0.1979 | 7.03 | 3500 | 0.4778 | 0.4018 |
| 0.1761 | 8.03 | 4000 | 0.4585 | 0.3947 |
| 0.1537 | 9.04 | 4500 | 0.5297 | 0.3946 |
| 0.1379 | 10.04 | 5000 | 0.4988 | 0.3856 |
| 0.124 | 11.04 | 5500 | 0.5262 | 0.3852 |
| 0.11 | 12.05 | 6000 | 0.5545 | 0.3854 |
| 0.106 | 13.05 | 6500 | 0.5196 | 0.3805 |
| 0.0918 | 14.06 | 7000 | 0.4515 | 0.3655 |
| 0.0829 | 15.06 | 7500 | 0.5087 | 0.3722 |
| 0.0775 | 16.06 | 8000 | 0.4980 | 0.3781 |
| 0.0685 | 17.07 | 8500 | 0.5564 | 0.3650 |
| 0.0655 | 18.07 | 9000 | 0.5323 | 0.3672 |
| 0.0578 | 19.08 | 9500 | 0.5675 | 0.3637 |
| 0.052 | 20.08 | 10000 | 0.5604 | 0.3664 |
| 0.0512 | 21.08 | 10500 | 0.5922 | 0.3804 |
| 0.0431 | 22.09 | 11000 | 0.6379 | 0.3754 |
| 0.0428 | 23.09 | 11500 | 0.5905 | 0.3764 |
| 0.0393 | 24.1 | 12000 | 0.5667 | 0.3542 |
| 0.0326 | 25.1 | 12500 | 0.5612 | 0.3537 |
| 0.0289 | 26.1 | 13000 | 0.5618 | 0.3475 |
| 0.0298 | 27.11 | 13500 | 0.5578 | 0.3439 |
| 0.0264 | 28.11 | 14000 | 0.5547 | 0.3433 |
| 0.026 | 29.12 | 14500 | 0.5499 | 0.3435 |
2e0d178a5a709f29e4ff9d82f12650ce
apache-2.0
['generated_from_trainer']
false
muril-base-cased-finetuned-combined-DS

This model is a fine-tuned version of [google/muril-base-cased](https://huggingface.co/google/muril-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.5291
- Accuracy: 0.6657
- Precision: 0.6355
- Recall: 0.6275
- F1: 0.6294
31a3e8ec2c600bcfd72eb41d80763683
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
2c3b59cee106d8d42ad6a08c83bd3739
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9961 | 2.0 | 711 | 0.9148 | 0.5625 | 0.5495 | 0.5636 | 0.5265 |
| 0.8211 | 3.99 | 1422 | 0.8542 | 0.6096 | 0.6023 | 0.6071 | 0.5928 |
| 0.6667 | 5.99 | 2133 | 0.8459 | 0.6601 | 0.6366 | 0.6379 | 0.6361 |
| 0.5272 | 7.99 | 2844 | 0.9667 | 0.6517 | 0.6190 | 0.6223 | 0.6201 |
| 0.4327 | 9.99 | 3555 | 1.0185 | 0.6503 | 0.6351 | 0.6222 | 0.6229 |
| 0.3608 | 11.98 | 4266 | 1.1409 | 0.6313 | 0.6053 | 0.6100 | 0.6049 |
| 0.3038 | 13.98 | 4977 | 1.2336 | 0.6601 | 0.6287 | 0.6269 | 0.6273 |
| 0.2631 | 15.98 | 5688 | 1.3151 | 0.6503 | 0.6199 | 0.6167 | 0.6177 |
| 0.2368 | 17.97 | 6399 | 1.4230 | 0.6594 | 0.6315 | 0.6233 | 0.6251 |
| 0.2093 | 19.97 | 7110 | 1.4881 | 0.6629 | 0.6332 | 0.6220 | 0.6239 |
| 0.1968 | 21.97 | 7821 | 1.5003 | 0.6559 | 0.6279 | 0.6230 | 0.6242 |
| 0.1824 | 23.97 | 8532 | 1.5291 | 0.6657 | 0.6355 | 0.6275 | 0.6294 |
934f12c76c655e74247f1349b828864b
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Uzbek

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 uz dataset. It achieves the following results on the evaluation set:
- Loss: 0.4357
- Wer: 25.7857
f89547f81404ac8605fb4a222f5a1178
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
b8847d624488bbb19a7904560527e684
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3621 | 1.03 | 1000 | 0.4819 | 32.3209 |
| 0.2378 | 2.07 | 2000 | 0.4413 | 29.0077 |
| 0.2342 | 4.01 | 3000 | 0.4224 | 27.3939 |
| 0.1286 | 5.04 | 4000 | 0.4357 | 25.7857 |
| 0.1192 | 6.08 | 5000 | 0.4727 | 27.2752 |
| 0.0147 | 8.02 | 6000 | 0.5230 | 26.7267 |
| 0.0425 | 9.05 | 7000 | 0.5336 | 26.3628 |
| 0.0059 | 10.08 | 8000 | 0.5658 | 26.8476 |
3949412066bb225e644406031ffa1e2a
apache-2.0
['irish', 'electra']
false
gaELECTRA

[gaELECTRA](https://arxiv.org/abs/2107.12930) is an ELECTRA model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used, please refer to our paper. For fine-tuning this model on a token classification task, e.g. Named Entity Recognition, use the discriminator model.
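A minimal sketch of the fine-tuning setup just described, assuming the discriminator checkpoint is published under the hub id `DCU-NLP/electra-base-irish-cased-discriminator-v1` (substitute the actual repo name if it differs):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "DCU-NLP/electra-base-irish-cased-discriminator-v1"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels depends on your tag set; 9 is a CoNLL-style NER example.
model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=9)
```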
0bb6104f28cb58accf9c0c46f900dd67
apache-2.0
['irish', 'electra']
false
Limitations and bias

Some data used to pretrain gaBERT was scraped from the web, which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations.
54168863f82301268316009cae11c251
apache-2.0
['irish', 'electra']
false
BibTeX entry and citation info

If you use this model in your research, please consider citing our paper:

```
@article{DBLP:journals/corr/abs-2107-12930,
  author    = {James Barry and Joachim Wagner and Lauren Cassidy and Alan Cowap and Teresa Lynn and Abigail Walsh and M{\'{\i}}che{\'{a}}l J. {\'{O}} Meachair and Jennifer Foster},
  title     = {gaBERT - an Irish Language Model},
  journal   = {CoRR},
  volume    = {abs/2107.12930},
  year      = {2021},
  url       = {https://arxiv.org/abs/2107.12930},
  archivePrefix = {arXiv},
  eprint    = {2107.12930},
  timestamp = {Fri, 30 Jul 2021 13:03:06 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2107-12930.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
607e3d93bc2ef96ca22381ab43947000
mit
['generated_from_keras_callback']
false
Deep98/Web_browser-clustered

This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.1604
- Train End Logits Accuracy: 0.9826
- Train Start Logits Accuracy: 0.9375
- Validation Loss: 0.0757
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
f549ea963a900fe656e52ea5a6b941b0
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1604 | 0.9826 | 0.9375 | 0.0757 | 1.0 | 1.0 | 0 |
cb2d4a38af8c440cc849c9ade57ebf2b
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
p-AI-nter -- v0.2

Core model is SD-1.5, trained on artworks of different painters (Rob Hefferan, Anna Marinova, Omar Ortiz, Thomas Saliot, Serge Marshennikov). Use the token 'oil painting' in your prompts for better effect.

> Trained with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb).
195d40540be7980cadbc6e823b40f234
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Prompt and settings for samples

```
(portrait photo)++ of (young)+ woman on river bank, dressed in silk shirt, golden and white and bronze color scheme, (oil painting)+, (epic composition)+, intricate, Highly Detailed, Sharp focus, dramatic light, (high bun black hair)++, (bokeh)+, (deep eyes)+, (sunset)++, (model pose)+, (ideal hands)++, (ray tracing)++, (cleavage)+, (ideal breast)+
```

__negative:__

```
Deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, extra limb, ugly, poorly drawn hands, missing limb, blurry, disconnected limbs, malformed hands, blur, out of focus, long neck, long body, mutated hands and fingers, fat, overweight, multiple heads, group of people, three or more legs, cross-eye, nude, naked, naked, (extra fingers)+, (fused fingers)+
```

* Steps: 50
* Scale: 9
* Sampler: Euler_A

---
0ecfca210d992bf9345624e468baa6a2
apache-2.0
['automatic-speech-recognition', 'ar']
false
exp_w2v2t_ar_vp-fr_s957

Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
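A minimal transcription sketch with the HuggingSound tool mentioned above, assuming this card's hub id is `jonatasgrosman/exp_w2v2t_ar_vp-fr_s957` and that `sample.wav` (hypothetical path) is a 16kHz recording:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ar_vp-fr_s957")  # assumed hub id
transcriptions = model.transcribe(["sample.wav"])  # hypothetical 16kHz audio file
print(transcriptions[0]["transcription"])
```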
0bcdb32e482a3e319979b83f304d6731
apache-2.0
['generated_from_trainer']
false
Tagged_Uni_100v0_NER_Model_3Epochs_AUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v0_wikigold_split dataset. It achieves the following results on the evaluation set:
- Loss: 0.4601
- Precision: 0.1802
- Recall: 0.0830
- F1: 0.1137
- Accuracy: 0.8143
5ce21c7682bbc88881cfb2885576d6dd
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 33 | 0.5687 | 0.0882 | 0.0015 | 0.0030 | 0.7791 |
| No log | 2.0 | 66 | 0.5410 | 0.1319 | 0.0270 | 0.0448 | 0.7946 |
| No log | 3.0 | 99 | 0.4601 | 0.1802 | 0.0830 | 0.1137 | 0.8143 |
9623bb9b7762de0e24f044286e454ef2
other
['vision', 'image-segmentation']
false
MaskFormer

MaskFormer model trained on COCO panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py).
5484ff243f1c40a01cf212a3250665ae
other
['vision', 'image-segmentation']
false
Model description

MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png)
da1151ba5570964761e40d501d4cd8e9
other
['vision', 'image-segmentation']
false
Intended uses & limitations

You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other fine-tuned versions on a task that interests you.
bc815cf8c1cb42302c4840d0b895a5eb
other
['vision', 'image-segmentation']
false
```python
from PIL import Image
import requests
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-tiny-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-tiny-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
```
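The next section indexes into a `result` dict produced by post-processing; a minimal sketch of that elided step, assuming the feature extractor's panoptic post-processing API:

```python
# Convert raw mask/class predictions into a panoptic segmentation map
# at the original image resolution (PIL size is (width, height), hence [::-1]).
result = feature_extractor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```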
e0c1fac3e4a6bcbe462d43a028155b7c
other
['vision', 'image-segmentation']
false
```python
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
7a2b3b38e617bdebf0c1fba674cd0266
apache-2.0
[]
false
ALBERT XXLarge model HPU configuration

This model only contains the `GaudiConfig` file for running the [albert-xxlarge-v1](https://huggingface.co/albert-xxlarge-v1) model on Habana's Gaudi processors (HPU).

**This model contains no model weights, only a GaudiConfig.**

This lets you specify:
- `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
- `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html)
642c0634d870300e3aa069e10d9c564a
apache-2.0
[]
false
Usage

The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs.

[Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with ALBERT XXL with the following command:

```bash
python run_qa.py \
  --model_name_or_path albert-xxlarge-v1 \
  --gaudi_config_name Habana/albert-xxlarge-v1 \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --per_device_eval_batch_size 2 \
  --learning_rate 5e-6 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --output_dir /tmp/squad/ \
  --use_habana \
  --use_lazy_mode \
  --throughput_warmup_steps 2
```

Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
9642a212db4599088889a9cc1f626ad3
mit
[]
false
ResNet101 model ported from [torchvision](https://pytorch.org/vision/stable/index.html) for use with [Metalhead.jl](https://github.com/FluxML/Metalhead.jl). The scripts for creating this file can be found at [this gist](https://gist.github.com/darsnack/bfb8594cf5fdc702bdacb66586f518ef). To use this model in Julia, [add the Metalhead.jl package to your environment](https://pkgdocs.julialang.org/v1/managing-packages/).
2dd4b489734d607e3a21bfce1e682670
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Zulu

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs zu_za dataset. It achieves the following results on the evaluation set:
- Loss: 1.1143
- Wer: 56.7866
2d495a840fe25838c750e9581e364ba9
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
aa05b0903ed858aa32b5731c0d5b5aa6
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6219 | 9.01 | 100 | 1.0758 | 62.0201 |
| 0.0318 | 18.01 | 200 | 1.1143 | 56.7866 |
adcf3ab518d72be669cb5495aecb7c89
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-base-hu-uk

Neural machine translation model for translating from Hungarian (hu) to Ukrainian (uk).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)

```
@inproceedings{tiedemann-thottingal-2020-opus,
  title = "{OPUS}-{MT} {--} Building open translation services for the World",
  author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
  booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
  month = nov,
  year = "2020",
  address = "Lisboa, Portugal",
  publisher = "European Association for Machine Translation",
  url = "https://aclanthology.org/2020.eamt-1.61",
  pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
  title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
  author = {Tiedemann, J{\"o}rg},
  booktitle = "Proceedings of the Fifth Conference on Machine Translation",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2020.wmt-1.139",
  pages = "1174--1182",
}
```
3c00dfc21e5c98427a070bbd439d3290
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model info

* Release: 2022-03-08
* source language(s): hun
* target language(s): ukr
* model: transformer-align
* data: opusTCv20210807+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pbt_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.zip)
* more information on released models: [OPUS-MT hun-ukr README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-ukr/README.md)
20885920063c788e237bc0a3a64edfdb
cc-by-4.0
['translation', 'opus-mt-tc']
false
Usage

A short example code:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "1000 dollárral tartozom neked.",
    "Vizet iszom."
]

model_name = "pytorch-models/opus-mt-tc-base-hu-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
197817f52a3c96dd18cac59fb88e78dc
cc-by-4.0
['translation', 'opus-mt-tc']
false
Expected output (for the second source sentence):

```
Я п'ю воду.
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-hu-uk")
print(pipe("1000 dollárral tartozom neked."))
```
6b2c547d6a89b942c8dc12c5bbf8de2d
cc-by-4.0
['translation', 'opus-mt-tc']
false
Benchmarks

* test set translations: [opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.test.txt)
* test set scores: [opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opusTCv20210807+pbt_transformer-align_2022-03-08.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU |
122050e6af5ce249e422953ad9ad6e0a
apache-2.0
['generated_from_trainer']
false
medium-mlm-imdb-target-tweet

This model is a fine-tuned version of [muhtasham/medium-mlm-imdb](https://huggingface.co/muhtasham/medium-mlm-imdb) on the tweet_eval dataset. It achieves the following results on the evaluation set:
- Loss: 1.6869
- Accuracy: 0.7620
- F1: 0.7599
65ea4f667ccf1781c0f52eb6f1888016
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.456 | 4.9 | 500 | 0.8890 | 0.7754 | 0.7720 |
| 0.0578 | 9.8 | 1000 | 1.3492 | 0.7540 | 0.7509 |
| 0.0173 | 14.71 | 1500 | 1.6143 | 0.7594 | 0.7584 |
| 0.0124 | 19.61 | 2000 | 1.6869 | 0.7620 | 0.7599 |
56b77c01a99f27b6774e87b0beb6d8e8
apache-2.0
['generated_from_trainer']
false
vit-base-patch16-224-in21k-finetuned-cifar10-test

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
21fa78d0e16cb5addd4c80c7fdbe7ced
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
ee3b7c981ab6c6a1802530640087b31e
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-checkpoint-8

This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-7.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-7.1) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.9561
- Wer: 0.3271
296e1a04a9b876b3b681fe913a4c0255
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3117 | 1.59 | 1000 | 0.5514 | 0.3451 |
| 0.2509 | 3.19 | 2000 | 0.5912 | 0.3328 |
| 0.1918 | 4.78 | 3000 | 0.6103 | 0.3346 |
| 0.1612 | 6.38 | 4000 | 0.6469 | 0.3377 |
| 0.1388 | 7.97 | 5000 | 0.6597 | 0.3391 |
| 0.121 | 9.57 | 6000 | 0.6911 | 0.3472 |
| 0.1096 | 11.16 | 7000 | 0.7300 | 0.3457 |
| 0.0959 | 12.76 | 8000 | 0.7660 | 0.3400 |
| 0.0882 | 14.35 | 9000 | 0.8316 | 0.3394 |
| 0.0816 | 15.95 | 10000 | 0.8042 | 0.3357 |
| 0.0739 | 17.54 | 11000 | 0.8087 | 0.3346 |
| 0.0717 | 19.14 | 12000 | 0.8590 | 0.3353 |
| 0.066 | 20.73 | 13000 | 0.8750 | 0.3336 |
| 0.0629 | 22.33 | 14000 | 0.8759 | 0.3333 |
| 0.0568 | 23.92 | 15000 | 0.8963 | 0.3321 |
| 0.0535 | 25.52 | 16000 | 0.9391 | 0.3323 |
| 0.0509 | 27.11 | 17000 | 0.9279 | 0.3296 |
| 0.0498 | 28.71 | 18000 | 0.9561 | 0.3271 |
e77fa532ea2a7cf0e24f7ffff6ffb762
other
['generated_from_trainer']
false
6.7b-dalio-book-handwritten-io-constant-1e-6-v2

This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted-v2 dataset. It achieves the following results on the evaluation set:
- Loss: 2.4238
- Accuracy: 0.2793
09b1fb93898d3adf131471d12f2024c5
other
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
a24d8803d8daf7e840170c2d4a991fc4
other
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5852 | 0.08 | 6 | 2.5957 | 0.2697 |
| 2.5956 | 0.16 | 12 | 2.5762 | 0.2706 |
| 2.5961 | 0.24 | 18 | 2.5547 | 0.2711 |
| 2.5731 | 0.32 | 24 | 2.5312 | 0.2722 |
| 2.5415 | 0.4 | 30 | 2.5117 | 0.2734 |
| 2.5168 | 0.48 | 36 | 2.4961 | 0.2746 |
| 2.4972 | 0.56 | 42 | 2.4824 | 0.2756 |
| 2.4354 | 0.64 | 48 | 2.4727 | 0.2761 |
| 2.4055 | 0.72 | 54 | 2.4609 | 0.2768 |
| 2.4681 | 0.8 | 60 | 2.4492 | 0.2778 |
| 2.5866 | 0.88 | 66 | 2.4355 | 0.2784 |
| 2.4221 | 0.96 | 72 | 2.4238 | 0.2793 |
16228c7ef14e5e40ba46cb293367e709
mit
[]
false
SAS style on Stable Diffusion

This is the `<smooth-aesthetic-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<smooth-aesthetic-style> 0](https://huggingface.co/sd-concepts-library/sas-style/resolve/main/concept_images/3.jpeg)
![<smooth-aesthetic-style> 1](https://huggingface.co/sd-concepts-library/sas-style/resolve/main/concept_images/1.jpeg)
![<smooth-aesthetic-style> 2](https://huggingface.co/sd-concepts-library/sas-style/resolve/main/concept_images/0.jpeg)
![<smooth-aesthetic-style> 3](https://huggingface.co/sd-concepts-library/sas-style/resolve/main/concept_images/2.jpeg)
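A minimal sketch of using the concept with `diffusers` outside the notebooks, assuming a Stable Diffusion 1.x base checkpoint such as `runwayml/stable-diffusion-v1-5` and a hypothetical prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base model
).to("cuda")
# Load the learned <smooth-aesthetic-style> embedding from this repository.
pipe.load_textual_inversion("sd-concepts-library/sas-style")
image = pipe("a landscape in the style of <smooth-aesthetic-style>").images[0]  # hypothetical prompt
```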
7b90cbfccec9ba9777e6b3cb83dd78a7
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
shru Dreambooth model trained by Suniljl with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
a199be4e0b806848371a28132f19e431
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
RPG

Source(s): [Hugging Face](https://huggingface.co/Anashel/rpg) - [CivitAI](https://civitai.com/models/1116/rpg)

**Latest Update: Feb 5th, 2023**

- Version 4.0 is live **[available here](https://huggingface.co/Anashel/rpg/tree/main/RPG-V4-Model-Download)**
- New Prompt User Guide for RPG v4 **[Download Now](https://huggingface.co/Anashel/rpg/resolve/main/RPG-V4-Model-Download/RPG-Guide-v4.pdf)**
2ea10b32f609e697465d95049f81ba69
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
Contribute

If you wish to support the prompt research on this project:

- Rate RPG V4 on **[CivitAI](https://civitai.com/models/1116/rpg)**
- Donate (ETH Only): anashel.eth | 0xc4055f3c65D01a48Bc47bE87751794eA9f42E367
7c742602e3b5bbf73c959570c35e3999
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
Future Updates

I am in the process of writing a detailed guide with a list of words you can switch easily in the main prompt. Ex: Blood Elf Knight, Female Death Knight Mage, etc...

In the meantime, feel free to share your creation on my *[Discord Server](https://discord.gg/7CGDRjDz7P)*

---
f83f31169b6fef91d2716db8f8faa65d
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
RPG v4 Render Sample

![07.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655387859-631ba4758de8e645af703f33.jpeg)
![03.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655391409-631ba4758de8e645af703f33.jpeg)
![02.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655393058-631ba4758de8e645af703f33.jpeg)
![05.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655429420-631ba4758de8e645af703f33.jpeg)
![04.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655446594-631ba4758de8e645af703f33.jpeg)
![01.jpg](https://s3.amazonaws.com/moonup/production/uploads/1675655485563-631ba4758de8e645af703f33.jpeg)

---

**How to reach me**
- Reddit: [u/Anashel](https://www.reddit.com/user/anashel)
- Discord: [RPG V3 Channel](https://discord.gg/rDrhtWZk8u)

----
b009ca9fc40f4ae3d896a26f22dc187d
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
RPG v3 Render Sample

![01.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979006989-631ba4758de8e645af703f33.jpeg)
![02.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979015000-631ba4758de8e645af703f33.jpeg)
![03.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979010769-631ba4758de8e645af703f33.jpeg)
![04.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979024887-631ba4758de8e645af703f33.jpeg)
![05.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672979028290-631ba4758de8e645af703f33.jpeg)
0f78836fc4bf7a7f90cb0e9d9b6f1911
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
RPG v2 Render Sample

Generated with RPG V2. [Available here](https://huggingface.co/Anashel/rpg/tree/main/All-Concept-Zip-Format)

![Cover-01.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670187337224-631ba4758de8e645af703f33.jpeg)
![Cover-02.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670187337238-631ba4758de8e645af703f33.jpeg)
![Cover-03.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670187337256-631ba4758de8e645af703f33.jpeg)
![Cover-04.jpg](https://s3.amazonaws.com/moonup/production/uploads/1670187337271-631ba4758de8e645af703f33.jpeg)

----
ec7ede4e7d97b7c77295c3d43d8ad2a8
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
OTHER EXAMPLE

![02.png](https://s3.amazonaws.com/moonup/production/uploads/1669621805120-631ba4758de8e645af703f33.png)
![03.png](https://s3.amazonaws.com/moonup/production/uploads/1669621861406-631ba4758de8e645af703f33.png)
![04.png](https://s3.amazonaws.com/moonup/production/uploads/1669621871167-631ba4758de8e645af703f33.png)
![05.png](https://s3.amazonaws.com/moonup/production/uploads/1669621878493-631ba4758de8e645af703f33.png)
![06.png](https://s3.amazonaws.com/moonup/production/uploads/1669621914034-631ba4758de8e645af703f33.png)
![07.png](https://s3.amazonaws.com/moonup/production/uploads/1669621922049-631ba4758de8e645af703f33.png)
![08.png](https://s3.amazonaws.com/moonup/production/uploads/1669621929158-631ba4758de8e645af703f33.png)
e7541cef40b10e7aed6d85a9fa826d31
apache-2.0
['automatic-speech-recognition', 'uk']
false
exp_w2v2t_uk_vp-fr_s473

Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
80adf6cb378b7e0ca110058c34346948
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/nli-distilbert-base

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
0e621e0158df0fb5f98a76634bc7d467
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/nli-distilbert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
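A short follow-up on the snippet above: the embeddings can be compared directly with cosine similarity, e.g. for semantic search (the two sentences are the card's own examples):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/nli-distilbert-base')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two sentences
```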
07392fcc848e98199d4a186f09aa4cdf
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-distilbert-base)
e73ab560e1ebe59ef2db662285f8a316
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
653b731264175e7302889c412c94c952
apache-2.0
['tapas', 'TapasModel']
false
TAPAS mini model

This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).

The other (non-default) version which can be used is the one with absolute position embeddings:
- `revision="no_reset"`, which corresponds to `tapas_inter_masklm_mini`

Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors.
509fd3b0bfb988dbf4584e45e7f015ea
apache-2.0
['tapas', 'TapasModel']
false
Model description

TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.

This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on a downstream task.
cdeb2219b998abe97bb06ae1c67a76ef
apache-2.0
['tapas', 'TapasModel']
false
Intended uses & limitations

You can use the raw model for getting hidden representations about table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you.
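A minimal sketch of extracting those hidden representations, assuming this checkpoint lives at the hub id `google/tapas-mini` (table cells must be strings for the TAPAS tokenizer; the example table and query are hypothetical):

```python
import pandas as pd
from transformers import TapasTokenizer, TapasModel

model_id = "google/tapas-mini"  # assumed hub id for this checkpoint
tokenizer = TapasTokenizer.from_pretrained(model_id)
model = TapasModel.from_pretrained(model_id)
# For the absolute-position-embedding variant, pass revision="no_reset" above.

table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo DiCaprio"], "Age": ["59", "48"]})
inputs = tokenizer(table=table, queries=["How old is Brad Pitt?"], return_tensors="pt")
outputs = model(**inputs)
hidden_states = outputs.last_hidden_state  # representations of the table-question pair
```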
6adaa5d74277db23d48a7b24ef2835cd
apache-2.0
['tapas', 'TapasModel']
false
Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence [SEP] Flattened table [SEP]
```
cc6c4a1bb3888ee9000ded6b6b8bc19d
apache-2.0
['tapas', 'TapasModel']
false
Pre-training

The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512. In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.

The optimizer used is Adam with a learning rate of 5e-5, and a warmup ratio of 0.01.
023873ec760345864b65d129a90bad90
apache-2.0
['tapas', 'TapasModel']
false
BibTeX entry and citation info

```bibtex
@misc{herzig2020tapas,
  title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
  author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
  year={2020},
  eprint={2004.02349},
  archivePrefix={arXiv},
  primaryClass={cs.IR}
}
```

```bibtex
@misc{eisenschlos2020understanding,
  title={Understanding tables with intermediate pre-training},
  author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
  year={2020},
  eprint={2010.00571},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
39a1c1d382a04a6b2e7c4fa35343d32d
apache-2.0
['translation']
false
eng-iir

* source group: English
* target group: Indo-Iranian languages
* OPUS readme: [eng-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md)
* model: transformer
* source language(s): eng
* target language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the sketch after this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.eval.txt)
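A minimal sketch of the `>>id<<` target-language token in practice, assuming the hub id `Helsinki-NLP/opus-mt-en-iir` (matching the `short_pair` in the System Info section below):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-iir"  # assumed hub id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading >>hin<< token selects Hindi as the target language.
src_text = [">>hin<< I owe you a thousand dollars."]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```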
013c4f02c4f9bee330c7e24f29e3bf8c
apache-2.0
['translation']
false
Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 6.7 | 0.326 |
| newsdev2019-engu-engguj.eng.guj | 6.0 | 0.283 |
| newstest2014-hien-enghin.eng.hin | 10.4 | 0.353 |
| newstest2019-engu-engguj.eng.guj | 6.6 | 0.282 |
| Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.249 |
| Tatoeba-test.eng-awa.eng.awa | 0.4 | 0.122 |
| Tatoeba-test.eng-ben.eng.ben | 15.3 | 0.459 |
| Tatoeba-test.eng-bho.eng.bho | 3.7 | 0.161 |
| Tatoeba-test.eng-fas.eng.fas | 3.4 | 0.227 |
| Tatoeba-test.eng-guj.eng.guj | 18.5 | 0.365 |
| Tatoeba-test.eng-hif.eng.hif | 1.0 | 0.064 |
| Tatoeba-test.eng-hin.eng.hin | 17.0 | 0.461 |
| Tatoeba-test.eng-jdt.eng.jdt | 3.9 | 0.122 |
| Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.059 |
| Tatoeba-test.eng-kur.eng.kur | 4.0 | 0.125 |
| Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.008 |
| Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.445 |
| Tatoeba-test.eng-mar.eng.mar | 20.7 | 0.473 |
| Tatoeba-test.eng.multi | 13.7 | 0.392 |
| Tatoeba-test.eng-nep.eng.nep | 0.6 | 0.060 |
| Tatoeba-test.eng-ori.eng.ori | 2.4 | 0.193 |
| Tatoeba-test.eng-oss.eng.oss | 2.1 | 0.174 |
| Tatoeba-test.eng-pan.eng.pan | 9.7 | 0.355 |
| Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.126 |
| Tatoeba-test.eng-rom.eng.rom | 1.3 | 0.230 |
| Tatoeba-test.eng-san.eng.san | 1.3 | 0.101 |
| Tatoeba-test.eng-sin.eng.sin | 11.7 | 0.384 |
| Tatoeba-test.eng-snd.eng.snd | 2.8 | 0.180 |
| Tatoeba-test.eng-tgk.eng.tgk | 8.1 | 0.353 |
| Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.015 |
| Tatoeba-test.eng-urd.eng.urd | 12.3 | 0.409 |
| Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.025 |
a3f63276318014d59c37019e588164e6
apache-2.0
['translation']
false
System Info:
- hf_name: eng-iir
- source_languages: eng
- target_languages: iir
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
- src_constituents: {'eng'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: iir
- short_pair: en-iir
- chrF2_score: 0.392
- bleu: 13.7
- brevity_penalty: 1.0
- ref_len: 63351.0
- src_name: English
- tgt_name: Indo-Iranian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: iir
- prefer_old: False
- long_pair: eng-iir
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
e7ac9442f643e01b7e62df35573aa2fe
cc-by-sa-4.0
['coptic', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description

This is a RoBERTa model pre-trained on Coptic Scriptorium Corpora for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-coptic](https://huggingface.co/KoichiYasuoka/roberta-base-coptic).
dc3fd5fc4dfe13ee664a02056b1a6624
cc-by-sa-4.0
['coptic', 'token-classification', 'pos', 'dependency-parsing']
false
```python
	text = "+text+"\n"
	v=[(s,e) for s,e in w["offset_mapping"] if s<e]
	for i,(s,e) in enumerate(v,1):
		q=self.model.config.id2label[p[i,h[i]]].split("|")
		u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
	return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/roberta-base-coptic-ud-goeswith")
print(nlp("ⲧⲉⲛⲟⲩⲇⲉⲛ̄ⲟⲩⲟⲉⲓⲛϩ︤ⲙ︥ⲡϫⲟⲉⲓⲥ·"))
```

with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).

Or without ufal.chu-liu-edmonds:

```python
from transformers import pipeline

nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-coptic-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("ⲧⲉⲛⲟⲩⲇⲉⲛ̄ⲟⲩⲟⲉⲓⲛϩ︤ⲙ︥ⲡϫⲟⲉⲓⲥ·"))
```
3ba0cd2f942fd27eb6b205c4eb423039
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9247
- Recall: 0.9385
- F1: 0.9315
- Accuracy: 0.9837
30ef77f28c888bfb47d38d5e87c5b24f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2421 | 1.0 | 878 | 0.0701 | 0.9083 | 0.9217 | 0.9149 | 0.9801 |
| 0.0555 | 2.0 | 1756 | 0.0599 | 0.9204 | 0.9357 | 0.9280 | 0.9830 |
| 0.0311 | 3.0 | 2634 | 0.0612 | 0.9247 | 0.9385 | 0.9315 | 0.9837 |
f4b8c180074b24e99c323e44e5b80ee1
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 244 | 2.6029 | 29.4956 | 13.5156 | 25.8306 | 25.842 | 18.2896 |
d08ec19bb2ec4871486c775be2b82f82
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_vp-es_s399

Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
e6caa9fad84807b3ef1b10d401d8207b
mit
[]
false
🇹🇷 Turkish ELECTRA model

<p align="center">
  <img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>

[![DOI](https://zenodo.org/badge/237817454.svg)](https://zenodo.org/badge/latestdoi/237817454)

We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉

Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.

Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
1f65a0c07628621a88ffaa40c07ba562
mit
[]
false
Stats

We've also trained an ELECTRA (cased) model on the recently released Turkish part of the [multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team. After filtering documents with a broken encoding, the training corpus has a size of 242GB, resulting in 31,240,963,926 tokens. We used the original 32k vocab (instead of creating a new one).
462f40fa41b39e0109968813143c324e
mit
[]
false
mC4 ELECTRA

In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
12f70195f7ac05610ed843d26884fd9f
mit
[]
false
Model usage

All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz) using their model name. Example usage with 🤗/Transformers:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator")
```
540d121f9d91bd7b61a6af533acc8a7a
mit
[]
false
Citation

You can use the following BibTeX entry for citation:

```bibtex
@software{stefan_schweter_2020_3770924,
  author    = {Stefan Schweter},
  title     = {BERTurk - BERT models for Turkish},
  month     = apr,
  year      = 2020,
  publisher = {Zenodo},
  version   = {1.0.0},
  doi       = {10.5281/zenodo.3770924},
  url       = {https://doi.org/10.5281/zenodo.3770924}
}
```
53b2f31e660be4de2da3c905b88067bd
mit
[]
false
Acknowledgments

Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us with additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing us the Turkish NER dataset for evaluation.

We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the awesome logo!

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️
ec70cbc8e47af56f96a7ec76b9f303b2
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set:
- Loss: 0.0332
- Accuracy: 0.9303
153670f8e579f7583b01d5bfaf148170
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4409 | 1.0 | 318 | 0.2288 | 0.6206 |
| 0.1898 | 2.0 | 636 | 0.1106 | 0.8461 |
| 0.116 | 3.0 | 954 | 0.0729 | 0.8994 |
| 0.0861 | 4.0 | 1272 | 0.0548 | 0.9097 |
| 0.0707 | 5.0 | 1590 | 0.0454 | 0.9184 |
| 0.0613 | 6.0 | 1908 | 0.0399 | 0.9239 |
| 0.0557 | 7.0 | 2226 | 0.0371 | 0.9294 |
| 0.0522 | 8.0 | 2544 | 0.0348 | 0.93 |
| 0.05 | 9.0 | 2862 | 0.0336 | 0.9297 |
| 0.0487 | 10.0 | 3180 | 0.0332 | 0.9303 |
be4277e3571622a7c4a454870897f6e3
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-summarization-app

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set:
- Loss: 1.6614
- Rouge1: 24.5589
- Rouge2: 11.8509
- Rougel: 20.3011
- Rougelsum: 23.1768
- Gen Len: 19.0
f21dc429b575c603b0b5036e285fc8c6
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
dace0b27b7e0bfcdbcab480e34b84303