license: stringlengths 2–30
tags: stringlengths 2–513
is_nc: bool, 1 class
readme_section: stringlengths 201–597k
hash: stringlengths 32–32
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix | |:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:--------------------------------------------------------------------------------------:| | 1.642 | 4.16 | 100 | 1.5891 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] | | 1.4791 | 8.33 | 200 | 1.3227 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] | | 1.2376 | 12.49 | 300 | 1.0446 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] | | 0.9622 | 16.65 | 400 | 0.8811 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 
0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] | | 0.8614 | 20.82 | 500 | 0.8174 | 1.0 | 0.2581 | 0.4103 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.2135 | 1.0 | 0.3519 | 19 | 0.0 | 0.0 | 0.0 | 22 | 0.2784 | 0.3034 | 0.3145 | 0.1905 | 97 | 0.3614 | 0.2784 | 0.2000 | 97 | 0.9780 | [[0, 1, 2, 3], [0, 8, 0, 23, 0], [1, 0, 0, 25, 0], [2, 0, 0, 19, 0], [3, 0, 0, 22, 0]] | | 0.8344 | 24.98 | 600 | 0.7498 | 1.0 | 1.0 | 1.0 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 1.0 | 1.0 | 19 | 1.0 | 1.0 | 1.0 | 22 | 1.0 | 1.0 | 1.0 | 1.0 | 97 | 1.0 | 1.0 | 1.0 | 97 | 1.0 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 19, 0], [3, 0, 0, 0, 22]] | | 0.8105 | 29.16 | 700 | 0.7907 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 0.96 | 0.9796 | 25 | 0.95 | 1.0 | 0.9744 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9794 | 0.9797 | 0.9786 | 0.9787 | 97 | 0.9802 | 0.9794 | 0.9794 | 97 | 1.0 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 1, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 21]] | | 0.6168 | 33.33 | 800 | 0.5496 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 0.96 | 0.9796 | 25 | 0.95 | 1.0 | 0.9744 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9794 | 0.9797 | 0.9786 | 0.9787 | 97 | 0.9802 | 0.9794 | 0.9794 | 97 | 0.5840 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 1, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 21]] | | 0.2701 | 37.49 | 900 | 0.2587 | 1.0 | 1.0 | 1.0 | 31 | 1.0 | 0.96 | 0.9796 | 25 | 0.9474 | 0.9474 | 0.9474 | 19 | 0.9565 | 1.0 | 0.9778 | 22 | 0.9794 | 0.9760 | 0.9768 | 0.9762 | 97 | 0.9798 | 0.9794 | 0.9794 | 97 | 0.2375 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 1, 0], [2, 0, 0, 18, 1], [3, 0, 0, 0, 22]] | | 0.1745 | 41.65 | 1000 | 0.2219 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9545 | 0.9545 | 0.9545 | 22 | 0.9794 | 0.9808 | 0.9755 | 0.9779 | 97 | 0.9797 | 0.9794 | 0.9793 | 97 | 0.2445 | 
[[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]] | | 0.1494 | 45.82 | 1100 | 0.2548 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 0.96 | 0.9796 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9130 | 0.9545 | 0.9333 | 22 | 0.9691 | 0.9704 | 0.9655 | 0.9675 | 97 | 0.9703 | 0.9691 | 0.9693 | 97 | 0.2352 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 0, 1], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]] | | 0.1213 | 49.98 | 1200 | 0.1756 | 0.9688 | 1.0 | 0.9841 | 31 | 0.9615 | 1.0 | 0.9804 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9794 | 0.9826 | 0.9755 | 0.9786 | 97 | 0.9801 | 0.9794 | 0.9793 | 97 | 0.2260 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 1, 18, 0], [3, 1, 0, 0, 21]] | | 0.0964 | 54.16 | 1300 | 0.1884 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9545 | 0.9545 | 0.9545 | 22 | 0.9794 | 0.9808 | 0.9755 | 0.9779 | 97 | 0.9797 | 0.9794 | 0.9793 | 97 | 0.2260 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]] | | 0.0859 | 58.33 | 1400 | 0.1212 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 1.0 | 1.0 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9897 | 0.9922 | 0.9886 | 0.9902 | 97 | 0.9900 | 0.9897 | 0.9897 | 97 | 0.2202 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 21]] | | 0.0845 | 62.49 | 1500 | 0.1254 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 1.0 | 1.0 | 19 | 1.0 | 0.9545 | 0.9767 | 22 | 0.9897 | 0.9922 | 0.9886 | 0.9902 | 97 | 0.9900 | 0.9897 | 0.9897 | 97 | 0.2178 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 19, 0], [3, 1, 0, 0, 21]] | | 0.0831 | 66.65 | 1600 | 0.1590 | 0.9688 | 1.0 | 0.9841 | 31 | 1.0 | 1.0 | 1.0 | 25 | 1.0 | 0.9474 | 0.9730 | 19 | 0.9545 | 0.9545 | 0.9545 | 22 | 0.9794 | 0.9808 | 0.9755 | 0.9779 | 97 | 0.9797 | 0.9794 | 0.9793 | 97 | 0.2202 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 0, 18, 1], [3, 1, 0, 0, 21]] |
1c5529b1b019ee3613b451a4df7316b4
mit
['BERT', 'token-classification', 'sequence-tagger-model']
false
Arabic NER Model - [Github repo](https://github.com/edchengg/GigaBERT) - NER BIO tagging model based on [GigaBERTv4](https://huggingface.co/lanwuwei/GigaBERT-v4-Arabic-and-English). - ACE2005 Training data: English + Arabic - [NER tags](https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-entities-guidelines-v6.6.pdf) including: PER, VEH, GPE, WEA, ORG, LOC, FAC
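The card above describes a BIO tagging scheme but shows no decoding step. A minimal sketch of how BIO tags collapse into entity spans (the tag names PER/GPE come from the card; the helper itself is illustrative and not part of the model):

```python
def bio_to_spans(tokens, tags):
    """Collapse BIO tags into (entity_type, start_token, end_token) spans."""
    spans, start, ent = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if ent is not None:
                spans.append((ent, start, i))  # close the previous entity
            ent, start = tag[2:], i
        elif tag.startswith("I-") and ent == tag[2:]:
            continue  # entity continues
        else:  # "O" tag or an I- tag without a matching open entity
            if ent is not None:
                spans.append((ent, start, i))
            ent, start = None, None
    if ent is not None:
        spans.append((ent, start, len(tags)))
    return spans

tokens = ["Obama", "visited", "New", "York", "City"]
tags = ["B-PER", "O", "B-GPE", "I-GPE", "I-GPE"]
print(bio_to_spans(tokens, tags))  # [('PER', 0, 1), ('GPE', 2, 5)]
```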
bc0e8a87d87eb9cb4f50af49ba9d2968
mit
['BERT', 'token-classification', 'sequence-tagger-model']
false
How to use ```python >>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer >>> ner_model = AutoModelForTokenClassification.from_pretrained("ychenNLP/arabic-ner-ace") >>> ner_tokenizer = AutoTokenizer.from_pretrained("ychenNLP/arabic-ner-ace") >>> ner_pip = pipeline("ner", model=ner_model, tokenizer=ner_tokenizer, grouped_entities=True) >>> output = ner_pip('Protests break out across the US after Supreme Court overturns.') >>> print(output) [{'entity_group': 'GPE', 'score': 0.9979881, 'word': 'us', 'start': 30, 'end': 32}, {'entity_group': 'ORG', 'score': 0.99898684, 'word': 'supreme court', 'start': 39, 'end': 52}] >>> output = ner_pip('قال وزير العدل التركي بكير بوزداغ إن أنقرة تريد 12 مشتبهاً بهم من فنلندا و 21 من السويد') >>> print(output) [{'entity_group': 'PER', 'score': 0.9996214, 'word': 'وزير', 'start': 4, 'end': 8}, {'entity_group': 'ORG', 'score': 0.9952383, 'word': 'العدل', 'start': 9, 'end': 14}, {'entity_group': 'GPE', 'score': 0.9996675, 'word': 'التركي', 'start': 15, 'end': 21}, {'entity_group': 'PER', 'score': 0.9978992, 'word': 'بكير بوزداغ', 'start': 22, 'end': 33}, {'entity_group': 'GPE', 'score': 0.9997154, 'word': 'انقرة', 'start': 37, 'end': 42}, {'entity_group': 'PER', 'score': 0.9946885, 'word': 'مشتبها بهم', 'start': 51, 'end': 62}, {'entity_group': 'GPE', 'score': 0.99967396, 'word': 'فنلندا', 'start': 66, 'end': 72}, {'entity_group': 'PER', 'score': 0.99694425, 'word': '21', 'start': 75, 'end': 77}, {'entity_group': 'GPE', 'score': 0.99963355, 'word': 'السويد', 'start': 81, 'end': 87}] ```
48709a68598bbd611a52f484a2cab5eb
apache-2.0
['generated_from_trainer']
false
Article_500v5_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v5_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.1914 - Precision: 0.6408 - Recall: 0.7218 - F1: 0.6789 - Accuracy: 0.9356
e56787c5e153482a93b33a956da657fe
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 56 | 0.2937 | 0.4307 | 0.5257 | 0.4735 | 0.9010 | | No log | 2.0 | 112 | 0.2037 | 0.6089 | 0.695 | 0.6491 | 0.9305 | | No log | 3.0 | 168 | 0.1914 | 0.6408 | 0.7218 | 0.6789 | 0.9356 |
2da8d4bde7ddb5fd36b04eb88a6d7906
mit
['generated_from_trainer']
false
roberta-large-mnli-misogyny-sexism-4tweets-2e-05-0.05 This model is a fine-tuned version of [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6222 - Accuracy: 0.7064 - F1: 0.7158 - Precision: 0.6462 - Recall: 0.8022 - Mae: 0.2936 - Tn: 336 - Fp: 202 - Fn: 91 - Tp: 369
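The reported accuracy, precision, recall, F1, and MAE follow directly from the Tn/Fp/Fn/Tp counts in the card above; a quick sketch re-deriving them (rounded to 4 decimals as in the card):

```python
tn, fp, fn, tp = 336, 202, 91, 369
total = tn + fp + fn + tp

accuracy = (tp + tn) / total
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
mae = (fp + fn) / total  # for 0/1 labels, MAE equals the error rate

print(round(accuracy, 4), round(precision, 4), round(recall, 4),
      round(f1, 4), round(mae, 4))
# -> 0.7064 0.6462 0.8022 0.7158 0.2936 (matches the card)
```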
e5247d17e1945ba1073a64d3b07c1731
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|:---:|:---:|:--:|:---:| | 0.5053 | 1.0 | 1346 | 0.6657 | 0.6663 | 0.7013 | 0.5969 | 0.85 | 0.3337 | 274 | 264 | 69 | 391 | | 0.4093 | 2.0 | 2692 | 0.6222 | 0.7064 | 0.7158 | 0.6462 | 0.8022 | 0.2936 | 336 | 202 | 91 | 369 |
6a47e51c513f4dd91a588fbddc807e14
apache-2.0
['generated_from_keras_callback']
false
Electra-base-squad-adversarialqa-epoch-3 This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5566 - Epoch: 2
488b252ebbea3251d4a777e6a99eb0e8
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ft500_6class This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5162 - Accuracy: 0.356 - F1: 0.3347
313b533e1b3530b590270f2c9da55e95
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.579 | 1.0 | 188 | 1.5575 | 0.2933 | 0.2521 | | 1.4527 | 2.0 | 376 | 1.5043 | 0.3227 | 0.2821 | | 1.3767 | 3.0 | 564 | 1.4982 | 0.34 | 0.2938 | | 1.3122 | 4.0 | 752 | 1.4784 | 0.368 | 0.3454 | | 1.2678 | 5.0 | 940 | 1.5162 | 0.356 | 0.3347 |
27209d5b2d1880323090aeac6c25e0c1
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_xls-r_gender_male-2_female-8_s303 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
b472cc4f120303b2a521cc2020f7b566
apache-2.0
['generated_from_trainer']
false
distilbert-base-multilingual-cased-misogyny-sexism-decay0.05-indomain-mix-bal-0 This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5814 - Accuracy: 0.772 - F1: 0.7343 - Precision: 0.8799 - Recall: 0.63 - Mae: 0.228 - Tn: 457 - Fp: 43 - Fn: 185 - Tp: 315
631ada8a0e86a78ba280083f0142ab6e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----:|:---:|:--:|:---:|:---:| | 0.4018 | 1.0 | 1356 | 0.5260 | 0.744 | 0.68 | 0.9067 | 0.544 | 0.256 | 472 | 28 | 228 | 272 | | 0.2932 | 2.0 | 2712 | 0.5655 | 0.757 | 0.7047 | 0.8978 | 0.58 | 0.243 | 467 | 33 | 210 | 290 | | 0.236 | 3.0 | 4068 | 0.5814 | 0.772 | 0.7343 | 0.8799 | 0.63 | 0.228 | 457 | 43 | 185 | 315 |
4956289f5db585ae03e117d84d0027d0
apache-2.0
['multilingual model', 'generated_from_trainer']
false
mt5-small-finetuned-multilingual-xlsum-new This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the 45 languages of the XL-Sum dataset. It achieves the following results on the evaluation set: - Loss: 2.7679 - Rouge1: 9.1993 - Rouge2: 2.3416 - Rougel: 7.6684 - Rougelsum: 7.7074
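The Rouge1 figure above measures unigram overlap between a generated and a reference summary. The official scores come from the `rouge_score` package; the sketch below shows only the unigram F-measure idea and skips stemming and bootstrap aggregation:

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram-overlap F1 between two whitespace-tokenized strings."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    p = overlap / sum(cand.values())
    r = overlap / sum(ref.values())
    return 2 * p * r / (p + r)

print(rouge1_f("the cat sat on the mat", "the cat lay on the mat"))  # ≈ 0.8333 (5/6 overlap)
```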
380efd0ebe2a3e99cd924b2da5b1e45d
apache-2.0
['multilingual model', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 3.9684 | 1.0 | 1687 | 2.8902 | 8.0531 | 1.8357 | 6.7234 | 6.7401 | | 3.62 | 2.0 | 3374 | 2.8486 | 8.4881 | 2.0178 | 7.0542 | 7.0854 | | 3.3765 | 3.0 | 5061 | 2.7986 | 8.7796 | 2.2342 | 7.3363 | 7.3645 | | 3.5043 | 4.0 | 6748 | 2.7677 | 9.0486 | 2.3099 | 7.5493 | 7.5685 | | 3.338 | 5.0 | 8435 | 2.7679 | 9.1993 | 2.3416 | 7.6684 | 7.7074 |
41ab03c40c5b3444433c3b2efa43d44d
mit
[]
false
model by chrisemoody This is the Stable Diffusion model fine-tuned on the robeez baby girl water shoes concept, taught to Stable Diffusion with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of sks shoes** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/robeez-baby-girl-water-shoes/resolve/main/concept_images/3.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/robeez-baby-girl-water-shoes/resolve/main/concept_images/1.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/robeez-baby-girl-water-shoes/resolve/main/concept_images/5.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/robeez-baby-girl-water-shoes/resolve/main/concept_images/0.jpeg) ![image 4](https://huggingface.co/sd-dreambooth-library/robeez-baby-girl-water-shoes/resolve/main/concept_images/4.jpeg) ![image 5](https://huggingface.co/sd-dreambooth-library/robeez-baby-girl-water-shoes/resolve/main/concept_images/2.jpeg)
96dfeac8e6d734b643e838b3e76ea8c9
apache-2.0
['classification']
false
camembert-fr-covid-tweet-sentiment-classification This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), fine-tuned on SST-2. It reaches an accuracy of 71% on the dev set. In this dataset, given a tweet, the goal is to infer the underlying sentiment of the tweet by choosing from three classes: - 0 : negatif (negative) - 1 : neutre (neutral) - 2 : positif (positive)
c38f5eed64d8d1e2bd18fe1bb6ee567c
apache-2.0
['classification']
false
Pipelining the Model ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-sentiment-classification") model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-sentiment-classification") nlp_topic_classif = pipeline("text-classification", model=model, tokenizer=tokenizer) nlp_topic_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...") ```
2dec0361663c680b760f6b4e2339b4a3
gpl-3.0
['electra', 'tagalog', 'filipino']
false
ELECTRA Tagalog Base Cased Generator Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This is the generator model used to sample synthetic text and pretrain the discriminator. Only use this model for retraining and mask-filling. For the actual model for downstream tasks, please refer to the discriminator models.
03ee246e5ae5d2c479a0daa3be520ab0
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/stsb-bert-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
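For the clustering and semantic-search use cases mentioned above, the 768-dimensional embeddings are typically compared with cosine similarity. A stdlib-only sketch with stand-in vectors (real embeddings would come from `model.encode`, as shown in the usage section of this card):

```python
import math
import random

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in 768-d "embeddings"; in practice these come from model.encode(...)
random.seed(0)
emb_a = [random.gauss(0, 1) for _ in range(768)]
emb_b = [x + 0.1 * random.gauss(0, 1) for x in emb_a]  # a near-duplicate of emb_a
emb_c = [random.gauss(0, 1) for _ in range(768)]       # unrelated

print(cosine_sim(emb_a, emb_b) > cosine_sim(emb_a, emb_c))  # near-duplicates score higher
```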
9684bef52f971a1e9448065ade0a62fb
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/stsb-bert-base') embeddings = model.encode(sentences) print(embeddings) ```
1aa56071f41c25160a36834e0b7dfe39
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/stsb-bert-base)
e4d8ec22bb9957d4fdbc1d94f26b7dbc
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8628 - Matthews Correlation: 0.5331
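Matthews correlation is computed from the full binary confusion matrix rather than from accuracy alone. The card reports MCC = 0.5331 but not the underlying matrix, so the counts below are made up purely to illustrate the formula:

```python
import math

def mcc(tp, tn, fp, fn):
    """Binary Matthews correlation coefficient; 0.0 when the denominator vanishes."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Illustrative counts only -- not the matrix behind the 0.5331 reported above.
print(round(mcc(tp=40, tn=30, fp=10, fn=20), 4))  # -> 0.4082
```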
07897ae55e21a95bd58e7221f5989b1a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5253 | 1.0 | 535 | 0.5214 | 0.3943 | | 0.3459 | 2.0 | 1070 | 0.5551 | 0.4693 | | 0.2326 | 3.0 | 1605 | 0.6371 | 0.5059 | | 0.1718 | 4.0 | 2140 | 0.7851 | 0.5111 | | 0.1262 | 5.0 | 2675 | 0.8628 | 0.5331 |
18d4e4036d48e6d5ef5dbead27ad3666
mit
['generated_from_trainer']
false
xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-3 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2864
81b1de3904f7f524c5a5504edad8b98e
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.6088 | 1.0 | 5533 | 1.4429 | | 1.3928 | 2.0 | 11066 | 1.3183 | | 1.3059 | 3.0 | 16599 | 1.2864 |
1a27d9887e1a863a500b3b06b03bcfb6
mit
['generated_from_keras_callback']
false
deepiit98/Heresy-clustered This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1670 - Train End Logits Accuracy: 0.9688 - Train Start Logits Accuracy: 0.9444 - Validation Loss: 0.1247 - Validation End Logits Accuracy: 1.0 - Validation Start Logits Accuracy: 1.0 - Epoch: 0
9ee051d3778fe133fbc6a6b0716daf80
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.1670 | 0.9688 | 0.9444 | 0.1247 | 1.0 | 1.0 | 0 |
192f8cb4af38223510ed996a4fd94745
apache-2.0
['generated_from_keras_callback']
false
JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6429 - Validation Loss: 0.8071 - Epoch: 2
b79902db0216851c326b403f305b26df
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6423 | 0.8071 | 0 | | 0.6424 | 0.8071 | 1 | | 0.6429 | 0.8071 | 2 |
0487e09cc6906c2ee8631e3230566b7b
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.5628 | 1.0 | 2249 | 6.4705 | | 6.1956 | 2.0 | 4498 | 6.2012 | | 6.021 | 3.0 | 6747 | 6.1128 |
e820f10d5a8f365b7c4eb837e7420dbf
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large V2 Breton This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 br dataset. It achieves the following results on the evaluation set: - Loss: 0.6425 - Wer: 35.1077
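The Wer figure above is the word error rate: word-level Levenshtein distance divided by the number of reference words. A minimal sketch of the computation (the card's 35.1077 was produced by the standard `evaluate`/`jiwer` implementations, not this one, and the Breton words below are illustrative):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between first i reference and first j hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)

print(wer("ar mor a zo bras", "ar mor zo bras"))  # one deletion out of five words -> 0.2
```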
9b03d7700b62903a81b45a01501db1fe
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0065 | 5.03 | 3000 | 0.6425 | 35.1077 |
bccbe2a5b31bd101dc82d2396ad263a5
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
Wav2Vec2-Large-XLSR-53-English Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on {language} using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
51f7f32c597452762523f5bf16f769e6
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
TODO: replace {lang_id} with your language code here. Make sure the code is one of the *ISO codes* listed on [this](https://huggingface.co/languages) site. processor = Wav2Vec2Processor.from_pretrained("{model_id}")
8a8c4e38951cd900778ea075e2bb37b7
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): \tspeech_array, sampling_rate = torchaudio.load(batch["path"]) \tbatch["speech"] = resampler(speech_array).squeeze().numpy() \treturn batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): \tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ```
84c34c9b1355010024362ce1ca97e905
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
TODO: replace language with your {language}, *e.g.* French ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "{lang_id}", split="test")
a4d5903ba31a956e1c20b2d2fd901b2e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
TODO: replace {lang_id} with your language code here. Make sure the code is one of the *ISO codes* listed on [this](https://huggingface.co/languages) site. wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("{model_id}")
a1e8e392e492b7e02f9ca996e664e8b0
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic` model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
dbaceffe7df29a0e4be6a1805356d353
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
We need to read the audio files as arrays def evaluate(batch): \tinputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) \twith torch.no_grad(): \t\tlogits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits \tpred_ids = torch.argmax(logits, dim=-1) \tbatch["pred_strings"] = processor.batch_decode(pred_ids) \treturn batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: XX.XX %
091935f0f56247cddf071debc2d46976
apache-2.0
['translation']
false
opus-mt-sv-umb * source languages: sv * target languages: umb * OPUS readme: [sv-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-umb/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.eval.txt)
c8e26708e2f80a37ff4114fa84f4b777
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3847 - F1: 0.8178
345967def04cc90dd0cc75c74b49cd51
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.5654 | 1.0 | 17160 | 0.3847 | 0.8178 |
7f439598edbce07e58da9ab65001d2f3
gpl-3.0
['pytorch', 'token-classification', 'bert', 'zh']
false
Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-ner') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
4333d3c5f32795aa69343e0cfdc0cb0c
apache-2.0
['generated_from_keras_callback']
false
MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0589 - Validation Loss: 5.3227 - Epoch: 0
bf3b172a6d455ca2bd12069a0ef2d044
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6645 - Accuracy: 0.7917 - F1: 0.8590
272240755314f61678010b67d9a75710
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 63 | 0.5387 | 0.7402 | 0.8349 | | No log | 2.0 | 126 | 0.5770 | 0.7696 | 0.8513 | | No log | 3.0 | 189 | 0.5357 | 0.7574 | 0.8223 | | No log | 4.0 | 252 | 0.6645 | 0.7917 | 0.8590 | | No log | 5.0 | 315 | 0.6977 | 0.7721 | 0.8426 |
b862830429b7082c0fa889cac75b4068
apache-2.0
[]
false
Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer; therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers. This is the first version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. 
This model has the following configuration: - 12 repeating layers - 128 embedding dimension - 4096 hidden dimension - 64 attention heads - 223M parameters
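The MLM objective above selects 15% of input positions at random for prediction (BERT-style recipes further split those into mask/random/keep replacements, which this sketch omits). An illustrative position-selection step, not ALBERT's actual preprocessing code:

```python
import random

def select_mlm_positions(tokens, mask_prob=0.15, seed=0):
    """Return the token positions the model must predict, ~15% of the input."""
    rng = random.Random(seed)
    n = max(1, round(mask_prob * len(tokens)))
    return sorted(rng.sample(range(len(tokens)), n))

tokens = "the quick brown fox jumps over the lazy dog again today".split()
positions = select_mlm_positions(tokens)
masked = [("[MASK]" if i in positions else t) for i, t in enumerate(tokens)]
print(positions, masked)
```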
863fad3912778b1bdc5e58930fefc0d3
apache-2.0
[]
false
How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1') model = AlbertModel.from_pretrained("albert-xxlarge-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v1') model = TFAlbertModel.from_pretrained("albert-xxlarge-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
c1d5ab9e79b8c0045d5a85c6f4930a8e
apache-2.0
[]
false
Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v1') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] ``` This bias will also affect all fine-tuned versions of this model.
c7d5c1d82e7631f59fd2f54906353c6f
mit
['generated_from_trainer']
false
inspiring_mirzakhani This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the 
tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
35081c8f2ca2aff226d823aacaa84bf2
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 'name': 'Unlikelihood', 'score_threshold': 0.00078}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'inspiring_mirzakhani', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
c202f06200fcdf52b8f334c4db6d4105
apache-2.0
['generated_from_trainer']
false
finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3455 - Accuracy: 0.8609 - F1: 0.9156
ef991960c4d0e2ce781c0d740b1edd0b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 81 | 0.4468 | 0.8235 | 0.8929 | | No log | 2.0 | 162 | 0.4497 | 0.8382 | 0.9 | | No log | 3.0 | 243 | 0.4861 | 0.8309 | 0.8940 | | No log | 4.0 | 324 | 0.5087 | 0.8235 | 0.8879 | | No log | 5.0 | 405 | 0.5228 | 0.8199 | 0.8858 |
c756991e61a9d3f847c99042ffb6fe3a
apache-2.0
['translation']
false
opus-mt-en-sv * source languages: en * target languages: sv * OPUS readme: [en-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.eval.txt)
dce8373387c4c8726d94904a0887d89e
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_logit_kd_qnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3978 - Accuracy: 0.5883
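The "logit_kd" in the model name suggests the student was trained with logit-based knowledge distillation. A minimal sketch of the softened cross-entropy loss (the temperature value and function names here are illustrative assumptions, not taken from the actual training script):

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against the
    teacher's softened distribution, scaled by T^2 to keep gradient
    magnitudes comparable across temperatures."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    ce = -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))
    return ce * temperature ** 2
```

In practice this soft-target term is usually mixed with the ordinary hard-label cross-entropy.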
21fc3d482a8d9df66db99b744c557929
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4154 | 1.0 | 410 | 0.3986 | 0.5779 | | 0.3986 | 2.0 | 820 | 0.3978 | 0.5883 | | 0.3909 | 3.0 | 1230 | 0.3990 | 0.5887 | | 0.384 | 4.0 | 1640 | 0.3988 | 0.5913 | | 0.3761 | 5.0 | 2050 | 0.4001 | 0.5900 | | 0.3634 | 6.0 | 2460 | 0.4026 | 0.6121 | | 0.3413 | 7.0 | 2870 | 0.4068 | 0.6174 |
09bdb46ab0917c2d552355897de075a8
apache-2.0
['translation']
false
tgl-por * source group: Tagalog * target group: Portuguese * OPUS readme: [tgl-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md) * model: transformer-align * source language(s): tgl_Latn * target language(s): por * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.eval.txt)
5ff818b6d65a6b0b2c81218e0048e1d7
apache-2.0
['translation']
false
System Info: - hf_name: tgl-por - source_languages: tgl - target_languages: por - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tl', 'pt'] - src_constituents: {'tgl_Latn'} - tgt_constituents: {'por'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt - src_alpha3: tgl - tgt_alpha3: por - short_pair: tl-pt - chrF2_score: 0.522 - bleu: 28.8 - brevity_penalty: 0.981 - ref_len: 12826.0 - src_name: Tagalog - tgt_name: Portuguese - train_date: 2020-06-17 - src_alpha2: tl - tgt_alpha2: pt - prefer_old: False - long_pair: tgl-por - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
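The brevity_penalty field above follows BLEU's standard definition: candidates at least as long as the reference incur no penalty, shorter ones are scaled down exponentially. As a sketch:

```python
import math

def brevity_penalty(candidate_len, reference_len):
    """BLEU brevity penalty: 1.0 when the candidate corpus is at least as
    long as the reference, exp(1 - r/c) when it is shorter."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1 - reference_len / candidate_len)
```

A reported value of 0.981 thus indicates the system's translations were, in aggregate, slightly shorter than the reference corpus (ref_len 12826).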
d3607900a39ff321b7d7905db90bddec
mit
['translation']
false
Usage ```bash pip3 install ctranslate2 pyonmttok ``` Simple translation using Python: ```python import ctranslate2 from huggingface_hub import snapshot_download model_dir = snapshot_download(repo_id="softcatala/opennmt-eng-cat", revision="main") translator = ctranslate2.Translator(model_dir) print(translator.translate_batch([["▁Hello", "▁world", "!"]])) [[{'tokens': ['▁Hola', '▁món', '!']}]] ``` Simple tokenization & translation using Python: ```python import ctranslate2 import pyonmttok from huggingface_hub import snapshot_download model_dir = snapshot_download(repo_id="softcatala/opennmt-eng-cat", revision="main") tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/sp_m.model") tokenized = tokenizer.tokenize("Hello world!") translator = ctranslate2.Translator(model_dir) translated = translator.translate_batch([tokenized[0]]) print(tokenizer.detokenize(translated[0][0]['tokens'])) Hola món! ```
3e76737d545b1568a16f83ddc9a8d35b
apache-2.0
['text-classification', 'neural-compressor', 'int8']
false
Model Details **Model Description:** This model is a [DistilBERT](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) fine-tuned on SST-2 dynamically quantized and pruned using a magnitude pruning strategy to obtain a sparsity of 10% with [optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor). - **Model Type:** Text Classification - **Language(s):** English - **License:** Apache-2.0 - **Parent Model:** For more details on the original model, we encourage users to check out [this](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model card.
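The magnitude pruning strategy mentioned above removes the weights with the smallest absolute values. A minimal sketch of the idea on a flat list of weights (the actual model was pruned per-tensor by Intel® Neural Compressor through optimum-intel, not by this toy function):

```python
def magnitude_prune(weights, sparsity=0.1):
    """Zero out the smallest-magnitude fraction of a flat list of weights."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # threshold = magnitude of the n_prune-th smallest weight
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

At 10% sparsity, one weight in ten (the least salient by magnitude) is set to zero.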
06a2da2bb07a1b94c230afd1cdeeacf0
apache-2.0
['text-classification', 'neural-compressor', 'int8']
false
How to Get Started With the Model To load the quantized model and run inference using the Transformers [pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines), you can do as follows: ```python from transformers import AutoTokenizer, pipeline from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassification model_id = "echarlaix/distilbert-sst2-inc-dynamic-quantization-magnitude-pruning-0.1" model = IncQuantizedModelForSequenceClassification.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) cls_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer) text = "He's a dreadful magician." outputs = cls_pipe(text) ```
b6cf4fd444451ffbc45810d8346bc9b5
['apache-2.0']
['causal-lm', 'summarization']
false
How to use Colab: [link](https://colab.research.google.com/drive/1eR-ev0Y5ISWIwGnzYYoHyGMaSIUz8GTN) ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "IlyaGusev/rugpt3medium_sum_gazeta" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) article_text = "..." text_tokens = tokenizer( article_text, max_length=600, add_special_tokens=False, padding=False, truncation=True )["input_ids"] input_ids = text_tokens + [tokenizer.sep_token_id] input_ids = torch.LongTensor([input_ids]) output_ids = model.generate( input_ids=input_ids, no_repeat_ngram_size=4 ) summary = tokenizer.decode(output_ids[0], skip_special_tokens=False) summary = summary.split(tokenizer.sep_token)[1] summary = summary.split(tokenizer.eos_token)[0] print(summary) ```
c3b5903c38d21c857983b6319f4abd14
['apache-2.0']
['causal-lm', 'summarization']
false
Training procedure - Training script: [train.py](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/train.py) - Config: [gpt_training_config.json](https://github.com/IlyaGusev/summarus/blob/master/external/hf_scripts/configs/gpt_training_config.json)
f79f31bec2707d33c03b8457dc187c26
['apache-2.0']
['causal-lm', 'summarization']
false
Eval results * Train dataset: **Gazeta v1 train** * Test dataset: **Gazeta v1 test** * Source max_length: **600** * Target max_length: **200** * no_repeat_ngram_size: **4** * num_beams: **5** | Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length | |:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----| | [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **32.4** | 14.3 | 28.0 | 39.7 | **26.4** | 12.1 | 371 | | [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 32.2 | **14.4** | **28.1** | **39.8** | 25.7 | **12.3** | 330 | | [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 26.2 | 7.7 | 21.7 | 33.8 | 18.2 | 4.3 | 244 | * Train dataset: **Gazeta v1 train** * Test dataset: **Gazeta v2 test** * Source max_length: **600** * Target max_length: **200** * no_repeat_ngram_size: **4** * num_beams: **5** | Model | R-1-f | R-2-f | R-L-f | chrF | METEOR | BLEU | Avg char length | |:--------------------------|:------|:------|:------|:-------|:-------|:-----|:-----| | [mbart_ru_sum_gazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) | **28.7** | **11.1** | 24.4 | **37.3** | **22.7** | **9.4** | 373 | | [rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) | 28.6 | **11.1** | **24.5** | 37.2 | 22.0 | **9.4** | 331 | | [rugpt3medium_sum_gazeta](https://huggingface.co/IlyaGusev/rugpt3medium_sum_gazeta) | 24.1 | 6.5 | 19.8 | 32.1 | 16.3 | 3.6 | 242 | Evaluation script: [evaluate.py](https://github.com/IlyaGusev/summarus/blob/master/evaluate.py) Flags: --language ru --tokenize-after --lower
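The R-1-f column above is the unigram-overlap ROUGE F-score. A minimal whitespace-tokenized sketch of the metric (the reported numbers come from the linked evaluate.py, which also lowercases and tokenizes the Russian text):

```python
from collections import Counter

def rouge1_f(reference, hypothesis):
    """Unigram ROUGE F1 over whitespace tokens."""
    ref_counts = Counter(reference.split())
    hyp_counts = Counter(hypothesis.split())
    overlap = sum((ref_counts & hyp_counts).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

R-2-f and R-L-f generalize this to bigrams and longest common subsequences, respectively.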
d13b7a3068262e91553065ab89781014
mit
[]
false
reksio dog on Stable Diffusion This is the `<reksio-dog>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<reksio-dog> 0](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/1.jpeg) ![<reksio-dog> 1](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/5.jpeg) ![<reksio-dog> 2](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/0.jpeg) ![<reksio-dog> 3](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/4.jpeg) ![<reksio-dog> 4](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/2.jpeg) ![<reksio-dog> 5](https://huggingface.co/sd-concepts-library/reksio-dog/resolve/main/concept_images/3.jpeg)
e219e12da9b7a1ad66d63ea9a014edd4
mit
['generated_from_trainer']
false
finetuned-pflegeinterventionen-evidenzbasiert-und-patientenorientiert-umsetzen This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4393 - Accuracy: 0.8187 - F1: 0.8137
c3bbe91781c50be01e61f9faf92ccfc0
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.446 | 1.0 | 1365 | 0.4393 | 0.8115 | 0.8059 | | 0.3457 | 2.0 | 2730 | 0.4393 | 0.8187 | 0.8137 |
f02f12b4a15ea572db35f77c8acdc2d9
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_vp-100k_s449 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
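Input audio at other sampling rates must be resampled to 16kHz first (in practice via torchaudio.transforms.Resample or librosa). Purely to illustrate the idea, here is a naive linear-interpolation resampler; real resamplers also apply an anti-aliasing filter, so do not use this for actual preprocessing:

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampling (no anti-aliasing filter)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate  # fractional index into the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```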
21f0b0662c20eba6c46c900f0d711df6
mit
['generated_from_trainer']
false
roberta_base_fine_tuned_mind This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4252 - Accuracy: 0.8881
e91f0f12affa2247c014bae172b5b048
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7414 | 1.0 | 3054 | 0.6344 | 0.7878 | | 0.5612 | 2.0 | 6108 | 0.4568 | 0.8563 | | 0.3903 | 3.0 | 9162 | 0.4252 | 0.8881 |
db0ce63f20956eb68d4314f8f062f4dd
apache-2.0
['generated_from_trainer']
false
correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1801 - Precision: 0.6153 - Recall: 0.7301 - F1: 0.6678 - Accuracy: 0.9346
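The F1 above is the harmonic mean of the reported precision and recall, which can be checked directly:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# reproduces the reported F1 (to rounding) from the reported precision/recall
print(round(f1_score(0.6153, 0.7301), 4))  # → 0.6678
```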
7044e7838eff6d380cfad933c8fb4fea
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 11 | 0.2746 | 0.4586 | 0.5922 | 0.5169 | 0.9031 | | No log | 2.0 | 22 | 0.2223 | 0.5233 | 0.6181 | 0.5668 | 0.9148 | | No log | 3.0 | 33 | 0.2162 | 0.5335 | 0.6699 | 0.5940 | 0.9274 | | No log | 4.0 | 44 | 0.2053 | 0.5989 | 0.7055 | 0.6478 | 0.9237 | | No log | 5.0 | 55 | 0.2123 | 0.5671 | 0.7249 | 0.6364 | 0.9267 |
01ef133daf60c5125e4507ec245ba7e3
apache-2.0
['Early Modern French', 'Historical', 'POS', 'flair']
false
D'AlemBERT-POS model This model is a fine-tuned version of [D'AlemBERT](https://huggingface.co/pjox/dalembert) on the [FreEMLPM corpus](https://doi.org/10.5281/zenodo.6481300) for Early Modern French. It was introduced in [this paper](https://aclanthology.org/2022.lrec-1.359/).
e18b553b0d70149ff2bcfaa92713911e
mit
['generated_from_trainer']
false
bart-cnn-science-v3-e6 This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8057 - Rouge1: 53.7462 - Rouge2: 34.9622 - Rougel: 37.5676 - Rougelsum: 51.0619 - Gen Len: 142.0
9fd9c3c24bb55d1f7b277a823e0dccf3
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 398 | 0.9961 | 52.632 | 32.8104 | 35.0789 | 50.3747 | 142.0 | | 1.174 | 2.0 | 796 | 0.8565 | 52.8308 | 32.7064 | 34.6605 | 50.3348 | 142.0 | | 0.7073 | 3.0 | 1194 | 0.8322 | 52.2418 | 32.8677 | 36.1806 | 49.6297 | 141.5556 | | 0.4867 | 4.0 | 1592 | 0.8137 | 53.5537 | 34.5404 | 36.7194 | 50.8394 | 142.0 | | 0.4867 | 5.0 | 1990 | 0.7996 | 53.4959 | 35.1017 | 37.5143 | 50.9972 | 141.8704 | | 0.3529 | 6.0 | 2388 | 0.8057 | 53.7462 | 34.9622 | 37.5676 | 51.0619 | 142.0 |
a7499faa4d63e4d50ec445c37218de86
apache-2.0
['generated_from_trainer']
false
wac2vec-lllfantomlll This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5560 - Wer: 0.3417
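The Wer metric is the word error rate: the word-level edit distance (substitutions, insertions, deletions) between the reference transcript and the hypothesis, divided by the reference length. A minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.3417 thus means roughly one word edit per three reference words.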
36e477dc880914858dcea9576c72e6ab
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5768 | 1.0 | 500 | 2.0283 | 1.0238 | | 0.9219 | 2.01 | 1000 | 0.5103 | 0.5022 | | 0.4497 | 3.01 | 1500 | 0.4746 | 0.4669 | | 0.3163 | 4.02 | 2000 | 0.4144 | 0.4229 | | 0.2374 | 5.02 | 2500 | 0.4186 | 0.4161 | | 0.2033 | 6.02 | 3000 | 0.4115 | 0.3975 | | 0.1603 | 7.03 | 3500 | 0.4424 | 0.3817 | | 0.1455 | 8.03 | 4000 | 0.4151 | 0.3918 | | 0.1276 | 9.04 | 4500 | 0.4940 | 0.3798 | | 0.108 | 10.04 | 5000 | 0.4580 | 0.3688 | | 0.1053 | 11.04 | 5500 | 0.4243 | 0.3700 | | 0.0929 | 12.05 | 6000 | 0.4999 | 0.3727 | | 0.0896 | 13.05 | 6500 | 0.4991 | 0.3624 | | 0.0748 | 14.06 | 7000 | 0.4924 | 0.3602 | | 0.0681 | 15.06 | 7500 | 0.4908 | 0.3544 | | 0.0619 | 16.06 | 8000 | 0.5021 | 0.3559 | | 0.0569 | 17.07 | 8500 | 0.5448 | 0.3518 | | 0.0549 | 18.07 | 9000 | 0.4919 | 0.3508 | | 0.0478 | 19.08 | 9500 | 0.4704 | 0.3513 | | 0.0437 | 20.08 | 10000 | 0.5058 | 0.3555 | | 0.0421 | 21.08 | 10500 | 0.5127 | 0.3489 | | 0.0362 | 22.09 | 11000 | 0.5439 | 0.3527 | | 0.0322 | 23.09 | 11500 | 0.5418 | 0.3469 | | 0.0327 | 24.1 | 12000 | 0.5298 | 0.3422 | | 0.0292 | 25.1 | 12500 | 0.5511 | 0.3426 | | 0.0246 | 26.1 | 13000 | 0.5349 | 0.3472 | | 0.0251 | 27.11 | 13500 | 0.5646 | 0.3391 | | 0.0214 | 28.11 | 14000 | 0.5821 | 0.3424 | | 0.0217 | 29.12 | 14500 | 0.5560 | 0.3417 |
760536681c0586192dd5a561a56acdc0
apache-2.0
['translation']
false
roa-eng * source group: Romance languages * target group: English * OPUS readme: [roa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/roa-eng/README.md) * model: transformer * source language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.eval.txt)
a022956fc0d1b32ffb17568b42ae9487
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-enro-roneng.ron.eng | 37.1 | 0.631 | | newsdiscussdev2015-enfr-fraeng.fra.eng | 31.6 | 0.564 | | newsdiscusstest2015-enfr-fraeng.fra.eng | 36.1 | 0.592 | | newssyscomb2009-fraeng.fra.eng | 29.3 | 0.563 | | newssyscomb2009-itaeng.ita.eng | 33.1 | 0.589 | | newssyscomb2009-spaeng.spa.eng | 29.2 | 0.562 | | news-test2008-fraeng.fra.eng | 25.2 | 0.533 | | news-test2008-spaeng.spa.eng | 26.6 | 0.542 | | newstest2009-fraeng.fra.eng | 28.6 | 0.557 | | newstest2009-itaeng.ita.eng | 32.0 | 0.580 | | newstest2009-spaeng.spa.eng | 28.9 | 0.559 | | newstest2010-fraeng.fra.eng | 29.9 | 0.573 | | newstest2010-spaeng.spa.eng | 33.3 | 0.596 | | newstest2011-fraeng.fra.eng | 31.2 | 0.585 | | newstest2011-spaeng.spa.eng | 32.3 | 0.584 | | newstest2012-fraeng.fra.eng | 31.3 | 0.580 | | newstest2012-spaeng.spa.eng | 35.3 | 0.606 | | newstest2013-fraeng.fra.eng | 31.9 | 0.575 | | newstest2013-spaeng.spa.eng | 32.8 | 0.592 | | newstest2014-fren-fraeng.fra.eng | 34.6 | 0.611 | | newstest2016-enro-roneng.ron.eng | 35.8 | 0.614 | | Tatoeba-test.arg-eng.arg.eng | 38.7 | 0.512 | | Tatoeba-test.ast-eng.ast.eng | 35.2 | 0.520 | | Tatoeba-test.cat-eng.cat.eng | 54.9 | 0.703 | | Tatoeba-test.cos-eng.cos.eng | 68.1 | 0.666 | | Tatoeba-test.egl-eng.egl.eng | 6.7 | 0.209 | | Tatoeba-test.ext-eng.ext.eng | 24.2 | 0.427 | | Tatoeba-test.fra-eng.fra.eng | 53.9 | 0.691 | | Tatoeba-test.frm-eng.frm.eng | 25.7 | 0.423 | | Tatoeba-test.gcf-eng.gcf.eng | 14.8 | 0.288 | | Tatoeba-test.glg-eng.glg.eng | 54.6 | 0.703 | | Tatoeba-test.hat-eng.hat.eng | 37.0 | 0.540 | | Tatoeba-test.ita-eng.ita.eng | 64.8 | 0.768 | | Tatoeba-test.lad-eng.lad.eng | 21.7 | 0.452 | | Tatoeba-test.lij-eng.lij.eng | 11.2 | 0.299 | | Tatoeba-test.lld-eng.lld.eng | 10.8 | 0.273 | | Tatoeba-test.lmo-eng.lmo.eng | 5.8 | 0.260 | | Tatoeba-test.mfe-eng.mfe.eng | 63.1 | 0.819 | | Tatoeba-test.msa-eng.msa.eng | 40.9 | 0.592 | | 
Tatoeba-test.multi.eng | 54.9 | 0.697 | | Tatoeba-test.mwl-eng.mwl.eng | 44.6 | 0.674 | | Tatoeba-test.oci-eng.oci.eng | 20.5 | 0.404 | | Tatoeba-test.pap-eng.pap.eng | 56.2 | 0.669 | | Tatoeba-test.pms-eng.pms.eng | 10.3 | 0.324 | | Tatoeba-test.por-eng.por.eng | 59.7 | 0.738 | | Tatoeba-test.roh-eng.roh.eng | 14.8 | 0.378 | | Tatoeba-test.ron-eng.ron.eng | 55.2 | 0.703 | | Tatoeba-test.scn-eng.scn.eng | 10.2 | 0.259 | | Tatoeba-test.spa-eng.spa.eng | 56.2 | 0.714 | | Tatoeba-test.vec-eng.vec.eng | 13.8 | 0.317 | | Tatoeba-test.wln-eng.wln.eng | 17.3 | 0.323 |
f6b0300877f6443a76a0f1aa729f07e5
apache-2.0
['translation']
false
System Info: - hf_name: roa-eng - source_languages: roa - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/roa-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa', 'en'] - src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.test.txt - src_alpha3: roa - tgt_alpha3: eng - short_pair: roa-en - chrF2_score: 0.6970000000000001 - bleu: 54.9 - brevity_penalty: 0.9790000000000001 - ref_len: 74762.0 - src_name: Romance languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: roa - tgt_alpha2: en - prefer_old: False - long_pair: roa-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
21dc894766254de5e2cd7a59df320baf
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-cola-custom-tokenizer-target-glue-mrpc This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-cola-custom-tokenizer](https://huggingface.co/muhtasham/tiny-mlm-glue-cola-custom-tokenizer) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1575 - Accuracy: 0.7083 - F1: 0.8027
4e4129ae190d01ed47fbb4db32306a69
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.596 | 4.35 | 500 | 0.5737 | 0.7034 | 0.8045 | | 0.5008 | 8.7 | 1000 | 0.6054 | 0.7132 | 0.8104 | | 0.4191 | 13.04 | 1500 | 0.6542 | 0.7034 | 0.7939 | | 0.332 | 17.39 | 2000 | 0.7210 | 0.7157 | 0.7993 | | 0.2612 | 21.74 | 2500 | 0.8037 | 0.7206 | 0.81 | | 0.2045 | 26.09 | 3000 | 0.8845 | 0.7083 | 0.7993 | | 0.1645 | 30.43 | 3500 | 0.9976 | 0.7181 | 0.8080 | | 0.1301 | 34.78 | 4000 | 1.1575 | 0.7083 | 0.8027 |
c1c282872ba83fb3c45a2b510cfcfd0e
mit
['generated_from_keras_callback']
false
ksabeh/xlnet-base-cased-attribute-correction This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0599 - Validation Loss: 0.0214 - Epoch: 0
906a367390a1468e2ef0f6da32838074
mit
['generated_from_keras_callback']
false
recklessrecursion/Wayback_Machine-clustered This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2349 - Train End Logits Accuracy: 0.9618 - Train Start Logits Accuracy: 0.9306 - Validation Loss: 2.6776 - Validation End Logits Accuracy: 0.6667 - Validation Start Logits Accuracy: 0.6667 - Epoch: 0
1200bd98170480bb60f6d2ea9b582463
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2349 | 0.9618 | 0.9306 | 2.6776 | 0.6667 | 0.6667 | 0 |
1075b288bf86e05be29df7fa5046144a
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
Wav2Vec2-Large-XLSR-53-Vietnamese

Fine-tuned [dragonSwing/wav2vec2-base-pretrain-vietnamese](https://huggingface.co/dragonSwing/wav2vec2-base-pretrain-vietnamese) on the Vietnamese speech recognition task, using 100 hours of labelled data from the [VSLP dataset](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view?usp=sharing). When using this model, make sure that your speech input is sampled at 16 kHz.
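Common Voice clips ship at 48 kHz, so they must be brought down to 16 kHz before being fed to the model. Proper resampling low-pass filters first (e.g. `torchaudio.transforms.Resample`, as in the snippets below), but the length arithmetic for the integer factor of 3 can be sketched with naive decimation (illustrative only, not anti-aliased):

```python
def naive_decimate(samples, factor):
    """Keep every `factor`-th sample: 48 kHz -> 16 kHz is factor 3.
    Real resamplers apply a low-pass filter first to avoid aliasing."""
    return samples[::factor]

one_second_48k = list(range(48_000))
one_second_16k = naive_decimate(one_second_48k, 3)
print(len(one_second_16k))  # 16000 samples: still one second of audio
```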
0d3b444de4f36fcd62029f9c58d2954f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "vi", split="test")

processor = Wav2Vec2Processor.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")

# Common Voice audio is 48 kHz; the model expects 16 kHz.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
f00efe338435ed4dc7e92162ec68cc29
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
Evaluation

The model can be evaluated as follows on the Vietnamese test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model.to("cuda")

chars_to_ignore_regex = r'[,?.!\-;:"“%\'�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
ae6693e9ba80e5f5c59b31210d38caa8
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
We need to read the audio files as arrays:

```python
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```
939c3d13cec7db17296b6edcf72d071f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech']
false
The preprocessed dataset can then be transcribed batch by batch and scored with WER:

```python
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=1)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 31.353591%
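The WER reported above is a word-level edit distance: substitutions, insertions, and deletions divided by the number of reference words. A minimal self-contained implementation, equivalent in spirit to the `wer` metric used in the snippet:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference words,
    computed with the classic dynamic-programming edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words.
print(word_error_rate("xin chào các bạn", "xin chào cá bạn"))  # → 0.25
```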
a2690cdc64cb61d4262c0566aa1bdac2
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
Wav2Vec2-Base-960h

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

The base model pretrained and fine-tuned on 960 hours of Librispeech on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract**

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec
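Wav2Vec2 CTC models are decoded by taking the argmax over the vocabulary at every frame, collapsing consecutive repeats, and dropping the blank token, which is what `processor.batch_decode` does under the hood after `torch.argmax`. A minimal pure-Python sketch of that greedy CTC decode (illustrative toy vocabulary, not the model's real one):

```python
def ctc_greedy_decode(frame_ids, blank_id=0, id_to_char=None):
    """Collapse repeated frame predictions, then remove blanks."""
    collapsed = []
    prev = None
    for i in frame_ids:
        if i != prev:
            collapsed.append(i)
        prev = i
    tokens = [i for i in collapsed if i != blank_id]
    if id_to_char:
        return "".join(id_to_char[i] for i in tokens)
    return tokens

# Frames c c - a a t decode to "cat"; a blank *between* two identical
# frames ("a - a") is what keeps a genuine double letter as two characters.
vocab = {1: "a", 2: "c", 3: "t"}
print(ctc_greedy_decode([2, 2, 0, 1, 1, 3], blank_id=0, id_to_char=vocab))  # → "cat"
print(ctc_greedy_decode([1, 0, 1], blank_id=0, id_to_char=vocab))           # → "aa"
```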
cb754bcbce29df8a4b290f2ae72437d6
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |
b219a048c844c9413a856b618f011b27
gpl-3.0
['generated_from_trainer']
false
bert-base-chinese-ws-finetuned-ner_all

This model is a fine-tuned version of [ckiplab/bert-base-chinese-ws](https://huggingface.co/ckiplab/bert-base-chinese-ws) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0330
- Precision: 0.9723
- Recall: 0.9734
- F1: 0.9728
- Accuracy: 0.9879
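The reported F1 is the harmonic mean of the precision and recall above; a quick sanity check reproduces it:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduces the reported F1 of 0.9728 from precision 0.9723 and recall 0.9734.
print(round(f1_score(0.9723, 0.9734), 4))
```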
1fc189e40ac32b89c45175f94ee7356a
gpl-3.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 18
- eval_batch_size: 18
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
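In `transformers`, a list like the one above maps roughly onto a `TrainingArguments` configuration. The dictionary below is a hypothetical reconstruction of that mapping (argument names follow the usual `transformers` conventions; this is not the original training script):

```python
# Hypothetical reconstruction of the hyperparameters listed above,
# using the standard transformers.TrainingArguments field names.
training_args = {
    "learning_rate": 1e-5,
    "per_device_train_batch_size": 18,
    "per_device_eval_batch_size": 18,
    "seed": 42,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-8,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 3,
}
print(training_args["learning_rate"])
```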
7c9e6401d90d95bb63dd0ab32613707b
gpl-3.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0648 | 0.29 | 500 | 0.0524 | 0.9586 | 0.9572 | 0.9579 | 0.9813 |
| 0.0509 | 0.59 | 1000 | 0.0460 | 0.9615 | 0.9628 | 0.9622 | 0.9832 |
| 0.0478 | 0.88 | 1500 | 0.0429 | 0.9624 | 0.9660 | 0.9642 | 0.9840 |
| 0.0417 | 1.17 | 2000 | 0.0409 | 0.9650 | 0.9680 | 0.9665 | 0.9851 |
| 0.0402 | 1.47 | 2500 | 0.0387 | 0.9662 | 0.9693 | 0.9677 | 0.9856 |
| 0.0378 | 1.76 | 3000 | 0.0359 | 0.9699 | 0.9717 | 0.9708 | 0.9869 |
| 0.0385 | 2.05 | 3500 | 0.0353 | 0.9703 | 0.9718 | 0.9710 | 0.9871 |
| 0.0337 | 2.34 | 4000 | 0.0341 | 0.9709 | 0.9731 | 0.9720 | 0.9875 |
| 0.0348 | 2.64 | 4500 | 0.0333 | 0.9721 | 0.9733 | 0.9727 | 0.9878 |
| 0.0346 | 2.93 | 5000 | 0.0331 | 0.9722 | 0.9735 | 0.9729 | 0.9879 |
7215ed0e53d6b1453ada59421adf7f84
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Portuguese

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 pt dataset. It achieves the following results on the evaluation set:
- Loss: 0.2568
- Wer: 11.6487
- Cer: 4.4764
9f73135ca0b33507ede3b93b26b5f4d6
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| 0.2476 | 0.92 | 500 | 0.2900 | 13.2049 | 4.9765 |
| 0.1886 | 1.84 | 1000 | 0.2611 | 12.2804 | 4.6173 |
| 0.1066 | 2.76 | 1500 | 0.2568 | 11.6487 | 4.4764 |
| 0.0698 | 3.68 | 2000 | 0.2701 | 11.8798 | 4.6145 |
| 0.0403 | 4.6 | 2500 | 0.2831 | 11.8644 | 4.4405 |
| 0.019 | 5.52 | 3000 | 0.3148 | 11.7565 | 4.4653 |
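The headline numbers (loss 0.2568, WER 11.6487) come from the step-1500 row, where both validation loss and WER bottom out before the model starts to overfit. That checkpoint selection can be sketched directly from the table:

```python
# (step, validation loss, WER) triples from the training-results table above.
log = [
    (500, 0.2900, 13.2049),
    (1000, 0.2611, 12.2804),
    (1500, 0.2568, 11.6487),
    (2000, 0.2701, 11.8798),
    (2500, 0.2831, 11.8644),
    (3000, 0.3148, 11.7565),
]

# Pick the checkpoint with the lowest WER; here it also has the lowest loss.
best_step, best_loss, best_wer = min(log, key=lambda row: row[2])
print(best_step, best_wer)  # → 1500 11.6487
```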
8ce58b6280b1bdffbbc4e58d38fbce43
mit
['generated_from_trainer']
false
umit_txtclass2

This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the full dataset, trained on a home PC (i5-9600K, RTX 2060 6 GB). It achieves the following results on the evaluation set:
- Loss: 0.5844
- Accuracy: 0.9116
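At inference time a sequence classifier like this one turns its logits into class probabilities with a softmax, and the reported accuracy counts how often the argmax matches the gold label. A minimal sketch of that final step (illustrative logits, not from this model):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 0.5, -1.0])
print(probs.index(max(probs)))  # class 0 has the largest logit, hence the argmax
```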
d2a6f0d5524c31cb9be4487f4f1d1744
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2855 | 1.0 | 858 | 0.6071 | 0.8986 |
| 0.2077 | 2.0 | 1716 | 0.5425 | 0.9109 |
| 0.112 | 3.0 | 2574 | 0.5844 | 0.9116 |
69dd3c2a29b4803df1933c15c086e905