Columns: license (string, 2-30 chars); tags (string, 2-513 chars); is_nc (bool, 1 class); readme_section (string, 201-597k chars); hash (string, 32 chars)
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large Northern Sámi This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the audiofolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5559 - Wer: 24.9143
b5ff9f78f12ad285a25a8559d9e5aebd
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 6 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 60000 - mixed_precision_training: Native AMP
d07f17f63c27bf2570d91000af4ca824
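The `linear` scheduler with 500 warmup steps over 60000 training steps can be sketched in a few lines. This is a minimal pure-Python illustration assuming the usual convention (linear ramp from 0 to the peak learning rate during warmup, then linear decay to 0 at the final step), not the trainer's actual implementation:

```python
def linear_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=60000):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Under these settings the peak rate 1e-05 is reached exactly at step 500 and decays back to 0 at step 60000.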
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:-------:| | 0.4665 | 58.0 | 1000 | 0.8572 | 54.5143 | | 0.3041 | 117.0 | 2000 | 0.6711 | 44.1143 | | 0.2671 | 176.0 | 3000 | 0.5794 | 39.7714 | | 0.1761 | 235.0 | 4000 | 0.5357 | 35.0857 | | 0.2089 | 294.0 | 5000 | 0.5094 | 33.6 | | 0.1456 | 352.0 | 6000 | 0.4959 | 33.0286 | | 0.1514 | 411.0 | 7000 | 0.4864 | 32.5714 | | 0.1203 | 470.0 | 8000 | 0.4625 | 31.4286 | | 0.0879 | 529.0 | 9000 | 0.4916 | 45.4857 | | 0.0825 | 588.0 | 10000 | 0.4962 | 30.6286 | | 0.0753 | 647.0 | 11000 | 0.4723 | 31.2 | | 0.0812 | 705.0 | 12000 | 0.4574 | 28.6857 | | 0.062 | 764.0 | 13000 | 0.4628 | 28.8000 | | 0.0604 | 823.0 | 14000 | 0.4668 | 28.0000 | | 0.0666 | 882.0 | 15000 | 0.4697 | 28.6857 | | 0.0405 | 941.0 | 16000 | 0.4908 | 54.6286 | | 0.0349 | 999.0 | 17000 | 0.4728 | 28.4571 | | 0.0409 | 1058.0 | 18000 | 0.4884 | 28.4571 | | 0.0292 | 1117.0 | 19000 | 0.4576 | 27.3143 | | 0.0247 | 1176.0 | 20000 | 0.4734 | 28.9143 | | 0.0229 | 1235.0 | 21000 | 0.4899 | 29.9429 | | 0.0271 | 1294.0 | 22000 | 0.4790 | 28.1143 | | 0.0271 | 1352.0 | 23000 | 0.5012 | 30.1714 | | 0.0184 | 1411.0 | 24000 | 0.5008 | 27.3143 | | 0.0211 | 1470.0 | 25000 | 0.5118 | 27.6571 | | 0.0183 | 1529.0 | 26000 | 0.5398 | 30.0571 | | 0.0164 | 1588.0 | 27000 | 0.5006 | 27.3143 | | 0.0169 | 1647.0 | 28000 | 0.5059 | 27.0857 | | 0.0147 | 1705.0 | 29000 | 0.5325 | 27.7714 | | 0.0104 | 1764.0 | 30000 | 0.4818 | 26.1714 | | 0.0128 | 1823.0 | 31000 | 0.5259 | 28.3429 | | 0.0145 | 1882.0 | 32000 | 0.5299 | 26.2857 | | 0.0075 | 1941.0 | 33000 | 0.5082 | 27.4286 | | 0.0087 | 1999.0 | 34000 | 0.5144 | 26.6286 | | 0.005 | 2058.0 | 35000 | 0.5590 | 27.0857 | | 0.0099 | 2117.0 | 36000 | 0.5546 | 28.9143 | | 0.007 | 2176.0 | 37000 | 0.5364 | 26.8571 | | 0.0045 | 2235.0 | 38000 | 0.5574 | 27.2000 | | 0.0064 | 2294.0 | 39000 | 0.5051 | 25.7143 | | 0.0079 | 2352.0 | 40000 | 0.5247 | 25.9429 | | 0.0083 | 2411.0 | 41000 | 0.5514 | 25.6 | | 0.0101 | 2470.0 | 42000 | 0.5710 | 25.6 | | 0.0062 | 2529.0 | 43000 | 0.5830 | 28.0000 | | 0.0046 | 2588.0 | 44000 | 0.5828 | 26.8571 | | 0.0053 | 2647.0 | 45000 | 0.5621 | 27.4286 | | 0.0047 | 2705.0 | 46000 | 0.5673 | 25.9429 | | 0.0045 | 2764.0 | 47000 | 0.5220 | 25.6 | | 0.0065 | 2823.0 | 48000 | 0.5704 | 27.7714 | | 0.0039 | 2882.0 | 49000 | 0.5741 | 27.7714 | | 0.0027 | 2941.0 | 50000 | 0.5762 | 26.0571 | | 0.0019 | 2999.0 | 51000 | 0.5559 | 24.9143 | | 0.0015 | 3058.0 | 52000 | 0.5777 | 28.5714 | | 0.0026 | 3117.0 | 53000 | 0.5589 | 25.2571 | | 0.0032 | 3176.0 | 54000 | 0.6061 | 26.9714 | | 0.0025 | 3235.0 | 55000 | 0.5776 | 25.1429 | | 0.0046 | 3294.0 | 56000 | 0.5753 | 27.3143 | | 0.0015 | 3352.0 | 57000 | 0.5736 | 27.2000 | | 0.003 | 3411.0 | 58000 | 0.5933 | 25.6 | | 0.002 | 3470.0 | 59000 | 0.6036 | 25.6 | | 0.0007 | 58.0 | 60000 | 0.5975 | 25.2571 |
cf9e9de3c12f939426fb69cb34712361
apache-2.0
['generated_from_keras_callback']
false
oscarth_54321 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.5784 - Validation Loss: 4.5266 - Epoch: 1
06e3bc8eed061d3e674d1da12b0fce6b
apache-2.0
['generated_from_trainer']
false
Millad_Customer_RN This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.5635 - Wer: 0.8113 - Cer: 0.4817
9c8ec4fe16256de4928b09a8d83f6cd2
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - num_epochs: 600 - mixed_precision_training: Native AMP
550b78c3e9e1194d91cd9eaf43ff8c56
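The Adam settings above (betas=(0.9,0.999), epsilon=1e-08, learning_rate 0.0001) can be made concrete with a single-scalar sketch of the bias-corrected update rule; this is the textbook formulation, shown for illustration only, not the optimizer code the trainer runs:

```python
def adam_step(theta, g, m, v, t, lr=0.0001, b1=0.9, b2=0.999, eps=1e-08):
    """One bias-corrected Adam update for a scalar parameter theta with gradient g."""
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias correction; t is the 1-based step count
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (v_hat ** 0.5 + eps), m, v
```

On the first step the bias-corrected update is close to lr * sign(g), which is one reason warmup matters early in training.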
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:-----:|:---------------:|:------:|:------:| | 1.9257 | 13.33 | 2000 | 2.0606 | 0.9767 | 0.5500 | | 1.4828 | 26.67 | 4000 | 2.1161 | 0.9019 | 0.4932 | | 1.2582 | 40.0 | 6000 | 2.0589 | 0.8504 | 0.4942 | | 0.9804 | 53.33 | 8000 | 2.4633 | 0.8745 | 0.4763 | | 0.7862 | 66.67 | 10000 | 2.4794 | 0.8861 | 0.4944 | | 0.6492 | 80.0 | 12000 | 2.8693 | 0.8554 | 0.4928 | | 0.5375 | 93.33 | 14000 | 2.6125 | 0.8296 | 0.4802 | | 0.4462 | 106.67 | 16000 | 2.7591 | 0.8770 | 0.4974 | | 0.3873 | 120.0 | 18000 | 3.0325 | 0.8379 | 0.4800 | | 0.3445 | 133.33 | 20000 | 2.9965 | 0.8761 | 0.4986 | | 0.3087 | 146.67 | 22000 | 3.3437 | 0.8221 | 0.4923 | | 0.2755 | 160.0 | 24000 | 3.3022 | 0.8803 | 0.5211 | | 0.2467 | 173.33 | 26000 | 3.2348 | 0.8479 | 0.4933 | | 0.2281 | 186.67 | 28000 | 3.8010 | 0.8695 | 0.5081 | | 0.2119 | 200.0 | 30000 | 3.0446 | 0.8545 | 0.4902 | | 0.194 | 213.33 | 32000 | 3.0873 | 0.8454 | 0.4840 | | 0.1677 | 226.67 | 34000 | 3.6184 | 0.8645 | 0.5019 | | 0.1642 | 240.0 | 36000 | 3.2480 | 0.8412 | 0.4903 | | 0.1656 | 253.33 | 38000 | 3.4379 | 0.8362 | 0.4816 | | 0.1371 | 266.67 | 40000 | 3.5117 | 0.8479 | 0.5040 | | 0.1301 | 280.0 | 42000 | 3.4360 | 0.8404 | 0.4870 | | 0.128 | 293.33 | 44000 | 3.6589 | 0.8537 | 0.4977 | | 0.1152 | 306.67 | 46000 | 4.2359 | 0.8545 | 0.5051 | | 0.1119 | 320.0 | 48000 | 3.5818 | 0.7980 | 0.4882 | | 0.1026 | 333.33 | 50000 | 3.7618 | 0.8013 | 0.4865 | | 0.0945 | 346.67 | 52000 | 4.2197 | 0.8404 | 0.5028 | | 0.0962 | 360.0 | 54000 | 3.9231 | 0.8653 | 0.5030 | | 0.088 | 373.33 | 56000 | 3.8400 | 0.8354 | 0.4914 | | 0.0743 | 386.67 | 58000 | 3.4924 | 0.8088 | 0.4824 | | 0.0811 | 400.0 | 60000 | 3.8370 | 0.8396 | 0.4861 | | 0.0696 | 413.33 | 62000 | 4.2808 | 0.8412 | 0.5065 | | 0.0692 | 426.67 | 64000 | 4.0161 | 0.8088 | 0.4744 | | 0.0622 | 440.0 | 66000 | 3.9080 | 0.8163 | 0.4910 | | 0.0591 | 453.33 | 68000 | 3.9838 | 0.8113 | 0.4823 | | 0.0527 | 466.67 | 70000 | 3.8067 | 0.8329 | 0.4914 | | 0.056 | 480.0 | 72000 | 4.1415 | 0.8096 | 0.4782 | | 0.0535 | 493.33 | 74000 | 4.3350 | 0.8229 | 0.4828 | | 0.0531 | 506.67 | 76000 | 3.9808 | 0.8071 | 0.4807 | | 0.0451 | 520.0 | 78000 | 4.0301 | 0.7988 | 0.4816 | | 0.044 | 533.33 | 80000 | 4.4680 | 0.8371 | 0.4921 | | 0.0389 | 546.67 | 82000 | 4.1380 | 0.8121 | 0.4819 | | 0.0392 | 560.0 | 84000 | 4.3910 | 0.7930 | 0.4763 | | 0.0389 | 573.33 | 86000 | 4.5086 | 0.8055 | 0.4802 | | 0.0355 | 586.67 | 88000 | 4.6259 | 0.8113 | 0.4821 | | 0.0307 | 600.0 | 90000 | 4.5635 | 0.8113 | 0.4817 |
8c9f8483980278df2986934b9b852c8e
other
[]
false
This is the model trained for this video: https://www.youtube.com/watch?v=OEPL5Tm3mmQ Due to hardware limitations, I trained this model with only a batch size of 2. (I know this isn't ideal). The quality of the model may be affected. After training was complete, the best model according to a hold-out set was used. This model was trained using a filtered version of this dataset: https://www.kaggle.com/datasets/thomaskonstantin/3500-popular-creepypastas This dataset had a lot of blank entries and missing text. Please subscribe to my YouTube Channel for bad quality videos and poorly trained models. https://www.youtube.com/channel/UCLXxfueCPZRZnyGFWJ07uqA
581189302d92b64262ccba320892537b
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-sst-2-english-zero-shot-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
328c3eb3b35b159c9693411510966274
apache-2.0
['translation']
false
spa-eng * source group: Spanish * target group: English * OPUS readme: [spa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md) * model: transformer * source language(s): spa * target language(s): eng * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip) * test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt) * test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.eval.txt)
541904b893c5fa3a02de8f396415c2e9
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-spaeng.spa.eng | 30.6 | 0.570 | | news-test2008-spaeng.spa.eng | 27.9 | 0.553 | | newstest2009-spaeng.spa.eng | 30.4 | 0.572 | | newstest2010-spaeng.spa.eng | 36.1 | 0.614 | | newstest2011-spaeng.spa.eng | 34.2 | 0.599 | | newstest2012-spaeng.spa.eng | 37.9 | 0.624 | | newstest2013-spaeng.spa.eng | 35.3 | 0.609 | | Tatoeba-test.spa.eng | 59.6 | 0.739 |
52e9bb83d41abe1daa8969fadf9f7f4e
apache-2.0
['translation']
false
System Info: - hf_name: spa-eng - source_languages: spa - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['es', 'en'] - src_constituents: {'spa'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt - src_alpha3: spa - tgt_alpha3: eng - short_pair: es-en - chrF2_score: 0.7390000000000001 - bleu: 59.6 - brevity_penalty: 0.9740000000000001 - ref_len: 79376.0 - src_name: Spanish - tgt_name: English - train_date: 2020-08-18 00:00:00 - src_alpha2: es - tgt_alpha2: en - prefer_old: False - long_pair: spa-eng - helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82 - transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9 - port_machine: brutasse - port_time: 2020-08-24-18:20
3e5c303eb532837727197c7a2dee2a29
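The `brevity_penalty` and `ref_len` fields above follow BLEU's standard brevity penalty, which discounts candidate translations shorter than the reference. A minimal sketch of that formula (the standard definition; the exact scoring tool used for these cards may differ in detail):

```python
import math

def brevity_penalty(candidate_len, reference_len):
    """BLEU brevity penalty: 1.0 unless the candidate is shorter than the reference."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)
```

A candidate half the reference length is penalized by a factor of exp(-1), while equal or longer candidates are not penalized at all.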
apache-2.0
['generated_from_trainer']
false
XLSR_Fine_Tuned_Urdu_V2 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_8_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.8023 - Wer: 0.4382
81b692be21f846077d7c1d4e799311fd
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.424 | 3.25 | 1000 | 2.9777 | 1.0 | | 1.4315 | 6.49 | 2000 | 0.8493 | 0.5896 | | 0.6938 | 9.74 | 3000 | 0.7438 | 0.4978 | | 0.5129 | 12.99 | 4000 | 0.7480 | 0.4785 | | 0.4133 | 16.23 | 5000 | 0.7568 | 0.4600 | | 0.3496 | 19.48 | 6000 | 0.7387 | 0.4471 | | 0.3133 | 22.73 | 7000 | 0.7655 | 0.4426 | | 0.2767 | 25.97 | 8000 | 0.8081 | 0.4530 | | 0.2581 | 29.22 | 9000 | 0.8023 | 0.4382 |
d3a5f964a254352bb97dd59ea58261d7
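The Wer column reported throughout these cards is word error rate: the word-level Levenshtein (edit) distance between hypothesis and reference, divided by the reference length; the Cer reported elsewhere is the same quantity at character level. A minimal sketch (illustrative only; the cards' numbers come from the training framework's metric, not this code):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, single-row dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (free on match)
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edits per reference word."""
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)

def cer(reference, hypothesis):
    """Character error rate: character-level edits per reference character."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why early-training rows sometimes show Wer of exactly 1.0 (empty or degenerate output).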
apache-2.0
['translation']
false
opus-mt-fr-ase * source languages: fr * target languages: ase * OPUS readme: [fr-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ase/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.eval.txt)
f62c487db19a485b84e860d04c26c361
apache-2.0
['generated_from_keras_callback']
false
pedramyamini/distilbert-base-multilingual-cased-finetuned-mobile-banks-cafebazaar This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5059 - Validation Loss: 0.7437 - Epoch: 4
ee9fab977e55dbf9b99ecc9d858faf91
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.5075 | 0.7437 | 0 | | 0.5074 | 0.7437 | 1 | | 0.5079 | 0.7437 | 2 | | 0.5086 | 0.7437 | 3 | | 0.5059 | 0.7437 | 4 |
d3a3418ace7ab37f5a327deb624d28ec
cc-by-sa-4.0
['generated_from_trainer']
false
ECHR_test_2 Task A This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.1998 - Macro-f1: 0.5295 - Micro-f1: 0.6157
5f1d7d11729e9fe444ee234125b73d6c
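On the Macro-f1 vs. Micro-f1 pair above: macro averages the per-label F1 scores, so rare labels weigh as much as frequent ones, while micro pools true/false positive and negative counts across labels before computing a single F1. That is why macro (0.5295) can sit well below micro (0.6157) when rare labels are hard. A small sketch with hypothetical per-label counts:

```python
def f1(tp, fp, fn):
    """F1 from counts; defined as 0 when there are no true positives."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_micro_f1(per_label):
    """per_label: list of (tp, fp, fn) tuples, one per label."""
    macro = sum(f1(*c) for c in per_label) / len(per_label)
    micro = f1(sum(c[0] for c in per_label),
               sum(c[1] for c in per_label),
               sum(c[2] for c in per_label))
    return macro, micro

# A frequent easy label and a rare hard one: macro is dragged down by the rare label.
macro, micro = macro_micro_f1([(80, 10, 10), (1, 0, 9)])
```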
cc-by-sa-4.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP
50d36146b088eae2d96621bbde221b97
cc-by-sa-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.2142 | 0.44 | 500 | 0.2887 | 0.2391 | 0.4263 | | 0.172 | 0.89 | 1000 | 0.2672 | 0.2908 | 0.4628 | | 0.1737 | 1.33 | 1500 | 0.2612 | 0.3657 | 0.5102 | | 0.1581 | 1.78 | 2000 | 0.2412 | 0.3958 | 0.5468 | | 0.1509 | 2.22 | 2500 | 0.2264 | 0.3950 | 0.5552 | | 0.1606 | 2.67 | 3000 | 0.2342 | 0.4006 | 0.5511 | | 0.1491 | 3.11 | 3500 | 0.2176 | 0.4558 | 0.5622 | | 0.1392 | 3.56 | 4000 | 0.2454 | 0.4128 | 0.5596 | | 0.15 | 4.0 | 4500 | 0.2113 | 0.4684 | 0.5874 | | 0.1461 | 4.44 | 5000 | 0.2179 | 0.4631 | 0.5815 | | 0.1457 | 4.89 | 5500 | 0.2151 | 0.4805 | 0.5949 | | 0.1443 | 5.33 | 6000 | 0.2155 | 0.5123 | 0.5917 | | 0.1279 | 5.78 | 6500 | 0.2131 | 0.4915 | 0.5998 | | 0.1377 | 6.22 | 7000 | 0.2244 | 0.4705 | 0.5944 | | 0.1242 | 6.67 | 7500 | 0.2150 | 0.5089 | 0.5918 | | 0.1222 | 7.11 | 8000 | 0.2045 | 0.4801 | 0.5981 | | 0.1372 | 7.56 | 8500 | 0.2074 | 0.5317 | 0.5962 | | 0.1289 | 8.0 | 9000 | 0.2035 | 0.5323 | 0.6126 | | 0.1295 | 8.44 | 9500 | 0.2058 | 0.5213 | 0.6073 | | 0.123 | 8.89 | 10000 | 0.2027 | 0.5486 | 0.6135 | | 0.1335 | 9.33 | 10500 | 0.1984 | 0.5442 | 0.6249 | | 0.1258 | 9.78 | 11000 | 0.1998 | 0.5295 | 0.6157 |
e25201bac377a79ec959c84ef8fb8770
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-code-snippet-quality-scoring This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4070 - Accuracy: 0.8568
218e3323e12c25c03343d662411fcbd5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5353 | 0.13 | 1000 | 0.5110 | 0.7574 | | 0.4686 | 0.26 | 2000 | 0.4339 | 0.7859 | | 0.4517 | 0.39 | 3000 | 0.4240 | 0.8002 | | 0.4263 | 0.52 | 4000 | 0.3906 | 0.8169 | | 0.4053 | 0.66 | 5000 | 0.3934 | 0.8191 | | 0.3867 | 0.79 | 6000 | 0.3859 | 0.8253 | | 0.3906 | 0.92 | 7000 | 0.3936 | 0.8335 | | 0.3418 | 1.05 | 8000 | 0.3615 | 0.8380 | | 0.3418 | 1.18 | 9000 | 0.3585 | 0.8400 | | 0.3307 | 1.31 | 10000 | 0.3520 | 0.8432 | | 0.3301 | 1.44 | 11000 | 0.3476 | 0.8475 | | 0.3275 | 1.57 | 12000 | 0.3511 | 0.8497 | | 0.3192 | 1.71 | 13000 | 0.3519 | 0.8540 | | 0.3218 | 1.84 | 14000 | 0.3402 | 0.8495 | | 0.3199 | 1.97 | 15000 | 0.3375 | 0.8580 | | 0.2591 | 2.1 | 16000 | 0.3687 | 0.8568 | | 0.2732 | 2.23 | 17000 | 0.3619 | 0.8521 | | 0.2681 | 2.36 | 18000 | 0.3574 | 0.8563 | | 0.2606 | 2.49 | 19000 | 0.3404 | 0.8581 | | 0.2662 | 2.62 | 20000 | 0.3708 | 0.8566 | | 0.2685 | 2.76 | 21000 | 0.3743 | 0.8591 | | 0.246 | 2.89 | 22000 | 0.3786 | 0.8531 | | 0.258 | 3.02 | 23000 | 0.3781 | 0.8578 | | 0.2284 | 3.15 | 24000 | 0.3938 | 0.8583 | | 0.2206 | 3.28 | 25000 | 0.4121 | 0.8583 | | 0.2131 | 3.41 | 26000 | 0.4091 | 0.8575 | | 0.2181 | 3.54 | 27000 | 0.4264 | 0.8535 | | 0.2289 | 3.67 | 28000 | 0.3998 | 0.8568 | | 0.2262 | 3.81 | 29000 | 0.3983 | 0.8580 | | 0.2095 | 3.94 | 30000 | 0.4070 | 0.8568 |
00166018a7615254ed186f48a5d3dc89
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
KerasCV Stable Diffusion in Diffusers 🧨🤗 The pipeline contained in this repository was created using [this Space](https://huggingface.co/spaces/sayakpaul/convert-kerascv-sd-diffusers). The purpose is to convert the KerasCV Stable Diffusion weights in a way that is compatible with [Diffusers](https://github.com/huggingface/diffusers). This allows users to fine-tune using KerasCV and use the fine-tuned weights in Diffusers, taking advantage of its nifty features (like [schedulers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers), [fast attention](https://huggingface.co/docs/diffusers/optimization/fp16), etc.). The following KerasCV weight paths were used: ['https://huggingface.co/sayakpaul/dreambooth-keras-dogs-unet/resolve/main/lr_1e-6_steps_1000.h5']
a52ca4c2ce07e10e18a7d271d5c73b58
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-53_english This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2620 - Wer: 0.1916
071c19cfb02dada368d852a9a5095e46
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP
4677b05e5367462b389cd3ef36b57db2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.0506 | 0.12 | 250 | 3.0206 | 0.9999 | | 1.4381 | 0.25 | 500 | 1.0267 | 0.6323 | | 1.0903 | 0.37 | 750 | 0.5841 | 0.3704 | | 1.0384 | 0.5 | 1000 | 0.5156 | 0.3348 | | 0.9658 | 0.62 | 1250 | 0.4721 | 0.3221 | | 0.9184 | 0.74 | 1500 | 0.4301 | 0.3213 | | 0.8939 | 0.87 | 1750 | 0.4188 | 0.2884 | | 0.9051 | 0.99 | 2000 | 0.3852 | 0.2807 | | 0.563 | 1.12 | 2250 | 0.3752 | 0.2804 | | 0.6122 | 1.24 | 2500 | 0.3745 | 0.2732 | | 0.6213 | 1.36 | 2750 | 0.3671 | 0.2575 | | 0.5839 | 1.49 | 3000 | 0.3560 | 0.2578 | | 0.615 | 1.61 | 3250 | 0.3555 | 0.2536 | | 0.5557 | 1.74 | 3500 | 0.3511 | 0.2485 | | 0.5497 | 1.86 | 3750 | 0.3364 | 0.2425 | | 0.5412 | 1.98 | 4000 | 0.3253 | 0.2418 | | 0.2834 | 2.11 | 4250 | 0.3293 | 0.2322 | | 0.2723 | 2.23 | 4500 | 0.3157 | 0.2322 | | 0.2713 | 2.35 | 4750 | 0.3148 | 0.2304 | | 0.2878 | 2.48 | 5000 | 0.3143 | 0.2286 | | 0.2776 | 2.6 | 5250 | 0.3122 | 0.2250 | | 0.2553 | 2.73 | 5500 | 0.3003 | 0.2234 | | 0.278 | 2.85 | 5750 | 0.2973 | 0.2198 | | 0.2445 | 2.97 | 6000 | 0.2938 | 0.2180 | | 0.4361 | 3.1 | 6250 | 0.2914 | 0.2132 | | 0.3979 | 3.22 | 6500 | 0.2916 | 0.2125 | | 0.4221 | 3.35 | 6750 | 0.2879 | 0.2113 | | 0.4051 | 3.47 | 7000 | 0.2819 | 0.2100 | | 0.4218 | 3.59 | 7250 | 0.2812 | 0.2072 | | 0.4201 | 3.72 | 7500 | 0.2772 | 0.2055 | | 0.3515 | 3.84 | 7750 | 0.2747 | 0.2031 | | 0.4021 | 3.97 | 8000 | 0.2702 | 0.2018 | | 0.4304 | 4.09 | 8250 | 0.2721 | 0.2007 | | 0.3923 | 4.21 | 8500 | 0.2689 | 0.1991 | | 0.3824 | 4.34 | 8750 | 0.2692 | 0.1980 | | 0.3743 | 4.46 | 9000 | 0.2718 | 0.1950 | | 0.3771 | 4.59 | 9250 | 0.2653 | 0.1950 | | 0.4048 | 4.71 | 9500 | 0.2649 | 0.1934 | | 0.3539 | 4.83 | 9750 | 0.2638 | 0.1919 | | 0.3498 | 4.96 | 10000 | 0.2620 | 0.1916 |
b62719cfb381d672ec830ef02e831e4b
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small - Swedish This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3500 - Wer: 19.5235
09efafccba3b43975612442c437c056b
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP
b4f2912d742d1a4aecc02ab1134dc2be
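In the run above, gradient_accumulation_steps: 4 with train_batch_size: 4 yields total_train_batch_size: 16: gradients from four micro-batches are averaged before each optimizer step, which (for equally sized micro-batches and a mean-reduced loss) matches the gradient of one 16-example batch. A scalar sketch of that equivalence, for illustration only:

```python
def mean_grad(examples):
    """Stand-in for a micro-batch's mean-reduced gradient (plain numbers here)."""
    return sum(examples) / len(examples)

def accumulated_grad(micro_batches):
    """Average the per-micro-batch gradients, as accumulation does before stepping."""
    return sum(mean_grad(b) for b in micro_batches) / len(micro_batches)

data = list(range(16))                              # 16 "examples"
micro = [data[i:i + 4] for i in range(0, 16, 4)]    # 4 micro-batches of 4
```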
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1391 | 1.3 | 1000 | 0.2981 | 21.5939 | | 0.049 | 2.59 | 2000 | 0.2954 | 20.5614 | | 0.0198 | 3.89 | 3000 | 0.3049 | 19.9564 | | 0.0036 | 5.18 | 4000 | 0.3381 | 19.6042 | | 0.0024 | 6.48 | 5000 | 0.3500 | 19.5235 |
72e42ce04b62b2e887c1150cd86660a4
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2315 - Accuracy: 0.926 - F1: 0.9260
6810500f397f09eedb70be57f547d688
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8794 | 1.0 | 250 | 0.3392 | 0.8985 | 0.8948 | | 0.2663 | 2.0 | 500 | 0.2315 | 0.926 | 0.9260 |
a6fc71d5f5f9980655f3a0e26dea9f0a
apache-2.0
['pytorch', 'causal-lm']
false
Model Description Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4GB in size. | Hyperparameter | Value | |-------------------|--------| | n_parameters | 6,053,381,344 | | n_layers | 28* | | d_model | 4,096 | | d_ff | 16,384 | | n_heads | 16 | | d_head | 256 | | n_ctx | 2,048 | | n_vocab | 50,400 (same tokenizer as GPT-2/3) | | position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py) |
323da22a209283aacac094f828f6c52f
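The RoPE row in the table above says rotary position encodings are applied to the first 64 dimensions of each attention head. The rotation itself pairs up features and rotates each pair by a position-dependent angle; a pure-Python sketch of that operation (illustrative, following the RoPE paper's formulation rather than any particular codebase):

```python
import math

def rope(vec, pos, base=10000.0):
    """Rotate consecutive feature pairs of `vec` by position-dependent angles.
    Pair k uses frequency base**(-2k/d), so the transform is a pure re-phasing:
    vector norms are preserved."""
    d = len(vec)  # must be even
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out.extend([x * c - y * s, x * s + y * c])
    return out
```

Because the rotation preserves norms, attention dot products depend on relative rather than absolute positions, which is the point of RoPE.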
apache-2.0
['pytorch', 'causal-lm']
false
How to use This model is only usable with our fork because GPT-J is not merged into the main transformers repo yet. When it's merged, we will make this model easily loadable. For now, you need to use this fork: [Fork](https://github.com/finetuneanon/transformers) Install it with pip: ```bash pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b ``` This model takes more than 16 GB of RAM to load. If you want more efficient and faster loading, please check our split model. We recommend using the model in FP16; that way, it fits in 16GB-VRAM cards. How to use: ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, GPTNeoForCausalLM, ) model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-python-6B", use_auth_token=True).half().eval().cuda() tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B") text = '''def print_customer_name''' tokens = tokenizer(text, return_tensors="pt").input_ids generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id) last_tokens = generated_tokens[0][len(tokens[0]):] generated_text = tokenizer.decode(last_tokens) print("Generation:\n" + generated_text) ``` When run, this code generates: ```python Prompt: def print_customer_name Generation: (self, customer): """Print the name of a customer.""" if not self.is_valid(): return print("Customer: {}".format(customer)) ``` For example usage, you can see our colab notebook as well: [Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
6b09078d31fa82f6abcad5ba3ab7fa63
apache-2.0
['pytorch', 'causal-lm']
false
Acknowledgements This project was possible because of the compute provided by the [TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B. Thanks to everyone who contributed to this project! - [Aero](https://github.com/AeroScripts) - [Finetune](https://github.com/finetuneanon) - [Kurumuz](https://github.com/kurumuz)
841b108aa981d4cf7645fee362d06a11
mit
['mbart-50']
false
mBART-50 mBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper.
53fbc78aedb63a92e37f8cddb0416689
mit
['mbart-50']
false
Model description mBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning. Instead of fine-tuning on one direction, a pre-trained model is fine-tuned on many directions simultaneously. mBART-50 is created using the original mBART model, extended to add an extra 25 languages to support multilingual machine translation across 50 languages. The pre-training objective is explained below. **Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data: `D = {D1, ..., DN }` where each Di is a collection of monolingual documents in language `i`. The source documents are noised using two schemes: first, randomly shuffling the original sentences' order, and second, a novel in-filling scheme, where spans of text are replaced with a single mask token. The model is then tasked to reconstruct the original text. 35% of each instance's words are masked by randomly sampling span lengths according to a Poisson distribution `(λ = 3.5)`. The decoder input is the original text with a one-position offset. A language id symbol `LID` is used as the initial token to predict the sentence.
4549538090e9ade4fa1d4cefa7853eea
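The in-filling scheme described above can be sketched directly: mask roughly 35% of the words in spans whose lengths are drawn from Poisson(λ = 3.5), replacing each masked run with a single mask token. This is an illustrative reconstruction from the description (the real implementation operates on subword tokens and also shuffles sentence order, which is omitted here):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's Poisson sampler; fine for a small lambda such as 3.5."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def span_mask(words, ratio=0.35, lam=3.5, seed=0):
    """Mask ~ratio of the words in Poisson-length spans, collapsing each
    masked run into a single <mask> token (text in-filling)."""
    rng = random.Random(seed)
    target = round(len(words) * ratio)
    masked = [False] * len(words)
    count = 0
    while count < target:
        span = min(max(1, sample_poisson(lam, rng)), target - count)
        start = rng.randrange(len(words))
        for i in range(start, min(start + span, len(words))):
            if not masked[i]:
                masked[i] = True
                count += 1
    out, in_run = [], False
    for w, m in zip(words, masked):
        if m and not in_run:
            out.append("<mask>")
        elif not m:
            out.append(w)
        in_run = m
    return out
```

Collapsing each span to a single mask token is what makes this "in-filling" rather than token-by-token masking: the model must also predict how many tokens a mask hides.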
mit
['mbart-50']
false
Intended uses & limitations `mbart-large-50` is a pre-trained model primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks. See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for fine-tuned versions.
daf4f15a98ca790a9765965e6a9c7a07
mit
['mbart-50']
false
Training As the model is multilingual, it expects the sequences in a different format. A special language id token is used as a prefix in both the source and target text. The text format is `[lang_code] X [eos]` with `X` being the source or target text and `lang_code` being `source_lang_code` for source text and `tgt_lang_code` for target text. `bos` is never used. Once the examples are prepared in this format, the model can be trained as any other sequence-to-sequence model. ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO") src_text = " UN Chief Says There Is No Military Solution in Syria" tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria" model_inputs = tokenizer(src_text, return_tensors="pt") with tokenizer.as_target_tokenizer(): labels = tokenizer(tgt_text, return_tensors="pt").input_ids model(**model_inputs, labels=labels) ```
f7296c1302de62e9779760bf81083758
mit
['mbart-50']
false
Languages covered Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
36da7dc6013b47ceee396de6f77f3b73
mit
['mbart-50']
false
BibTeX entry and citation info ``` @article{tang2020multilingual, title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning}, author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan}, year={2020}, eprint={2008.00401}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ec129e0757c01f55010de18052587bf4
mit
['spacy', 'token-classification']
false
en_core_web_lg English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `en_core_web_lg` | | **Version** | `3.4.1` | | **spaCy** | `>=3.4.0,<3.5.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) |
c0bc49092a03c75c34744ac58f68a0aa
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0596 - Precision: 0.9279 - Recall: 0.9378 - F1: 0.9328 - Accuracy: 0.9840
dd0500331cf047f81348f7d665b4d906
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2377 | 1.0 | 878 | 0.0717 | 0.9140 | 0.9205 | 0.9172 | 0.9800 | | 0.0498 | 2.0 | 1756 | 0.0609 | 0.9168 | 0.9332 | 0.9249 | 0.9827 | | 0.0301 | 3.0 | 2634 | 0.0596 | 0.9279 | 0.9378 | 0.9328 | 0.9840 |
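The F1 column in the table above is the harmonic mean of the precision and recall columns; a quick sanity check in plain Python (a minimal sketch, not part of the training code) reproduces the final-epoch value:

```python
# Entity-level F1 as the harmonic mean of precision and recall.
# The inputs below are the final-epoch metrics from the table above.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9279, 0.9378), 4))  # → 0.9328
```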
d31a022ef85aa7af703ee729254a3b48
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5112 - Wer: 0.9988
6031c0e8896f996bf9f4c884a8955a18
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5557 | 1.0 | 500 | 1.6786 | 1.0 | | 0.8407 | 2.01 | 1000 | 0.5356 | 0.9988 | | 0.4297 | 3.01 | 1500 | 0.4431 | 0.9988 | | 0.2989 | 4.02 | 2000 | 0.4191 | 0.9988 | | 0.2338 | 5.02 | 2500 | 0.4251 | 0.9988 | | 0.1993 | 6.02 | 3000 | 0.4618 | 0.9988 | | 0.1585 | 7.03 | 3500 | 0.4577 | 0.9988 | | 0.1386 | 8.03 | 4000 | 0.4099 | 0.9982 | | 0.1234 | 9.04 | 4500 | 0.4945 | 0.9988 | | 0.1162 | 10.04 | 5000 | 0.4597 | 0.9988 | | 0.1008 | 11.04 | 5500 | 0.4563 | 0.9988 | | 0.0894 | 12.05 | 6000 | 0.5157 | 0.9988 | | 0.083 | 13.05 | 6500 | 0.5027 | 0.9988 | | 0.0735 | 14.06 | 7000 | 0.4905 | 0.9994 | | 0.0686 | 15.06 | 7500 | 0.4552 | 0.9988 | | 0.0632 | 16.06 | 8000 | 0.5522 | 0.9988 | | 0.061 | 17.07 | 8500 | 0.4874 | 0.9988 | | 0.0626 | 18.07 | 9000 | 0.5243 | 0.9988 | | 0.0475 | 19.08 | 9500 | 0.4798 | 0.9988 | | 0.0447 | 20.08 | 10000 | 0.5250 | 0.9988 | | 0.0432 | 21.08 | 10500 | 0.5195 | 0.9988 | | 0.0358 | 22.09 | 11000 | 0.5008 | 0.9988 | | 0.0319 | 23.09 | 11500 | 0.5376 | 0.9988 | | 0.0334 | 24.1 | 12000 | 0.5149 | 0.9988 | | 0.0269 | 25.1 | 12500 | 0.4911 | 0.9988 | | 0.0275 | 26.1 | 13000 | 0.4907 | 0.9988 | | 0.027 | 27.11 | 13500 | 0.4992 | 0.9988 | | 0.0239 | 28.11 | 14000 | 0.5021 | 0.9988 | | 0.0233 | 29.12 | 14500 | 0.5112 | 0.9988 |
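The Wer column above is the word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal dependency-free sketch (not the `evaluate`/`jiwer` implementation normally used with these trainers):

```python
# Minimal WER sketch: Levenshtein distance over word lists divided by
# the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# one substitution ("sat"→"sit") plus one deletion ("the") over 6 words
print(wer("the cat sat on the mat", "the cat sit on mat"))
```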
1c65ce0891cdc8f7c5bcee625a6f2d45
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
Description: This model was trained by Manuel Pariente using the wham/ConvTasNet recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_clean` task of the WHAM! dataset.
4c9ffe97ab302b11d6714341cc452194
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
Training config: ```yaml data: n_src: 2 mode: min nondefault_nsrc: None sample_rate: 8000 segment: 3 task: sep_clean train_dir: data/wav8k/min/tr/ valid_dir: data/wav8k/min/cv/ filterbank: kernel_size: 16 n_filters: 512 stride: 8 main_args: exp_dir: exp/wham gpus: -1 help: None masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 2 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 positional arguments: training: batch_size: 24 early_stop: True epochs: 200 half_lr: True num_workers: 4 ```
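The filterbank settings above determine how many encoder frames the learned filterbank produces per training segment; a quick sanity check for one 3-second segment at the configured 8 kHz sample rate:

```python
# Encoder frame count for the filterbank configured above
# (kernel_size=16, stride=8) on one 3-second segment at 8 kHz.
sample_rate = 8000
segment_s = 3
kernel_size, stride = 16, 8

n_samples = sample_rate * segment_s               # 24000 samples
n_frames = (n_samples - kernel_size) // stride + 1
print(n_frames)  # → 2999
```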
b23451c2419a01085728b9f604220686
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
Results: ```yaml si_sdr: 16.21326632846293 si_sdr_imp: 16.21441705664987 sdr: 16.615180021738933 sdr_imp: 16.464137807433435 sir: 26.860503975131923 sir_imp: 26.709461760826414 sar: 17.18312813480803 sar_imp: -131.99332048277296 stoi: 0.9619940905157323 stoi_imp: 0.2239480672473015 ```
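SI-SDR, the headline metric above, projects the estimate onto the target and compares target energy to residual energy in dB. A minimal NumPy sketch of the metric's definition (not Asteroid's implementation):

```python
import numpy as np

# Scale-invariant SDR: rescale the target to best match the estimate,
# then measure the energy ratio of the scaled target to the residual.
def si_sdr(estimate: np.ndarray, target: np.ndarray) -> float:
    alpha = np.dot(estimate, target) / np.dot(target, target)
    scaled_target = alpha * target
    noise = estimate - scaled_target
    return 10 * np.log10(np.sum(scaled_target**2) / np.sum(noise**2))

rng = np.random.default_rng(0)
t = rng.standard_normal(8000)
n = rng.standard_normal(8000)
# less additive noise → higher SI-SDR; rescaling the estimate changes nothing
print(si_sdr(t + 0.1 * n, t) > si_sdr(t + 0.5 * n, t))
```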
be2064602dd12d5ed1266f68c091ecdd
cc-by-sa-4.0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
License notice: This work "ConvTasNet_WHAM!_sepclean" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A) by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only). "ConvTasNet_WHAM!_sepclean" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Manuel Pariente.
fb00aca23e65bc7261e016d2c0e4ef13
apache-2.0
['generated_from_trainer']
false
distilbert-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.3611
71c63b76efed4568f367ef60d2ef0ba9
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
dyc0003 Dreambooth model trained by anmol-chawla with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
01dfbb4b7358e22f01ea9ed9094f6423
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
Whisper Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need for fine-tuning. Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356) by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper). **Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were copied and pasted from the original model card.
29e505e2f4fcbedb6b50d9eabc977a30
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
transformers.WhisperProcessor). The `WhisperProcessor` is used to: 1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model) 2. Post-process the model outputs (converting them from tokens to text) The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order: 1. The transcription always starts with the `<|startoftranscript|>` token 2. The second token is the language token (e.g. `<|en|>` for English) 3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation 4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction Thus, a typical sequence of context tokens might look as follows: ``` <|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|> ``` Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps. These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at each position. This allows one to control the output language and task for the Whisper model. If they are un-forced, the Whisper model will automatically predict the output language and task itself. The context tokens can be set accordingly: ```python model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe") ``` Which forces the model to predict in English under the task of speech recognition.
d81e309896a363d7b3929d76477f909a
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
English to English In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language (English) and task (transcribe). ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import load_dataset >>>
f684cd45b16d44922167bb17bf0a5500
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> model.config.forced_decoder_ids = None >>>
749bfc1e74f822088b2e4353e108fe57
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
decode token ids to text >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False) ['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.'] ``` The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
e9b2b36302c2b72751b2f5295f5ca8df
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
French to French The following example demonstrates French to French transcription by setting the decoder ids appropriately. ```python >>> from transformers import WhisperProcessor, WhisperForConditionalGeneration >>> from datasets import Audio, load_dataset >>>
8240a2ad326a5bd58327cb60d4eb4313
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe") >>>
15f4ce545beb13a5b4517b1637ea2b42
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
load streaming dataset and read first audio sample >>> ds = load_dataset("common_voice", "fr", split="test", streaming=True) >>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) >>> input_speech = next(iter(ds))["audio"] >>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features >>>
6d83c1ed398f216ad608a4c736c6f7ee
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
decode token ids to text >>> transcription = processor.batch_decode(predicted_ids) ['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>'] >>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) [' Un vrai travail intéressant va enfin être mené sur ce sujet.'] ```
02ec5421d99de4789e88487bf67bd647
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
load model and processor >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") >>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate") >>>
1ca4c56099ced93f6c099a3754c6a8a1
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
Evaluation This code snippet shows how to evaluate Whisper Small on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr): ```python >>> from datasets import load_dataset >>> from transformers import WhisperForConditionalGeneration, WhisperProcessor >>> import torch >>> from evaluate import load >>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test") >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small") >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda") >>> def map_to_pred(batch): >>> audio = batch["audio"] >>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features >>> batch["reference"] = processor.tokenizer._normalize(batch['text']) >>> >>> with torch.no_grad(): >>> predicted_ids = model.generate(input_features.to("cuda"))[0] >>> transcription = processor.decode(predicted_ids) >>> batch["prediction"] = processor.tokenizer._normalize(transcription) >>> return batch >>> result = librispeech_test_clean.map(map_to_pred) >>> wer = load("wer") >>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"])) 3.432213777886737 ```
b8d013f4dfd4803bd12261aa80761d23
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. It can also be extended to predict utterance level timestamps by passing `return_timestamps=True`: ```python >>> import torch >>> from transformers import pipeline >>> from datasets import load_dataset >>> device = "cuda:0" if torch.cuda.is_available() else "cpu" >>> pipe = pipeline( >>> "automatic-speech-recognition", >>> model="openai/whisper-small", >>> chunk_length_s=30, >>> device=device, >>> ) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> prediction = pipe(sample.copy())["text"] " Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." >>>
56eaa17b51f606c1ad73c8cf26d3d114
apache-2.0
['generated_from_trainer']
false
t5-base-extraction-cnndm_fs0.1-all This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7159
ce10743925085b44016dc93e3bef4fcc
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP
cecc440e9e6383d4aa183e37711bfc72
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2503 | 0.45 | 200 | 1.8495 | | 1.9367 | 0.9 | 400 | 1.7930 | | 1.8669 | 1.35 | 600 | 1.7704 | | 1.8371 | 1.81 | 800 | 1.7481 | | 1.8051 | 2.26 | 1000 | 1.7362 | | 1.7843 | 2.71 | 1200 | 1.7345 | | 1.7669 | 3.16 | 1400 | 1.7159 | | 1.8786 | 3.61 | 1600 | 1.9442 | | 2.0554 | 4.06 | 1800 | 1.9691 | | 2.0521 | 4.51 | 2000 | 1.9731 | | 2.0579 | 4.97 | 2200 | 1.9744 | | 2.0514 | 5.42 | 2400 | 1.9743 |
9b34ab6d14a0faf29fbc948ee1768e54
apache-2.0
['generated_from_trainer']
false
my_ASR_model This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2180 - Wer: 0.2546
c6445e11676364e3eaf570acec41bec7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP
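The `total_train_batch_size: 16` above comes from `train_batch_size: 8` times `gradient_accumulation_steps: 2`. A minimal pure-Python sketch of why accumulation is equivalent to a larger batch: for equally sized micro-batches, averaging per-micro-batch mean gradients equals the full-batch mean gradient (shown here for d/dw of mean((w*x - y)**2) on a scalar linear model):

```python
# Mean gradient of a squared-error loss for a scalar linear model w*x.
def grad(w, batch):
    return sum(2 * x * (w * x - y) for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 3.0), (0.5, 1.0), (3.0, 5.0)]
w = 0.3
micro1, micro2 = data[:2], data[2:]          # two micro-batches of 2
accumulated = (grad(w, micro1) + grad(w, micro2)) / 2
print(abs(accumulated - grad(w, data)) < 1e-12)  # → True
```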
36ac324d7190bc96c8555edee121f319
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.6732 | 20.0 | 100 | 1.5134 | 0.4502 | | 1.1618 | 40.0 | 200 | 1.4121 | 0.3838 | | 0.8533 | 60.0 | 300 | 1.2672 | 0.3616 | | 0.6095 | 80.0 | 400 | 1.8035 | 0.3506 | | 0.4159 | 100.0 | 500 | 2.1305 | 0.3358 | | 0.25 | 120.0 | 600 | 2.3071 | 0.3173 | | 0.2032 | 140.0 | 700 | 2.3467 | 0.3100 | | 0.187 | 160.0 | 800 | 2.1261 | 0.3063 | | 0.1415 | 180.0 | 900 | 2.4187 | 0.3026 | | 0.1268 | 200.0 | 1000 | 2.2731 | 0.2841 | | 0.1158 | 220.0 | 1100 | 2.2680 | 0.2952 | | 0.1112 | 240.0 | 1200 | 2.3492 | 0.2952 | | 0.0965 | 260.0 | 1300 | 2.2798 | 0.2804 | | 0.0857 | 280.0 | 1400 | 2.3569 | 0.2768 | | 0.0839 | 300.0 | 1500 | 2.2247 | 0.2509 | | 0.0732 | 320.0 | 1600 | 2.2106 | 0.2399 | | 0.0798 | 340.0 | 1700 | 2.2425 | 0.2583 | | 0.0862 | 360.0 | 1800 | 2.2891 | 0.2583 | | 0.0654 | 380.0 | 1900 | 2.2015 | 0.2546 | | 0.0731 | 400.0 | 2000 | 2.2180 | 0.2546 |
e1d37ddcd4684c20cdbca0f4419a4287
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-en-to-ro-fp16_off This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.4078 - Bleu: 7.3056 - Gen Len: 18.2556
993e48b7f9dc3efaffaf5b1dcd4256ab
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | 0.6037 | 1.0 | 7629 | 1.4078 | 7.3056 | 18.2556 |
ec65fb02fc9b5997017f425d36f6d286
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4436450/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
e24ef57d2e867438c2e94d3d74db8f57
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-base-wikinewssum-english-100 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.6225 - Rouge1: 3.909 - Rouge2: 0.9312 - Rougel: 3.3835 - Rougelsum: 3.7786
133b8c3cf31ec0cbb0437a2bae560c33
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 0.96 | 12 | 14.4949 | 2.7398 | 0.7181 | 2.491 | 2.6561 | | No log | 1.96 | 24 | 10.5056 | 4.4428 | 1.4293 | 3.8469 | 4.2869 | | No log | 2.96 | 36 | 8.9856 | 4.1179 | 1.229 | 3.5726 | 3.9693 | | No log | 3.96 | 48 | 7.7950 | 3.9217 | 1.1339 | 3.4256 | 3.7905 | | No log | 4.96 | 60 | 7.0734 | 3.8004 | 1.0326 | 3.3246 | 3.6766 | | No log | 5.96 | 72 | 6.7897 | 3.6351 | 0.9162 | 3.1839 | 3.5149 | | No log | 6.96 | 84 | 6.6610 | 3.7486 | 0.8829 | 3.2583 | 3.6193 | | No log | 7.96 | 96 | 6.6225 | 3.909 | 0.9312 | 3.3835 | 3.7786 |
bb16431fabb2c79efd0307d04de11db9
apache-2.0
['part-of-speech', 'token-classification']
false
XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: North Sami This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
fe7d0e794ca1012af904dc006adf9129
apache-2.0
['part-of-speech', 'token-classification']
false
Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sme") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sme") ```
00f5a434ba5e964da5970617d35e212d
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab-test This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4283 - Wer: 0.3356
019313029ca12eac22079030dce67583
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.7386 | 4.0 | 500 | 2.2419 | 1.0 | | 0.9366 | 8.0 | 1000 | 0.4789 | 0.4807 | | 0.3118 | 12.0 | 1500 | 0.4197 | 0.3973 | | 0.1784 | 16.0 | 2000 | 0.4216 | 0.3614 | | 0.1297 | 20.0 | 2500 | 0.4298 | 0.3507 | | 0.1091 | 24.0 | 3000 | 0.4365 | 0.3437 | | 0.0819 | 28.0 | 3500 | 0.4283 | 0.3356 |
92cf555bfb10335dfbe9abc81f763b5e
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 350.2 - GMACs: 179.2 - Activations (M): 169.0 - Image size: 384 x 384 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/facebookresearch/ConvNeXt - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-22k
a33adef38cf77ea872d9c1ce5149a625
apache-2.0
['image-classification', 'timm']
false
Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('convnext_xlarge.fb_in22k_ft_in1k_384', pretrained=True) model = model.eval()
b600c77b1d3c152f61e0a88c352063e8
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'convnext_xlarge.fb_in22k_ft_in1k_384', pretrained=True, features_only=True, ) model = model.eval()
9cb7f007bc3af718e3ff2dc3bc1ea9f4
apache-2.0
['image-classification', 'timm']
false
Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'convnext_xlarge.fb_in22k_ft_in1k_384', pretrained=True, num_classes=0,
7118eeb7b1915b2e7a130988abab0ca1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.3183 | 1.0 | 318 | 3.3075 | 0.7416 | | 2.633 | 2.0 | 636 | 1.8792 | 0.8384 | | 1.5339 | 3.0 | 954 | 1.1514 | 0.8939 | | 1.0038 | 4.0 | 1272 | 0.8567 | 0.9077 | | 0.7868 | 5.0 | 1590 | 0.7730 | 0.9116 |
897ee535c9622fe60e8a6a4b1499bab7
apache-2.0
['vision', 'image-classification']
false
How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoImageProcessor, AutoModelForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256") model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits
e6192943e6ca9fc24166855de52053e9
mit
['generated_from_keras_callback']
false
Ashraf-kasem/gpt2_frame_text_predictor This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 8.9203 - Validation Loss: 8.7222 - Epoch: 0
8e7e47de40c64a2b0ffb3f336e766912
mit
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'LinearWarmup', 'config': {'after_warmup_lr_sched': {'initial_learning_rate': 5e-05, 'decay_steps': 16, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'warmup_steps': 1, 'warmup_learning_rate': 0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: mixed_float16
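The `LinearWarmup` config above warms the learning rate from 0 to 5e-05 over 1 step, then decays it linearly (power 1.0) to 0.0 over 16 steps. A sketch of the resulting schedule; the exact step indexing is an assumption for illustration, not the Keras implementation:

```python
# Sketch of linear warmup followed by linear (power=1.0) decay,
# using the values from the optimizer config above.
def lr_at(step, init=5e-05, warmup_steps=1, warmup_lr=0.0,
          decay_steps=16, end_lr=0.0):
    if step < warmup_steps:
        # linear warmup from warmup_lr up to init
        return warmup_lr + (init - warmup_lr) * step / warmup_steps
    # linear decay from init down to end_lr over decay_steps
    t = min(step - warmup_steps, decay_steps)
    return end_lr + (init - end_lr) * (1 - t / decay_steps)

print(lr_at(0), lr_at(1), lr_at(9), lr_at(17))
```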
c8794b18e72408307227bded43ed7be4
mit
['pytorch', 'diffusers', 'unconditional-audio-generation', 'diffusion-models-class']
false
Model Card for Unit 4 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional audio generation of music in the genre Electronic
a2dee9b6280900e9c95311243f5e48b4
mit
['pytorch', 'diffusers', 'unconditional-audio-generation', 'diffusion-models-class']
false
Usage ```python from IPython.display import Audio from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained("johnowhitaker/Electronic_test") output = pipe() display(output.images[0]) display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) ```
35d1a6d12714fa3bf4e46ab337361329
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
datasets) This model transcribes speech into the lowercase Cyrillic alphabet including space, and is trained on around 1636 hours of Russian speech data. It is a "large" variant of Conformer-Transducer, with around 120 million parameters. See the [model architecture](
3030175cc3fb0253850e9653add950b1
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="nvidia/stt_ru_conformer_transducer_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" ```
aa153db34984a98926c4ebddbe799436
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Training The NeMo toolkit [3] was used for training the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml). The vocabulary we use contains 33 characters: ```python [' ', 'а', 'б', 'в', 'г', 'д', 'е', 'ж', 'з', 'и', 'й', 'к', 'л', 'м', 'н', 'о', 'п', 'р', 'с', 'т', 'у', 'ф', 'х', 'ц', 'ч', 'ш', 'щ', 'ъ', 'ы', 'ь', 'э', 'ю', 'я'] ``` Rare symbols with diacritics were replaced during preprocessing. The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
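The 33-character vocabulary above is the Russian alphabet without 'ё', plus space. A minimal sketch of the kind of text normalization the card describes (lowercasing, mapping a diacritic symbol such as 'ё' to its base character, dropping out-of-vocabulary symbols); the exact replacement rules used in training are an assumption for illustration:

```python
# Space plus the 32 letters listed in the card above (Russian alphabet
# without 'ё').
VOCAB = set(" абвгдежзийклмнопрстуфхцчшщъыьэюя")

def normalize(text: str) -> str:
    # hypothetical rule: lowercase, fold 'ё' to 'е', keep only vocab chars
    text = text.lower().replace("ё", "е")
    return "".join(ch for ch in text if ch in VOCAB)

print(normalize("Ёж, привет!"))  # → "еж привет"
```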
04032f3df65508651e67022e48705379
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Datasets All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising more than a thousand hours of Russian speech: - Mozilla Common Voice 10.0 (Russian) - train subset [28 hours] - Golos - crowd [1070 hours] and farfield [111 hours] subsets - Russian LibriSpeech (RuLS) [92 hours] - SOVA - RuAudiobooksDevices [260 hours] and RuDevices [75 hours] subsets
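The subset sizes listed above account for the "around 1636 hours" figure quoted in the model description:

```python
# Hours per training subset, as listed in the card above.
hours = {
    "MCV 10.0 (ru) train": 28,
    "Golos crowd": 1070,
    "Golos farfield": 111,
    "RuLS": 92,
    "SOVA RuAudiobooksDevices": 260,
    "SOVA RuDevices": 75,
}
print(sum(hours.values()))  # → 1636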
671cfbdf4e0b7a634a9f3f2e732b2c20
cc-by-4.0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
false
Performance The available models in this collection are listed in the following table. Performance of the ASR models is reported as Word Error Rate (WER%) with greedy decoding. | Version | Tokenizer | Vocabulary Size | MCV 10.0 dev | MCV 10.0 test | GOLOS-crowd test | GOLOS-farfield test | RuLS test | Train Dataset | |---------|-----------------------|-----------------|--------------|---------------|------------------|---------------------|-----------|---------------| | 1.13.0 | SentencePiece Unigram | 1024 | 3.5 | 4.0 | 2.7 | 7.6 | 12.0 | NeMo ASRSET |
70df0dd0ca62c964cb50985ac2b5c1c9
mit
[]
false
This model has been pretrained on MS MARCO passages first, then fine-tuned on the MS MARCO training set following the approach described in the paper **Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval**. The model can be used to reproduce the experimental results; the associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR. This model is trained with BERT-large as the backbone, which has 335M parameters.
8670c99395a0aa38bf16a8838faf385f
mit
['sklearn', 'skops', 'tabular-classification']
false
Hyperparameters The model is trained with below hyperparameters. <details> <summary> Click to expand </summary> | Hyperparameter | Value | |--------------------------------------------|-----------------------------------------------------------------------------------| | memory | | | steps | [('preprocessor', ColumnTransformer(transformers=[('num', Pipeline(steps=[('imputer', SimpleImputer(strategy='median')), ('std_scaler', StandardScaler())]), ['MonthlyCharges', 'TotalCharges', 'tenure']), ('cat', OneHotEncoder(handle_unknown='ignore'), ['SeniorCitizen', 'gender', 'Partner', 'Dependents', 'PhoneService', 'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies', 'Contract', 'PaperlessBilling', 'PaymentMethod'])])), ('classifier', LogisticRegression(class_weight='balanced', max_iter=300))] | | verbose | False | | preprocessor | ColumnTransformer(transformers=[('num', Pipeline(steps=[('imputer', SimpleImputer(strategy='median')), ('std_scaler', StandardScaler())]), ['MonthlyCharges', 'TotalCharges', 'tenure']), ('cat', OneHotEncoder(handle_unknown='ignore'), ['SeniorCitizen', 'gender', 'Partner', 'Dependents', 'PhoneService', 'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies', 'Contract', 'PaperlessBilling', 'PaymentMethod'])]) | | classifier | LogisticRegression(class_weight='balanced', max_iter=300) | | preprocessor__n_jobs | | | preprocessor__remainder | drop | | preprocessor__sparse_threshold | 0.3 | | preprocessor__transformer_weights | | | preprocessor__transformers | [('num', Pipeline(steps=[('imputer', SimpleImputer(strategy='median')), ('std_scaler', StandardScaler())]), ['MonthlyCharges', 'TotalCharges', 'tenure']), ('cat', OneHotEncoder(handle_unknown='ignore'), ['SeniorCitizen', 'gender', 'Partner', 'Dependents', 'PhoneService', 'MultipleLines', 'InternetService', 'OnlineSecurity', 
'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies', 'Contract', 'PaperlessBilling', 'PaymentMethod'])] | | preprocessor__verbose | False | | preprocessor__verbose_feature_names_out | True | | preprocessor__num | Pipeline(steps=[('imputer', SimpleImputer(strategy='median')), ('std_scaler', StandardScaler())]) | | preprocessor__cat | OneHotEncoder(handle_unknown='ignore') | | preprocessor__num__memory | | | preprocessor__num__steps | [('imputer', SimpleImputer(strategy='median')), ('std_scaler', StandardScaler())] | | preprocessor__num__verbose | False | | preprocessor__num__imputer | SimpleImputer(strategy='median') | | preprocessor__num__std_scaler | StandardScaler() | | preprocessor__num__imputer__add_indicator | False | | preprocessor__num__imputer__copy | True | | preprocessor__num__imputer__fill_value | | | preprocessor__num__imputer__missing_values | nan | | preprocessor__num__imputer__strategy | median | | preprocessor__num__imputer__verbose | deprecated | | preprocessor__num__std_scaler__copy | True | | preprocessor__num__std_scaler__with_mean | True | | preprocessor__num__std_scaler__with_std | True | | preprocessor__cat__categories | auto | | preprocessor__cat__drop | | | preprocessor__cat__dtype | <class 'numpy.float64'> | | preprocessor__cat__handle_unknown | ignore | | preprocessor__cat__max_categories | | | preprocessor__cat__min_frequency | | | preprocessor__cat__sparse | True | | classifier__C | 1.0 | | classifier__class_weight | balanced | | classifier__dual | False | | classifier__fit_intercept | True | | classifier__intercept_scaling | 1 | | classifier__l1_ratio | | | classifier__max_iter | 300 | | classifier__multi_class | auto | | classifier__n_jobs | | | classifier__penalty | l2 | | classifier__random_state | | | classifier__solver | lbfgs | | classifier__tol | 0.0001 | | classifier__verbose | 0 | | classifier__warm_start | False | </details>
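The hyperparameter table above can be reconstructed as a runnable scikit-learn pipeline. This is a sketch from the listed parameters, with the categorical column list shortened for readability (the full card uses all 16 categorical columns):

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

num_cols = ["MonthlyCharges", "TotalCharges", "tenure"]
# subset of the 16 categorical columns in the table above
cat_cols = ["SeniorCitizen", "gender", "Partner", "Dependents",
            "Contract", "PaperlessBilling", "PaymentMethod"]

numeric = Pipeline(steps=[("imputer", SimpleImputer(strategy="median")),
                          ("std_scaler", StandardScaler())])
preprocessor = ColumnTransformer(transformers=[
    ("num", numeric, num_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
])
clf = Pipeline(steps=[
    ("preprocessor", preprocessor),
    ("classifier", LogisticRegression(class_weight="balanced", max_iter=300)),
])
# get_params exposes the same double-underscore names as the table
print(clf.get_params()["classifier__max_iter"])  # → 300
```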
Model Plot

The model plot is below: Pipeline → preprocessor: ColumnTransformer (num: SimpleImputer(strategy='median') → StandardScaler(); cat: OneHotEncoder(handle_unknown='ignore')) → classifier: LogisticRegression(class_weight='balanced', max_iter=300).