license stringlengths 2 30 | tags stringlengths 2 513 | is_nc bool 1 class | readme_section stringlengths 201 597k | hash stringlengths 32 32 |
|---|---|---|---|---|
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4061 | 6.41 | 500 | 1.4142 | 0.6282 | | 0.0518 | 12.82 | 1000 | 2.5075 | 0.6101 | | 0.0239 | 19.23 | 1500 | 2.7691 | 0.6209 | | 0.0192 | 25.64 | 2000 | 3.0528 | 0.6354 | | 0.0183 | 32.05 | 2500 | 3.3188 | 0.6137 | | 0.0158 | 38.46 | 3000 | 3.3546 | 0.6137 | | 0.0102 | 44.87 | 3500 | 3.5597 | 0.6245 | | 2de498b807d8419cd8667d2e5af9fa2d |
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer'] | false | wav2vec2-large-xlsr-53-german-cv8-dropout This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - DE dataset. It achieves the following results on the evaluation set: - Loss: 0.1111 - Wer: 0.1117 | 7275ef9e7af0a053844ce9f4e0f02b5b |
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10.0 - mixed_precision_training: Native AMP | 09751d08a2df48caa91ec7edb1ab4324 |
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2081 | 1.0 | 6815 | 0.1784 | 0.1910 | | 0.1686 | 2.0 | 13630 | 0.1621 | 0.1725 | | 0.1515 | 3.0 | 20445 | 0.1569 | 0.1649 | | 0.1426 | 4.0 | 27260 | 0.1466 | 0.1681 | | 0.135 | 5.0 | 34075 | 0.1357 | 0.1410 | | 0.1093 | 6.0 | 40890 | 0.1313 | 0.1436 | | 0.1 | 7.0 | 47705 | 0.1242 | 0.1250 | | 0.0999 | 8.0 | 54520 | 0.1191 | 0.1218 | | 0.084 | 9.0 | 61335 | 0.1134 | 0.1164 | | 0.0752 | 10.0 | 68150 | 0.1111 | 0.1117 | | d1ca664fd07fff7bcd2a8d2901ddaeb5 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Wav2Vec2-Large-XLSR-53-Kannada Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kannada using the [OpenSLR SLR79](http://openslr.org/79/) dataset. When using this model, make sure that your speech input is sampled at 16kHz. | 2ac9fdd66f9a233bab09056f2066f137 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Usage The model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada `sentence` and `path` fields: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor | 51175cfbbaf91466a373ce1ca647aacc |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in the Training section. processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn") model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn") resampler = torchaudio.transforms.Resample(48_000, 16_000) | 2607c700cf9ffada8e0404b95a43d195 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` | 724ed7e00557c8ed20202729d51a3b0c |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Evaluation The model can be evaluated as follows on 10% of the Kannada data on OpenSLR. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re | 4ff92f61329eaeaafb38eb11f16bddcc |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in the Training section. wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn") model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]' resampler = torchaudio.transforms.Resample(48_000, 16_000) | 9c2858636b38b1315cd73d52934c396d |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 27.08 % | 3d67b124843f33ccf632adb2f1ff929b |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Training 90% of the OpenSLR Kannada dataset was used for training. The Colab notebook used for training can be found [here](https://colab.research.google.com/github/amoghgopadi/wav2vec2-xlsr-kannada/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Kannada_ASR.ipynb). | 83d7c1fadd3ca2584afe09253f6102da |
cc-by-4.0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | HindSBERT-STS This is a HindSBERT model (l3cube-pune/hindi-sentence-bert-nli) fine-tuned on the STS dataset. <br> Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br> More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11187) ``` @article{joshi2022l3cubemahasbert, title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi}, author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj}, journal={arXiv preprint arXiv:2211.11187}, year={2022} } ``` This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> | aa58efe2b38269b6c22d32f249a8bd96 |
mit | ['generated_from_keras_callback'] | false | turkishReviews-ds-mini This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 8.3867 - Validation Loss: 8.3741 - Epoch: 2 | a6e61c0c033a052bb285bfc398d5144b |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -765, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | e547f4589c977211153f497e84375f8a |
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 10.2149 | 9.6891 | 0 | | 9.0695 | 8.7610 | 1 | | 8.3867 | 8.3741 | 2 | | be31708bb8441c9bd638d7e216224b39 |
apache-2.0 | ['generated_from_trainer'] | false | IMDB_DistilBERT_5E This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2309 - Accuracy: 0.9333 | 443fbe4484d40df99708161cfed72fcc |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6706 | 0.03 | 50 | 0.5746 | 0.8533 | | 0.4323 | 0.06 | 100 | 0.2900 | 0.9 | | 0.314 | 0.1 | 150 | 0.2334 | 0.9067 | | 0.3062 | 0.13 | 200 | 0.1884 | 0.94 | | 0.2834 | 0.16 | 250 | 0.1880 | 0.9267 | | 0.2751 | 0.19 | 300 | 0.1944 | 0.94 | | 0.2258 | 0.22 | 350 | 0.2003 | 0.9267 | | 0.2631 | 0.26 | 400 | 0.1507 | 0.9467 | | 0.2661 | 0.29 | 450 | 0.1536 | 0.9467 | | 0.2481 | 0.32 | 500 | 0.1533 | 0.94 | | 0.2746 | 0.35 | 550 | 0.1402 | 0.9533 | | 0.2539 | 0.38 | 600 | 0.1331 | 0.94 | | 0.2673 | 0.42 | 650 | 0.1404 | 0.9467 | | 0.2438 | 0.45 | 700 | 0.1213 | 0.96 | | 0.2355 | 0.48 | 750 | 0.1181 | 0.9533 | | 0.2059 | 0.51 | 800 | 0.1417 | 0.9333 | | 0.2585 | 0.54 | 850 | 0.1257 | 0.9533 | | 0.2331 | 0.58 | 900 | 0.1307 | 0.94 | | 0.2602 | 0.61 | 950 | 0.1172 | 0.9467 | | 0.24 | 0.64 | 1000 | 0.1141 | 0.9533 | | 0.2169 | 0.67 | 1050 | 0.1198 | 0.94 | | 0.2796 | 0.7 | 1100 | 0.1171 | 0.9533 | | 0.2559 | 0.74 | 1150 | 0.1199 | 0.96 | | 0.2377 | 0.77 | 1200 | 0.1359 | 0.9333 | | 0.2268 | 0.8 | 1250 | 0.1235 | 0.9533 | | 0.2422 | 0.83 | 1300 | 0.1439 | 0.9333 | | 0.2101 | 0.86 | 1350 | 0.1333 | 0.9333 | | 0.1875 | 0.9 | 1400 | 0.1206 | 0.9467 | | 0.2279 | 0.93 | 1450 | 0.1136 | 0.96 | | 0.2214 | 0.96 | 1500 | 0.1188 | 0.9467 | | 0.2416 | 0.99 | 1550 | 0.1029 | 0.9467 | | 0.219 | 1.02 | 1600 | 0.1113 | 0.94 | | 0.1806 | 1.06 | 1650 | 0.1095 | 0.9533 | | 0.1343 | 1.09 | 1700 | 0.1630 | 0.94 | | 0.1699 | 1.12 | 1750 | 0.1221 | 0.96 | | 0.1837 | 1.15 | 1800 | 0.1213 | 0.9467 | | 0.1763 | 1.18 | 1850 | 0.1286 | 0.9533 | | 0.1856 | 1.22 | 1900 | 0.1531 | 0.9267 | | 0.1647 | 1.25 | 1950 | 0.1380 | 0.9533 | | 0.2204 | 1.28 | 2000 | 0.1268 | 0.9333 | | 0.1774 | 1.31 | 2050 | 0.1689 | 0.9267 | | 0.2052 | 1.34 | 2100 | 0.1317 | 0.94 | | 0.1728 | 1.38 | 2150 | 0.1286 | 0.9533 | | 0.1816 | 1.41 
| 2200 | 0.1280 | 0.9333 | | 0.1574 | 1.44 | 2250 | 0.1363 | 0.94 | | 0.1907 | 1.47 | 2300 | 0.1229 | 0.9533 | | 0.2032 | 1.5 | 2350 | 0.1036 | 0.96 | | 0.1636 | 1.54 | 2400 | 0.1061 | 0.9533 | | 0.1795 | 1.57 | 2450 | 0.1414 | 0.9333 | | 0.1497 | 1.6 | 2500 | 0.1401 | 0.94 | | 0.2026 | 1.63 | 2550 | 0.1462 | 0.9333 | | 0.1797 | 1.66 | 2600 | 0.1355 | 0.9467 | | 0.1612 | 1.7 | 2650 | 0.1283 | 0.9533 | | 0.1922 | 1.73 | 2700 | 0.1235 | 0.9467 | | 0.1321 | 1.76 | 2750 | 0.1336 | 0.9467 | | 0.1908 | 1.79 | 2800 | 0.1518 | 0.94 | | 0.1684 | 1.82 | 2850 | 0.1394 | 0.9533 | | 0.1746 | 1.86 | 2900 | 0.1489 | 0.94 | | 0.141 | 1.89 | 2950 | 0.1063 | 0.9667 | | 0.1906 | 1.92 | 3000 | 0.1213 | 0.9467 | | 0.1613 | 1.95 | 3050 | 0.1364 | 0.9467 | | 0.2177 | 1.98 | 3100 | 0.1263 | 0.9533 | | 0.1458 | 2.02 | 3150 | 0.1208 | 0.9533 | | 0.1435 | 2.05 | 3200 | 0.1195 | 0.96 | | 0.0988 | 2.08 | 3250 | 0.1282 | 0.96 | | 0.1428 | 2.11 | 3300 | 0.1619 | 0.9467 | | 0.1058 | 2.14 | 3350 | 0.1586 | 0.9467 | | 0.149 | 2.18 | 3400 | 0.1502 | 0.9533 | | 0.1188 | 2.21 | 3450 | 0.1954 | 0.9267 | | 0.1482 | 2.24 | 3500 | 0.1797 | 0.94 | | 0.1593 | 2.27 | 3550 | 0.1643 | 0.94 | | 0.1543 | 2.3 | 3600 | 0.1505 | 0.94 | | 0.1417 | 2.34 | 3650 | 0.1393 | 0.9467 | | 0.1074 | 2.37 | 3700 | 0.1479 | 0.94 | | 0.0966 | 2.4 | 3750 | 0.1819 | 0.9267 | | 0.1114 | 2.43 | 3800 | 0.1515 | 0.94 | | 0.1172 | 2.46 | 3850 | 0.1713 | 0.9467 | | 0.0834 | 2.5 | 3900 | 0.1616 | 0.94 | | 0.0987 | 2.53 | 3950 | 0.1986 | 0.9333 | | 0.1317 | 2.56 | 4000 | 0.1889 | 0.94 | | 0.1734 | 2.59 | 4050 | 0.1846 | 0.9533 | | 0.1134 | 2.62 | 4100 | 0.1554 | 0.9333 | | 0.1135 | 2.66 | 4150 | 0.1387 | 0.9533 | | 0.1143 | 2.69 | 4200 | 0.1496 | 0.9533 | | 0.1742 | 2.72 | 4250 | 0.1759 | 0.9467 | | 0.1408 | 2.75 | 4300 | 0.1724 | 0.9333 | | 0.1401 | 2.78 | 4350 | 0.1664 | 0.9467 | | 0.1116 | 2.82 | 4400 | 0.1975 | 0.9267 | | 0.131 | 2.85 | 4450 | 0.1730 | 0.9467 | | 0.1236 | 2.88 | 4500 | 0.1504 | 0.9533 | | 0.1501 | 2.91 | 4550 | 0.1554 
| 0.9533 | | 0.1609 | 2.94 | 4600 | 0.1642 | 0.9467 | | 0.1443 | 2.98 | 4650 | 0.2157 | 0.92 | | 0.1233 | 3.01 | 4700 | 0.1900 | 0.9333 | | 0.1171 | 3.04 | 4750 | 0.1507 | 0.9333 | | 0.0639 | 3.07 | 4800 | 0.2017 | 0.9333 | | 0.0935 | 3.1 | 4850 | 0.1952 | 0.94 | | 0.088 | 3.13 | 4900 | 0.2251 | 0.9333 | | 0.0957 | 3.17 | 4950 | 0.1842 | 0.9533 | | 0.1002 | 3.2 | 5000 | 0.1668 | 0.9467 | | 0.0882 | 3.23 | 5050 | 0.1685 | 0.94 | | 0.0579 | 3.26 | 5100 | 0.1653 | 0.9467 | | 0.0912 | 3.29 | 5150 | 0.1735 | 0.9467 | | 0.0811 | 3.33 | 5200 | 0.1832 | 0.9467 | | 0.1104 | 3.36 | 5250 | 0.1755 | 0.9533 | | 0.0785 | 3.39 | 5300 | 0.2030 | 0.9467 | | 0.083 | 3.42 | 5350 | 0.1944 | 0.94 | | 0.0769 | 3.45 | 5400 | 0.2107 | 0.94 | | 0.0877 | 3.49 | 5450 | 0.1847 | 0.9467 | | 0.083 | 3.52 | 5500 | 0.1751 | 0.9467 | | 0.1179 | 3.55 | 5550 | 0.1765 | 0.9467 | | 0.0965 | 3.58 | 5600 | 0.1905 | 0.94 | | 0.0648 | 3.61 | 5650 | 0.2025 | 0.9333 | | 0.0735 | 3.65 | 5700 | 0.2003 | 0.94 | | 0.0857 | 3.68 | 5750 | 0.2074 | 0.94 | | 0.0782 | 3.71 | 5800 | 0.1889 | 0.9467 | | 0.0851 | 3.74 | 5850 | 0.1929 | 0.9533 | | 0.0979 | 3.77 | 5900 | 0.2160 | 0.9333 | | 0.0727 | 3.81 | 5950 | 0.2180 | 0.9333 | | 0.1098 | 3.84 | 6000 | 0.1844 | 0.9467 | | 0.0828 | 3.87 | 6050 | 0.1925 | 0.94 | | 0.0865 | 3.9 | 6100 | 0.1895 | 0.9467 | | 0.07 | 3.93 | 6150 | 0.1910 | 0.9467 | | 0.0984 | 3.97 | 6200 | 0.1954 | 0.9467 | | 0.1123 | 4.0 | 6250 | 0.2012 | 0.94 | | 0.0674 | 4.03 | 6300 | 0.1938 | 0.94 | | 0.1234 | 4.06 | 6350 | 0.2086 | 0.94 | | 0.0599 | 4.09 | 6400 | 0.2169 | 0.9333 | | 0.0603 | 4.13 | 6450 | 0.2116 | 0.94 | | 0.0411 | 4.16 | 6500 | 0.2072 | 0.94 | | 0.0784 | 4.19 | 6550 | 0.1993 | 0.9533 | | 0.0891 | 4.22 | 6600 | 0.2086 | 0.94 | | 0.076 | 4.25 | 6650 | 0.2058 | 0.9333 | | 0.0653 | 4.29 | 6700 | 0.2164 | 0.9333 | | 0.062 | 4.32 | 6750 | 0.2278 | 0.9333 | | 0.0687 | 4.35 | 6800 | 0.2284 | 0.9333 | | 0.0575 | 4.38 | 6850 | 0.2424 | 0.9333 | | 0.0651 | 4.41 | 6900 | 0.2340 | 0.9333 | | 0.0633 
| 4.45 | 6950 | 0.2346 | 0.9333 | | 0.109 | 4.48 | 7000 | 0.2319 | 0.9333 | | 0.1 | 4.51 | 7050 | 0.2254 | 0.9333 | | 0.085 | 4.54 | 7100 | 0.2141 | 0.9333 | | 0.068 | 4.57 | 7150 | 0.2154 | 0.94 | | 0.0852 | 4.61 | 7200 | 0.2206 | 0.94 | | 0.0821 | 4.64 | 7250 | 0.2186 | 0.9333 | | 0.0712 | 4.67 | 7300 | 0.2263 | 0.9333 | | 0.0419 | 4.7 | 7350 | 0.2256 | 0.9333 | | 0.0601 | 4.73 | 7400 | 0.2271 | 0.9333 | | 0.0597 | 4.77 | 7450 | 0.2276 | 0.9333 | | 0.0689 | 4.8 | 7500 | 0.2260 | 0.94 | | 0.0437 | 4.83 | 7550 | 0.2261 | 0.9333 | | 0.0636 | 4.86 | 7600 | 0.2289 | 0.9333 | | 0.0982 | 4.89 | 7650 | 0.2302 | 0.9333 | | 0.0392 | 4.93 | 7700 | 0.2316 | 0.9333 | | 0.0438 | 4.96 | 7750 | 0.2311 | 0.9333 | | 0.0753 | 4.99 | 7800 | 0.2309 | 0.9333 | | bff79033b752866bb45e74ba5a5c7d44 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Demo: How to use in ESPnet2 ```bash cd espnet git checkout da1a26652f7d5a019cc24ad1e0e6e844f2b57e1b pip install -e . cd egs2/aishell4/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model Dan_Berrebbi_aishell4_asr ``` <!-- Generated by scripts/utils/show_asr_result.sh --> | 5ad89920a3da396e8ebc24f0d0a9e6d4 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Environments - date: `Tue Sep 21 09:36:01 EDT 2021` - python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]` - espnet version: `espnet 0.10.3a1` - pytorch version: `pytorch 1.9.0` - Git hash: `7887faeabbc2299922267928e190ed89cb032a36` - Commit date: `Mon Sep 20 16:25:02 2021 -0400` | 2ec023e33021321d1ee057ad7c59f0a8 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_rnn_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|601|6.8|92.7|0.5|0.0|93.2|93.2| |decode_transformer_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|601|6.8|92.8|0.3|0.0|93.2|93.2| | 955fcfae9901ae19cba4ac2def415542 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_rnn_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|15936|66.9|25.6|7.5|9.8|42.9|93.2| |decode_transformer_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|15936|64.7|27.6|7.7|11.0|46.3|93.2| | e25563e916c264164a45afcc8cf57365 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer5.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_fine_tune5_100ep ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 grad_clip: 3 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 10000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_zh_char/train/speech_shape - exp/asr_stats_raw_zh_char/train/text_shape.char valid_shape_file: - exp/asr_stats_raw_zh_char/valid/speech_shape - exp/asr_stats_raw_zh_char/valid/text_shape.char batch_type: numel valid_batch_type: null fold_length: - 51200 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_nodev/wav.scp - speech - sound - - dump/raw/train_nodev/text - text - text valid_data_path_and_name_and_type: - - 
dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 4.0 scheduler: noamlr scheduler_conf: model_size: 256 warmup_steps: 25000 token_list: - <blank> - <unk> - , - 的 - 是 - 个 - 这 - 一 - 。 - 就 - 儿 - 嗯 - 们 - 呃 - 我 - 有 - <sil> - 那 - 说 - 不 - 些 - 也 - 他 - 你 - 要 - 后 - 以 - 咱 - 在 - 啊 - 了 - 然 - 家 - 都 - 来 - 还 - 可 - 子 - 下 - 上 - 时 - 比 - 话 - 孩 - 呢 - 去 - 人 - 好 - 对 - 能 - 么 - 吧 - 学 - 多 - 到 - 看 - 为 - 进 - 把 - 大 - 做 - 生 - 种 - 品 - 给 - 没 - 行 - 现 - 小 - 会 - 作 - 较 - 方 - 块 - 业 - 让 - 点 - 定 - 因 - 什 - 长 - 面 - 如 - 安 - 客 - 问 - 过 - 车 - 出 - 啦 - 边 - 候 - 主 - 所 - 题 - 买 - 销 - 天 - 意 - 自 - 全 - 动 - 工 - '&' - 老 - 或 - 者 - 年 - 着 - 实 - 活 - 理 - 包 - 样 - 再 - 区 - 用 - 呀 - 零 - 员 - 发 - 先 - 部 - 放 - 门 - 情 - 像 - 分 - 售 - 很 - 开 - 己 - 十 - 括 - 跟 - 事 - 需 - 更 - 其 - 装 - 市 - 成 - 里 - 物 - 别 - 间 - 第 - 次 - 中 - 提 - 超 - 顾 - 保 - 感 - 加 - 量 - 二 - 和 - 各 - 嘛 - 新 - 每 - 完 - 力 - 消 - 得 - 店 - 本 - 通 - 习 - 觉 - 道 - 心 - 校 - 菜 - 交 - 哪 - 产 - 于 - 位 - 电 - 想 - 三 - 况 - 度 - 期 - 应 - 但 - 教 - 体 - 常 - 师 - 它 - 高 - 前 - 之 - 西 - 特 - 商 - 果 - 场 - 重 - 防 - 管 - 起 - 地 - 该 - 东 - 少 - 打 - 费 - 当 - 带 - 服 - 口 - 购 - 知 - 回 - 同 - 钱 - 外 - 户 - 注 - 促 - 价 - 解 - < | e6f5f5c892d8e49d921fc2898ded6504 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | > - 水 - 百 - 今 - 太 - 最 - 报 - 怎 - 才 - 等 - 及 - 关 - <-> - 肯 - 火 - 机 - 流 - 制 - 送 - 手 - 确 - 法 - 写 - 玩 - 传 - 路 - 班 - 查 - 招 - 卖 - 几 - 正 - 合 - 够 - 五 - 引 - 容 - 只 - 男 - 日 - 四 - 宣 - 反 - 两 - 清 - 处 - 周 - 单 - 首 - 课 - 衣 - 便 - 身 - 气 - 针 - 奶 - 六 - 经 - 接 - 女 - 育 - 鲜 - 赠 - 试 - 停 - 晚 - 类 - 故 - 入 - 性 - 增 - 食 - 满 - 格 - 基 - 备 - 洗 - 培 - 质 - 美 - 明 - 整 - 化 - 公 - 案 - 哎 - 吸 - 原 - 易 - 幺 - 总 - 尽 - 优 - 而 - 建 - 责 - 啥 - 干 - 月 - 使 - 找 - 季 - 望 - 器 - 目 - 识 - 低 - 听 - 烟 - 相 - 早 - 检 - 护 - 摆 - 住 - 直 - 从 - 务 - 希 - 导 - 内 - 八 - 持 - 近 - 配 - 叫 - 见 - 设 - 吗 - 非 - 调 - 程 - 拿 - 训 - <%> - 结 - 标 - 挺 - 花 - <$> - 受 - 式 - 求 - 平 - 换 - 具 - 愿 - 货 - 牌 - 专 - 轻 - 推 - 妈 - 司 - 辆 - 存 - 名 - 且 - 欢 - 喜 - 吃 - 数 - 段 - 议 - 控 - 往 - 礼 - 决 - 走 - 养 - 免 - 惠 - 园 - 档 - 谁 - 真 - 快 - 置 - 幼 - 乐 - 证 - 向 - 厂 - 简 - 声 - 视 - 划 - 绩 - 适 - 集 - 搞 - 办 - 规 - 灾 - 造 - 准 - 必 - 任 - 险 - 响 - 毕 - 群 - 鞋 - 九 - 嘞 - 信 - 库 - 计 - 认 - 奖 - 表 - 无 - 影 - 头 - 卡 - 告 - 考 - 抽 - 竟 - 选 - 帮 - 何 - 修 - 酒 - 尤 - 线 - 穿 - 讲 - 光 - 留 - 讨 - 随 - 请 - 卫 - 系 - 队 - 失 - 双 - 庭 - 强 - 微 - 折 - 色 - 半 - 否 - 立 - 差 - 沟 - 冬 - 批 - 害 - 已 - 危 - 白 - 爆 - 节 - 参 - 逛 - 搭 - 风 - 朋 - 友 - 环 - 验 - 评 - 严 - 般 - 效 - 舞 - 饭 - 境 - 负 - 又 - 底 - 术 - 刚 - 件 - 罚 - 助 - 态 - 状 - 室 - 房 - 游 - 息 - 领 - 难 - 警 - 按 - 级 - 错 - 利 - 与 - 餐 - 陪 - 蹈 - 论 - 记 - 许 - 马 - 算 - 楼 - 型 - 排 - 广 - 值 - 油 - 糕 - 楚 - 步 - 至 - 拉 - 紧 - 灯 - 升 - 七 - 共 - 努 - 除 - 展 - 形 - 元 - 网 - 宜 - 营 - 兴 - 互 - 蛋 - 燃 - 冷 - 条 - 思 - 巡 - 净 - 须 - 遇 - 落 - 禁 - 科 - 款 - 哦 - 止 - 采 - 材 - 介 - 套 - 围 - 维 - 旦 - 切 - 显 - 汇 - 损 - 速 - 越 - 模 - 假 - 精 - 稍 - 书 - 绍 - 父 - 积 - 策 - 示 - 骑 - 改 - 跑 - 运 - 变 - 洁 - 仓 - 鱼 - <space> - 绝 - 诶 - 伤 - 细 - 职 - 离 - 慢 - 素 - 料 - 睡 - 趣 - 爱 - 母 - 眼 - 味 - 列 - 督 - 张 - 率 - 被 - 域 - 语 - 坏 - 资 - 红 - 减 - 励 - 择 - 预 - 层 - 陈 - 根 - 休 - 毒 - 球 - 爸 - 登 - 足 - 取 - 指 - 柜 - 限 - 降 - 概 - 院 - 供 - 支 - 额 - 源 - 始 - 盘 - 饮 - 项 - 液 - 童 - 爷 - 号 - 抓 - 台 - 转 - 观 - 金 - 照 - 滑 - 岁 - 致 - 文 - 她 - 弄 - 站 - 酸 - 音 - 胎 - 投 - 疏 - 乱 - 临 - 允 - 狗 - 疫 - 询 - 、 - 象 - 占 - 坐 - 倒 - 争 - 午 - 亲 - 读 - 演 - 退 - 惯 - 贵 - 达 - 监 - 志 - 绿 - 醒 - 急 - 驾 - 违 - 诉 - 片 - 空 
- 势 - 极 - 豆 - 独 - 钟 - 代 - 瓶 - 纸 - 并 - 企 - 映 - 统 - 属 - 省 - 夜 - 障 - 谈 - 避 - 由 - 终 - 频 - 掉 - 估 - 激 - 仅 - 布 - 谢 - 灭 - 忙 - 码 - 伙 - 缺 - 叶 - 功 - 析 - 赖 - 架 - 范 - 签 - D - 待 - 神 - 龄 - 画 - 券 - 居 - 杜 - 堵 - 您 - 勤 - 扫 - 技 - 财 - 隐 - 患 - 例 - 乘 - 摩 - 戏 - 鼓 - 份 - 杂 - 散 - 热 - 铺 - 据 - 肤 - 怕 - 依 - 拖 - 充 - 智 - 偷 - 远 - 挂 - 盗 - 附 - 梯 - 冰 - 联 - 借 - 蹭 - 异 - 蔬 - 绑 - 堂 - 将 - 厨 - 帽 - 破 - 戴 - 皮 - 粉 - 氛 - 仪 - 国 - 益 - 闯 - 惩 - 逃 - 刻 - 突 - 申 - 略 - 顿 - 毛 - 召 - 海 - 黄 - 青 - 士 - 移 - 喝 - 板 - 练 - 歌 - 千 - 床 - 享 - 磨 - 构 - 收 - 万 - 摸 - 圈 - 亮 - 刹 - 逆 - 驶 - 赶 - 松 - 呐 - 压 - 拥 - 辅 - 协 - 托 - 断 - 轮 - 善 - 哈 - 捆 - 座 - 病 - 健 - 牛 - 草 - 释 - 似 - 土 - 补 - 俩 - 堆 - 即 - 密 - 背 - 言 - 街 - 尚 - 窗 - C - 艺 - 纠 - 纷 - 忽 - 句 - 另 - 施 - 政 - 温 - 某 - 翻 - 章 - 守 - 熟 - 民 - 续 - 良 - 挤 - 础 - 字 - 瓜 - 乎 - 竞 - 距 - 际 - 暖 - 凭 - 董 - 碗 - 短 - 渠 - 康 - 藏 - 香 - 虽 - 露 - 厉 - 忘 - 误 - 冒 - 窃 - 络 - 淡 - 腐 - 颜 - 播 - 默 - 锻 - 炼 - 宝 - 组 - 淘 - 则 - 逻 - 垃 - 圾 - 复 - 贴 - 靠 - 潜 - 察 - 晨 - 碰 - 剩 - 峰 - 深 - 偏 - 虑 - 念 - 初 - 闹 - 幸 - 跳 - 米 - 旧 - 蛤 - 虾 - 汽 - 苦 - 螃 - 蟹 - 冲 - 固 - 隔 - 懂 - 卷 - 镜 - 罩 - 暴 - 闭 - 野 - 玻 - 璃 - 义 - B - 煤 - 富 - 踩 - 途 - 闲 - 紫 - 北 - 欲 - 曲 - 榜 - 垒 - 伴 - 累 - 判 - 搜 - 困 - 租 - 键 - 肥 - 社 - 弯 - 角 - 纪 - 律 - 详 - 右 - 刮 - 继 - 撤 - 输 - 普 - 未 - 稳 - 摔 - 访 - 扩 - 扣 - 末 - 票 - 承 - 担 - 丢 - 涉 - 欠 - 创 - 获 - 摊 - 疑 - 蓝 - 答 - 霜 - 录 - 齐 - 烦 - 治 - 粗 - 叛 - 污 - 址 - 若 - 染 - 含 - 药 - 雨 - 此 - 陌 - 研 - 催 - 拨 - 页 - 磕 - 呆 - 脸 - 墙 - 夫 - A - 棉 - 袜 - 填 - 死 - 懒 - 植 - 扇 - 捡 - 遍 - 操 - 摄 - 箱 - ? 
- 繁 - 城 - 咯 - 左 - 拐 - 悉 - 犯 - 宽 - 伞 - 余 - 糊 - 巧 - 透 - 贪 - 顺 - 局 - 妇 - 私 - 浪 - 岗 - 棋 - 序 - 辛 - V - 握 - 擦 - 扔 - 斤 - 付 - 剐 - 锁 - 麻 - 敢 - 桶 - 佩 - 坠 - 封 - 替 - 塞 - 斗 - 攀 - 爽 - 沉 - 混 - 滋 - 刺 - 潮 - 皿 - 端 - 刷 - 刀 - 巾 - 烫 - 木 - 漏 - 迅 - 织 - 救 - 吹 - 仔 - 称 - 返 - 景 - 聚 - 阶 - 秀 - 涨 - P - 颈 - 肩 - 泥 - I - 侣 - 尔 - 伍 - 甚 - 皂 - 蒙 - 世 - 界 - 嘻 - 辈 - Q - 审 - 尾 - 浇 - 遛 - 馨 - 措 - 邻 - 撒 - 挥 - 遵 - 予 - 击 - 鉴 - 殊 - 哇 - 载 - 添 - 盈 - 盯 - 惊 - 喷 - 荷 - 怠 - 抢 - 喂 - 饱 - 谅 - 团 - 龙 - 冻 - 图 - 掺 - 扑 - 刊 - 葱 - 薄 - 萝 - 卜 - 麦 - 苹 - 触 - 飞 - 艳 - 畅 - 鸡 - 权 - 趟 - 连 - 哭 - 旁 - 漂 - 焊 - 敞 - 叉 - 钢 - 氧 - 溺 - 聊 - 巢 - 衡 - 淀 - 劣 - 虫 - 符 - 均 - 辨 - 菌 - 彻 - 烂 - 厅 - 皱 - 妥 - 拾 - 插 - 携 - 竹 - 碍 - 湿 - 灵 - 忌 - 旅 - 勿 - 宿 - 迷 - 探 - 春 - 劵 - 星 - 耐 - 裤 - 颖 - 韩 - 艾 - 灸 - 邀 - 婚 - 乳 - 芽 - 挑 - 摘 - 阿 - 姨 - 伊 - 慕 - 纯 - 貌 - 嘴 - 偶 - 睛 - 献 - 坚 - 账 - 典 - 唱 - L - E - 贡 - 寒 - 唧 - Y - 尝 - 抹 - 汰 - 腾 - 哼 - 仿 - 英 - 舒 - 扰 - 拒 - 剪 - 夏 - 宠 - 咬 - 派 - 委 - 婉 - 执 - 呗 - 悄 - 搬 - 雪 - 盐 - 暂 - 奸 - 耍 - 僻 - 却 - 署 - 寻 - 串 - 援 - 亏 - 烈 - 印 - 捎 - 幅 - 绘 - 锈 - 闸 - 罪 - 嫌 - 俗 - 歹 - 劳 - 兜 - 喽 - 谓 - 鹤 - 舍 - 克 - 徇 - 倍 - 敏 - 丝 - 纺 - 拭 - 融 - 蔫 - 掂 - 测 - T - 众 - 卸 - 暗 - 赔 - 偿 - 举 - 劲 - 篮 - 储 - 乙 - 炔 - 软 - 侵 - 诱 - 浊 - 蚀 - 秽 - 炸 - 泽 - 闻 - 鼻 - 甜 - 澈 - 脏 - 官 - 凝 - 芳 - 灰 - 卵 - 农 - 烧 - 肉 - 桌 - 椅 - 垫 - 硬 - 叠 - 瓷 - 碎 - 柄 - 屉 - 拳 - 撞 - 铝 - 歇 - 遗 - 炮 - 掌 - 妨 - 静 - 浸 - 涂 - 凉 - 炫 - 耀 - 姓 - 究 - 奏 - 缆 - 脚 - 酿 - 抄 - 慌 - 戚 - 燥 - 毯 - 挽 - 诺 - 济 - 旺 - 抖 - 郊 - 疗 - 巴 - 痧 - 脊 - 膜 - 晒 - 润 - 掏 - 笔 - 鞭 - 博 - 捧 - 函 - 胡 - 锅 - 雾 - 疯 - 狂 - 趋 - 膏 - 妆 - 尘 - 袋 - 贝 - 俺 - 耽 - 怀 - 恐 - 赋 - 脑 - 焉 - 愣 - 呵 - 噼 - 啪 - 虚 - 河 - 归 - 绊 - 械 - 扬 - 筒 - 靴 - 束 - 彩 - 荐 - 沙 - 迎 - 荡 - 凌 - 昂 - 碑 - 蹦 - 扉 - 泼 - 丰 - 滴 - 沾 - 亭 - 粘 - 奇 - 饼 - 牙 - 娃 - 杯 - 踢 - 嘿 - 抛 - 枯 - 剔 - 苗 - 纹 - 永 - 津 - 唉 - 趁 - 屡 - 逮 - 戒 - 肃 - 仁 - 肇 - 醉 - 糟 - 馈 - 横 - 扭 - 盔 - 侧 - 鲁 - 莽 - 飙 - 稿 - 逐 - 谋 - 京 - 苏 - 宁 - 驻 - 咨 - 旷 - 拓 - 杆 - 秤 - 叮 - 嘱 - 咋 - 炊 - 怪 - 婆 - 阎 - 王 - 饿 - 鬼 - 惨 - 渡 - 坎 - 囤 - 甲 - 蛙 - 鲤 - 桂 - 石 - 玉 - 溪 - 华 - 窝 - 截 - 秩 - 嗨 - 芹 - 梨 - 蕉 - S - 煲 - 汤 - 鲫 - 揽 - 挡 - 柚 - 瑞 - 匹 - '2' - 踹 - 吵 - 凶 - 矩 - 迟 - 脾 - 纳 - 朵 - 墨 - 袖 - 链 - 钩 - 笼 - 熄 - 盆 - 殴 - 欺 - 诈 - 厕 - 
娱 - 爬 - 威 - 胁 - 阅 - 赌 - 拢 - 症 - 伪 - 脂 - 堪 - 盛 - 蚊 - 蝇 - 煎 - 晰 - 柔 - 涩 - 汁 - 腹 - 胃 - 痉 - 挛 - 颗 - 粒 - 匀 - 败 - 历 - 佳 - 乏 - 寄 - 残 - 杀 - 剂 - 疾 - 衍 - 溅 - 倘 - 褶 - 席 - 启 - 遮 - 槽 - 递 - 橱 - 迹 - 镁 - 泄 - 阀 - 柴 - 阻 - 恋 - 盲 - 浓 - 捂 - 腰 - 姿 - 缝 - 肿 - 焦 - 骗 - 伺 - 嘘 - 掩 - 褥 - 帘 - 籍 - 锥 - 锋 - 尖 - 锐 - 祸 - 秒 - 李 - 伸 - 浏 - 览 - 航 - 讯 - 谨 - 慎 - 匪 - 劫 - 医 - 族 - 忧 - 孤 - 拜 - 窄 - 唯 - 搁 - 朝 - 尺 - 盟 - 波 - 隆 - 词 - 村 - 娶 - 媳 - 县 - 聘 - 醇 - 泡 - 坨 - 淋 - 延 - 柱 - 肾 - 蒸 - 槛 - 赚 - 凡 - 恩 - 厚 - 赞 - 茎 - 蒜 - 苔 - 甘 - 菠 - 涮 - 霾 - 仍 - 云 - 追 - 丽 - 盖 - 欧 - 莱 - 雅 - 婴 - 孕 - 敲 - 约 - 惰 - 谱 - 射 - 惑 - 睹 - 奉 - 诚 - 惶 - 卓 - 勉 - 聪 - 疼 - 弃 - 奴 - 隶 - 嚷 - 眠 - 躺 - 乒 - 乓 - 琴 - 挖 - 掘 - 阵 - 浆 - 索 - 呼 - 古 - 弥 - 熔 - 抱 - 怨 - 猫 - 笑 - 挣 - 黑 - 猛 - 令 - 核 - 磊 - 橙 - 吨 - 吊 - 蘸 - 氮 - 罐 - 战 - 懈 - 渐 - 胜 - 命 - 抬 - 缘 - 睦 - 扮 - 珠 - 颁 - 蔼 - 凳 - 饰 - 缤 - 晶 - 抵 - 遥 - 腿 - 拍 - 妻 - 羽 - 绒 - 梳 - 袄 - 述 - 跆 - 屈 - 脱 - 朗 - 劝 - 胆 - 腔 - 圆 - 亚 - 宴 - 编 - 肢 - 壶 - 暑 - 怒 - 描 - 绕 - 悦 - 忆 - 嗓 - 胖 - 疙 - 瘩 - 哒 - 碴 - 棱 - 炒 - 井 - 漫 - 烘 - 焙 - 涤 - 船 - 纱 - 君 - 茉 - 莉 - 钙 - 瞩 - <_> - 塌 - 嗷 - 屁 - 股 - 绪 - 勇 - 奋 - 荣 - 诲 - 卑 - 挫 - 昧 - 疲 - 惫 - 册 - 呈 - 僵 - 熬 - 敬 - 呦 - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: /ocean/projects/cis210027p/berrebbi/espnet/egs2/aishell4/asr1/data/nlsyms.txt cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz 
preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 attention_dropout_rate: 0.0 pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish macaron_style: true use_cnn_module: true cnn_module_kernel: 15 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.3a1 distributed: false ``` </details> | c515e59bc8f1c43645f761c8374ac8c8 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | LM config <details><summary>expand</summary> ``` config: conf/train_lm_transformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/lm_nuit ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 15 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 10 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 2000000 valid_batch_bins: null train_shape_file: - exp/lm_stats_zh_char/train/text_shape.char valid_shape_file: - exp/lm_stats_zh_char/valid/text_shape.char batch_type: numel valid_batch_type: null fold_length: - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/lm_train.txt - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.005 scheduler: warmuplr scheduler_conf: 
warmup_steps: 25000 token_list: - <blank> - <unk> - , - 的 - 是 - 个 - 这 - 一 - 。 - 就 - 儿 - 嗯 - 们 - 呃 - 我 - 有 - <sil> - 那 - 说 - 不 - 些 - 也 - 他 - 你 - 要 - 后 - 以 - 咱 - 在 - 啊 - 了 - 然 - 家 - 都 - 来 - 还 - 可 - 子 - 下 - 上 - 时 - 比 - 话 - 孩 - 呢 - 去 - 人 - 好 - 对 - 能 - 么 - 吧 - 学 - 多 - 到 - 看 - 为 - 进 - 把 - 大 - 做 - 生 - 种 - 品 - 给 - 没 - 行 - 现 - 小 - 会 - 作 - 较 - 方 - 块 - 业 - 让 - 点 - 定 - 因 - 什 - 长 - 面 - 如 - 安 - 客 - 问 - 过 - 车 - 出 - 啦 - 边 - 候 - 主 - 所 - 题 - 买 - 销 - 天 - 意 - 自 - 全 - 动 - 工 - '&' - 老 - 或 - 者 - 年 - 着 - 实 - 活 - 理 - 包 - 样 - 再 - 区 - 用 - 呀 - 零 - 员 - 发 - 先 - 部 - 放 - 门 - 情 - 像 - 分 - 售 - 很 - 开 - 己 - 十 - 括 - 跟 - 事 - 需 - 更 - 其 - 装 - 市 - 成 - 里 - 物 - 别 - 间 - 第 - 次 - 中 - 提 - 超 - 顾 - 保 - 感 - 加 - 量 - 二 - 和 - 各 - 嘛 - 新 - 每 - 完 - 力 - 消 - 得 - 店 - 本 - 通 - 习 - 觉 - 道 - 心 - 校 - 菜 - 交 - 哪 - 产 - 于 - 位 - 电 - 想 - 三 - 况 - 度 - 期 - 应 - 但 - 教 - 体 - 常 - 师 - 它 - 高 - 前 - 之 - 西 - 特 - 商 - 果 - 场 - 重 - 防 - 管 - 起 - 地 - 该 - 东 - 少 - 打 - 费 - 当 - 带 - 服 - 口 - 购 - 知 - 回 - 同 - 钱 - 外 - 户 - 注 - 促 - 价 - 解 - < | bba684c3873ee4ce10feb622910dab29 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | > - 水 - 百 - 今 - 太 - 最 - 报 - 怎 - 才 - 等 - 及 - 关 - <-> - 肯 - 火 - 机 - 流 - 制 - 送 - 手 - 确 - 法 - 写 - 玩 - 传 - 路 - 班 - 查 - 招 - 卖 - 几 - 正 - 合 - 够 - 五 - 引 - 容 - 只 - 男 - 日 - 四 - 宣 - 反 - 两 - 清 - 处 - 周 - 单 - 首 - 课 - 衣 - 便 - 身 - 气 - 针 - 奶 - 六 - 经 - 接 - 女 - 育 - 鲜 - 赠 - 试 - 停 - 晚 - 类 - 故 - 入 - 性 - 增 - 食 - 满 - 格 - 基 - 备 - 洗 - 培 - 质 - 美 - 明 - 整 - 化 - 公 - 案 - 哎 - 吸 - 原 - 易 - 幺 - 总 - 尽 - 优 - 而 - 建 - 责 - 啥 - 干 - 月 - 使 - 找 - 季 - 望 - 器 - 目 - 识 - 低 - 听 - 烟 - 相 - 早 - 检 - 护 - 摆 - 住 - 直 - 从 - 务 - 希 - 导 - 内 - 八 - 持 - 近 - 配 - 叫 - 见 - 设 - 吗 - 非 - 调 - 程 - 拿 - 训 - <%> - 结 - 标 - 挺 - 花 - <$> - 受 - 式 - 求 - 平 - 换 - 具 - 愿 - 货 - 牌 - 专 - 轻 - 推 - 妈 - 司 - 辆 - 存 - 名 - 且 - 欢 - 喜 - 吃 - 数 - 段 - 议 - 控 - 往 - 礼 - 决 - 走 - 养 - 免 - 惠 - 园 - 档 - 谁 - 真 - 快 - 置 - 幼 - 乐 - 证 - 向 - 厂 - 简 - 声 - 视 - 划 - 绩 - 适 - 集 - 搞 - 办 - 规 - 灾 - 造 - 准 - 必 - 任 - 险 - 响 - 毕 - 群 - 鞋 - 九 - 嘞 - 信 - 库 - 计 - 认 - 奖 - 表 - 无 - 影 - 头 - 卡 - 告 - 考 - 抽 - 竟 - 选 - 帮 - 何 - 修 - 酒 - 尤 - 线 - 穿 - 讲 - 光 - 留 - 讨 - 随 - 请 - 卫 - 系 - 队 - 失 - 双 - 庭 - 强 - 微 - 折 - 色 - 半 - 否 - 立 - 差 - 沟 - 冬 - 批 - 害 - 已 - 危 - 白 - 爆 - 节 - 参 - 逛 - 搭 - 风 - 朋 - 友 - 环 - 验 - 评 - 严 - 般 - 效 - 舞 - 饭 - 境 - 负 - 又 - 底 - 术 - 刚 - 件 - 罚 - 助 - 态 - 状 - 室 - 房 - 游 - 息 - 领 - 难 - 警 - 按 - 级 - 错 - 利 - 与 - 餐 - 陪 - 蹈 - 论 - 记 - 许 - 马 - 算 - 楼 - 型 - 排 - 广 - 值 - 油 - 糕 - 楚 - 步 - 至 - 拉 - 紧 - 灯 - 升 - 七 - 共 - 努 - 除 - 展 - 形 - 元 - 网 - 宜 - 营 - 兴 - 互 - 蛋 - 燃 - 冷 - 条 - 思 - 巡 - 净 - 须 - 遇 - 落 - 禁 - 科 - 款 - 哦 - 止 - 采 - 材 - 介 - 套 - 围 - 维 - 旦 - 切 - 显 - 汇 - 损 - 速 - 越 - 模 - 假 - 精 - 稍 - 书 - 绍 - 父 - 积 - 策 - 示 - 骑 - 改 - 跑 - 运 - 变 - 洁 - 仓 - 鱼 - <space> - 绝 - 诶 - 伤 - 细 - 职 - 离 - 慢 - 素 - 料 - 睡 - 趣 - 爱 - 母 - 眼 - 味 - 列 - 督 - 张 - 率 - 被 - 域 - 语 - 坏 - 资 - 红 - 减 - 励 - 择 - 预 - 层 - 陈 - 根 - 休 - 毒 - 球 - 爸 - 登 - 足 - 取 - 指 - 柜 - 限 - 降 - 概 - 院 - 供 - 支 - 额 - 源 - 始 - 盘 - 饮 - 项 - 液 - 童 - 爷 - 号 - 抓 - 台 - 转 - 观 - 金 - 照 - 滑 - 岁 - 致 - 文 - 她 - 弄 - 站 - 酸 - 音 - 胎 - 投 - 疏 - 乱 - 临 - 允 - 狗 - 疫 - 询 - 、 - 象 - 占 - 坐 - 倒 - 争 - 午 - 亲 - 读 - 演 - 退 - 惯 - 贵 - 达 - 监 - 志 - 绿 - 醒 - 急 - 驾 - 违 - 诉 - 片 - 空 
- 势 - 极 - 豆 - 独 - 钟 - 代 - 瓶 - 纸 - 并 - 企 - 映 - 统 - 属 - 省 - 夜 - 障 - 谈 - 避 - 由 - 终 - 频 - 掉 - 估 - 激 - 仅 - 布 - 谢 - 灭 - 忙 - 码 - 伙 - 缺 - 叶 - 功 - 析 - 赖 - 架 - 范 - 签 - D - 待 - 神 - 龄 - 画 - 券 - 居 - 杜 - 堵 - 您 - 勤 - 扫 - 技 - 财 - 隐 - 患 - 例 - 乘 - 摩 - 戏 - 鼓 - 份 - 杂 - 散 - 热 - 铺 - 据 - 肤 - 怕 - 依 - 拖 - 充 - 智 - 偷 - 远 - 挂 - 盗 - 附 - 梯 - 冰 - 联 - 借 - 蹭 - 异 - 蔬 - 绑 - 堂 - 将 - 厨 - 帽 - 破 - 戴 - 皮 - 粉 - 氛 - 仪 - 国 - 益 - 闯 - 惩 - 逃 - 刻 - 突 - 申 - 略 - 顿 - 毛 - 召 - 海 - 黄 - 青 - 士 - 移 - 喝 - 板 - 练 - 歌 - 千 - 床 - 享 - 磨 - 构 - 收 - 万 - 摸 - 圈 - 亮 - 刹 - 逆 - 驶 - 赶 - 松 - 呐 - 压 - 拥 - 辅 - 协 - 托 - 断 - 轮 - 善 - 哈 - 捆 - 座 - 病 - 健 - 牛 - 草 - 释 - 似 - 土 - 补 - 俩 - 堆 - 即 - 密 - 背 - 言 - 街 - 尚 - 窗 - C - 艺 - 纠 - 纷 - 忽 - 句 - 另 - 施 - 政 - 温 - 某 - 翻 - 章 - 守 - 熟 - 民 - 续 - 良 - 挤 - 础 - 字 - 瓜 - 乎 - 竞 - 距 - 际 - 暖 - 凭 - 董 - 碗 - 短 - 渠 - 康 - 藏 - 香 - 虽 - 露 - 厉 - 忘 - 误 - 冒 - 窃 - 络 - 淡 - 腐 - 颜 - 播 - 默 - 锻 - 炼 - 宝 - 组 - 淘 - 则 - 逻 - 垃 - 圾 - 复 - 贴 - 靠 - 潜 - 察 - 晨 - 碰 - 剩 - 峰 - 深 - 偏 - 虑 - 念 - 初 - 闹 - 幸 - 跳 - 米 - 旧 - 蛤 - 虾 - 汽 - 苦 - 螃 - 蟹 - 冲 - 固 - 隔 - 懂 - 卷 - 镜 - 罩 - 暴 - 闭 - 野 - 玻 - 璃 - 义 - B - 煤 - 富 - 踩 - 途 - 闲 - 紫 - 北 - 欲 - 曲 - 榜 - 垒 - 伴 - 累 - 判 - 搜 - 困 - 租 - 键 - 肥 - 社 - 弯 - 角 - 纪 - 律 - 详 - 右 - 刮 - 继 - 撤 - 输 - 普 - 未 - 稳 - 摔 - 访 - 扩 - 扣 - 末 - 票 - 承 - 担 - 丢 - 涉 - 欠 - 创 - 获 - 摊 - 疑 - 蓝 - 答 - 霜 - 录 - 齐 - 烦 - 治 - 粗 - 叛 - 污 - 址 - 若 - 染 - 含 - 药 - 雨 - 此 - 陌 - 研 - 催 - 拨 - 页 - 磕 - 呆 - 脸 - 墙 - 夫 - A - 棉 - 袜 - 填 - 死 - 懒 - 植 - 扇 - 捡 - 遍 - 操 - 摄 - 箱 - ? 
- 繁 - 城 - 咯 - 左 - 拐 - 悉 - 犯 - 宽 - 伞 - 余 - 糊 - 巧 - 透 - 贪 - 顺 - 局 - 妇 - 私 - 浪 - 岗 - 棋 - 序 - 辛 - V - 握 - 擦 - 扔 - 斤 - 付 - 剐 - 锁 - 麻 - 敢 - 桶 - 佩 - 坠 - 封 - 替 - 塞 - 斗 - 攀 - 爽 - 沉 - 混 - 滋 - 刺 - 潮 - 皿 - 端 - 刷 - 刀 - 巾 - 烫 - 木 - 漏 - 迅 - 织 - 救 - 吹 - 仔 - 称 - 返 - 景 - 聚 - 阶 - 秀 - 涨 - P - 颈 - 肩 - 泥 - I - 侣 - 尔 - 伍 - 甚 - 皂 - 蒙 - 世 - 界 - 嘻 - 辈 - Q - 审 - 尾 - 浇 - 遛 - 馨 - 措 - 邻 - 撒 - 挥 - 遵 - 予 - 击 - 鉴 - 殊 - 哇 - 载 - 添 - 盈 - 盯 - 惊 - 喷 - 荷 - 怠 - 抢 - 喂 - 饱 - 谅 - 团 - 龙 - 冻 - 图 - 掺 - 扑 - 刊 - 葱 - 薄 - 萝 - 卜 - 麦 - 苹 - 触 - 飞 - 艳 - 畅 - 鸡 - 权 - 趟 - 连 - 哭 - 旁 - 漂 - 焊 - 敞 - 叉 - 钢 - 氧 - 溺 - 聊 - 巢 - 衡 - 淀 - 劣 - 虫 - 符 - 均 - 辨 - 菌 - 彻 - 烂 - 厅 - 皱 - 妥 - 拾 - 插 - 携 - 竹 - 碍 - 湿 - 灵 - 忌 - 旅 - 勿 - 宿 - 迷 - 探 - 春 - 劵 - 星 - 耐 - 裤 - 颖 - 韩 - 艾 - 灸 - 邀 - 婚 - 乳 - 芽 - 挑 - 摘 - 阿 - 姨 - 伊 - 慕 - 纯 - 貌 - 嘴 - 偶 - 睛 - 献 - 坚 - 账 - 典 - 唱 - L - E - 贡 - 寒 - 唧 - Y - 尝 - 抹 - 汰 - 腾 - 哼 - 仿 - 英 - 舒 - 扰 - 拒 - 剪 - 夏 - 宠 - 咬 - 派 - 委 - 婉 - 执 - 呗 - 悄 - 搬 - 雪 - 盐 - 暂 - 奸 - 耍 - 僻 - 却 - 署 - 寻 - 串 - 援 - 亏 - 烈 - 印 - 捎 - 幅 - 绘 - 锈 - 闸 - 罪 - 嫌 - 俗 - 歹 - 劳 - 兜 - 喽 - 谓 - 鹤 - 舍 - 克 - 徇 - 倍 - 敏 - 丝 - 纺 - 拭 - 融 - 蔫 - 掂 - 测 - T - 众 - 卸 - 暗 - 赔 - 偿 - 举 - 劲 - 篮 - 储 - 乙 - 炔 - 软 - 侵 - 诱 - 浊 - 蚀 - 秽 - 炸 - 泽 - 闻 - 鼻 - 甜 - 澈 - 脏 - 官 - 凝 - 芳 - 灰 - 卵 - 农 - 烧 - 肉 - 桌 - 椅 - 垫 - 硬 - 叠 - 瓷 - 碎 - 柄 - 屉 - 拳 - 撞 - 铝 - 歇 - 遗 - 炮 - 掌 - 妨 - 静 - 浸 - 涂 - 凉 - 炫 - 耀 - 姓 - 究 - 奏 - 缆 - 脚 - 酿 - 抄 - 慌 - 戚 - 燥 - 毯 - 挽 - 诺 - 济 - 旺 - 抖 - 郊 - 疗 - 巴 - 痧 - 脊 - 膜 - 晒 - 润 - 掏 - 笔 - 鞭 - 博 - 捧 - 函 - 胡 - 锅 - 雾 - 疯 - 狂 - 趋 - 膏 - 妆 - 尘 - 袋 - 贝 - 俺 - 耽 - 怀 - 恐 - 赋 - 脑 - 焉 - 愣 - 呵 - 噼 - 啪 - 虚 - 河 - 归 - 绊 - 械 - 扬 - 筒 - 靴 - 束 - 彩 - 荐 - 沙 - 迎 - 荡 - 凌 - 昂 - 碑 - 蹦 - 扉 - 泼 - 丰 - 滴 - 沾 - 亭 - 粘 - 奇 - 饼 - 牙 - 娃 - 杯 - 踢 - 嘿 - 抛 - 枯 - 剔 - 苗 - 纹 - 永 - 津 - 唉 - 趁 - 屡 - 逮 - 戒 - 肃 - 仁 - 肇 - 醉 - 糟 - 馈 - 横 - 扭 - 盔 - 侧 - 鲁 - 莽 - 飙 - 稿 - 逐 - 谋 - 京 - 苏 - 宁 - 驻 - 咨 - 旷 - 拓 - 杆 - 秤 - 叮 - 嘱 - 咋 - 炊 - 怪 - 婆 - 阎 - 王 - 饿 - 鬼 - 惨 - 渡 - 坎 - 囤 - 甲 - 蛙 - 鲤 - 桂 - 石 - 玉 - 溪 - 华 - 窝 - 截 - 秩 - 嗨 - 芹 - 梨 - 蕉 - S - 煲 - 汤 - 鲫 - 揽 - 挡 - 柚 - 瑞 - 匹 - '2' - 踹 - 吵 - 凶 - 矩 - 迟 - 脾 - 纳 - 朵 - 墨 - 袖 - 链 - 钩 - 笼 - 熄 - 盆 - 殴 - 欺 - 诈 - 厕 - 
娱 - 爬 - 威 - 胁 - 阅 - 赌 - 拢 - 症 - 伪 - 脂 - 堪 - 盛 - 蚊 - 蝇 - 煎 - 晰 - 柔 - 涩 - 汁 - 腹 - 胃 - 痉 - 挛 - 颗 - 粒 - 匀 - 败 - 历 - 佳 - 乏 - 寄 - 残 - 杀 - 剂 - 疾 - 衍 - 溅 - 倘 - 褶 - 席 - 启 - 遮 - 槽 - 递 - 橱 - 迹 - 镁 - 泄 - 阀 - 柴 - 阻 - 恋 - 盲 - 浓 - 捂 - 腰 - 姿 - 缝 - 肿 - 焦 - 骗 - 伺 - 嘘 - 掩 - 褥 - 帘 - 籍 - 锥 - 锋 - 尖 - 锐 - 祸 - 秒 - 李 - 伸 - 浏 - 览 - 航 - 讯 - 谨 - 慎 - 匪 - 劫 - 医 - 族 - 忧 - 孤 - 拜 - 窄 - 唯 - 搁 - 朝 - 尺 - 盟 - 波 - 隆 - 词 - 村 - 娶 - 媳 - 县 - 聘 - 醇 - 泡 - 坨 - 淋 - 延 - 柱 - 肾 - 蒸 - 槛 - 赚 - 凡 - 恩 - 厚 - 赞 - 茎 - 蒜 - 苔 - 甘 - 菠 - 涮 - 霾 - 仍 - 云 - 追 - 丽 - 盖 - 欧 - 莱 - 雅 - 婴 - 孕 - 敲 - 约 - 惰 - 谱 - 射 - 惑 - 睹 - 奉 - 诚 - 惶 - 卓 - 勉 - 聪 - 疼 - 弃 - 奴 - 隶 - 嚷 - 眠 - 躺 - 乒 - 乓 - 琴 - 挖 - 掘 - 阵 - 浆 - 索 - 呼 - 古 - 弥 - 熔 - 抱 - 怨 - 猫 - 笑 - 挣 - 黑 - 猛 - 令 - 核 - 磊 - 橙 - 吨 - 吊 - 蘸 - 氮 - 罐 - 战 - 懈 - 渐 - 胜 - 命 - 抬 - 缘 - 睦 - 扮 - 珠 - 颁 - 蔼 - 凳 - 饰 - 缤 - 晶 - 抵 - 遥 - 腿 - 拍 - 妻 - 羽 - 绒 - 梳 - 袄 - 述 - 跆 - 屈 - 脱 - 朗 - 劝 - 胆 - 腔 - 圆 - 亚 - 宴 - 编 - 肢 - 壶 - 暑 - 怒 - 描 - 绕 - 悦 - 忆 - 嗓 - 胖 - 疙 - 瘩 - 哒 - 碴 - 棱 - 炒 - 井 - 漫 - 烘 - 焙 - 涤 - 船 - 纱 - 君 - 茉 - 莉 - 钙 - 瞩 - <_> - 塌 - 嗷 - 屁 - 股 - 绪 - 勇 - 奋 - 荣 - 诲 - 卑 - 挫 - 昧 - 疲 - 惫 - 册 - 呈 - 僵 - 熬 - 敬 - 呦 - <sos/eos> init: null model_conf: ignore_id: 0 use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: /ocean/projects/cis210027p/berrebbi/espnet/egs2/aishell4/asr1/data/nlsyms.txt cleaner: null g2p: null lm: transformer lm_conf: pos_enc: null embed_unit: 128 att_unit: 512 head: 8 unit: 2048 layer: 16 dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.3a1 distributed: false ``` </details> | 65c59ad0642f40023a620d6249d4fe3f |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 0.5023 | 0.8116 | | 2e94b980a69604edace5fe3ce6256831 |
apache-2.0 | ['generated_from_trainer'] | false | openai/whisper-medium.en This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1748 - Wer: 2.7097 | a695a1e07a48e3a7f83639640ac9d00c |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0329 | 5.0 | 500 | 0.1343 | 4.0125 | | 0.0013 | 10.01 | 1000 | 0.1531 | 2.8810 | | 0.0002 | 15.01 | 1500 | 0.1609 | 2.7321 | | 0.0002 | 20.01 | 2000 | 0.1608 | 2.7544 | | 0.0001 | 25.01 | 2500 | 0.1688 | 2.7321 | | 0.0002 | 30.02 | 3000 | 0.1722 | 2.7172 | | 0.0001 | 35.02 | 3500 | 0.1742 | 2.7172 | | 0.0001 | 40.02 | 4000 | 0.1748 | 2.7097 | | cfb1ee08da51c2ca8f34912e5a28a646 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8434 - Matthews Correlation: 0.5567 | 15570f8c2a753c2841b357c46d959c99 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5224 | 1.0 | 535 | 0.5360 | 0.4275 | | 0.3498 | 2.0 | 1070 | 0.5205 | 0.5078 | | 0.2383 | 3.0 | 1605 | 0.6466 | 0.5318 | | 0.1739 | 4.0 | 2140 | 0.7723 | 0.5532 | | 0.1276 | 5.0 | 2675 | 0.8434 | 0.5567 | | 58a16a66d0205656c1b52520267c48e9 |
apache-2.0 | ['generated_from_keras_callback'] | false | Arandine/bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5695 - Epoch: 2 | ee19e7a26b15bbe9de6354c946fd317c |
mit | ['exbert'] | false | How to use Here is how to use the ONNX models of gpt2 to get the features of a given text: Example using transformers.pipelines: ```python from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("gpt2") model = ORTModelForCausalLM.from_pretrained("gpt2", from_transformers=True) onnx_gen = pipeline("text-generation", model=model, tokenizer=tokenizer) text = "My name is Philipp and I live in Germany." gen = onnx_gen(text) ``` Example of text generation: ```python from transformers import AutoTokenizer from optimum.onnxruntime import ORTModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2") model = ORTModelForCausalLM.from_pretrained("optimum/gpt2") inputs = tokenizer("My name is Arthur and I live in", return_tensors="pt") gen_tokens = model.generate(**inputs,do_sample=True,temperature=0.9, min_length=20,max_length=20) tokenizer.batch_decode(gen_tokens) ``` | 137163cd54084a80a535ff6fc1c53c39 |
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'xlsr-fine-tuning-week', 'hf-asr-leaderboard'] | false | Serbian wav2vec2-xls-r-300m-sr-cv8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.7302 - Wer: 0.4825 - Cer: 0.1847 Evaluation on mozilla-foundation/common_voice_8_0 gave the following results: - WER: 0.48530097993467103 - CER: 0.18413288165227845 Evaluation on speech-recognition-community-v2/dev_data gave the following results: - WER: 0.9718373107518604 - CER: 0.8302740620263108 The model can be evaluated using the attached `eval.py` script: ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-sr-cv8 --dataset mozilla-foundation/common_voice_8_0 --split test --config sr ``` | c9789b534ccb2631595c39808fd471a5 |
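The WER and CER figures above are edit-distance metrics: word-level (or character-level) Levenshtein distance divided by the reference length. A minimal pure-Python sketch of the definitions (illustrative only — not the scoring code used by the attached `eval.py`):

```python
def edit_distance(ref, hyp):
    # Levenshtein distance (insertions, deletions, substitutions) via DP.
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return edit_distance(ref_words, hyp_words) / len(ref_words)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why out-of-domain figures such as the dev_data WER above can approach 1.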
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'xlsr-fine-tuning-week', 'hf-asr-leaderboard'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 800 - mixed_precision_training: Native AMP | 27b1226b6cd8bc5362fc388af7d9300d |
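With `lr_scheduler_type: linear` and 300 warmup steps as above, the learning rate ramps linearly from 0 to the peak and then decays linearly back to 0 over the remaining training steps. A sketch of that shape (the `total_steps` value here is an illustrative placeholder; the actual Trainer implementation may differ in rounding details):

```python
def linear_warmup_decay_lr(step, peak_lr=1e-4, warmup_steps=300, total_steps=10_000):
    """Linear warmup from 0 to peak_lr over warmup_steps,
    then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max(total_steps - step, 0)
    return peak_lr * remaining / (total_steps - warmup_steps)
```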
apache-2.0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'xlsr-fine-tuning-week', 'hf-asr-leaderboard'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 5.6536 | 15.0 | 1200 | 2.9744 | 1.0 | 1.0 | | 2.7935 | 30.0 | 2400 | 1.6613 | 0.8998 | 0.4670 | | 1.6538 | 45.0 | 3600 | 0.9248 | 0.6918 | 0.2699 | | 1.2446 | 60.0 | 4800 | 0.9151 | 0.6452 | 0.2398 | | 1.0766 | 75.0 | 6000 | 0.9110 | 0.5995 | 0.2207 | | 0.9548 | 90.0 | 7200 | 1.0273 | 0.5921 | 0.2149 | | 0.8919 | 105.0 | 8400 | 0.9929 | 0.5646 | 0.2117 | | 0.8185 | 120.0 | 9600 | 1.0850 | 0.5483 | 0.2069 | | 0.7692 | 135.0 | 10800 | 1.1001 | 0.5394 | 0.2055 | | 0.7249 | 150.0 | 12000 | 1.1018 | 0.5380 | 0.1958 | | 0.6786 | 165.0 | 13200 | 1.1344 | 0.5114 | 0.1941 | | 0.6432 | 180.0 | 14400 | 1.1516 | 0.5054 | 0.1905 | | 0.6009 | 195.0 | 15600 | 1.3149 | 0.5324 | 0.1991 | | 0.5773 | 210.0 | 16800 | 1.2468 | 0.5124 | 0.1903 | | 0.559 | 225.0 | 18000 | 1.2186 | 0.4956 | 0.1922 | | 0.5298 | 240.0 | 19200 | 1.4483 | 0.5333 | 0.2085 | | 0.5136 | 255.0 | 20400 | 1.2871 | 0.4802 | 0.1846 | | 0.4824 | 270.0 | 21600 | 1.2891 | 0.4974 | 0.1885 | | 0.4669 | 285.0 | 22800 | 1.3283 | 0.4942 | 0.1878 | | 0.4511 | 300.0 | 24000 | 1.4502 | 0.5002 | 0.1994 | | 0.4337 | 315.0 | 25200 | 1.4714 | 0.5035 | 0.1911 | | 0.4221 | 330.0 | 26400 | 1.4971 | 0.5124 | 0.1962 | | 0.3994 | 345.0 | 27600 | 1.4473 | 0.5007 | 0.1920 | | 0.3892 | 360.0 | 28800 | 1.3904 | 0.4937 | 0.1887 | | 0.373 | 375.0 | 30000 | 1.4971 | 0.4946 | 0.1902 | | 0.3657 | 390.0 | 31200 | 1.4208 | 0.4900 | 0.1821 | | 0.3559 | 405.0 | 32400 | 1.4648 | 0.4895 | 0.1835 | | 0.3476 | 420.0 | 33600 | 1.4848 | 0.4946 | 0.1829 | | 0.3276 | 435.0 | 34800 | 1.5597 | 0.4979 | 0.1873 | | 0.3193 | 450.0 | 36000 | 1.7329 | 0.5040 | 0.1980 | | 0.3078 | 465.0 | 37200 | 1.6379 | 0.4937 | 0.1882 | | 
0.3058 | 480.0 | 38400 | 1.5878 | 0.4942 | 0.1921 | | 0.2987 | 495.0 | 39600 | 1.5590 | 0.4811 | 0.1846 | | 0.2931 | 510.0 | 40800 | 1.6001 | 0.4825 | 0.1849 | | 0.276 | 525.0 | 42000 | 1.7388 | 0.4942 | 0.1918 | | 0.2702 | 540.0 | 43200 | 1.7037 | 0.4839 | 0.1866 | | 0.2619 | 555.0 | 44400 | 1.6704 | 0.4755 | 0.1840 | | 0.262 | 570.0 | 45600 | 1.6042 | 0.4751 | 0.1865 | | 0.2528 | 585.0 | 46800 | 1.6402 | 0.4821 | 0.1865 | | 0.2442 | 600.0 | 48000 | 1.6693 | 0.4886 | 0.1862 | | 0.244 | 615.0 | 49200 | 1.6203 | 0.4765 | 0.1792 | | 0.2388 | 630.0 | 50400 | 1.6829 | 0.4830 | 0.1828 | | 0.2362 | 645.0 | 51600 | 1.8100 | 0.4928 | 0.1888 | | 0.2224 | 660.0 | 52800 | 1.7746 | 0.4932 | 0.1899 | | 0.2218 | 675.0 | 54000 | 1.7752 | 0.4946 | 0.1901 | | 0.2201 | 690.0 | 55200 | 1.6775 | 0.4788 | 0.1844 | | 0.2147 | 705.0 | 56400 | 1.7085 | 0.4844 | 0.1851 | | 0.2103 | 720.0 | 57600 | 1.7624 | 0.4848 | 0.1864 | | 0.2101 | 735.0 | 58800 | 1.7213 | 0.4783 | 0.1835 | | 0.1983 | 750.0 | 60000 | 1.7452 | 0.4848 | 0.1856 | | 0.2015 | 765.0 | 61200 | 1.7525 | 0.4872 | 0.1869 | | 0.1969 | 780.0 | 62400 | 1.7443 | 0.4844 | 0.1852 | | 0.2043 | 795.0 | 63600 | 1.7302 | 0.4825 | 0.1847 | | 3c5001f9f6f7e6ef4c94ec72b50d4d36 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7994 - Matthews Correlation: 0.5422 | 49767004cfa29d0c0e312e2b4b3ae0d4 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.42 | 1.0 | 535 | 0.4631 | 0.5242 | | 0.2823 | 2.0 | 1070 | 0.5755 | 0.5056 | | 0.1963 | 3.0 | 1605 | 0.6767 | 0.5478 | | 0.1441 | 4.0 | 2140 | 0.7742 | 0.5418 | | 0.1069 | 5.0 | 2675 | 0.7994 | 0.5422 | | 62eeadb33468972600a93e5463878e46 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 415 | 5.6748 | | 6.3694 | 2.0 | 830 | 5.4214 | | 5.413 | 3.0 | 1245 | 5.3563 | | 9d4d9ed8a9d95a8c4ce37b89a0b82f49 |
mit | [] | false | Description This model is a fine-tuned version of [BETO (Spanish BERT)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022). We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | 
[regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `raw-label-epoch-3` | e13a53914814d67903b86a425d15c3e2 |
mit | [] | false | Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline model_name = 'raw-label-epoch-3' tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased") full_model_path = f'MartinoMensio/racism-models-{model_name}' model = AutoModelForSequenceClassification.from_pretrained(full_model_path) pipe = pipeline("text-classification", model = model, tokenizer = tokenizer) texts = [ 'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!', 'Es que los judíos controlan el mundo' ] print(pipe(texts)) ``` | 60cae3f991e133b69f5784c2990f9345 |
apache-2.0 | ['generated_from_trainer'] | false | nmt-mpst-id-en-lr_1e-05-ep_10-seq_128_bs-32 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9022 - Bleu: 0.0284 - Meteor: 0.1159 | 145f8ea2529aa537b732129cdc38ce17 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | No log | 1.0 | 202 | 3.2021 | 0.0126 | 0.0683 | | No log | 2.0 | 404 | 3.0749 | 0.0219 | 0.0958 | | 3.559 | 3.0 | 606 | 3.0147 | 0.0252 | 0.1059 | | 3.559 | 4.0 | 808 | 2.9738 | 0.0262 | 0.1094 | | 3.2602 | 5.0 | 1010 | 2.9476 | 0.027 | 0.1113 | | 3.2602 | 6.0 | 1212 | 2.9309 | 0.0278 | 0.1138 | | 3.2602 | 7.0 | 1414 | 2.9153 | 0.0278 | 0.1139 | | 3.1839 | 8.0 | 1616 | 2.9083 | 0.0285 | 0.116 | | 3.1839 | 9.0 | 1818 | 2.9041 | 0.0284 | 0.1158 | | 3.1574 | 10.0 | 2020 | 2.9022 | 0.0284 | 0.1159 | | c68d4c34e956bb8013c3dcdb9b2c78dc |
apache-2.0 | ['translation', 'wmt16', 'allenai'] | false | How to use ```python from transformers import FSMTForConditionalGeneration, FSMTTokenizer mname = "allenai/wmt16-en-de-dist-12-1" tokenizer = FSMTTokenizer.from_pretrained(mname) model = FSMTForConditionalGeneration.from_pretrained(mname) input = "Machine learning is great, isn't it?" input_ids = tokenizer.encode(input, return_tensors="pt") outputs = model.generate(input_ids) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) print(decoded) ``` | 13fa87657f0d1ec741efeb9431691b38 |
apache-2.0 | ['translation', 'wmt16', 'allenai'] | false | Eval results Here are the BLEU scores: model | fairseq | transformers -------|---------|---------- wmt16-en-de-dist-12-1 | 28.3 | 27.52 The score is slightly below the score reported in the paper, as the researchers don't use `sacrebleu` and measure the score on tokenized outputs. `transformers` score was measured using `sacrebleu` on detokenized outputs. The score was calculated using this code: ```bash git clone https://github.com/huggingface/transformers cd transformers export PAIR=en-de export DATA_DIR=data/$PAIR export SAVE_DIR=data/$PAIR export BS=8 export NUM_BEAMS=5 mkdir -p $DATA_DIR sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target echo $PAIR PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-12-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS ``` | 4efe6cba1ca97a2a29ca15d2113e73cd |
apache-2.0 | ['generated_from_trainer'] | false | finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3061 - Accuracy: 0.8733 - F1: 0.8742 | eba647d231aa529924d2bbcb6f20c481 |
apache-2.0 | ['generated_from_trainer'] | false | swin-tiny-patch4-window7-224-finetuned-respirator This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2124 - Accuracy: 0.9082 | 79ba403befba3ea04094ef5a3b124598 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4872 | 0.98 | 37 | 0.2124 | 0.9082 | | 0.4828 | 1.98 | 74 | 0.2124 | 0.9082 | | 0.4772 | 2.98 | 111 | 0.2124 | 0.9082 | | b8b7a54e815f50e347c33ee19ec5c793 |
mit | ['generated_from_trainer'] | false | wnli_roberta-base_144_v2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6920 - Accuracy: 0.5634 | 525b23242c524b53eab37af7690bb3cb |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2178 - Accuracy: 0.9285 - F1: 0.9289 | 417a1e62332de49bd338ccaeba6e5cf3 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8227 | 1.0 | 250 | 0.3212 | 0.8985 | 0.8932 | | 0.2463 | 2.0 | 500 | 0.2178 | 0.9285 | 0.9289 | | 4a22785fc8aa7825c6558ae38d05d480 |
apache-2.0 | ['automatic-speech-recognition', 'ru'] | false | exp_w2v2t_ru_unispeech_s607 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 971ce719100fabb968d29efb046c7bd5 |
apache-2.0 | ['generated_from_trainer'] | false | small-mlm-imdb-target-rotten_tomatoes This model is a fine-tuned version of [muhtasham/small-mlm-wikitext](https://huggingface.co/muhtasham/small-mlm-wikitext) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3909 - Accuracy: 0.8021 - F1: 0.8017 | 7f38f4bc800da22b3f35b3694a681337 |
mit | ['conditional-image-generation', 'image-to-image', 'gan', 'cyclegan'] | false | Model description CycleGAN for unpaired image-to-image translation. Given two image domains A and B, the following components are trained end-to-end to translate between the two domains: - A generator A to B, named G_AB, conditioned on an image from A - A generator B to A, named G_BA, conditioned on an image from B - A domain classifier D_A, associated with G_AB - A domain classifier D_B, associated with G_BA At inference time, G_AB or G_BA is used to translate images, respectively A to B or B to A. In the general setting, this technique provides style transfer between the selected image domains A and B: G_AB produces a translation of an image from domain A that resembles the distribution of images from domain B, and vice versa for the generator G_BA. In this framework, the technique has been used to perform style transfer between synthetic data obtained from a simulated driving dataset, GTA5, and real driving data from Cityscapes. This is of paramount importance for developing autonomous-driving perception models, as it allows generating synthetic data with automatic annotations that resemble real-world images, without requiring the intervention of a human annotator. This is fundamental because a manual annotator has been shown to require 1.5 to 3.3 hours to create semantic and instance segmentation masks for a single image, as reported in the original [Cityscapes paper (Cordts et al. 2016)](https://arxiv.org/abs/1604.01685) and the [adverse-conditions dataset paper (Sakaridis et al. 2021)](https://arxiv.org/abs/2104.13395). Hence the CycleGAN provides forward and backward translation between synthetic and real-world data, and has been shown to allow high-quality translation even in the absence of paired sample/ground-truth data. 
The idea behind such a model is that as the synthetic data distribution gets closer to the real-world one, deep models no longer suffer degraded performance from the domain-shift issue. A broad literature on minimizing domain shift exists under the research branches of domain adaptation and transfer learning, to which image-translation models provide an alternative approach. | e49f16945d67bf39edfd2e05081acf99 |
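The training objective that ties the four components together combines adversarial losses with a cycle-consistency term: translating A→B→A should recover the original image. A toy sketch of the cycle term on scalar "pixels" (the stand-in generators below are purely illustrative; the real model operates on image tensors):

```python
def cycle_consistency_loss(x_a, g_ab, g_ba):
    """L1 cycle loss: mean |G_BA(G_AB(x)) - x| over the 'pixels' of x_a."""
    reconstructed = [g_ba(g_ab(px)) for px in x_a]
    return sum(abs(r - px) for r, px in zip(reconstructed, x_a)) / len(x_a)

# Toy stand-in generators: an exact inverse pair yields (near-)zero cycle loss.
g_ab = lambda px: 2.0 * px + 1.0    # "translate" A -> B
g_ba = lambda px: (px - 1.0) / 2.0  # "translate" B -> A

image_a = [0.1, 0.5, 0.9]
loss = cycle_consistency_loss(image_a, g_ab, g_ba)
```

A generator pair that does not invert each other (e.g. replacing `g_ba` with the identity) leaves a large cycle loss, which is exactly the signal that keeps the two translations consistent.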
mit | ['conditional-image-generation', 'image-to-image', 'gan', 'cyclegan'] | false | How to use ```python import os from PIL import Image from torchvision import transforms as T from torchvision.transforms import Compose, Resize, ToTensor, Normalize, RandomCrop, RandomHorizontalFlip from torchvision.utils import make_grid from torch.utils.data import DataLoader from huggan.pytorch.cyclegan.modeling_cyclegan import GeneratorResNet import torch.nn as nn import torch import gradio as gr import glob def pred_pipeline(img, transforms): orig_shape = img.shape input = transforms(img) input = input.unsqueeze(0) output = model(input) out_img = make_grid(output, | 3af5e234d4660151305e1241ab6e4721 |
mit | ['conditional-image-generation', 'image-to-image', 'gan', 'cyclegan'] | false | .detach().cpu(), nrow=1, normalize=True) out_transform = Compose([ T.Resize(orig_shape[:2]), T.ToPILImage() ]) return out_transform(out_img) n_channels = 3 image_size = 512 input_shape = (image_size, image_size) transform = Compose([ T.ToPILImage(), T.Resize(input_shape), ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) model = GeneratorResNet.from_pretrained('Chris1/sim2real', input_shape=(n_channels, image_size, image_size), num_residual_blocks=9) real_images = model(synthetic_images) ``` | b76354141c2854a06976e41f82daf117 |
mit | ['conditional-image-generation', 'image-to-image', 'gan', 'cyclegan'] | false | Limitations and bias Due to the absence of paired data, some background parts of the synthetic images are occasionally translated incorrectly, e.g. sky is translated to vegetation. Additional pretext tasks in parallel to the discriminative classifier of fake and real samples could improve the result. One easy improvement is the use of an additional parallel branch that performs semantic segmentation on the synthetic data, in order to learn features which are common to sky and vegetation, thus disentangling their representations as separate classes. | 23a8feb443323cbc3026e4dcbe5d8942 |
mit | ['conditional-image-generation', 'image-to-image', 'gan', 'cyclegan'] | false | Training data The CycleGAN model is trained on an unpaired dataset of samples from synthetic and real driving data, respectively from the GTA5 and Cityscapes datasets. The synthetic-to-real dataset can be loaded with the `load_dataset` function of the Hugging Face `datasets` library, as follows. ```python from datasets import load_dataset unpaired_dataset = load_dataset("huggan/sim2real_gta5_to_cityscapes") ``` This dataset contains two columns, imageA and imageB, representing respectively the GTA5 and Cityscapes data. Because the two columns must be of the same length, GTA5 is subsampled to match the number of samples in the Cityscapes train split (2975). | c9a4acad6425804f78ec051e585eff44 |
mit | ['conditional-image-generation', 'image-to-image', 'gan', 'cyclegan'] | false | Preprocessing The following transformations are applied to each input sample of synthetic and real data. The input size is fixed to RGB images of height, width = 512, 512. This choice has been made in order to limit the impact of upsampling the translated images to higher resolutions. ```python n_channels = 3 image_size = 512 input_shape = (image_size, image_size) transform = Compose([ T.ToPILImage(), T.Resize(input_shape), ToTensor(), Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ]) ``` | 73c9d9e63dcad47f21296f7ff9b3c513 |
mit | ['conditional-image-generation', 'image-to-image', 'gan', 'cyclegan'] | false | Hyperparameters The following configuration has been kept fixed for all translation models: - learning rate 0.0002 - number of epochs 200 - learning rate decay activation at epoch 100 - number of residual blocks of the cyclegan 9 - image size 512x512 - number of channels=3 - cycle loss weight 10.0 - identity loss weight 5.0 - optimizer ADAM with beta1 0.5 and beta2 0.999 - batch size 8 - NO mixed precision training | dfcdbe27fe3d30f131abb00bb1fd1a48 |
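Under the standard CycleGAN formulation, the cycle (10.0) and identity (5.0) weights above scale the corresponding loss terms in the total generator objective. A minimal sketch of that weighting (the per-term loss values in the example are placeholders, not measured values):

```python
CYCLE_WEIGHT = 10.0
IDENTITY_WEIGHT = 5.0

def generator_loss(adv, cycle, identity,
                   cycle_w=CYCLE_WEIGHT, identity_w=IDENTITY_WEIGHT):
    """Total generator objective: L_adv + 10.0 * L_cyc + 5.0 * L_id."""
    return adv + cycle_w * cycle + identity_w * identity

# Placeholder per-term losses, combined with the weights listed above.
total = generator_loss(adv=0.7, cycle=0.05, identity=0.02)
```

The large cycle weight reflects that cycle consistency, not the adversarial term, is what preserves image content across the translation.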
mit | ['conditional-image-generation', 'image-to-image', 'gan', 'cyclegan'] | false | Generated Images In the provided images, row0 and row2 represent the synthetic and real images from the respective datasets. Row1 is the translation of the immediate above images in row0(synthetic) by means of the G_AB translation model, to the real world style. Row3 is the translation of the immediate above images in row2(real) by means of the G_BA translation model, to the synthetic world style. Visualization over the training iterations for [synthetic (GTA5) to real (Cityscapes) translation](https://wandb.ai/chris1nexus/experiments_cyclegan_s2r_hp_opt--10/reports/CycleGAN-sim2real-training-results--VmlldzoxODUyNTk4?accessToken=tow3v4vp02aurzodedrdht15ig1cx69v5mited4dm8bgnup0z192wri0xtftaeqj) | 59bc67c9e1f32bf4591142c4049d8278 |
apache-2.0 | [] | false | Results: ``` ***** eval metrics ***** epoch = 3.0 eval_accuracy = 0.9094 eval_loss = 0.3514 eval_runtime = 0:00:03.60 eval_samples = 872 eval_samples_per_second = 242.129 eval_steps_per_second = 30.266 ``` | e5bd77330de0031e435b5362a32391ee |
mit | [] | false | spider-gwen on Stable Diffusion This is the `<spider-gwen>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: | 5be90ec6622198873c5a105d8b302090 |
apache-2.0 | ['image-classification', 'keras'] | false | Train a Vision Transformer on small datasets Author: [Jónathan Heras](https://twitter.com/_Jonathan_Heras) [Keras Blog](https://keras.io/examples/vision/vit_small_ds/) | [Colab Notebook](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/vit_small_ds.ipynb) In the academic paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929), the authors mention that Vision Transformers (ViT) are data-hungry. Therefore, pretraining a ViT on a large-sized dataset like JFT300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models. The self-attention layer of ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. On the other hand, CNNs look at images through spatial sliding windows, which helps them get better results with smaller datasets. In the academic paper [Vision Transformer for Small-Size Datasets](https://arxiv.org/abs/2112.13492v1), the authors set out to tackle the problem of locality inductive bias in ViTs. The main ideas are: - Shifted Patch Tokenization - Locality Self Attention | 8121f8c3ef6a7f334ed49398e62a5552 |
apache-2.0 | ['image-classification', 'keras'] | false | ARCHITECTURE LAYER_NORM_EPS = 1e-6 TRANSFORMER_LAYERS = 8 PROJECTION_DIM = 64 NUM_HEADS = 4 TRANSFORMER_UNITS = [ PROJECTION_DIM * 2, PROJECTION_DIM, ] MLP_HEAD_UNITS = [ 2048, 1024 ] ``` I have used the `AdamW` optimizer with a cosine-decay learning-rate schedule. You can find the entire implementation in the Keras blog post. To use the pretrained model: ```python from huggingface_hub import from_pretrained_keras loaded_model = from_pretrained_keras("keras-io/vit_small_ds_v2") ``` | cdbb3a7a8cf726963064b16a2f053e58
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-multilingual-cased-finetuned-squad This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2269 | b539e53ce971dc327ea799561fd2033c |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.1694 | 1.0 | 5555 | 1.2091 | | 0.9263 | 2.0 | 11110 | 1.1691 | | 0.7769 | 3.0 | 16665 | 1.2269 | | 8f05b1e494c4f8e59b40f3d41a01d36e |
cc-by-4.0 | ['question generation'] | false | Model Card of `lmqg/t5-small-squad-qg` This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). | 112a8d30a26ae7f6184dc0e56206c158
cc-by-4.0 | ['question generation'] | false | Model prediction ```python questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` | 9cdc3b7e7d7afb0ec006711b115be709
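The highlighted input format shown in the `transformers` example above can be built with a small helper that wraps the answer span in `<hl>` tokens; this function is purely illustrative and not part of the `lmqg` API:

```python
# Build the highlighted QG input used above by wrapping the answer span
# in <hl> tokens. Purely illustrative; not part of the lmqg package.
def make_qg_input(context, answer, prefix="generate question: "):
    if answer not in context:
        raise ValueError("answer must appear verbatim in the context")
    return prefix + context.replace(answer, f"<hl> {answer} <hl>", 1)

text = make_qg_input(
    "Beyonce further expanded her acting career, starring as blues singer Etta James.",
    "Beyonce",
)
```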
cc-by-4.0 | ['question generation'] | false | Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 90.2 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 56.86 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 40.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 31.05 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 24.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 25.84 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 63.89 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 51.43 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 95.14 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 69.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 95.19 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 70.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 95.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 69.51 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/t5-small-squad-ae`](https://huggingface.co/lmqg/t5-small-squad-ae). 
[raw metric file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_t5-small-squad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.26 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 63.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 92.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 63.92 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 92.48 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 63.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metrics (Question Generation, Out-of-Domain)*** | Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link | |:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:| | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | amazon | 89.94 | 5.45 | 20.75 | 59.79 | 22.97 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.amazon.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | new_wiki | 92.61 | 10.48 | 26.21 | 65.05 | 28.11 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | nyt | 91.71 | 6.97 | 23.66 | 62.86 | 23.03 | 
[link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) | | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | reddit | 89.57 | 4.75 | 19.8 | 59.23 | 20.1 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 87.4 | 0.0 | 12.3 | 55.34 | 10.88 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | electronics | 87.12 | 1.16 | 15.49 | 55.55 | 15.62 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | grocery | 87.22 | 0.52 | 14.95 | 57.12 | 12.63 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | movies | 86.84 | 0.0 | 12.11 | 55.01 | 12.63 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | restaurants | 87.49 | 0.0 | 12.67 | 55.04 | 11.53 | [link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) | | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | tripadvisor | 88.4 | 1.46 | 15.53 | 55.91 | 14.24 | 
[link](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) | | d58dfe8f82b0dc39852b2292e1145067 |
cc-by-4.0 | ['question generation'] | false | Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-small - max_length: 512 - max_length_output: 32 - epoch: 9 - batch: 64 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 1 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squad-qg/raw/main/trainer_config.json). | 1b38827c72c6781de5555274396f6e75 |
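The `label_smoothing: 0.15` setting above moves 15% of each target's probability mass off the gold token; one common formulation spreads that mass uniformly over the vocabulary. A minimal sketch under that assumption (not the exact `lmqg` implementation):

```python
# Smoothed one-hot target for label_smoothing eps = 0.15: the gold token
# keeps 1 - eps, and eps is spread uniformly over the vocabulary.
# (One common convention; the exact variant used by lmqg is an assumption.)
def smoothed_targets(gold_index, vocab_size, eps=0.15):
    dist = [eps / vocab_size] * vocab_size
    dist[gold_index] += 1.0 - eps
    return dist

dist = smoothed_targets(gold_index=2, vocab_size=5)
```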
apache-2.0 | ['BERTweet'] | false | BERTweet-FA: A pre-trained language model for Persian (a.k.a. Farsi) Tweets --- BERTweet-FA is a transformer-based model trained on 20665964 Persian tweets. The model has been trained for only 1 epoch (322906 steps), and yet it is able to recognize the meaning of most of the conversational sentences used in Farsi. Note that the architecture of this model follows the original BERT [[Devlin et al.](https://arxiv.org/abs/1810.04805)]. How to use the Model --- ```python from transformers import BertForMaskedLM, BertTokenizer, pipeline model = BertForMaskedLM.from_pretrained('arm-on/BERTweet-FA') tokenizer = BertTokenizer.from_pretrained('arm-on/BERTweet-FA') fill_sentence = pipeline('fill-mask', model=model, tokenizer=tokenizer) fill_sentence('اینجا جمله مورد نظر خود را بنویسید و کلمه موردنظر را [MASK] کنید') ``` The Training Data --- The first version of the model was trained on the "[Large Scale Colloquial Persian Dataset](https://iasbs.ac.ir/~ansari/lscp/)" containing more than 20 million tweets in Farsi, gathered by Khojasteh et al. and published in 2020. Evaluation --- | Training Loss | Epoch | Step | |:-------------:|:-----:|:-----:| | 0.0036 | 1.0 | 322906 | Contributors --- - Arman Malekzadeh [[Github](https://github.com/arm-on)] | 7690fcc9178dc2c5d03b31cbe470a046
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-hindi-epochs15-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 3.5705 - Wer: 1.0 | 977b15129ca2bae0aa4441225ea2f405 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 15 - mixed_precision_training: Native AMP | 0b647796db5bb6bcd85afbe11be7c3be |
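The `gradient_accumulation_steps: 2` entry above sums gradients over two micro-batches of 16 before each optimizer step, which is what yields the effective `total_train_batch_size: 32`; a toy scalar sketch of the idea (illustrative, not the `transformers` Trainer internals):

```python
# Toy scalar model y = w * x with squared-error loss: sum per-example
# gradients over `accum_steps` micro-batches, then take one SGD step on
# the averaged gradient. Illustrative only, not the Trainer internals.
def sgd_with_accumulation(w, micro_batches, lr=0.1, accum_steps=2):
    grad_sum, count = 0.0, 0
    for i, batch in enumerate(micro_batches, start=1):
        for x, y in batch:
            grad_sum += 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            count += 1
        if i % accum_steps == 0:
            w -= lr * grad_sum / count  # one optimizer step per accum_steps
            grad_sum, count = 0.0, 0
    return w
```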
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 20.2764 | 5.53 | 50 | 8.1197 | 1.0 | | 5.2964 | 11.11 | 100 | 3.5705 | 1.0 | | 74ff2a9e0f13e0ce92074a07be6e0bc0 |
mit | ['roberta-base', 'roberta-base-epoch_59'] | false | RoBERTa, Intermediate Checkpoint - Epoch 59 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases. These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_59. | d76d3c6f77e67b065969f8354a4d70ae
apache-2.0 | ['automatic-speech-recognition', 'es'] | false | exp_w2v2r_es_xls-r_age_teens-5_sixties-5_s530 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 36354806dd2c5c28f3f9d0d9e0222636 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper Small Vi - Shiv Kumar Ganesh This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7220 - Wer: 46.6769 | 106643686a8f4a7ff24383b2b1d66168 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1200 - mixed_precision_training: Native AMP | 130683e8a1d49658e46d2d5463d2ec44 |
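With `lr_scheduler_type: linear` and the settings above, the learning rate ramps from 0 to its peak over the first 500 steps and then decays linearly to 0 at step 1200; a sketch mirroring the usual linear warmup-then-decay schedule (values taken from the list above):

```python
# Linear warmup to the peak rate over `warmup_steps`, then linear decay
# to zero at `total_steps` (values taken from the hyperparameter list).
def linear_schedule(step, peak_lr=1e-05, warmup_steps=500, total_steps=1200):
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```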
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7433 | 1.02 | 100 | 1.6824 | 155.0559 | | 0.5929 | 2.04 | 200 | 0.8475 | 55.5824 | | 0.1188 | 3.05 | 300 | 0.6646 | 47.2801 | | 0.0672 | 5.0 | 400 | 0.7099 | 61.3292 | | 0.0317 | 6.02 | 500 | 0.6951 | 49.9013 | | 0.0169 | 7.04 | 600 | 0.7658 | 62.8866 | | 0.0089 | 8.06 | 700 | 0.6681 | 34.2509 | | 0.004 | 10.01 | 800 | 0.6875 | 43.8364 | | 0.0015 | 11.03 | 900 | 0.7129 | 46.8195 | | 0.0011 | 12.04 | 1000 | 0.7194 | 47.4775 | | 0.0011 | 13.06 | 1100 | 0.7217 | 46.1505 | | 0.001 | 15.01 | 1200 | 0.7220 | 46.6769 | | d88235d8a2c22cc83e1f08e00fcb3da9 |
mit | ['roberta-base', 'roberta-base-epoch_20'] | false | RoBERTa, Intermediate Checkpoint - Epoch 20 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide all 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, as well as other possible use cases. These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_20. | ffac281dabf8222dbf3b02d2be58e65c
cc-by-4.0 | [] | false | MalayalamBERT MalayalamBERT is a Malayalam BERT model trained on publicly available Malayalam monolingual datasets. Preliminary details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418). Citing: ``` @article{joshi2022l3cubehind, title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages}, author={Joshi, Raviraj}, journal={arXiv preprint arXiv:2211.11418}, year={2022} } ``` | 528b7eaa039a024ed3c33e4263ff1463
apache-2.0 | ['finnish', 't5', 't5x', 'seq2seq', 'ul2'] | false | UL2-base-nl36 for Finnish Pretrained T5 model on Finnish language using a UL2 (Mixture-of-Denoisers) objective. T5 model was introduced in [this paper](https://arxiv.org/abs/1910.10683) and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer). The UL2 objective was introduced in [this paper](https://arxiv.org/abs/2205.05131) and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2). **Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text. | d9368f66dc3d04d928037b821c3e7ce0 |
apache-2.0 | ['finnish', 't5', 't5x', 'seq2seq', 'ul2'] | false | This model used the T5 v1.1 (t511) improvements compared to the original T5 model during the pretraining: - GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202) - Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning - Pretrained on the self-supervised objective only, without mixing in the downstream tasks - No parameter sharing between the embedding and classifier layer This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially. This model uses the [t5-efficient-base-nl36](https://huggingface.co/google/t5-efficient-base-nl36) architecture's layer depth, which means both the encoder and the decoder have 36 transformer layers, compared to the original T5 "base" model's architecture of 12 transformer layers. In total, this model has 814 million parameters. | 504111d99da4673fe95222bfcb1dcde7
apache-2.0 | ['finnish', 't5', 't5x', 'seq2seq', 'ul2'] | false | How to use Here is how to use this model in PyTorch: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-base-nl36-finnish") model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-base-nl36-finnish") ``` and in TensorFlow: ```python from transformers import T5Tokenizer, TFT5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-base-nl36-finnish") model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-base-nl36-finnish", from_pt=True) ``` | 029d3b048225ee10d5e4cc7b8aaeec11
apache-2.0 | ['finnish', 't5', 't5x', 'seq2seq', 'ul2'] | false | Pretraining The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps with a batch size of 64 (33B tokens in total). The optimizer used was AdaFactor with a learning-rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay of the learning rate. Training code came from Google's JAX/Flax-based [t5x framework](https://github.com/google-research/t5x), and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere). The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix, chapter 9.2. The UL2 objective code used is available in this repository in the files `ul2_objective.py` and `tasks.py`. UL2's mixture-of-denoisers configuration was otherwise equal to the UL2 paper, except for the denoiser mixing rates: 20% was used for S-denoising (as suggested in chapter 4.5 of the paper) and the rest was divided equally between R-denoising and X-denoising (i.e. 40% each). | 30b674696c21e8ad1bb4e2e789df2417
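The AdaFactor schedule described above (a constant 1e-2 during the 10K warmup steps, then inverse square-root decay) can be written as a short function; this is a sketch of the usual rsqrt schedule, not the exact `t5x` training configuration:

```python
import math

# Constant learning rate during warmup, then inverse square-root decay:
# lr(step) = base_lr * sqrt(warmup_steps / step). A sketch of the usual
# rsqrt schedule, not the exact t5x training configuration.
def rsqrt_schedule(step, base_lr=1e-2, warmup_steps=10_000):
    if step <= warmup_steps:
        return base_lr
    return base_lr * math.sqrt(warmup_steps / step)
```

At step 40K, for instance, the rate has fallen to half the warmup value.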
apache-2.0 | ['finnish', 't5', 't5x', 'seq2seq', 'ul2'] | false | Evaluation results Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. Also, for UL2 models a prefix token of `[NLU]` has been added to each input text. When fine-tuned on those datasets, this model (the fifth row of the table) achieves the following accuracy results compared to our other UL2 models and their parameter counts: | | Model parameters | Yle News accuracy | Eduskunta accuracy | |-------------------------------------------------------|------------------|---------------------|----------------------| |Finnish-NLP/ul2-tiny-nl6-finnish | 31 million |92.88 |69.40 | |Finnish-NLP/ul2-mini-nl8-finnish | 72 million |93.83 |70.10 | |Finnish-NLP/ul2-small-nl16-finnish | 184 million |94.25 |74.63 | |Finnish-NLP/ul2-small-nl24-finnish | 260 million |94.03 |73.87 | |Finnish-NLP/ul2-base-nl36-finnish | 814 million |94.35 |75.47 | Results of fine-tuning our T5 models (with the original T5 pretraining task) on the same datasets are following: | | Model parameters | Yle News accuracy | Eduskunta accuracy | |-------------------------------------------------------|------------------|---------------------|----------------------| |Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 | |Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 | |Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 | |Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 | |Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 | |Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** | |Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 | Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly 
see that our monolingual Finnish T5 models achieve much better results on Finnish text classification: | | Model parameters | Yle News accuracy | Eduskunta accuracy | |-------------------------------------------------------|------------------|---------------------|----------------------| |google/mt5-small | 301 million |91.51 |64.10 | |google/mt5-base | 583 million |92.71 |68.40 | | 87444ecb480727eb11df712b422504dc |
mit | ['generated_from_keras_callback'] | false | ishaankul67/Wayback_Machine-clustered This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2638 - Train End Logits Accuracy: 0.9444 - Train Start Logits Accuracy: 0.9167 - Validation Loss: 0.6762 - Validation End Logits Accuracy: 1.0 - Validation Start Logits Accuracy: 0.6667 - Epoch: 0 | 4aa219a08ad339b3b95f5aef20cfa94a |
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.2638 | 0.9444 | 0.9167 | 0.6762 | 1.0 | 0.6667 | 0 | | 8ce10afdf70d0dba990cd0f7bc6a681e |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xlsr-53-demo1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9692 - Wer: 0.8462 | 8e1052b1950bb78e5e41b329668a8404 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 5 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 | 0695e421d884dd4fdb96afe63e33932d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 12.978 | 0.06 | 100 | 3.5377 | 1.0 | | 3.5026 | 0.13 | 200 | 3.4366 | 1.0 | | 3.4084 | 0.19 | 300 | 3.3831 | 1.0 | | 3.3551 | 0.26 | 400 | 3.2563 | 1.0 | | 3.2668 | 0.32 | 500 | 3.2109 | 1.0 | | 2.9398 | 0.38 | 600 | 2.4548 | 0.9987 | | 2.2204 | 0.45 | 700 | 1.8870 | 1.0135 | | 1.7401 | 0.51 | 800 | 1.6816 | 1.0247 | | 1.5748 | 0.57 | 900 | 1.4741 | 0.9953 | | 1.4539 | 0.64 | 1000 | 1.4573 | 0.9852 | | 1.3612 | 0.7 | 1100 | 1.3534 | 0.9529 | | 1.3328 | 0.77 | 1200 | 1.3380 | 0.9320 | | 1.2459 | 0.83 | 1300 | 1.2984 | 0.9247 | | 1.1976 | 0.89 | 1400 | 1.2515 | 0.9252 | | 1.1593 | 0.96 | 1500 | 1.2345 | 0.9030 | | 1.1094 | 1.02 | 1600 | 1.2135 | 0.9305 | | 1.0485 | 1.09 | 1700 | 1.2045 | 0.9121 | | 0.9893 | 1.15 | 1800 | 1.1876 | 0.8990 | | 1.0099 | 1.21 | 1900 | 1.1663 | 0.8889 | | 0.982 | 1.28 | 2000 | 1.1674 | 0.8901 | | 0.9975 | 1.34 | 2100 | 1.1181 | 0.8812 | | 0.952 | 1.4 | 2200 | 1.1119 | 0.8817 | | 0.9311 | 1.47 | 2300 | 1.0786 | 0.8773 | | 0.9398 | 1.53 | 2400 | 1.1016 | 0.8720 | | 0.9148 | 1.6 | 2500 | 1.0878 | 0.8778 | | 0.9114 | 1.66 | 2600 | 1.1004 | 0.8712 | | 0.902 | 1.72 | 2700 | 1.0223 | 0.8744 | | 0.8978 | 1.79 | 2800 | 1.0616 | 0.8459 | | 0.8675 | 1.85 | 2900 | 1.0974 | 0.8643 | | 0.8373 | 1.92 | 3000 | 1.0389 | 0.8547 | | 0.8575 | 1.98 | 3100 | 1.0388 | 0.8480 | | 0.8313 | 2.04 | 3200 | 1.0001 | 0.8648 | | 0.7357 | 2.11 | 3300 | 1.0222 | 0.8705 | | 0.743 | 2.17 | 3400 | 1.0859 | 0.8765 | | 0.7306 | 2.23 | 3500 | 1.0109 | 0.8515 | | 0.7525 | 2.3 | 3600 | 0.9942 | 0.8619 | | 0.7308 | 2.36 | 3700 | 1.0004 | 0.8578 | | 0.7266 | 2.43 | 3800 | 1.0003 | 0.8497 | | 0.737 | 2.49 | 3900 | 1.0146 | 0.8505 | | 0.7202 | 2.55 | 4000 | 1.0172 | 0.8653 | | 0.6945 | 2.62 | 4100 | 0.9894 | 0.8415 | | 0.6633 | 2.68 | 4200 | 0.9894 | 0.8496 | | 0.6972 | 2.75 | 4300 | 0.9805 | 0.8505 | 
| 0.6872 | 2.81 | 4400 | 0.9939 | 0.8509 | | 0.7238 | 2.87 | 4500 | 0.9740 | 0.8532 | | 0.6847 | 2.94 | 4600 | 0.9692 | 0.8462 | | 269dc20b2aa07a507356ec89039a942e |
apache-2.0 | ['generated_from_trainer'] | false | distilbert_sa_GLUE_Experiment_logit_kd_wnli_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3439 - Accuracy: 0.5634 | e255a5ef3c3fecf9208f8b29ec857767 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3584 | 1.0 | 3 | 0.3470 | 0.5634 | | 0.3544 | 2.0 | 6 | 0.3488 | 0.4366 | | 0.348 | 3.0 | 9 | 0.3445 | 0.5634 | | 0.3513 | 4.0 | 12 | 0.3439 | 0.5634 | | 0.3477 | 5.0 | 15 | 0.3483 | 0.4507 | | 0.3494 | 6.0 | 18 | 0.3487 | 0.3099 | | 0.3493 | 7.0 | 21 | 0.3449 | 0.5634 | | 0.3472 | 8.0 | 24 | 0.3444 | 0.5634 | | 0.3484 | 9.0 | 27 | 0.3449 | 0.5634 | | 6a26f81d256f8bd0b9ccfd95baef2579 |
apache-2.0 | ['generated_from_trainer'] | false | resnet-50-finetuned-eurosat This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.7123 - Accuracy: 0.5630 | 525cff9b975ce7a943d3ef384ea61fa1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7579 | 1.0 | 190 | 1.7123 | 0.5630 | | 285f4fd99e80955e335051049f7f9874 |
apache-2.0 | ['generated_from_keras_callback'] | false | whisper3_0010 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7159 - Train Accuracy: 0.0297 - Validation Loss: 0.7918 - Validation Accuracy: 0.0300 - Epoch: 9 | ba6ec12cddf044d9773a82d3ad53d11d |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 5.0832 | 0.0116 | 4.4298 | 0.0124 | 0 | | 4.3130 | 0.0131 | 4.0733 | 0.0141 | 1 | | 3.9211 | 0.0146 | 3.6762 | 0.0157 | 2 | | 3.5505 | 0.0159 | 3.3453 | 0.0171 | 3 | | 3.1592 | 0.0175 | 2.8062 | 0.0199 | 4 | | 2.2581 | 0.0220 | 1.7622 | 0.0252 | 5 | | 1.4671 | 0.0259 | 1.2711 | 0.0276 | 6 | | 1.0779 | 0.0278 | 1.0220 | 0.0288 | 7 | | 0.8591 | 0.0290 | 0.8836 | 0.0295 | 8 | | 0.7159 | 0.0297 | 0.7918 | 0.0300 | 9 | | ccdcdd13aa13d3043e4549c472af24cb |
apache-2.0 | ['automatic-speech-recognition', 'de'] | false | exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s278 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | ebe53598adfdc11c4c609290d0a096c2 |
mit | ['generated_from_trainer'] | false | gpt2-finetuned-comp2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7788 - Precision: 0.3801 - Recall: 0.6854 - F1: 0.4800 - Accuracy: 0.4800 | f16631f7aaf4b534322c5302232fb454 |