Columns: license (string, 2–30 chars) · tags (string, 2–513 chars) · is_nc (bool, 1 class) · readme_section (string, 201–597k chars) · hash (string, 32 chars)
apache-2.0
[]
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16
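The `lr_warmup_steps: 500` entry implies a warmup phase. As a minimal sketch of how such a schedule behaves (assuming linear warmup to the base rate and a constant rate afterwards — the card does not state the post-warmup shape):

```python
def lr_at_step(step: int, base_lr: float = 1e-4, warmup_steps: int = 500) -> float:
    """Linear warmup to base_lr over warmup_steps, then constant (assumed)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(lr_at_step(250))   # halfway through warmup: half the base rate
print(lr_at_step(1000))  # past warmup: the full base rate
```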
be5cb9d712c668db6a023fb85f8ada51
cc-by-sa-4.0
['belarusian', 'bulgarian', 'macedonian', 'russian', 'serbian', 'ukrainian', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a BERT model pre-trained with Slavic-Cyrillic ([UD_Belarusian](https://universaldependencies.org/be/) [UD_Bulgarian](https://universaldependencies.org/bg/) [UD_Russian](https://universaldependencies.org/ru/) [UD_Serbian](https://universaldependencies.org/treebanks/sr_set/) [UD_Ukrainian](https://universaldependencies.org/treebanks/uk_iu/)) for POS-tagging and dependency-parsing, derived from [ruBert-base](https://huggingface.co/sberbank-ai/ruBert-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
a3cdd455c70041ff5c257086c0fe61a7
cc-by-sa-4.0
['belarusian', 'bulgarian', 'macedonian', 'russian', 'serbian', 'ukrainian', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
```
or
```py
import esupar
nlp = esupar.load("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
```
9793aa9f7ace2adf1f1cac34489856c2
apache-2.0
[]
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16
6e4904548c9b968bdddb9c0fd376f106
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'et', 'hf-asr-leaderboard']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ET dataset. It achieves the following results on the evaluation set: - Loss: 0.2278 - Wer: 0.1787
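The Wer figure above is the word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal stdlib sketch of the metric (an illustration, not the evaluation code behind this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (0 if equal)
        prev = cur
    return prev[-1] / len(ref)

print(wer("tere hommikust", "tere hommikul"))  # → 0.5 (one substitution in two words)
```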
636814af433e27233018932fd57a2aa1
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'et', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2548 | 4.24 | 500 | 0.2470 | 0.3663 |
| 0.1435 | 8.47 | 1000 | 0.2000 | 0.2791 |
| 0.1158 | 12.71 | 1500 | 0.2030 | 0.2652 |
| 0.1094 | 16.95 | 2000 | 0.2096 | 0.2605 |
| 0.1004 | 21.19 | 2500 | 0.2150 | 0.2477 |
| 0.0945 | 25.42 | 3000 | 0.2072 | 0.2369 |
| 0.0844 | 29.66 | 3500 | 0.1981 | 0.2328 |
| 0.0877 | 33.89 | 4000 | 0.2041 | 0.2425 |
| 0.0741 | 38.14 | 4500 | 0.2353 | 0.2421 |
| 0.0676 | 42.37 | 5000 | 0.2092 | 0.2213 |
| 0.0623 | 46.61 | 5500 | 0.2217 | 0.2250 |
| 0.0574 | 50.84 | 6000 | 0.2152 | 0.2179 |
| 0.0583 | 55.08 | 6500 | 0.2207 | 0.2186 |
| 0.0488 | 59.32 | 7000 | 0.2225 | 0.2159 |
| 0.0456 | 63.56 | 7500 | 0.2293 | 0.2031 |
| 0.041 | 67.79 | 8000 | 0.2277 | 0.2013 |
| 0.0379 | 72.03 | 8500 | 0.2287 | 0.1991 |
| 0.0381 | 76.27 | 9000 | 0.2233 | 0.1954 |
| 0.0308 | 80.51 | 9500 | 0.2195 | 0.1835 |
| 0.0291 | 84.74 | 10000 | 0.2266 | 0.1825 |
| 0.0266 | 88.98 | 10500 | 0.2285 | 0.1801 |
| 0.0266 | 93.22 | 11000 | 0.2292 | 0.1801 |
| 0.0262 | 97.46 | 11500 | 0.2278 | 0.1788 |
c00b2ff2ae7b6f9dce54cb39078a3863
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-issues-128 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7582
cb4d47a729aac8d2a084273b425bfaf5
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7
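With `lr_scheduler_type: linear` and no warmup listed, the learning rate presumably decays linearly from 5e-05 to zero over training. A sketch of that assumed schedule:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-5) -> float:
    """Linearly decay from base_lr at step 0 to zero at total_steps (assumed schedule)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Hypothetical step count: 7 epochs times an assumed 8 optimizer steps per epoch.
total_steps = 7 * 8
print(linear_lr(0, total_steps))            # full base rate at the start
print(linear_lr(total_steps, total_steps))  # decayed to zero at the end
```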
f87ed17f3b5d848eccf7de09233bd4da
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4041 | 1.0 | 8 | 1.8568 |
| 2.1982 | 2.0 | 16 | 2.0790 |
| 1.7184 | 3.0 | 24 | 1.9246 |
| 1.7248 | 4.0 | 32 | 1.8485 |
| 1.5016 | 5.0 | 40 | 1.8484 |
| 1.4943 | 6.0 | 48 | 1.8691 |
| 1.526 | 7.0 | 56 | 1.7582 |
64e9ceb936844eea3920ab1a441a0d5b
apache-2.0
['translation']
false
opus-mt-de-da * source languages: de * target languages: da * OPUS readme: [de-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-da/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.zip) * test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.test.txt) * test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.eval.txt)
b91cd94c300349a50bef557deb44b6bd
apache-2.0
['generated_from_trainer']
false
eval_masked_v4_cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6890 - Matthews Correlation: 0.5551
875abf014c201e679c4690a4a1fa32e8
mit
['generated_from_trainer']
false
BERiT_2000_custom_architecture_2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.9854
214ea873373bbe17a6af71324449d94e
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
3314d7728b32dfcb8a68ccb81606ae24
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 16.4316 | 0.19 | 500 | 9.0685 |
| 8.2958 | 0.39 | 1000 | 7.6483 |
| 7.4324 | 0.58 | 1500 | 7.1707 |
| 7.0054 | 0.77 | 2000 | 6.8592 |
| 6.8522 | 0.97 | 2500 | 6.7710 |
| 6.7538 | 1.16 | 3000 | 6.5845 |
| 6.634 | 1.36 | 3500 | 6.4525 |
| 6.5784 | 1.55 | 4000 | 6.3129 |
| 6.5135 | 1.74 | 4500 | 6.3312 |
| 6.4552 | 1.94 | 5000 | 6.2546 |
| 6.4685 | 2.13 | 5500 | 6.2857 |
| 6.4356 | 2.32 | 6000 | 6.2285 |
| 6.3566 | 2.52 | 6500 | 6.2295 |
| 6.394 | 2.71 | 7000 | 6.1790 |
| 6.3412 | 2.9 | 7500 | 6.1880 |
| 6.3115 | 3.1 | 8000 | 6.2130 |
| 6.3163 | 3.29 | 8500 | 6.1831 |
| 6.2978 | 3.49 | 9000 | 6.1945 |
| 6.3082 | 3.68 | 9500 | 6.1485 |
| 6.2729 | 3.87 | 10000 | 6.1752 |
| 6.307 | 4.07 | 10500 | 6.1331 |
| 6.2494 | 4.26 | 11000 | 6.1082 |
| 6.2523 | 4.45 | 11500 | 6.2110 |
| 6.2455 | 4.65 | 12000 | 6.1326 |
| 6.2399 | 4.84 | 12500 | 6.1779 |
| 6.2297 | 5.03 | 13000 | 6.1587 |
| 6.2374 | 5.23 | 13500 | 6.1458 |
| 6.2265 | 5.42 | 14000 | 6.1370 |
| 6.2222 | 5.62 | 14500 | 6.1511 |
| 6.2209 | 5.81 | 15000 | 6.1320 |
| 6.2146 | 6.0 | 15500 | 6.1124 |
| 6.214 | 6.2 | 16000 | 6.1439 |
| 6.1907 | 6.39 | 16500 | 6.0981 |
| 6.2119 | 6.58 | 17000 | 6.1465 |
| 6.1858 | 6.78 | 17500 | 6.1594 |
| 6.1552 | 6.97 | 18000 | 6.0742 |
| 6.1926 | 7.16 | 18500 | 6.1176 |
| 6.1813 | 7.36 | 19000 | 6.0107 |
| 6.1812 | 7.55 | 19500 | 6.0852 |
| 6.1852 | 7.75 | 20000 | 6.0845 |
| 6.1945 | 7.94 | 20500 | 6.1260 |
| 6.1542 | 8.13 | 21000 | 6.1032 |
| 6.1685 | 8.33 | 21500 | 6.0650 |
| 6.1619 | 8.52 | 22000 | 6.1028 |
| 6.1279 | 8.71 | 22500 | 6.1269 |
| 6.1575 | 8.91 | 23000 | 6.0793 |
| 6.1401 | 9.1 | 23500 | 6.1479 |
| 6.159 | 9.3 | 24000 | 6.0319 |
| 6.1227 | 9.49 | 24500 | 6.0677 |
| 6.1201 | 9.68 | 25000 | 6.0527 |
| 6.1473 | 9.88 | 25500 | 6.1305 |
| 6.1539 | 10.07 | 26000 | 6.1079 |
| 6.091 | 10.26 | 26500 | 6.1219 |
| 6.1015 | 10.46 | 27000 | 6.1317 |
| 6.1048 | 10.65 | 27500 | 6.1149 |
| 6.0955 | 10.84 | 28000 | 6.1216 |
| 6.129 | 11.04 | 28500 | 6.0427 |
| 6.1007 | 11.23 | 29000 | 6.1289 |
| 6.1266 | 11.43 | 29500 | 6.0564 |
| 6.1203 | 11.62 | 30000 | 6.1143 |
| 6.1038 | 11.81 | 30500 | 6.0957 |
| 6.0989 | 12.01 | 31000 | 6.0707 |
| 6.0571 | 12.2 | 31500 | 6.0013 |
| 6.1017 | 12.39 | 32000 | 6.1356 |
| 6.0649 | 12.59 | 32500 | 6.0981 |
| 6.0704 | 12.78 | 33000 | 6.0588 |
| 6.088 | 12.97 | 33500 | 6.0796 |
| 6.1112 | 13.17 | 34000 | 6.0809 |
| 6.0888 | 13.36 | 34500 | 6.0776 |
| 6.0482 | 13.56 | 35000 | 6.0710 |
| 6.0588 | 13.75 | 35500 | 6.0877 |
| 6.0517 | 13.94 | 36000 | 6.0650 |
| 6.0832 | 14.14 | 36500 | 5.9890 |
| 6.0655 | 14.33 | 37000 | 6.0445 |
| 6.0705 | 14.52 | 37500 | 6.0037 |
| 6.0789 | 14.72 | 38000 | 6.0777 |
| 6.0645 | 14.91 | 38500 | 6.0475 |
| 6.0347 | 15.1 | 39000 | 6.1148 |
| 6.0478 | 15.3 | 39500 | 6.0639 |
| 6.0638 | 15.49 | 40000 | 6.0373 |
| 6.0377 | 15.69 | 40500 | 6.0116 |
| 6.0402 | 15.88 | 41000 | 6.0483 |
| 6.0382 | 16.07 | 41500 | 6.1025 |
| 6.039 | 16.27 | 42000 | 6.0488 |
| 6.0232 | 16.46 | 42500 | 6.0219 |
| 5.9946 | 16.65 | 43000 | 6.0541 |
| 6.063 | 16.85 | 43500 | 6.0436 |
| 6.0141 | 17.04 | 44000 | 6.0609 |
| 6.0196 | 17.23 | 44500 | 6.0551 |
| 6.0331 | 17.43 | 45000 | 6.0576 |
| 6.0174 | 17.62 | 45500 | 6.0498 |
| 6.0366 | 17.82 | 46000 | 6.0782 |
| 6.0299 | 18.01 | 46500 | 6.0196 |
| 6.0009 | 18.2 | 47000 | 6.0262 |
| 5.9758 | 18.4 | 47500 | 6.0824 |
| 6.0285 | 18.59 | 48000 | 6.0799 |
| 6.025 | 18.78 | 48500 | 5.9511 |
| 5.9806 | 18.98 | 49000 | 6.0086 |
| 5.9915 | 19.17 | 49500 | 6.0089 |
| 5.9957 | 19.36 | 50000 | 6.0330 |
| 6.0311 | 19.56 | 50500 | 6.0083 |
| 5.995 | 19.75 | 51000 | 6.0394 |
| 6.0034 | 19.95 | 51500 | 5.9854 |
5881edbb8e015ef3819740d025e59754
apache-2.0
['translation']
false
tgl-eng * source group: Tagalog * target group: English * OPUS readme: [tgl-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md) * model: transformer-align * source language(s): tgl_Latn * target language(s): eng * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.eval.txt)
213e355bdea4bef9c2c264046c0e9bed
apache-2.0
['translation']
false
System Info: - hf_name: tgl-eng - source_languages: tgl - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tl', 'en'] - src_constituents: {'tgl_Latn'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt - src_alpha3: tgl - tgt_alpha3: eng - short_pair: tl-en - chrF2_score: 0.542 - bleu: 35.0 - brevity_penalty: 0.975 - ref_len: 18168.0 - src_name: Tagalog - tgt_name: English - train_date: 2020-06-17 - src_alpha2: tl - tgt_alpha2: en - prefer_old: False - long_pair: tgl-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
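The `brevity_penalty: 0.975` and `ref_len: 18168.0` fields follow the standard BLEU definition, BP = exp(1 - ref_len/hyp_len) when the hypothesis is shorter than the reference. A small sketch inverting that relation to recover the implied hypothesis length (an illustration, not part of the card):

```python
import math

def brevity_penalty(ref_len: float, hyp_len: float) -> float:
    """Standard BLEU brevity penalty: 1 if the hypothesis is at least as long as the reference."""
    return 1.0 if hyp_len >= ref_len else math.exp(1.0 - ref_len / hyp_len)

ref_len = 18168.0
# Invert BP = exp(1 - ref/hyp)  =>  hyp = ref / (1 - ln(BP))
hyp_len = ref_len / (1.0 - math.log(0.975))
print(round(hyp_len))  # → 17719, the implied total hypothesis length in words
```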
d84701e38c561290a187b9e9a1537e97
creativeml-openrail-m
[]
false
Basic explanation Token and Class words are what guide the AI to produce images similar to the trained style/object/character. Include any mix of these words in the prompt to produce varying results, or exclude them to have a less pronounced effect. There is usually at least a slight stylistic effect even without the words, but it is recommended to include at least one. Adding the token word/phrase and class word/phrase at the start of the prompt, in that order, produces results most similar to the trained concept, but they can be included elsewhere as well. Some models produce better results when not including all token/class words. 3k models are more flexible, while 5k models produce images closer to the trained concept. I recommend 2k/3k models for normal use, and 5k/6k models for model merging and use without token/class words. However, it can also be very prompt specific. I highly recommend self-experimentation.
81bc85100665910aef879726caff8517
creativeml-openrail-m
[]
false
Comparison Aeolian and aeolian_3000 are quite similar, with slight differences. The epoch 5 and 6 versions came from earlier in the waifu diffusion 1.3 training process, so it is easier to produce more varied, non-anime results with them.
734133f32c58eb3133e0b6702b6f6264
creativeml-openrail-m
[]
false
License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
429e70e73adce57d14bd3e162889a748
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
fastbooth-jsjessy-1200 Dreambooth model trained by eicu with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Sample pictures of this concept:
92f047fb88ce375718c68cae2dc65c1c
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_unispeech-sat_s655 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
3981a3261bd1f67d0da66c18fdab0d8e
apache-2.0
['automatic-speech-recognition', 'ru']
false
exp_w2v2t_ru_vp-es_s664 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
848f02a4dc31814b114d1b49214affd4
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0637 - Precision: 0.9335 - Recall: 0.9500 - F1: 0.9417 - Accuracy: 0.9862
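The F1 above is the harmonic mean of the reported precision and recall, so the three figures can be cross-checked in a couple of lines:

```python
precision, recall = 0.9335, 0.9500  # values reported on the evaluation set
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.9417, matching the card
```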
26ba22cec7664bff426de7c7e6b18a14
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0888 | 1.0 | 1756 | 0.0636 | 0.9195 | 0.9366 | 0.9280 | 0.9830 |
| 0.0331 | 2.0 | 3512 | 0.0667 | 0.9272 | 0.9490 | 0.9380 | 0.9855 |
| 0.0167 | 3.0 | 5268 | 0.0637 | 0.9335 | 0.9500 | 0.9417 | 0.9862 |
30c2ffbb12024b8927ebf9000d2c1d81
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-pytorch-test This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1006 - Rouge1: 22.0585 - Rouge2: 9.4908 - Rougel: 18.3044 - Rougelsum: 20.9764 - Gen Len: 19.0
b6053e2db851902aaf55996763ca90d2
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 15 | 2.1859 | 21.551 | 8.7109 | 18.07 | 20.2469 | 19.0 |
| No log | 2.0 | 30 | 2.1194 | 22.348 | 9.6498 | 18.7701 | 21.1714 | 19.0 |
| No log | 3.0 | 45 | 2.1006 | 22.0585 | 9.4908 | 18.3044 | 20.9764 | 19.0 |
f7f2fe4b33c7a439d073c2b5acd7168a
mit
['generated_from_trainer']
false
donut-base-mysterybox This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0075
d78e22beb01b07f72068e7534f65fab4
apache-2.0
['generated_from_trainer']
false
test-mlm This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6481
04d0672315e640b67dd0f4fc5ed5e875
apache-2.0
['pytorch', 'causal-lm', 'pythia']
false
The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/EleutherAI). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models match or exceed the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were re-named in January 2023. For clarity, a <a href="
06169de4ef4c1817fbce24e39488ece8
apache-2.0
['pytorch', 'causal-lm', 'pythia']
false
Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. To enable the study of how language models change over the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-160M for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-160M as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
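As a sketch of what "143 evenly spaced checkpoints" could correspond to (assuming, hypothetically, branches named after training steps 1000 through 143000 in increments of 1000 — verify against the repository's actual branch names before relying on this):

```python
# Hypothetical enumeration of checkpoint revisions one could pass to
# from_pretrained(..., revision=...). The naming scheme is an assumption.
steps = range(1000, 143001, 1000)
revisions = [str(s) for s in steps]
print(len(revisions))  # → 143 evenly spaced checkpoints
print(revisions[-1])   # → "143000", which the card says mirrors `main`
```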
ac993afe2a3e3e0ba5f2b5dd006c19d9
apache-2.0
['pytorch', 'causal-lm', 'pythia']
false
Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product, and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-160M has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-160M will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions.
58b0624d26b6ae60472c7f0cba2db298
apache-2.0
['pytorch', 'causal-lm', 'pythia']
false
Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-160M to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-160M may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-160M.
6a76c7a17e3f45ff99d3f77051220f49
apache-2.0
['pytorch', 'causal-lm', 'pythia']
false
Training data [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/). The Pile was **not** deduplicated before being used to train Pythia-160M.
f21e3c0cbd0608593339f85a2badd107
apache-2.0
['pytorch', 'causal-lm', 'pythia']
false
Naming convention and parameter count *Pythia* models were re-named in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em">

| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |

</figure>
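The gap between total and non-embedding parameters in the table is consistent with two untied embedding matrices (input and output) of shape vocab × hidden. For the 160M row, assuming the commonly cited GPT-NeoX config values of vocab 50304 and hidden size 768 (assumptions not stated in this excerpt):

```python
vocab_size, hidden_size = 50304, 768   # assumed config values, not from this card
total_params = 162_322_944             # 160M row of the table
embedding_params = 2 * vocab_size * hidden_size  # untied input + output embeddings
print(total_params - embedding_params)  # → 85056000, the table's non-embedding count
```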
77f2ecbd3fdd5942dac1ee353cfec5f7
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-EU Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Basque using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
d82a70558501d91887666279cad49301
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "eu", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
e1d1426e399bc01175ba148ecbef8c9d
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
5a72fb7970f030032c371322a822eb7b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the Basque test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")

model_name = "pcuenq/wav2vec2-large-xlsr-53-eu"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.to("cuda")
```
07a287aa702cae6f0fb925933f7c9a5b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Text pre-processing
```python
chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
chars_to_ignore_pattern = re.compile(chars_to_ignore_regex)

def remove_special_characters(batch):
    batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " "
    return batch
```
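The cleaning step above strips punctuation, lower-cases the sentence, and appends a trailing space. Re-stated here to act on a plain string rather than a dataset batch (the simplified character class below is an assumption meant to be equivalent, for illustration only):

```python
import re

# Simplified stand-in for the card's chars_to_ignore_regex (assumed equivalent).
chars_to_ignore_pattern = re.compile('[,¿?.¡!;:"“%‘”…’ː\'‹›`´®—→-]')

def clean_sentence(sentence: str) -> str:
    """Strip ignored punctuation, lower-case, and append a trailing space."""
    return chars_to_ignore_pattern.sub('', sentence).lower() + " "

print(repr(clean_sentence("¡Kaixo, mundua!")))  # → 'kaixo mundua '
```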
dd227cfcbbba92d4ec1e2bb9720a510b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Audio pre-processing
```python
import librosa

def speech_file_to_array_fn(batch):
    speech_array, sample_rate = torchaudio.load(batch["path"])
    batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000)
    return batch
```
fa07f4432012f6041608bb918deb68f5
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# Number of CPUs or None
num_proc = 16
test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
```
6d938609c0829f2709ede470c8789b7c
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training The Common Voice `train` and `validation` datasets were used for training. Training was performed for 22 + 20 epochs with the following parameters: - Batch size 16, 2 gradient accumulation steps. - Learning rate: 2.5e-4 - Activation dropout: 0.05 - Attention dropout: 0.1 - Hidden dropout: 0.05 - Feature proj. dropout: 0.05 - Mask time probability: 0.08 - Layer dropout: 0.05
d22d753108754c009e9627035d04ef4b
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
Elldreth's OG 4060 mix: Source(s): [CivitAI](https://civitai.com/models/1259/elldreths-og-4060-mix) This mixed model is a combination of my all-time favorites: a genuinely simple mix of a very popular anime model and Zeipher's powerful, fantastic f222. What's it good at? Realistic portraits, stylized characters, landscapes, fantasy, sci-fi, anime, and horror. It's an all-around, easy-to-prompt, general-purpose, semi-realistic to realistic model that cranks out some really nice images. No trigger words required. All models were scanned prior to mixing and are totally safe.
dd7a7e9cc211c942d30ada25fb00bc6d
apache-2.0
['generated_from_trainer']
false
language-detection-Bert-base-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2231 - Accuracy: 0.9512
7b21b482df78e301262acd97f60c989e
cc-by-4.0
['question generation', 'answer extraction']
false
Model Card of `lmqg/mt5-small-itquad-qg-ae` This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question generation and answer extraction jointly on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
4e0d0ae6dc2f3215ff2b0c4b56eec478
cc-by-4.0
['question generation', 'answer extraction']
false
Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** it - **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
946e7b58ee3241edac2b6cd3a1a46cb3
cc-by-4.0
['question generation', 'answer extraction']
false
```python
# model prediction
question_answer_pairs = model.generate_qa("Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
```
- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-small-itquad-qg-ae")
```
3dd581b220d128e861f57c004a185a2a
cc-by-4.0
['question generation', 'answer extraction']
false
```python
# question generation
question = pipe("extract answers: <hl> Il 6 ottobre 1973 , la Siria e l' Egitto, con il sostegno di altre nazioni arabe, lanciarono un attacco a sorpresa su Israele, su Yom Kippur. <hl> Questo rinnovo delle ostilità nel conflitto arabo-israeliano ha liberato la pressione economica sottostante sui prezzi del petrolio. All' epoca, l' Iran era il secondo esportatore mondiale di petrolio e un vicino alleato degli Stati Uniti. Settimane più tardi, lo scià d' Iran ha detto in un' intervista: Naturalmente[il prezzo del petrolio] sta andando a salire Certamente! E come! Avete[Paesi occidentali] aumentato il prezzo del grano che ci vendete del 300 per cento, e lo stesso per zucchero e cemento.")
```
8a0090c2c172955944da4fef1284f9a7
cc-by-4.0
['question generation', 'answer extraction']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json)

|            |   Score | Type    | Dataset                                                          |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore  |   80.61 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_1     |   22.53 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_2     |   14.75 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_3     |   10.19 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_4     |    7.25 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| METEOR     |   17.5  | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| MoverScore |   56.63 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| ROUGE_L    |   21.84 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   81.81 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedF1Score (MoverScore)   |   56.02 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedPrecision (BERTScore)  |   81.17 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedPrecision (MoverScore) |   55.76 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedRecall (BERTScore)     |   82.51 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedRecall (MoverScore)    |   56.32 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |

- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_itquad.default.json)

|                  |   Score | Type    | Dataset                                                          |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch |   57.85 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| AnswerF1Score    |   72.09 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| BERTScore        |   90.24 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_1           |   39.33 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_2           |   33.64 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_3           |   29.59 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_4           |   26.01 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| METEOR           |   42.68 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| MoverScore       |   81.17 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| ROUGE_L          |   45.15 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
ced374fefcafad5e75808c3a0291f5ff
cc-by-4.0
['question generation', 'answer extraction']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_itquad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: google/mt5-small - max_length: 512 - max_length_output: 32 - epoch: 13 - batch: 16 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-itquad-qg-ae/raw/main/trainer_config.json).
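The `label_smoothing: 0.15` entry above replaces the one-hot training target with a softened distribution. A minimal sketch of the idea (conventions vary — some implementations spread the epsilon mass over all classes including the correct one):

```python
def smooth_labels(target_index, vocab_size, epsilon=0.15):
    """Label smoothing: the gold token keeps 1 - epsilon of the
    probability mass; the rest is spread uniformly over the other
    vocab_size - 1 entries."""
    off_value = epsilon / (vocab_size - 1)
    dist = [off_value] * vocab_size
    dist[target_index] = 1.0 - epsilon
    return dist

dist = smooth_labels(target_index=2, vocab_size=5)
```

With epsilon = 0.15 the gold token gets probability 0.85, which discourages the model from becoming over-confident on the training targets.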
872dcfc7fa74ae2295e90a1fa6a65d23
apache-2.0
['generated_from_trainer']
false
swin-tiny-finetuned-cifar100 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar100 dataset. It achieves the following results on the evaluation set: - Loss: 0.4223 - Accuracy: 0.8735
f6eb10126e030cddf0b50eb6549d47bf
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 (with early stopping)
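The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` ramps the learning rate up over the first 10% of steps and then decays it linearly to zero. A rough sketch of that shape (the step bookkeeping here is illustrative, not the exact `transformers` implementation):

```python
def linear_lr_with_warmup(step, total_steps, base_lr=4e-05, warmup_ratio=0.1):
    """Linear warmup over the first warmup_ratio of training,
    then linear decay from base_lr down to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, remaining))

# in a toy 100-step run, the peak LR (4e-05) is hit at step 10
lrs = [linear_lr_with_warmup(s, total_steps=100) for s in range(101)]
```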
e431018421b4dcdd710f2f53780568bf
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 0.6439 | 1.0 | 781 | 0.8138 | 0.6126 | | 0.6222 | 2.0 | 1562 | 0.8393 | 0.5094 | | 0.2912 | 3.0 | 2343 | 0.861 | 0.4452 | | 0.2234 | 4.0 | 3124 | 0.8679 | 0.4330 | | 0.121 | 5.0 | 3905 | 0.8735 | 0.4223 | | 0.2589 | 6.0 | 4686 | 0.8622 | 0.4775 | | 0.1419 | 7.0 | 5467 | 0.8642 | 0.4900 | | 0.1513 | 8.0 | 6248 | 0.8667 | 0.4956 |
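The "(with early stopping)" note in the hyperparameters is visible in this table: the best validation loss (0.4223) arrives at epoch 5 and training halts at epoch 8. A sketch of the usual patience-based rule — the patience value of 3 is an assumption (the card does not state it), chosen because it reproduces the stopping point above:

```python
val_losses = [0.6126, 0.5094, 0.4452, 0.4330, 0.4223, 0.4775, 0.4900, 0.4956]

def early_stop_epoch(losses, patience=3):
    """Stop once the validation loss has failed to improve for
    `patience` consecutive epochs; return the 1-indexed halt epoch."""
    best, since_best = float("inf"), 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                return epoch
    return len(losses)

stop = early_stop_epoch(val_losses)
```

Note that the checkpoint reported at the top of the card (loss 0.4223, accuracy 0.8735) is the epoch-5 best, not the last epoch trained.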
9e125bc0f517854540f1b8af5784c6ed
apache-2.0
[]
false
Model description **CAMeLBERT-Mix POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model. For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
fc4da3d183dc750b61a272eacd9f3f91
apache-2.0
[]
false
How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf') >>> text = 'شلونك ؟ شخبارك ؟' >>> pos(text) [{'entity': 'pron_interrog', 'score': 0.82657206, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.9771731, 'index': 2, 'word': '
716ac63363e92753e10719d8f3391667
apache-2.0
[]
false
ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999568, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9977217, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.99993783, 'index': 5, 'word': '
c7ed0a954244a7352c2cd6dc5e463bf1
apache-2.0
[]
false
ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999575, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
79f893226bdf657297090a903c8e5476
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0625 - Precision: 0.9267 - Recall: 0.9359 - F1: 0.9313 - Accuracy: 0.9836
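The F1 above is, as usual, the harmonic mean of the reported precision and recall — a quick sanity check:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.9267, 0.9359)
```

This recovers the reported 0.9313 to four decimal places.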
bfa096e79d58e8becdf5d7d5cf6a5a8f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2395 | 1.0 | 878 | 0.0709 | 0.9148 | 0.9186 | 0.9167 | 0.9809 | | 0.0538 | 2.0 | 1756 | 0.0628 | 0.9228 | 0.9332 | 0.9280 | 0.9828 | | 0.03 | 3.0 | 2634 | 0.0625 | 0.9267 | 0.9359 | 0.9313 | 0.9836 |
4a3451f83018488adfe3de9d22656a7f
creativeml-openrail-m
[]
false
A Dreambooth model created with the sole purpose of generating the rarest and dankest pepes. StableDiffusion 1.5 was used as a base for this model. 22 instance images, 400 class images, 2.2k steps at a 1.3e-6 learning rate. Use the phrase 'pepestyle person' <img src="https://huggingface.co/SpiteAnon/Pepestyle/resolve/main/pepestylev2.png" alt="pepestylev2" width="400"/> <img src="https://huggingface.co/SpiteAnon/Pepestyle/resolve/main/pepestylev2-drawing.png" alt="pepestylev2-drawing" width="400"/> <img src="https://huggingface.co/SpiteAnon/Pepestyle/resolve/main/pepestylev2-suit-hat.png" alt="pepestylev2-suit" width="400"/>
6c9594e950f27c0f106a141739eeb86d
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-sdg This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OSDG dataset. It achieves the following results on the evaluation set: - Loss: 0.3094 - Acc: 0.9195
f21a645d02e0a88817305efd2fda6129
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
61515a716c6dec5407db0af8dfd4f444
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Acc | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3768 | 1.0 | 269 | 0.3758 | 0.8933 | | 0.2261 | 2.0 | 538 | 0.3088 | 0.9095 | | 0.1038 | 3.0 | 807 | 0.3094 | 0.9195 |
79f5755e5bd670f59f1c7c2a7226af81
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
**EpicSpaceMachine** This is the fine-tuned Stable Diffusion model trained on epic pictures of space ships and space stations. Use the tokens **_EpicSpaceMachine_** in your prompts for the effect. It generates OK, spaceships and space stations, including via img2img, but produces awesome images when given prompts that generate complex mechanical shapes such as the internals of car engines. **Examples rendered with the model:** Prompt: Photo of a futuristic space liner, 4K, award winning in EpicSpaceMachine style ![Photo of a futuristic space liner, 4K, award winning in EpicSpaceMachine style](./Photo%20of%20a%20futuristic%20space%20liner%2C%204K%2C%20award%20winning%20in%20EpicSpaceMachine%20style.jpg) Prompt: Photo of a GPU , 4K, close up in EpicSpaceMachine style ![Photo of a GPU , 4K, close up in EpicSpaceMachine style.jpg](./Photo%20of%20a%20GPU%20%2C%204K%2C%20close%20up%20%20in%20EpicSpaceMachine%20style.jpg) Propmt: Engine of a F1 race car, close up, 8K, in EpicSpaceMachine style ![Engine of a F1 race car, close up, 8K, in EpicSpaceMachine style](./Engine%20of%20a%20F1%20race%20car%2C%20close%20up%2C%208K%2C%20%20in%20EpicSpaceMachine%20style.jpg) Prompt: A pile of paper clips, close up, 8K, in EpicSpaceMachine style ![A pile of paper clips, close up, 8K, in EpicSpaceMachine style](./A%20pile%20of%20paper%20clips%2C%20close%20up%2C%208K%2C%20%20in%20EpicSpaceMachine%20style.jpg) Prompt: A photo of the insides of a mechanical watch, close up, 8K, in EpicSpaceMachine style ![A photo of the insides of a mechanical watch, close up, 8K, in EpicSpaceMachine style](./A%20photo%20of%20the%20insides%20of%20a%20mechanical%20watch%2C%20close%20up%2C%208K%2C%20%20in%20EpicSpaceMachine%20style.jpg) Prompt: Photo of a mother board, close up, 4K in EpicSpaceMachine style ![Photo of a mother board, close up, 4K in EpicSpaceMachine style](./Photo%20of%20a%20mother%20board%2C%20close%20up%2C%204K%20%20in%20EpicSpaceMachine%20style.jpg) Prompt: Photo of a large excavator 
engine in EpicSpaceMachine style ![Photo of a large excavator engine in EpicSpaceMachine style](./Photo%20of%20a%20large%20excavator%20engine%20%20in%20EpicSpaceMachine%20style.jpg) Prompt: Photo of A10 Warthog, 4K, award winning in EpicSpaceMachine style ![Photo of A10 Warthog, 4K, award winning in EpicSpaceMachine style](./Photo%20of%20A10%20Warthog%2C%204K%2C%20award%20winning%20in%20EpicSpaceMachine%20style.jpg) Prompt: A photo of a tangle of wires, close up, 8K, in EpicSpaceMachine style ![A photo of a tangle of wires, close up, 8K, in EpicSpaceMachine style](./A%20photo%20of%20a%20tangle%20of%20wires%2C%20close%20up%2C%208K%2C%20%20in%20EpicSpaceMachine%20style.jpg)
65c52d213837f5bc1b016372fd3f87c0
gpl-3.0
['bicleaner-ai']
false
Bicleaner AI full model for en-fr Bicleaner AI is a tool that aims at detecting noisy sentence pairs in a parallel corpus. It indicates the likelihood of a pair of sentences being mutual translations (with a value near 1) or not (with a value near 0). Sentence pairs considered very noisy are scored with 0. See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
60f3312e6e42632a14e7d17686a9b4e5
gpl-3.0
['object-detection', 'computer-vision', 'vision', 'yolo', 'yolov5']
false
Save the results into the "results/" folder:
```python
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --img 640 --batch 16 --weights kadirnar/deprem_model_v1 --epochs 10 --device cuda:0
```
c02ed1efc1ae94d71baeec97ad8e11ff
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-cnndm2-wikihow2 This model is a fine-tuned version of [Chikashi/t5-small-finetuned-cnndm2-wikihow1](https://huggingface.co/Chikashi/t5-small-finetuned-cnndm2-wikihow1) on the wikihow dataset. It achieves the following results on the evaluation set: - Loss: 2.3311 - Rouge1: 27.0962 - Rouge2: 10.3575 - Rougel: 23.1099 - Rougelsum: 26.4664 - Gen Len: 18.5197
2e7757b5b6c4c942c85efd9e5d0d8f07
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.517 | 1.0 | 39313 | 2.3311 | 27.0962 | 10.3575 | 23.1099 | 26.4664 | 18.5197 |
51063e988132fd48ffd34f2084cfbf3b
['cc0-1.0']
['computer-vision', 'image-classification']
false
Image Classification using MobileViT This repo contains the model and the notebook for [this Keras example on MobileViT](https://keras.io/examples/vision/mobilevit/). Full credits to: [Sayak Paul](https://twitter.com/RisingSayak)
9291e622ea293c6bedbb1cec5b082b81
['cc0-1.0']
['computer-vision', 'image-classification']
false
Background Information The MobileViT architecture (Mehta et al.) combines the benefits of Transformers (Vaswani et al.) and convolutions. With Transformers, we can capture long-range dependencies that result in global representations. With convolutions, we can capture spatial relationships that model locality. Besides combining the properties of Transformers and convolutions, the authors introduce MobileViT as a general-purpose mobile-friendly backbone for different image recognition tasks. Their findings suggest that, performance-wise, MobileViT is better than other models with the same or higher complexity (MobileNetV3, for example), while being efficient on mobile devices.
8c89e1fb3215eeaca17a8fb9430621ec
mit
[]
false
Model description This is a RoBERTa-base model pre-trained on Indonesian Wikipedia using a masked language modeling (MLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several language models that have been pre-trained on Indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc.) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers)
b6c35fd9fb6c7ac063e4badc5744794b
mit
[]
false
How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='cahya/roberta-base-indonesian-522M') >>> unmasker("Ibu ku sedang bekerja <mask> supermarket") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel model_name='cahya/roberta-base-indonesian-522M' tokenizer = RobertaTokenizer.from_pretrained(model_name) model = RobertaModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import RobertaTokenizer, TFRobertaModel model_name='cahya/roberta-base-indonesian-522M' tokenizer = RobertaTokenizer.from_pretrained(model_name) model = TFRobertaModel.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
f7187eaa87ea753c6ec87c2a48feadf0
mit
[]
false
Training data This model was pre-trained on 522MB of Indonesian Wikipedia. The texts are lowercased and tokenized using WordPiece with a vocabulary size of 32,000. The inputs of the model are then of the form: ```<s> Sentence A </s> Sentence B </s>```
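WordPiece tokenization mentioned above works by greedy longest-match-first segmentation against the learned vocabulary. A toy sketch — the five-entry vocabulary here is invented for illustration and is of course not the model's actual 32,000-entry vocabulary:

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first WordPiece: repeatedly take the longest
    vocabulary entry that matches a prefix of the remaining string;
    non-initial pieces carry the '##' continuation marker."""
    pieces, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:          # no vocab entry covers this span
            return ["[UNK]"]
        pieces.append(piece)
        start = end
    return pieces

toy_vocab = {"ibu", "bekerja", "super", "##market", "##ku"}
pieces = wordpiece_tokenize("supermarket", toy_vocab)
```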
22e62564e882f39437c4f58f02398322
mit
['bio', 'infrastructure', 'funding', 'natural language processing', 'BERT']
false
Biodata Resource Inventory This repository holds the fine-tuned models used in the biodata resource inventory conducted in 2022 by the [Global Biodata Coalition](https://globalbiodata.org/) in collaboration with [Chan Zuckerberg Initiative](https://chanzuckerberg.com/).
dbbdfbab6875f6df2c6a45cf01f7fc8d
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_stsb_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 2.2586 - Pearson: -0.0814 - Spearmanr: -0.0816 - Combined Score: -0.0815
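For the GLUE STSB regression task, the combined score reported above is simply the arithmetic mean of the Pearson and Spearman correlations. A small sketch (the Pearson implementation is a textbook one, included only to show what is being averaged):

```python
def pearson(x, y):
    """Pearson correlation: covariance normalized by the two
    standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def combined_score(pearson_r, spearman_r):
    """The STSB 'combined score' is the mean of the two correlations."""
    return (pearson_r + spearman_r) / 2

score = combined_score(-0.0814, -0.0816)
```

Averaging the two reported correlations reproduces the -0.0815 above.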
60c0f8661b78f56553c1f9ed8658b5e1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 6.966 | 1.0 | 23 | 4.0539 | -0.0244 | -0.0244 | -0.0244 | | 4.4237 | 2.0 | 46 | 3.1176 | -0.0508 | -0.0503 | -0.0505 | | 3.3768 | 3.0 | 69 | 2.5232 | -0.1303 | -0.1323 | -0.1313 | | 2.6486 | 4.0 | 92 | 2.2586 | -0.0814 | -0.0816 | -0.0815 | | 2.2539 | 5.0 | 115 | 2.3547 | 0.0512 | 0.0505 | 0.0508 | | 2.1692 | 6.0 | 138 | 2.3367 | 0.0642 | 0.0568 | 0.0605 | | 2.1268 | 7.0 | 161 | 2.4285 | 0.0444 | 0.0649 | 0.0546 | | 1.9924 | 8.0 | 184 | 2.6031 | 0.0781 | 0.0846 | 0.0814 | | 1.8254 | 9.0 | 207 | 2.6306 | 0.1155 | 0.1187 | 0.1171 |
ec2e276643230cc071cb5b8015be0966
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1709 - Accuracy: 0.9305 - F1: 0.9306
0d5d461700d4960f7fa5a365f42a2b96
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1755 | 1.0 | 250 | 0.1831 | 0.925 | 0.9249 | | 0.1118 | 2.0 | 500 | 0.1709 | 0.9305 | 0.9306 |
3c17b89f3078a596a2c583e51e22c5c9
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-wandb-week-3-complaints-classifier-256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the consumer-finance-complaints dataset. It achieves the following results on the evaluation set: - Loss: 0.5453 - Accuracy: 0.8235 - F1: 0.8176 - Recall: 0.8235 - Precision: 0.8171
bf686c42a6ebf24cdca20af424d3db02
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.097565552226687e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 256 - num_epochs: 2 - mixed_precision_training: Native AMP
d395964f4a89cbd5d0b10b3c340a395c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.6691 | 0.61 | 1500 | 0.6475 | 0.7962 | 0.7818 | 0.7962 | 0.7875 | | 0.5361 | 1.22 | 3000 | 0.5794 | 0.8161 | 0.8080 | 0.8161 | 0.8112 | | 0.4659 | 1.83 | 4500 | 0.5453 | 0.8235 | 0.8176 | 0.8235 | 0.8171 |
5b6bfa1ed79170b62a47cd800d8141db
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-emotions-augmented This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6063 - Accuracy: 0.7789 - F1: 0.7770
b4b9f53974b0de9924e47b93944ccfb3
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8
b2832ce494646efc1f5a49717fe6e8e7
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.855 | 1.0 | 819 | 0.6448 | 0.7646 | 0.7606 | | 0.5919 | 2.0 | 1638 | 0.6067 | 0.7745 | 0.7730 | | 0.5077 | 3.0 | 2457 | 0.6063 | 0.7789 | 0.7770 | | 0.4364 | 4.0 | 3276 | 0.6342 | 0.7725 | 0.7687 | | 0.3698 | 5.0 | 4095 | 0.6832 | 0.7693 | 0.7686 | | 0.3153 | 6.0 | 4914 | 0.7364 | 0.7636 | 0.7596 | | 0.2723 | 7.0 | 5733 | 0.7578 | 0.7661 | 0.7648 | | 0.2429 | 8.0 | 6552 | 0.7816 | 0.7623 | 0.7599 |
bb008d9ee1d056ed26937eb6303c7631
openrail
['nsfw', 'stable diffusion']
false
PoV Skin Textures - Dreamlike r34 [pov-skin-texture-dreamlike-r34](https://civitai.com/models/4481/pov-skin-texture-dreamlike-r34) This version has vae-ft-mse-840000-ema-pruned.ckpt baked in. Due to using Dreamlike Diffusion 1.0, this model has the following license: License This model is licensed under a modified CreativeML OpenRAIL-M license. - You can't host or use the model or its derivatives on websites/apps/etc., from which you earn, will earn, or plan to earn revenue or donations. If you want to, please email us at contact@dreamlike.art - You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0) - You are free to host the model or its derivatives on completely non-commercial websites/apps/etc (Meaning you are not getting ANY revenue or donations). Please state the full model name (Dreamlike Diffusion 1.0) and include a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0) - You are free to use the outputs of the model or the outputs of the model's derivatives for commercial purposes in teams of 10 or less - You can't use the model to deliberately produce nor share illegal or harmful outputs or content - The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license - You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0/blob/main/LICENSE.md
c1574146cc4f329791c681f3acde70a7
apache-2.0
[]
false
Model description This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Spanish-Catalan datasets totalling up to 92 million sentences. Additionally, the model is evaluated on several public datasets comprising 5 different domains (general, administrative, technology, biomedical, and news).
fddee50ed051ef54b070c0992807f0ea
apache-2.0
[]
false
Usage Required libraries: ```bash pip install ctranslate2 pyonmttok ``` Translate a sentence using python ```python import ctranslate2 import pyonmttok from huggingface_hub import snapshot_download model_dir = snapshot_download(repo_id="PlanTL-GOB-ES/mt-plantl-es-ca", revision="main") tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/spm.model") tokenized=tokenizer.tokenize("Bienvenido al Proyecto PlanTL!") translator = ctranslate2.Translator(model_dir) translated = translator.translate_batch([tokenized[0]]) print(tokenizer.detokenize(translated[0][0]['tokens'])) ```
11eec6c9c9550aae3510c6949358ffee
apache-2.0
[]
false
Training data The model was trained on a combination of the following datasets: | Dataset | Sentences | Tokens | |-------------------|----------------|-------------------| | DOCG v2 | 8.472.786 | 188.929.206 | | El Periodico | 6.483.106 | 145.591.906 | | EuroParl | 1.876.669 | 49.212.670 | | WikiMatrix | 1.421.077 | 34.902.039 | | Wikimedia | 335.955 | 8.682.025 | | QED | 71.867 | 1.079.705 | | TED2020 v1 | 52.177 | 836.882 | | CCMatrix v1 | 56.103.820 | 1.064.182.320 | | MultiCCAligned v1 | 2.433.418 | 48.294.144 | | ParaCrawl | 15.327.808 | 334.199.408 | | **Total** | **92.578.683** | **1.875.910.305** |
94fa640f9a3e42b9df0c63dbe7721b46
apache-2.0
[]
false
Data preparation All datasets are concatenated and filtered using the [mBERT Gencata parallel filter](https://huggingface.co/projecte-aina/mbert-base-gencata) and cleaned using the clean-corpus-n.pl script from [moses](https://github.com/moses-smt/mosesdecoder), allowing sentences between 5 and 150 words. Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py)
5eea0ad4a9ca0316348cee930fda1610
apache-2.0
[]
false
Hyperparameters The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf) The following hyperparameters were set in the Fairseq toolkit:

| Hyperparameter                     | Value                             |
|------------------------------------|-----------------------------------|
| Architecture                       | transformer_vaswani_wmt_en_de_big |
| Embedding size                     | 1024                              |
| Feedforward size                   | 4096                              |
| Number of heads                    | 16                                |
| Encoder layers                     | 24                                |
| Decoder layers                     | 6                                 |
| Normalize before attention         | True                              |
| --share-decoder-input-output-embed | True                              |
| --share-all-embeddings             | True                              |
| Effective batch size               | 96.000                            |
| Optimizer                          | adam                              |
| Adam betas                         | (0.9, 0.980)                      |
| Clip norm                          | 0.0                               |
| Learning rate                      | 1e-3                              |
| LR scheduler                       | inverse sqrt                      |
| Warmup updates                     | 4000                              |
| Dropout                            | 0.1                               |
| Label smoothing                    | 0.1                               |

The model was trained using shards of 10 million sentences, for a total of 8.000 updates. Weights were saved every 1000 updates and the reported results are the average of the last 6 checkpoints.
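The `inverse sqrt` learning-rate scheduler listed above warms the rate up linearly for the first 4000 updates and then decays it in proportion to 1/√step. A sketch of that shape, assuming Fairseq's usual semantics:

```python
import math

def inverse_sqrt_lr(step, base_lr=1e-3, warmup_updates=4000):
    """Linear warmup to base_lr over warmup_updates, then decay
    proportional to the inverse square root of the update number."""
    if step < warmup_updates:
        return base_lr * step / warmup_updates
    return base_lr * math.sqrt(warmup_updates / step)

peak = inverse_sqrt_lr(4000)    # end of warmup: full base LR
late = inverse_sqrt_lr(16000)   # 4x later: half the base LR
```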
905957339f16bb3bb32f8b9b7cccc9dc
apache-2.0
[]
false
Evaluation results Below are the evaluation results on the machine translation from Spanish to Catalan compared to [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es): | Test set | SoftCatalà | Google Translate |mt-plantl-es-ca| |----------------------|------------|------------------|---------------| | Spanish Constitution | **63,6** | 61,7 | 63,0 | | United Nations | 73,8 | 74,8 | **74,9** | | Flores 101 dev | 22 | **23,1** | 22,5 | | Flores 101 devtest | 22,7 | **23,6** | 23,1 | | Cybersecurity | 61,4 | **69,5** | 67,3 | | wmt 19 biomedical | 60,2 | 59,7 | **60,6** | | wmt 13 news | 21,3 | **22,4** | 22,0 | | Average | 46,4 | **47,8** | 47,6 |
42c361296545a6f2b383c26fa2df7862
mit
['generated_from_trainer']
false
bertimbau-base-lener-br-finetuned-lener-br This model is a fine-tuned version of [Luciano/bertimbau-base-finetuned-lener-br](https://huggingface.co/Luciano/bertimbau-base-finetuned-lener-br) on the lener_br dataset. It achieves the following results on the evaluation set: - Loss: nan - Precision: 0.8943 - Recall: 0.8970 - F1: 0.8956 - Accuracy: 0.9696
46f3c02c122276c27216d916fbfb6c30
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15
51de41106e8bb8e97441d2da4fa503c8
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0678 | 1.0 | 1957 | nan | 0.8148 | 0.8882 | 0.8499 | 0.9689 | | 0.0371 | 2.0 | 3914 | nan | 0.8347 | 0.9022 | 0.8671 | 0.9671 | | 0.0242 | 3.0 | 5871 | nan | 0.8491 | 0.8905 | 0.8693 | 0.9716 | | 0.0197 | 4.0 | 7828 | nan | 0.9014 | 0.8772 | 0.8892 | 0.9780 | | 0.0135 | 5.0 | 9785 | nan | 0.8651 | 0.9060 | 0.8851 | 0.9765 | | 0.013 | 6.0 | 11742 | nan | 0.8882 | 0.9054 | 0.8967 | 0.9767 | | 0.0084 | 7.0 | 13699 | nan | 0.8559 | 0.9097 | 0.8820 | 0.9751 | | 0.0069 | 8.0 | 15656 | nan | 0.8916 | 0.8828 | 0.8872 | 0.9696 | | 0.0047 | 9.0 | 17613 | nan | 0.8964 | 0.8931 | 0.8948 | 0.9716 | | 0.0028 | 10.0 | 19570 | nan | 0.8864 | 0.9047 | 0.8955 | 0.9691 | | 0.0023 | 11.0 | 21527 | nan | 0.8860 | 0.9011 | 0.8935 | 0.9693 | | 0.0009 | 12.0 | 23484 | nan | 0.8952 | 0.8987 | 0.8970 | 0.9686 | | 0.0014 | 13.0 | 25441 | nan | 0.8929 | 0.8985 | 0.8957 | 0.9699 | | 0.0025 | 14.0 | 27398 | nan | 0.8914 | 0.8981 | 0.8947 | 0.9700 | | 0.001 | 15.0 | 29355 | nan | 0.8943 | 0.8970 | 0.8956 | 0.9696 |
02695619ee1f91563bc0829486e7fea9
mit
['generated_from_trainer']
false
deberta-base-combined-squad1-aqa-1epoch-and-newsqa-1epoch This model is a fine-tuned version of [stevemobs/deberta-base-combined-squad1-aqa-1epoch](https://huggingface.co/stevemobs/deberta-base-combined-squad1-aqa-1epoch) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6807
39f86d2697588d1662d6c4ef3e87556d
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1
27f181431c55058b95aa07809df41f53
cc-by-4.0
['automatic-speech-recognition', 'speech', 'ASR', 'Kinyarwanda', 'Swahili', 'Luganda', 'Multilingual', 'audio', 'CTC', 'Conformer', 'Transformer', 'NeMo', 'pytorch']
false
Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="yonas/stt_rw_sw_lg_conformer_ctc_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" ```
4028daead998335d3b94fe276b0a506c
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_unispeech-ml_s246 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
813014c8ca971ff34bb1b52cc04c5df7
apache-2.0
['translation']
false
opus-mt-de-ht * source languages: de * target languages: ht * OPUS readme: [de-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ht/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ht/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ht/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ht/opus-2020-01-20.eval.txt)
839d0f27f988b12beeb52c48d8eb4172
apache-2.0
['generated_from_trainer']
false
whisper-tiny-ar This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8394 - Wer: 86.0500
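The WER figure above (86.05) is conventionally computed as the word-level Levenshtein distance between reference and hypothesis transcripts, divided by the number of reference words. A minimal sketch:

```python
def wer(reference, hypothesis):
    """Word error rate, in percent: substitutions + insertions +
    deletions (word-level edit distance) over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

error = wer("the cat sat down", "the cat sat")
```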
e69f669a99c0193598c38cb951c22358