Dataset columns:
- license: string (2 to 30 chars)
- tags: string (2 to 513 chars)
- is_nc: bool (1 class)
- readme_section: string (201 to 597k chars)
- hash: string (32 chars)
apache-2.0
['translation']
false
opus-mt-en-ceb * source languages: en * target languages: ceb * OPUS readme: [en-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ceb/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.eval.txt)
acf1b66e0b643366b2457a786a9bbde7
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-swedisch-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.6439 - Wer: 0.9678
a9f771bc714f6fa57569bae7dbb356b3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8953 | 1.83 | 400 | 1.6439 | 0.9678 |
765c44a1efe48131060d4e56259cc576
apache-2.0
['generated_from_trainer']
false
bert-small-finetuned-finetuned-parsed-longer50 This model is a fine-tuned version of [muhtasham/bert-small-finetuned-parsed20](https://huggingface.co/muhtasham/bert-small-finetuned-parsed20) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9278
6376d1b57636e6476759cfccc90181c0
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30
153f4cab131a0126322d4f2ff6df6ede
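The `lr_scheduler_type: linear` entry above means the learning rate decays linearly from its initial value to zero over training. A minimal sketch of that schedule in plain Python (`linear_lr` is a hypothetical helper, not part of `transformers`; the 120-step total assumes 30 epochs of 4 steps each, as in this card's results table):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Learning rate after `step` of `total_steps` under linear decay
    with no warmup: starts at base_lr and reaches 0 at the final step."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return base_lr * remaining

# With learning_rate 2e-05 and 30 epochs of 4 steps each (120 steps total):
schedule = [linear_lr(s, 120) for s in range(121)]
```

`transformers`' own `get_linear_schedule_with_warmup` computes the same ramp, with an optional warmup phase prepended.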
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 4 | 2.9807 | | No log | 2.0 | 8 | 2.7267 | | No log | 3.0 | 12 | 3.3484 | | No log | 4.0 | 16 | 2.7573 | | No log | 5.0 | 20 | 2.7063 | | No log | 6.0 | 24 | 2.7353 | | No log | 7.0 | 28 | 3.1290 | | No log | 8.0 | 32 | 2.9371 | | No log | 9.0 | 36 | 3.4265 | | No log | 10.0 | 40 | 3.0537 | | No log | 11.0 | 44 | 3.1382 | | No log | 12.0 | 48 | 3.1454 | | No log | 13.0 | 52 | 2.8379 | | No log | 14.0 | 56 | 3.2760 | | No log | 15.0 | 60 | 3.0504 | | No log | 16.0 | 64 | 2.9001 | | No log | 17.0 | 68 | 2.8892 | | No log | 18.0 | 72 | 3.1837 | | No log | 19.0 | 76 | 2.6404 | | No log | 20.0 | 80 | 3.0600 | | No log | 21.0 | 84 | 3.1432 | | No log | 22.0 | 88 | 2.9608 | | No log | 23.0 | 92 | 3.0513 | | No log | 24.0 | 96 | 3.1038 | | No log | 25.0 | 100 | 3.0975 | | No log | 26.0 | 104 | 2.8977 | | No log | 27.0 | 108 | 2.9416 | | No log | 28.0 | 112 | 2.9015 | | No log | 29.0 | 116 | 2.7947 | | No log | 30.0 | 120 | 2.9278 |
609a25d84614be7b51d5f62083184bf9
apache-2.0
['generated_from_trainer']
false
door_inner_with_SA-bert-base-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1513
89d15bbdc6571ea0429e74ee28319ad1
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12
962a78ee5ab1374cbf3f81ecbef4870b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5492 | 1.0 | 96 | 2.3831 | | 2.4031 | 2.0 | 192 | 2.2963 | | 2.3391 | 3.0 | 288 | 2.2000 | | 2.2951 | 4.0 | 384 | 2.2505 | | 2.2151 | 5.0 | 480 | 2.1691 | | 2.2237 | 6.0 | 576 | 2.1855 | | 2.1984 | 7.0 | 672 | 2.2558 | | 2.1749 | 8.0 | 768 | 2.2019 | | 2.1475 | 9.0 | 864 | 2.1310 | | 2.1446 | 10.0 | 960 | 2.1334 | | 2.1374 | 11.0 | 1056 | 2.1909 | | 2.1117 | 12.0 | 1152 | 2.2028 |
b495c309694cd89e10610379d5b1e5c7
mit
[]
false
Arona [Arona / アロナ / 아로나 / 阿罗娜](https://huggingface.co/khanon/lora-training/blob/main/arona/README.md) [![Arona](arona/chara-arona-v1.png)](https://huggingface.co/khanon/lora-training/blob/main/arona/README.md)
35c860ae83b74bde5e1584112d1f4065
mit
[]
false
Koharu [Shimoe Koharu / 下江コハル / 시모에 코하루 / 下江小春](https://huggingface.co/khanon/lora-training/blob/main/koharu/README.md) [![Koharu](koharu/chara-koharu-v3.png)](https://huggingface.co/khanon/lora-training/blob/main/koharu/README.md)
16afbaff98349606ab0bfae4fc590113
mit
[]
false
Kokona [Sunohara Kokona / 春原ココナ / 스노하라 코코나 / 春原心奈](https://huggingface.co/khanon/lora-training/blob/main/kokona/README.md) [![Kokona](kokona/chara-kokona.png)](https://huggingface.co/khanon/lora-training/blob/main/kokona/README.md)
fe0d593954cf1e163d7eb1beaba6472c
mit
[]
false
Mari [Iochi Mari / 伊落マリー / 이오치 마리 / 伊落玛丽](https://huggingface.co/khanon/lora-training/blob/main/mari/README.md) [![Mari](mari/chara-mari-v5b.png)](https://huggingface.co/khanon/lora-training/blob/main/mari/README.md)
4af74896f387d57cb8a7cb5b48dfb1da
mit
[]
false
Reisa [Uzawa Reisa / 宇沢レイサ / 우자와 레이사](https://huggingface.co/khanon/lora-training/blob/main/reisa/README.md) [![Reisa](reisa/chara-reisa-v3.png)](https://huggingface.co/khanon/lora-training/blob/main/reisa/README.md)
2ec0e2bddcb0219942f645cf47d0f0db
mit
[]
false
Seia [Yurizono Seia / 百合園セイア / 유리조노 세이아 / 百合園圣娅](https://huggingface.co/khanon/lora-training/blob/main/seia/README.md) [![Seia](seia/chara-seia-v1.png)](https://huggingface.co/khanon/lora-training/blob/main/seia/README.md)
d569d12cdfa62f16b97d49a45a5c6c4e
mit
[]
false
Shizuko [Kawawa Shizuko / 河和シズコ / 카와와 시즈코 / 河和静子](https://huggingface.co/khanon/lora-training/blob/main/shizuko/README.md) [![Shizuko](shizuko/chara-shizuko.png)](https://huggingface.co/khanon/lora-training/blob/main/shizuko/README.md)
9b56a75333b218afe4aec978c8161c46
mit
[]
false
Sora [Sora / ソラ / 소라 / 空](https://huggingface.co/khanon/lora-training/blob/main/sora/README.md) [![Sora](sora/chara-sora-v3.png)](https://huggingface.co/khanon/lora-training/blob/main/sora/README.md)
5dabca8ab9fa6a39fda9415a8b95208c
mit
[]
false
Negative embedding I frequently use these negative embeddings in my prompts to improve the output quality. I recommend lowering the attention to ~0.75. - `bad-artist`, `bad-artist-anime` - https://huggingface.co/nick-x-hacker/bad-artist - `badpromptv2` - https://huggingface.co/datasets/Nerfgun3/bad_prompt - `bad-image-v2` (not sure of the original author) - [bad-image-v2.pt](https://huggingface.co/khanon/lora-training/blob/main/bad-image-v2.pt)
6336ca5d7599de1a9e7db28a37cf4038
apache-2.0
['generated_from_trainer']
false
recipe-lr2e05-wd0.05-bs32 This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2776 - Rmse: 0.5269 - Mse: 0.2776 - Mae: 0.4290
903d0fce8c7e752d5e6bc0e90c55d6cc
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2768 | 1.0 | 1245 | 0.2737 | 0.5232 | 0.2737 | 0.4163 | | 0.274 | 2.0 | 2490 | 0.2779 | 0.5271 | 0.2779 | 0.4234 | | 0.2721 | 3.0 | 3735 | 0.2776 | 0.5269 | 0.2776 | 0.4290 |
d9cc21c53c18f2ebf0e21070b02a31aa
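Since the card reports both Rmse and Mse, the two columns can be cross-checked: RMSE is, by definition, the square root of MSE. A quick verification against the final row above:

```python
import math

def rmse_from_mse(mse: float) -> float:
    """RMSE is the square root of MSE by definition."""
    return math.sqrt(mse)

# Final evaluation row: Mse 0.2776 -> Rmse 0.5269 (to four decimal places)
```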
mit
['generated_from_trainer']
false
mdeberta-hate-final This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6223 - Accuracy: 0.7424 - Precision: 0.7410 - Recall: 0.7424 - F1: 0.7363
d9ba71cf54373c8cd3d54bf9eb2a8ec1
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 296 | 0.5309 | 0.7519 | 0.7685 | 0.7519 | 0.7357 | | 0.5358 | 2.0 | 592 | 0.5228 | 0.7510 | 0.7663 | 0.7510 | 0.7351 | | 0.5358 | 3.0 | 888 | 0.5565 | 0.7510 | 0.7513 | 0.7510 | 0.7438 | | 0.4295 | 4.0 | 1184 | 0.5639 | 0.7481 | 0.7488 | 0.7481 | 0.7403 | | 0.4295 | 5.0 | 1480 | 0.5941 | 0.7510 | 0.7531 | 0.7510 | 0.7423 | | 0.3701 | 6.0 | 1776 | 0.6223 | 0.7424 | 0.7410 | 0.7424 | 0.7363 |
e7148e3f2962a6d0e6b8960109c1cc7d
apache-2.0
['generated_from_trainer']
false
swin-tiny-patch4-window7-224-finetuned-mri This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0608 - Accuracy: 0.9807
2fd10b0899558553df39e340d5d009fe
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP
952ed6da5b7afcca0105a5b19d36f272
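With `gradient_accumulation_steps: 4` and a per-device batch of 32, gradients from four micro-batches are accumulated before each optimizer update, which is where the listed `total_train_batch_size: 128` comes from. A framework-free sketch of the bookkeeping (`CountingOptimizer` is a made-up stand-in, not a real API):

```python
class CountingOptimizer:
    """Stand-in optimizer that only counts how often step() is called."""
    def __init__(self) -> None:
        self.steps = 0

    def step(self) -> None:
        self.steps += 1

def train_with_accumulation(num_micro_batches: int, accum_steps: int) -> int:
    """Dummy loop: one optimizer update per accum_steps micro-batches."""
    opt = CountingOptimizer()
    for i in range(num_micro_batches):
        # loss.backward() would accumulate gradients here
        if (i + 1) % accum_steps == 0:
            opt.step()
    return opt.steps
```

Effective batch size = 32 × 4 = 128, matching the card; the 447 optimizer steps per epoch in the results table correspond to 4 × 447 micro-batches.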
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0592 | 1.0 | 447 | 0.0823 | 0.9695 | | 0.0196 | 2.0 | 894 | 0.0761 | 0.9739 | | 0.0058 | 3.0 | 1341 | 0.0608 | 0.9807 |
f75b707e762284fd8a56419e7f9fcdbb
apache-2.0
['generated_from_keras_callback']
false
Haakf/distilbert-base-uncased-padded_left_allsides_news_e20 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9438 - Validation Loss: 1.8632 - Epoch: 19
cc540a1ac9d4ec6b50473197e803e4f4
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.9974 | 1.9203 | 0 | | 1.9860 | 1.9246 | 1 | | 1.9588 | 1.8601 | 2 | | 1.9598 | 1.8668 | 3 | | 1.9330 | 1.8913 | 4 | | 1.9250 | 1.8517 | 5 | | 1.9200 | 1.8525 | 6 | | 1.9447 | 1.8755 | 7 | | 1.9331 | 1.8627 | 8 | | 1.9318 | 1.9064 | 9 | | 1.9304 | 1.8507 | 10 | | 1.9325 | 1.8616 | 11 | | 1.9397 | 1.8491 | 12 | | 1.9535 | 1.8660 | 13 | | 1.9327 | 1.8341 | 14 | | 1.9403 | 1.8686 | 15 | | 1.9488 | 1.8585 | 16 | | 1.9378 | 1.8515 | 17 | | 1.9293 | 1.8645 | 18 | | 1.9438 | 1.8632 | 19 |
cf23600435622d8b7a6d56b75df9f4bf
apache-2.0
['translation']
false
spa-glg * source group: Spanish * target group: Galician * OPUS readme: [spa-glg](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-glg/README.md) * model: transformer-align * source language(s): spa * target language(s): glg * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.eval.txt)
d65942b911993f019f110738bcf611ab
apache-2.0
['translation']
false
System Info: - hf_name: spa-glg - source_languages: spa - target_languages: glg - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-glg/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['es', 'gl'] - src_constituents: {'spa'} - tgt_constituents: {'glg'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.test.txt - src_alpha3: spa - tgt_alpha3: glg - short_pair: es-gl - chrF2_score: 0.8079999999999999 - bleu: 67.6 - brevity_penalty: 0.993 - ref_len: 16581.0 - src_name: Spanish - tgt_name: Galician - train_date: 2020-06-16 - src_alpha2: es - tgt_alpha2: gl - prefer_old: False - long_pair: spa-glg - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
82656367263fea473e2bb80c0a24e91c
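The `brevity_penalty` field above is BLEU's length correction: 1.0 when the candidate corpus is at least as long as the reference, exp(1 - ref_len/cand_len) otherwise. A sketch under those standard definitions (the candidate length used below is illustrative, not taken from the card):

```python
import math

def brevity_penalty(cand_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: penalizes candidates shorter than the
    reference; candidates at least as long are not rewarded."""
    if cand_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / cand_len)

# A BP of 0.993 against ref_len 16581 implies a slightly shorter candidate.
```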
apache-2.0
['generated_from_trainer']
false
barthez-deft-chimie This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset. **Note**: this model is one of the preliminary experiments, and it underperforms the models published in the paper (using [MBartHez](https://huggingface.co/moussaKam/mbarthez) and HAL/Wiki pre-training + copy mechanisms). It achieves the following results on the evaluation set: - Loss: 2.0710 - Rouge1: 31.8947 - Rouge2: 16.7563 - Rougel: 23.5428 - Rougelsum: 23.4918 - Gen Len: 38.5256
5a70c30594e5524f98f8897fc0e02457
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 3.8022 | 1.0 | 118 | 2.5491 | 16.8208 | 7.0027 | 13.957 | 14.0479 | 19.1538 | | 2.9286 | 2.0 | 236 | 2.3074 | 17.5356 | 7.8717 | 14.4874 | 14.5044 | 19.9487 | | 2.5422 | 3.0 | 354 | 2.2322 | 19.6491 | 9.4156 | 15.9467 | 15.9433 | 19.7051 | | 2.398 | 4.0 | 472 | 2.1500 | 18.7166 | 9.859 | 15.7535 | 15.8036 | 19.9231 | | 2.2044 | 5.0 | 590 | 2.1372 | 19.978 | 10.6235 | 16.1348 | 16.1274 | 19.6154 | | 1.9405 | 6.0 | 708 | 2.0992 | 20.226 | 10.551 | 16.6928 | 16.7211 | 19.9744 | | 1.8544 | 7.0 | 826 | 2.0841 | 19.8869 | 10.8456 | 16.1072 | 16.097 | 19.8846 | | 1.7536 | 8.0 | 944 | 2.0791 | 19.3017 | 9.4921 | 16.1541 | 16.2167 | 19.859 | | 1.6914 | 9.0 | 1062 | 2.0710 | 21.3848 | 10.4088 | 17.1963 | 17.2254 | 19.8846 | | 1.654 | 10.0 | 1180 | 2.1069 | 22.3811 | 10.7987 | 18.7595 | 18.761 | 19.9231 | | 1.5899 | 11.0 | 1298 | 2.0919 | 20.8546 | 10.6958 | 16.8637 | 16.9499 | 19.8077 | | 1.4661 | 12.0 | 1416 | 2.1065 | 22.3677 | 11.7472 | 18.262 | 18.3 | 19.9744 | | 1.4205 | 13.0 | 1534 | 2.1164 | 20.5845 | 10.7825 | 16.9972 | 17.0216 | 19.9359 | | 1.3797 | 14.0 | 1652 | 2.1240 | 22.2561 | 11.303 | 17.5064 | 17.5815 | 19.9744 | | 1.3724 | 15.0 | 1770 | 2.1187 | 23.2825 | 11.912 | 18.5208 | 18.5499 | 19.9359 | | 1.3404 | 16.0 | 1888 | 2.1394 | 22.1305 | 10.5258 | 17.772 | 17.8202 | 19.9744 | | 1.2846 | 17.0 | 2006 | 2.1502 | 21.567 | 11.0557 | 17.2562 | 17.2974 | 20.0 | | 1.2871 | 18.0 | 2124 | 2.1572 | 22.5871 | 11.702 | 18.2906 | 18.3826 | 19.9744 | | 1.2422 | 19.0 | 2242 | 2.1613 | 23.0935 | 11.6824 | 18.6087 | 18.6777 | 19.9744 | | 1.2336 | 20.0 | 2360 | 2.1581 | 22.6789 | 11.4363 | 18.1661 | 18.2346 | 19.9487 |
067b102e3d0db39aacabe78c9a76aaf8
apache-2.0
['generated_from_keras_callback']
false
whisper_nosp_0015 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3829 - Train Accuracy: 0.0216 - Validation Loss: 0.8190 - Validation Accuracy: 0.0202 - Epoch: 14
94304584384d71e414c6988298b9d56b
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 7.5559 | 0.0010 | 6.3853 | 0.0013 | 0 | | 6.3227 | 0.0021 | 5.7023 | 0.0038 | 1 | | 4.9825 | 0.0063 | 3.6302 | 0.0109 | 2 | | 2.9413 | 0.0126 | 2.1959 | 0.0154 | 3 | | 1.9349 | 0.0157 | 1.6630 | 0.0172 | 4 | | 1.4741 | 0.0171 | 1.3813 | 0.0181 | 5 | | 1.1975 | 0.0181 | 1.2161 | 0.0186 | 6 | | 1.0048 | 0.0188 | 1.0990 | 0.0191 | 7 | | 0.8598 | 0.0194 | 1.0165 | 0.0194 | 8 | | 0.7431 | 0.0199 | 0.9603 | 0.0196 | 9 | | 0.6489 | 0.0203 | 0.9106 | 0.0198 | 10 | | 0.5682 | 0.0207 | 0.8787 | 0.0199 | 11 | | 0.4985 | 0.0210 | 0.8548 | 0.0200 | 12 | | 0.4372 | 0.0213 | 0.8352 | 0.0201 | 13 | | 0.3829 | 0.0216 | 0.8190 | 0.0202 | 14 |
7c7d974c5098c452372971ad5c2c7f48
apache-2.0
['generated_from_trainer']
false
base-mlm-tweet-target-tweet This model is a fine-tuned version of [muhtasham/base-mlm-tweet](https://huggingface.co/muhtasham/base-mlm-tweet) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.9081 - Accuracy: 0.7674 - F1: 0.7679
ab6e8bfd1c339bc11987a9025af974da
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3371 | 4.9 | 500 | 1.0062 | 0.7888 | 0.7891 | | 0.038 | 9.8 | 1000 | 1.4896 | 0.7754 | 0.7802 | | 0.0165 | 14.71 | 1500 | 1.6711 | 0.7834 | 0.7830 | | 0.018 | 19.61 | 2000 | 1.9081 | 0.7674 | 0.7679 |
ac127e8522d89d125d1757d9965d1de4
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2421 - F1: 0.8248
0565cfe395465416f90a6e8c1063a3e6
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.809 | 1.0 | 70 | 0.3380 | 0.7183 | | 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 | | 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
67a7cfcf9aa959ec39ba68fe29e6e61a
apache-2.0
['translation']
false
opus-mt-fr-st * source languages: fr * target languages: st * OPUS readme: [fr-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-st/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.eval.txt)
ddc7023a1996ad003a35b0a44ced717e
apache-2.0
['generated_from_trainer']
false
test-mlm This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2729 - Accuracy: 0.7100
e19d590775792a9145e8b7d4c36be707
unknown
[]
false
A big thank you to everyone in the community who did the work on training the custom models I used in this. My mixes can be found at Civitai as well: https://civitai.com/models/1348/ultimatemix https://civitai.com/models/1544/cartoon-mix I lost my original recipe for Ultimate Mix 1&2, so I won't be able to upload them in checkpoint format. However, I have created a new mix, Umix3, that is probably better in all respects. ![xy_grid-0096-3275590074-beautiful goddess of love, blonde long curly hair, elf eared, gorgeous detailed face, perfect body, pink off shoulder dress, jew.png](https://s3.amazonaws.com/moonup/production/uploads/1670945660355-6318923c2a09bf43f6f83b28.png) >**Prompt** >- beautiful goddess of love, blonde long curly hair, elf eared, gorgeous detailed face, perfect body, pink off shoulder dress, jewelry, cinematic lighting, fantasy garden, hyper detailed illustration, gorgeous, elegant, intricate, alluring, stunning, award winning, realistic, sharp focus, 8k high definition, > >**Negative Prompt** >- monochrome, censored, bad anatomy, lowres, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, simple background, anatomical nonsense > >**Steps: 20, CFG scale: 12, Seed: 3275590074, Size: 512x512** ------------------- ![xy_grid-0094-172159747-(Seductive nude witcher man) underwear, fierce looking face, long white wavy hair, white beard, ominous dark forest, dark fantas.png](https://s3.amazonaws.com/moonup/production/uploads/1670945684252-6318923c2a09bf43f6f83b28.png) >**Prompt** >- (Seductive nude witcher man) underwear, fierce looking face, long white wavy hair, white beard, ominous dark forest, dark fantasy, character portrait, sexy man, 8K, hdr >**No negative prompt** > >**Steps: 20, CFG scale: 12, Seed: 172159747, Size: 512x512** ------------------- ![xy_grid-0099-3275590074-a beautiful empress crystal quartz portrait, with a brilliant, impossible striking big shiny crystal headpiece, quartz, clothes.png](https://s3.amazonaws.com/moonup/production/uploads/1670945498815-6318923c2a09bf43f6f83b28.png) >**Prompt** >- a beautiful empress crystal quartz portrait, with a brilliant, impossible striking big shiny crystal headpiece, quartz, clothes entirely made out of crystal quartz, black hair, crystal background, symmetrical, dramatic studio lighting, rococo, baroque, hyperrealism, closeup, fantasy, intricate, elegant, highly detailed, asian, digital painting > >**Negative words** >- lowres, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, artist name, (((ugly))), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((blurry)), ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), bad proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers) > >**Steps: 20, CFG scale: 12, Seed: 3275590074, Size: 512x512**
ca3459cc4a233fa1010891dd635cb92f
cc-by-sa-4.0
[]
false
How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='nlp-waseda/gpt2-small-japanese-wikipedia')
>>> set_seed(42)
>>> generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sample=True, pad_token_id=2, num_return_sequences=5)
[{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 1969 年 に は 同 大学院 を 修了 。 東京 芝浦 電気 株式 会社 に 就職 後 、 情報 処理'},
 {'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 帰国 後 は 立教 大学 理学部 助手 を 務めた 。 1978 年 に 神奈川 県立 湘南 高等 学校 校長 に 就任'},
 {'generated_text': '早稲田 大学 で 自然 言語 処理 を 研究 。 1972 年 に 早稲田 大学 文学部 ドイツ 文学 専攻 を 卒業 し 、 同 年 から 1979 年 まで 上智 大学'},
 {'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 する 。 1979 年 東京 農工 大学 農学 部 卒業 。 1980 年 同 大学院 農学 研究 科 修士 課程 修了 。'},
 {'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 し ながら 、 日本 で 活動 する 自然 言語 研究 家 。 大学 時代 は 東京 大学 理学部 の 助手 を 務め'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import ReformerTokenizer, GPT2Model
tokenizer = ReformerTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese-wikipedia')
model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese-wikipedia')
text = "早稲田 大学 で 自然 言語 処理 を"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
920950a7977af3738175029703bf51eb
cc-by-sa-4.0
[]
false
Preprocessing The texts are normalized using zenhan, segmented into words using Juman++, and tokenized using SentencePiece. Juman++ 2.0.0-rc3 was used for pretraining. The model was trained on 8 NVIDIA A100 GPUs.
ede86d1d51e160c0ee2cdb9de6253b1d
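zenhan's half-width/full-width conversion is close in spirit to Unicode NFKC compatibility normalization, which the standard library already provides; a rough stand-in (Juman++ segmentation and SentencePiece are external tools and not reproduced here):

```python
import unicodedata

def normalize_width(text: str) -> str:
    """Approximate zenhan-style width normalization with NFKC, which
    folds full-width ASCII letters and digits to their half-width forms."""
    return unicodedata.normalize("NFKC", text)

# Full-width "ＧＰＴ２" becomes half-width "GPT2"; kanji are unaffected.
```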
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-travel-6-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1384 - Accuracy: 0.4289
a4f89aa0780a5471b777d20183a7e14b
apache-2.0
[]
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-12 - train_batch_size: 256 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08 - lr_scheduler: cosine - lr_warmup_steps: 500 - ema_inv_gamma: 1.0 - ema_power: 0.75 - ema_max_decay: 0.9999 - mixed_precision: fp16
be883ed36829284c3f9687e9dd3fa314
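The three `ema_` values above (1.0, 0.75, 0.9999 match diffusers' EMA defaults for `inv_gamma`, `power`, and the maximum decay) parameterize a warmup schedule for the EMA decay rate. A sketch of that schedule, assumed to follow diffusers' `EMAModel`:

```python
def ema_decay(step: int, inv_gamma: float = 1.0, power: float = 0.75,
              max_decay: float = 0.9999) -> float:
    """EMA decay warmup (formula assumed to match diffusers' EMAModel):
    1 - (1 + step / inv_gamma) ** -power, capped at max_decay."""
    value = 1.0 - (1.0 + step / inv_gamma) ** -power
    return min(max_decay, max(0.0, value))
```

Early steps therefore track the raw weights closely (decay near 0); late in training the averaged weights move by at most 1 - 0.9999 = 0.01% per step.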
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2r_es_xls-r_accent_surpeninsular-5_nortepeninsular-5_s463 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
5ebe7d1d527e1fee9cb24e81cf5fa486
creativeml-openrail-m
['text-to-image']
false
CoalForest Dreambooth model trained by DrEsker with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v2-1-768 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of CoalForestSIM (use that in your prompt): ![CoalForestSIM 0](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%281%29.jpg)![CoalForestSIM 1](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%282%29.jpg)![CoalForestSIM 2](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%283%29.jpg)![CoalForestSIM 3](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%284%29.jpg)![CoalForestSIM 4](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%285%29.jpg)![CoalForestSIM 5](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%286%29.jpg)![CoalForestSIM 6](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%287%29.jpg)![CoalForestSIM 7](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%288%29.jpg)![CoalForestSIM 8](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%289%29.jpg)![CoalForestSIM 9](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%2810%29.jpg)![CoalForestSIM 10](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%2811%29.jpg)![CoalForestSIM 11](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%2812%29.jpg)![CoalForestSIM 12](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%2813%29.jpg)![CoalForestSIM 13](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%2814%29.jpg)![CoalForestSIM 14](https://huggingface.co/DrEsker/coalforest/resolve/main/concept_images/CoalForestSIM_%2815%29.jpg)
f46631017df1357a2cfb6e0b1c35aef6
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Usage instructions

```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/repvgg_a1").eval()

img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing (input size and normalization stats are assumed to come from
# the model's default configuration, as in other Holocron model cards)
config = model.default_cfg
transform = Compose([
    Resize(config["input_shape"][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config["mean"], config["std"]),
])

# Inference
with torch.no_grad():
    out = model(transform(img).unsqueeze(0))
```
2214a2d253364146ac912d12f714b703
apache-2.0
['generated_from_keras_callback']
false
TestZee/t5-small-finetuned-xlsum-india-test This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.9172 - Validation Loss: 2.5929 - Epoch: 0
57f1a21f96365ce4b7f1f0ae82b15a76
mit
['generated_from_trainer']
false
MiniLM-L12-H384-uncased-finetuned-imdb This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 3.9328
4cc6670f8c65cb4059ea49d48329f634
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.2464 | 1.0 | 391 | 4.2951 | | 4.2302 | 2.0 | 782 | 4.0023 | | 4.0726 | 3.0 | 1173 | 3.9328 |
81cef8b5156b1b25c8308188c81281b3
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
anything-berry-30.ckpt [Re-uploaded from](https://huggingface.co/misobarisic/anything-berrymix) Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name --- | --- | --- | --- | --- | --- 1 | Weighted Sum @ 0.30 | Anything V3 | Berry Mix | n/a | **anything-berry-30**
0e0078aaec4f800ef00ab2dfacd9f978
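The "Weighted Sum @ 0.30" operation above is a per-tensor linear interpolation of two checkpoints, merged = (1 - alpha) * A + alpha * B. A toy sketch with plain floats standing in for weight tensors:

```python
def weighted_sum(primary: dict, secondary: dict, alpha: float) -> dict:
    """Interpolate two state dicts: (1 - alpha) * primary + alpha * secondary.
    Plain floats stand in for the weight tensors of real checkpoints."""
    return {key: (1.0 - alpha) * primary[key] + alpha * secondary[key]
            for key in primary}

# alpha = 0.30 keeps 70% of Anything V3 and takes 30% from Berry Mix.
```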
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
anything-f222-15.ckpt [Recipe Source](https://www.reddit.com/r/WaifuDiffusion/comments/zdbs3r/comment/iz0nr48/?utm_source=reddit&utm_medium=web2x&context=3) Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name --- | --- | --- | --- | --- | --- 1 | Weighted Sum @ 0.15 | Anything V3 | Zeipher F222 | n/a | **anything-f222-15**
1f6382543613b133187a851c30f6164e
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
anything-f222-15-elysiumv2-10.ckpt [Recipe Source](https://www.reddit.com/r/WaifuDiffusion/comments/zg1d8x/comment/izei93c/?utm_source=reddit&utm_medium=web2x&context=3) Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name --- | --- | --- | --- | --- | --- 1 | Weighted Sum @ 0.10 | anything-f222-15 | Elysium Anime v2 | n/a | **anything-f222-15-elysiumv2-10**
ebdfd478d23ae761758fe821adeb3640
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
berrymix-v3-535d98a3 Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name --- | --- | --- | --- | --- | --- 1 | Weighted Sum @ 0.05 | AnythingV3.0 | Stable Diffusion 1.5 | n/a | Anything Fix 2 | Add Difference @ 1 | Anything fix | Zeipher F222 | Stable Diffusion 1.5 | berrymix3 lite 3 | Weighted Sum @ 0.25 | berrymix3 lite | r34_e4 | n/a | **berrymix V3**
ded03a249a8aa607f4efd77a7b9f3e19
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
blossom-extract.safetensors [Recipe Source](https://www.reddit.com/r/StableDiffusion/comments/zk8y50/comment/izyhn8w/?utm_source=reddit&utm_medium=web2x&context=3) Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name --- | --- | --- | --- | --- | --- 1 | Add Difference @ 1 | Anything V3 | Zeipher F222 | Stable Diffusion 1.4 | **blossom-extract**
d2b0209091992561c29586b226eee91e
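"Add Difference @ 1", used in the recipe above, is distinct from a weighted sum: it grafts the delta between the secondary and tertiary models onto the primary, merged = A + m * (B - C) with multiplier m = 1. A toy sketch with plain floats standing in for weight tensors:

```python
def add_difference(primary: dict, secondary: dict, tertiary: dict,
                   multiplier: float = 1.0) -> dict:
    """A + m * (B - C): add what the secondary model learned relative to
    the tertiary (base) model on top of the primary model's weights."""
    return {key: primary[key] + multiplier * (secondary[key] - tertiary[key])
            for key in primary}
```

With m = 1 this transfers the full Zeipher F222 minus Stable Diffusion 1.4 delta onto Anything V3.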
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
hentai-elysium-50.safetensors [Recipe Source](https://www.reddit.com/r/WaifuDiffusion/comments/zn6wdb/comment/j0fabe6/?utm_source=reddit&utm_medium=web2x&context=3) Step | Interpolation Method | Primary Model | Secondary model | Tertiary Model | Merge Name --- | --- | --- | --- | --- | --- 1 | Weighted Sum @ 0.5 | Hentai Diffusion 17 | Elysium Anime v2 | n/a | **hentai-elysium-50**
fcf0977200dc2ead25fff1a2df58c58c
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
nutmegmix-aa3e502b)

Step | Interpolation Method | Primary Model | Secondary Model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.05 | NovelAI | Stable Diffusion 1.5 | n/a | nutmegmix-part1
2 | Weighted Sum @ 0.05 | nutmegmix-part1 | Zeipher F222 | n/a | nutmegmix-part2
3 | Weighted Sum @ 0.05 | nutmegmix-part2 | r34_e4 | n/a | nutmegmix-part3
4 | Weighted Sum @ 0.05 | nutmegmix-part3 | SmirkingFace | n/a | nutmegmix-part4
5 | Weighted Sum @ 0.3 | AnythingV3.0 | nutmegmix-part4 | n/a | **nutmeg-mix**
1b152555351385bd228a70b4559a2615
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
raspberry-mix-4d202242)

Step | Interpolation Method | Primary Model | Secondary Model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.25 | AnythingV3.0 | Stable Diffusion 1.5 | n/a | AnyV3-SD1.5
2 | Add Difference @ 1 | AnyV3-SD1.5 | Zeipher F222 | Stable Diffusion 1.4 | raspberry-lite
3 | Weighted Sum @ 0.15 | raspberry-lite | r34_e4 | n/a | **raspberry mix**
c4990ba3f33115ec2c83fbaba047864b
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
strawberry-mix-e043dfc5)

Step | Interpolation Method | Primary Model | Secondary Model | Tertiary Model | Merge Name
--- | --- | --- | --- | --- | ---
1 | Weighted Sum @ 0.25 | AnythingV3.0 | Stable Diffusion 1.4 | n/a | AnyV3-SD1.4
2 | Add Difference @ 1 | AnyV3-SD1.4 | Zeipher F111 | Stable Diffusion 1.4 | strawberry-lite
3 | Weighted Sum @ 0.15 | strawberry-lite | r34_e4 | n/a | **strawberry mix**
99ddcd5da058548c7115f6c1dcebddda
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-sst2-shake-wiki

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0096
- Accuracy: 0.9994
ee6a2ffee71d34346ae626520d08b032
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.001         | 1.0   | 5029  | 0.0120          | 0.9988   |
| 0.0017        | 2.0   | 10058 | 0.0028          | 0.9996   |
| 0.0           | 3.0   | 15087 | 0.0094          | 0.9992   |
| 0.0           | 4.0   | 20116 | 0.0091          | 0.9994   |
| 0.0           | 5.0   | 25145 | 0.0096          | 0.9994   |
b916eb12be1dd768ad1cd0aaba0ca5e0
apache-2.0
['t5', 'contrastive learning', 'ranking', 'decoding', 'metric learning', 'pytorch', 'text generation', 'retrieval']
false
Method-2: Loading the model with HuggingFace APIs

```python
from transformers import T5Tokenizer, AutoModel

tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-base")
model = AutoModel.from_pretrained("kalpeshk2011/rankgen-t5-base-all", trust_remote_code=True)
```
3b2ef37c693c8101e798f0f6ed8fc525
apache-2.0
['text-classification', 'generated_from_trainer']
false
distilroberta-base-mrpc-glue-tadeous

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set:
- Loss: 0.6243
- Accuracy: 0.8211
- F1: 0.8726
be4206d88c0d7af2cb9adb23d31e9159
apache-2.0
['text-classification', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3219        | 1.09  | 500  | 0.6243          | 0.8211   | 0.8726 |
| 0.3173        | 2.18  | 1000 | 0.6243          | 0.8211   | 0.8726 |
2630041ebcb31001fa2f0df711ad9d4c
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-stsb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.5644
- Pearson: 0.8666
- Spearmanr: 0.8636
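The Pearson and Spearman scores reported here compare the model's predicted similarity scores against the gold STS-B labels. A dependency-free sketch of both metrics (the Spearman implementation below assumes no tied values, for simplicity; the example numbers are made up):

```python
# Pearson: linear correlation of predictions vs gold scores.
# Spearman: Pearson computed on the ranks of the values.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Rank each value (no tie handling in this sketch).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

preds = [0.1, 2.3, 4.9, 1.7]  # hypothetical model outputs
gold = [0.0, 2.5, 5.0, 2.0]   # hypothetical gold similarity scores
print(round(pearson(preds, gold), 4), round(spearman(preds, gold), 4))
```

Since the predictions above are monotonically consistent with the gold scores, Spearman is exactly 1.0 while Pearson is slightly below it.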
924da22ad9fc19fa5e4fd60808682f98
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log        | 1.0   | 360  | 0.6366          | 0.8537  | 0.8516    |
| 1.0464        | 2.0   | 720  | 0.6171          | 0.8632  | 0.8626    |
| 0.4002        | 3.0   | 1080 | 0.6082          | 0.8663  | 0.8643    |
| 0.4002        | 4.0   | 1440 | 0.5644          | 0.8666  | 0.8636    |
| 0.2479        | 5.0   | 1800 | 0.5780          | 0.8654  | 0.8624    |
fd11e31b6c165259de5d4fe1b33ec7a1
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
supastazzz Dreambooth model trained by supastazz with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)

Sample pictures of this concept:
8d4e4a3595c4861aea81e5f912c7f546
mit
[]
false
hebrew-gpt_neo-tiny

Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). The model was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.
1ebdbd00147218d34ba3f1bafafd9324
mit
[]
false
4INvMes-56m_WUi7jQMbJQ)

2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)

   The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
4ec314cc5393cb741433c0bc8fabe95d
mit
[]
false
Simple usage sample code

```python
!pip install tokenizers==0.10.2 transformers==4.6.0

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-tiny")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-tiny", pad_token_id=tokenizer.eos_token_id)

prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if torch.cuda.is_available() == False else torch.cuda.device_count()
print(f"device: {device}, n_gpu: {n_gpu}")

np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(
    prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)

if encoded_prompt.size()[-1] == 0:
    input_ids = None
else:
    input_ids = encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids != None:
    max_len += len(encoded_prompt[0])
    if max_len > 1024:
        max_len = 1024
    print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    text = tokenizer.decode(sample_output, skip_special_tokens=True)
7e92afce87a62f5482514a8e33666c32
apache-2.0
['irish']
false
BERTreach ([beirtreach](https://www.teanglann.ie/en/fgb/beirtreach) means 'oyster bed')

**Model size:** 84M

**Training data:**
* [PARSEME 1.2](https://gitlab.com/parseme/parseme_corpus_ga/-/blob/master/README.md)
* Newscrawl 300k portion of the [Leipzig Corpora](https://wortschatz.uni-leipzig.de/en/download/irish)
* Private news corpus crawled with [Corpus Crawler](https://github.com/google/corpuscrawler) (2125804 sentences, 47419062 tokens, as reckoned by `wc`)

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jimregan/BERTreach", tokenizer="jimregan/BERTreach")
```
9c2ac4cc92c4121e659c20c0dd7647e8
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_logit_kd_pretrain_sst2

This model is a fine-tuned version of [gokuls/distilbert_add_pre-training-complete](https://huggingface.co/gokuls/distilbert_add_pre-training-complete) on the GLUE SST2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.7266
- Accuracy: 0.8085
ebfec7f46e574456467a482bf68e1553
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9277        | 1.0   | 264  | 0.7266          | 0.8085   |
| 0.4581        | 2.0   | 528  | 0.9527          | 0.7844   |
| 0.3282        | 3.0   | 792  | 0.8676          | 0.8142   |
| 0.2532        | 4.0   | 1056 | 0.7918          | 0.8039   |
| 0.1926        | 5.0   | 1320 | 0.8852          | 0.7982   |
| 0.1573        | 6.0   | 1584 | 1.0020          | 0.7947   |
4abd0bc347edbaa246aa33f300359894
apache-2.0
[]
false
Question Answering model for Hindi and Tamil

This model is part of the ensemble that ranked 4/943 in the [Hindi and Tamil Question Answering](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition held by Google Research India at Kaggle.

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("Yuchen/muril-large-cased-hita-qa")
model = AutoModelForQuestionAnswering.from_pretrained("Yuchen/muril-large-cased-hita-qa")
```
292abb5a29157ec10958821a722cb4a7
apache-2.0
['generated_from_trainer']
false
Tagged_One_250v8_NER_Model_3Epochs_AUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one250v8_wikigold_split dataset. It achieves the following results on the evaluation set:
- Loss: 0.3389
- Precision: 0.5352
- Recall: 0.4795
- F1: 0.5058
- Accuracy: 0.8947
e2854803a0e62bda2f7e5b9b0221c0f4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 95   | 0.4305          | 0.3497    | 0.1814 | 0.2389 | 0.8488   |
| No log        | 2.0   | 190  | 0.3469          | 0.4995    | 0.4281 | 0.4611 | 0.8875   |
| No log        | 3.0   | 285  | 0.3389          | 0.5352    | 0.4795 | 0.5058 | 0.8947   |
adfc76c232b572ce1f943488cff1dd80
mit
['generated_from_keras_callback']
false
javilonso/classificationPolEsp1

This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.3728
- Validation Loss: 0.6217
- Epoch: 2
4ac2b83a8f42677b9fbe3305f23ef2ec
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6282     | 0.6017          | 0     |
| 0.5129     | 0.6177          | 1     |
| 0.3728     | 0.6217          | 2     |
9e469785c089f692c932e7d4a81f42fc
mit
['russian', 'fill-mask', 'pretraining', 'embeddings', 'masked-lm', 'tiny', 'feature-extraction', 'sentence-similarity']
false
This is an updated version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny): a small Russian BERT-based encoder with high-quality sentence embeddings. This [post in Russian](https://habr.com/ru/post/669674/) gives more details.

The differences from the previous version include:
- a larger vocabulary: 83828 tokens instead of 29564;
- larger supported sequences: 2048 instead of 512;
- sentence embeddings approximate LaBSE closer than before;
- meaningful segment embeddings (tuned on the NLI task);
- the model is focused only on Russian.

The model should be used as is to produce sentence embeddings (e.g. for KNN classification of short texts) or fine-tuned for a downstream task. Sentence embeddings can be produced as follows:

```python
b61cee9c523542c3cbcfa0fd318f5e6e
mit
['russian', 'fill-mask', 'pretraining', 'embeddings', 'masked-lm', 'tiny', 'feature-extraction', 'sentence-similarity']
false
# pip install transformers sentencepiece
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny2")
model = AutoModel.from_pretrained("cointegrated/rubert-tiny2")
92ea2953163607ddd161d012d61476e8
apache-2.0
['automatic-speech-recognition', 'et']
false
exp_w2v2t_et_hubert_s507

Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
5666fe1a5ca5745f35b19727e5763663
mit
['vision', 'video-classification']
false
X-CLIP (base-sized model)

X-CLIP model (base-sized, patch resolution of 16) trained fully-supervised on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Ni et al. and first released in [this repository](https://github.com/microsoft/VideoX/tree/master/X-CLIP).

This model was trained using 8 frames per video, at a resolution of 224x224.

Disclaimer: The team releasing X-CLIP did not write a model card for this model, so this model card has been written by the Hugging Face team.
9e127671c90f64a17b82f83fcd0947e9
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0661
- Precision: 0.9318
- Recall: 0.9495
- F1: 0.9406
- Accuracy: 0.9854
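The precision, recall, and F1 reported for CoNLL-2003 fine-tunes like this one are typically entity-level scores (seqeval-style strict matching), not per-token accuracy. A simplified, dependency-free sketch of that scoring, treating entities as `(type, start, end)` spans (the example spans are made up):

```python
# Entity-level P/R/F1: a prediction counts as correct only if the entity
# type and the exact span boundaries both match a gold entity.

def entity_f1(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [("PER", 0, 2), ("LOC", 5, 6)]   # hypothetical gold annotations
pred = [("PER", 0, 2), ("ORG", 5, 6)]   # second entity has the wrong type
print(entity_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

This strictness is why entity-level F1 (0.9406 here) is lower than token-level accuracy (0.9854): a span with one wrong boundary or type scores zero credit.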
3c1fd26f3b7f84ba938cdf5ddcbc7d0b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0885        | 1.0   | 1756 | 0.0664          | 0.9189    | 0.9327 | 0.9257 | 0.9820   |
| 0.0346        | 2.0   | 3512 | 0.0650          | 0.9260    | 0.9456 | 0.9357 | 0.9856   |
| 0.017         | 3.0   | 5268 | 0.0661          | 0.9318    | 0.9495 | 0.9406 | 0.9854   |
5b3b3cf3012681d33df9063a393de059
apache-2.0
['translation']
false
opus-mt-fr-fj

* source languages: fr
* target languages: fj
* OPUS readme: [fr-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.eval.txt)
57d932a49c5b0b1d5239f6a8abcb065b
mit
[]
false
transformers-ud-japanese-electra-ginza (sudachitra-wordpiece, mC4 Japanese) - [MIYAGINO](https://www.ntj.jac.go.jp/assets/images/member/pertopics/image/per100510_3.jpg)

This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on approximately 200M Japanese sentences. The input text is tokenized by [SudachiTra](https://github.com/WorksApplications/SudachiTra) with the WordPiece subword tokenizer. See `tokenizer_config.json` for the setting details.
ced3f526036ec0b0cb59b5c5be96c9f1
mit
[]
false
How to use

```python
from transformers import ElectraModel
from sudachitra import ElectraSudachipyTokenizer

model = ElectraModel.from_pretrained("megagonlabs/transformers-ud-japanese-electra-base-discriminator")
tokenizer = ElectraSudachipyTokenizer.from_pretrained("megagonlabs/transformers-ud-japanese-electra-base-discriminator")

model(**tokenizer("まさにオールマイティーな商品だ。", return_tensors="pt")).last_hidden_state
tensor([[[-0.0498, -0.0285,  0.1042,  ...,  0.0062, -0.1253,  0.0338],
         [-0.0686,  0.0071,  0.0087,  ..., -0.0210, -0.1042, -0.0320],
         [-0.0636,  0.1465,  0.0263,  ...,  0.0309, -0.1841,  0.0182],
         ...,
         [-0.1500, -0.0368, -0.0816,  ..., -0.0303, -0.1653,  0.0650],
         [-0.0457,  0.0770, -0.0183,  ..., -0.0108, -0.1903,  0.0694],
         [-0.0981, -0.0387,  0.1009,  ..., -0.0150, -0.0702,  0.0455]]],
       grad_fn=<NativeLayerNormBackward>)
```
51b489c1111c8dfd02de1aa25f17d31b
apache-2.0
['generated_from_trainer']
false
bert-large-cased-finetuned-rte

This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE RTE dataset. It achieves the following results on the evaluation set:
- Loss: 1.5187
- Accuracy: 0.6643
b64e8096c6df898bda334f1927a42f13
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6969        | 1.0   | 623  | 0.7039          | 0.5343   |
| 0.5903        | 2.0   | 1246 | 0.6461          | 0.7184   |
| 0.4557        | 3.0   | 1869 | 1.5187          | 0.6643   |
2c877a5a0d841954078022e21d1b854e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8747        | 1.0   | 1063 | 3.7718          |
| 3.7769        | 2.0   | 2126 | 3.7559          |
| 3.7321        | 3.0   | 3189 | 3.7535          |
4c509b8c505ab8dffe13182a3e5076f9
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9274
- Recall: 0.9372
- F1: 0.9323
- Accuracy: 0.9840
55ee578a9053ecf498fb2b1f69de5aaa
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2378        | 1.0   | 878  | 0.0719          | 0.9107    | 0.9200 | 0.9154 | 0.9801   |
| 0.0509        | 2.0   | 1756 | 0.0620          | 0.9156    | 0.9311 | 0.9233 | 0.9821   |
| 0.0307        | 3.0   | 2634 | 0.0599          | 0.9274    | 0.9372 | 0.9323 | 0.9840   |
e3e49a8079d39ccaa59e2e069d074bcb
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-large-squadshifts-vanilla-nyt-qg`

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: nyt) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
b2e4aca995e9be9cdcc9f755c30be233
cc-by-4.0
['question generation']
false
Overview

- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (nyt)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
a1066a9b2a414e4bb4fc8edb41415de8
cc-by-4.0
['question generation']
false
model prediction

questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/bart-large-squadshifts-vanilla-nyt-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
cb2e6b2c406e45fb3b22f3d409311b07
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-squadshifts-vanilla-nyt-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json)

|            | Score | Type | Dataset |
|:-----------|------:|:-----|:--------|
| BERTScore  | 92.67 | nyt  | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1     | 24.7  | nyt  | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2     | 16.38 | nyt  | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3     | 11.53 | nyt  | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4     | 8.43  | nyt  | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR     | 24.58 | nyt  | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 64.38 | nyt  | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L    | 24.57 | nyt  | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
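Bleu_1 in the table above is, at its core, clipped (modified) unigram precision, combined at corpus level with a brevity penalty. A minimal sketch of just the clipped unigram-precision part, for a single sentence and without the brevity penalty (the example strings are made up):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    # Count each candidate unigram at most as often as it appears in the reference
    # ("clipping"), then divide by the candidate length.
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    clipped = sum(min(n, ref[w]) for w, n in cand.items())
    return clipped / sum(cand.values())

# "the" appears twice in the candidate but only once in the reference,
# so only one occurrence is credited: (1 + 1) / 3.
print(unigram_precision("the the cat", "the cat sat"))
```

Bleu_2 through Bleu_4 extend the same clipped-precision idea to bigrams, trigrams, and 4-grams, which is why the scores fall as n grows.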
3a0b5060553849930be7b9612bcc390e
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: nyt
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-large
- max_length: 512
- max_length_output: 32
- epoch: 5
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-squadshifts-vanilla-nyt-qg/raw/main/trainer_config.json).
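The `label_smoothing: 0.15` setting replaces the one-hot training target with a mixture of the gold label and a uniform distribution over the vocabulary. A dependency-free sketch of one common formulation (the one used by e.g. PyTorch's cross-entropy), for a single token position with made-up logits:

```python
import math

def smoothed_cross_entropy(logits, target, eps, vocab_size):
    # log-softmax over the logits
    m = max(logits)
    z = sum(math.exp(l - m) for l in logits)
    log_probs = [(l - m) - math.log(z) for l in logits]
    # Smoothed target: eps spread uniformly over the vocab, (1 - eps) on the gold label.
    loss = 0.0
    for i, lp in enumerate(log_probs):
        q = eps / vocab_size + ((1 - eps) if i == target else 0.0)
        loss -= q * lp
    return loss

plain = smoothed_cross_entropy([2.0, 1.0, 0.0], target=0, eps=0.0, vocab_size=3)
smoothed = smoothed_cross_entropy([2.0, 1.0, 0.0], target=0, eps=0.15, vocab_size=3)
print(round(plain, 4), round(smoothed, 4))
```

With eps = 0 this reduces to ordinary cross-entropy; with eps > 0 the loss penalizes over-confident distributions, which tends to regularize generation models like this one.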
d39e65f890165de1eef1498cc30daf81
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-base-uk-fi

Neural machine translation model for translating from Ukrainian (uk) to Finnish (fi).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)

```
@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}
```
a8da2249dc2a36e9ebd64565e3802c6b
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model info

* Release: 2022-03-17
* source language(s): ukr
* target language(s): fin
* model: transformer-align
* data: opusTCv20210807+pft+pbt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pft+pbt_transformer-align_2022-03-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-fin/opusTCv20210807+pft+pbt_transformer-align_2022-03-17.zip)
* more information on released models: [OPUS-MT ukr-fin README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-fin/README.md)
8fe00fc8b1960ed1cd3a75e4d6286974
cc-by-4.0
['translation', 'opus-mt-tc']
false
Usage

A short example code:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "Африка є колискою людства.",
    "Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять."
]

model_name = "pytorch-models/opus-mt-tc-base-uk-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
300bd7447c847c336da3dcee2b44989c