| Column | Type | Length |
| --- | --- | --- |
| license | string | 2–30 |
| tags | string | 2–513 |
| is_nc | bool (1 class) | 1 |
| readme_section | string | 201–597k |
| hash | string | 32 |
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.0962 | 1.0 | 1012 | 0.7528 | 0.3793 | 0.6109 | 0.4411 | 0.4411 |
| 0.7022 | 2.0 | 2024 | 0.6763 | 0.3992 | 0.6557 | 0.4799 | 0.4799 |
| 0.6136 | 3.0 | 3036 | 0.6751 | 0.3995 | 0.6597 | 0.4824 | 0.4824 |
| 0.5444 | 4.0 | 4048 | 0.6799 | 0.3891 | 0.6817 | 0.4854 | 0.4854 |
| 0.4846 | 5.0 | 5060 | 0.7371 | 0.4030 | 0.6701 | 0.4906 | 0.4906 |
| 0.4379 | 6.0 | 6072 | 0.7520 | 0.3956 | 0.6788 | 0.4887 | 0.4887 |
| 0.404 | 7.0 | 7084 | 0.7788 | 0.3801 | 0.6854 | 0.4800 | 0.4800 |
8d7c201cfb9bde9eb6a088e784a9c254
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2t_es_no-pretraining_s953

A randomly initialized wav2vec2 model fine-tuned for speech recognition on the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16 kHz. This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
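The 16 kHz requirement means audio recorded at another rate must be resampled before inference. A minimal pure-Python sketch of linear-interpolation resampling, just to show the idea; for real audio you should prefer a filtered resampler such as `librosa.resample` or `torchaudio.transforms.Resample` to avoid aliasing:

```python
def resample_linear(audio, sr_in, sr_out=16_000):
    """Toy linear-interpolation resampler (illustration only)."""
    n_out = round(len(audio) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        pos = i * sr_in / sr_out              # fractional index into the input
        lo = min(int(pos), len(audio) - 1)
        hi = min(lo + 1, len(audio) - 1)
        frac = pos - lo
        out.append(audio[lo] * (1 - frac) + audio[hi] * frac)
    return out

# one second of audio at 8 kHz becomes one second at 16 kHz
upsampled = resample_linear([0.0] * 8000, sr_in=8000)
print(len(upsampled))  # 16000
```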
45b36f8dac9d9645b90b6059a73a63b4
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-finetuned-coscan-sex

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the coscan-speech dataset. It achieves the following results on the evaluation set:
- Loss: 0.0229
- Accuracy: 0.9965
3cabbe11dae354c24e34536a51abb52d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0034 | 1.0 | 6644 | 0.0229 | 0.9965 |
1116d4dc3c86c047aaca11007b2764bf
mit
['generated_from_trainer']
false
edos-2023-baseline-roberta-base-label_sexist

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4729
- F1: 0.8048
7dbe97cf2d2326b72f5d7b5d17abc50c
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4114 | 1.14 | 400 | 0.3516 | 0.7954 |
| 0.2725 | 2.29 | 800 | 0.4086 | 0.7925 |
| 0.2134 | 3.43 | 1200 | 0.4404 | 0.8062 |
| 0.1632 | 4.57 | 1600 | 0.4729 | 0.8048 |
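Validation F1 peaks at step 1200 (0.8062) and dips slightly by the final checkpoint, so when reusing logs like these it can help to pick the best checkpoint rather than the last one. A small sketch, with the `(step, F1)` pairs copied from the table above:

```python
# (step, validation F1) pairs from the training log above
log = [(400, 0.7954), (800, 0.7925), (1200, 0.8062), (1600, 0.8048)]

# select the checkpoint with the highest validation F1
best_step, best_f1 = max(log, key=lambda row: row[1])
print(best_step, best_f1)  # 1200 0.8062
```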
39898724c7a2d7dafedfc28c5a02ac2c
apache-2.0
['generated_from_trainer']
false
beit-base-patch16-224-pt22k-ft22k-finetunedt

This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0147
- Accuracy: 1.0
7daff96ec60a3d2e929334c86b48cedb
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4714 | 1.0 | 25 | 0.0147 | 1.0 |
| 0.0089 | 2.0 | 50 | 0.0008 | 1.0 |
| 0.0101 | 3.0 | 75 | 0.0003 | 1.0 |
| 0.0021 | 4.0 | 100 | 0.0002 | 1.0 |
| 0.0028 | 5.0 | 125 | 0.0001 | 1.0 |
| 0.0016 | 6.0 | 150 | 0.0001 | 1.0 |
| 0.0044 | 7.0 | 175 | 0.0001 | 1.0 |
| 0.0007 | 8.0 | 200 | 0.0001 | 1.0 |
| 0.0013 | 9.0 | 225 | 0.0001 | 1.0 |
| 0.0004 | 10.0 | 250 | 0.0001 | 1.0 |
b6809ce8bb0a69994aeb1e3e994a9945
apache-2.0
[]
false
Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base")
model = T5ForConditionalGeneration.from_pretrained("Langboat/mengzi-t5-base")
```
b4939d550cadb60d8f7b07621d0f17d0
apache-2.0
['generated_from_trainer']
false
favs-filtersort-multilabel-classification-bert-base-cased

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the filter_sort dataset. It achieves the following results on the evaluation set:
- Loss: 0.3066
- F1: 0.7429
- Roc Auc: 0.8142
- Accuracy: 0.2
0daa3de48b2e427bb8f416a486061a07
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.7601 | 1.0 | 12 | 0.6966 | 0.2564 | 0.4518 | 0.0 |
| 0.6757 | 2.0 | 24 | 0.5629 | 0.6667 | 0.7785 | 0.0 |
| 0.5796 | 3.0 | 36 | 0.4652 | 0.6286 | 0.7477 | 0.0 |
| 0.5026 | 4.0 | 48 | 0.4161 | 0.6479 | 0.7605 | 0.0 |
| 0.4282 | 5.0 | 60 | 0.3830 | 0.6849 | 0.7862 | 0.0 |
| 0.4085 | 6.0 | 72 | 0.3658 | 0.7273 | 0.7962 | 0.0 |
| 0.3847 | 7.0 | 84 | 0.3538 | 0.7353 | 0.8052 | 0.0 |
| 0.3829 | 8.0 | 96 | 0.3457 | 0.6761 | 0.7772 | 0.0 |
| 0.3758 | 9.0 | 108 | 0.3409 | 0.6857 | 0.7810 | 0.0 |
| 0.3487 | 10.0 | 120 | 0.3327 | 0.7143 | 0.7976 | 0.0 |
| 0.3421 | 11.0 | 132 | 0.3268 | 0.6866 | 0.7758 | 0.0 |
| 0.3351 | 12.0 | 144 | 0.3183 | 0.7059 | 0.7886 | 0.0 |
| 0.3245 | 13.0 | 156 | 0.3149 | 0.7246 | 0.8014 | 0.0 |
| 0.3191 | 14.0 | 168 | 0.3087 | 0.7246 | 0.8014 | 0.1 |
| 0.3083 | 15.0 | 180 | 0.3066 | 0.7429 | 0.8142 | 0.2 |
| 0.3061 | 16.0 | 192 | 0.3062 | 0.7429 | 0.8142 | 0.2 |
| 0.2935 | 17.0 | 204 | 0.3017 | 0.7429 | 0.8142 | 0.2 |
| 0.2888 | 18.0 | 216 | 0.3009 | 0.7429 | 0.8142 | 0.2 |
| 0.297 | 19.0 | 228 | 0.3022 | 0.7429 | 0.8142 | 0.2 |
| 0.2868 | 20.0 | 240 | 0.3014 | 0.7429 | 0.8142 | 0.2 |
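The Accuracy column can sit at 0.0 even while F1 climbs because, in multilabel evaluation, subset accuracy credits a sample only when every one of its labels is predicted exactly, whereas F1 gives partial credit per label. A minimal pure-Python illustration (the label vectors below are invented, not taken from the filter_sort data):

```python
def subset_accuracy(y_true, y_pred):
    # a sample counts only if *all* of its labels match exactly
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def micro_f1(y_true, y_pred):
    # pool true positives / false positives / false negatives over every label slot
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        for ti, pi in zip(t, p):
            tp += ti and pi
            fp += pi and not ti
            fn += ti and not pi
    return 2 * tp / (2 * tp + fp + fn)

y_true = [(1, 1, 0), (0, 1, 1)]
y_pred = [(1, 0, 0), (0, 1, 0)]  # each sample misses one positive label
print(subset_accuracy(y_true, y_pred))       # 0.0
print(round(micro_f1(y_true, y_pred), 4))    # 0.6667
```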
e517e5d47e4a40a98cb6487e9a5c25a4
cc-by-sa-4.0
['spacy', 'token-classification']
false
UD v2.5 benchmarking pipeline for UD_English-EWT

| Feature | Description |
| --- | --- |
| **Name** | `en_udv25_englishewt_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
61281f9b183a4cfc2c888746aec5558e
cc-by-sa-4.0
['spacy', 'token-classification']
false
Label Scheme <details> <summary>View label scheme (1760 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `GW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`morphologizer`** | `Number=Sing\|POS=PROPN`, `POS=PUNCT`, `Degree=Pos\|POS=ADJ`, `Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Definite=Def\|POS=DET\|PronType=Art`, `Number=Sing\|POS=NOUN`, `POS=ADP`, `Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|POS=DET\|PronType=Art`, `POS=AUX\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `POS=VERB\|VerbForm=Ger`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `POS=PART`, `POS=VERB\|VerbForm=Inf`, `POS=SCONJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `NumType=Card\|POS=NUM`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|VerbForm=Ger`, `POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=PROPN`, `Degree=Pos\|NumType=Ord\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=CCONJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, 
`Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PRON`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=DET`, `Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|POS=ADV`, `Degree=Cmp\|POS=ADV`, `Number=Sing\|POS=PRON`, `Degree=Cmp\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=ADV\|PronType=Dem`, `POS=ADV\|PronType=Int`, `Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|POS=VERB\|VerbForm=Fin`, `Degree=Sup\|POS=ADJ`, `POS=PRON\|PronType=Int`, `NumType=Mult\|POS=ADV`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `POS=DET\|PronType=Int`, `POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Number=Plur\|POS=DET\|PronType=Dem`, `POS=PRON\|Poss=Yes\|PronType=Int`, `Case=Acc\|POS=PRON\|Person=2\|PronType=Prs`, `POS=X`, `POS=PRON\|PronType=Dem`, `Number=Sing\|POS=PROPN\|Typo=Yes`, `POS=ADV\|PronType=Rel`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Degree=Sup\|POS=ADV`, `POS=INTJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=X`, `POS=SYM`, `Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, 
`Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Imp\|POS=AUX\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|POS=CCONJ`, `POS=SCONJ\|Typo=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=SYM`, `POS=DET\|Typo=Yes`, `Degree=Pos\|POS=PROPN`, `Abbr=Yes\|POS=ADP`, `POS=ADP\|Typo=Yes`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs\|Typo=Yes`, `Abbr=Yes\|POS=VERB\|Tense=Pres\|VerbForm=Part`, `Abbr=Yes\|POS=PART`, `POS=AUX\|Typo=Yes\|VerbForm=Fin`, `Degree=Pos\|POS=ADJ\|Typo=Yes`, `POS=VERB\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=NOUN\|Typo=Yes`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Abbr=Yes\|Number=Sing\|POS=NOUN`, `Degree=Pos\|POS=NOUN`, `POS=CCONJ\|Typo=Yes`, `Number=Sing\|POS=X`, `Abbr=Yes\|POS=SCONJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|POS=AUX\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `POS=ADV\|Typo=Yes`, `Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=NUM`, `POS=PRON\|Poss=Yes\|PronType=Rel`, `Abbr=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|POS=INTJ`, `Abbr=Yes\|POS=VERB\|VerbForm=Inf`, `Abbr=Yes\|Number=Sing\|POS=PRON`, 
`Abbr=Yes\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|POS=PRON\|PronType=Int`, `Abbr=Yes\|POS=AUX\|VerbForm=Fin`, `Abbr=Yes\|POS=ADV`, `Abbr=Yes\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `POS=ADJ`, `Number=Plur\|POS=NOUN\|Typo=Yes`, `POS=DET\|PronType=Rel\|Typo=Yes`, `POS=PART\|Typo=Yes`, `Abbr=Yes\|POS=DET`, `POS=DET\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Typo=Yes`, `Degree=Pos\|NumType=Ord\|POS=ADV`, `POS=NOUN`, `Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Typo=Yes`, `POS=PRON\|Typo=Yes`, `Number=Plur\|POS=VERB`, `POS=VERB\|Typo=Yes\|VerbForm=Inf`, `Mood=Ind\|POS=VERB\|Tense=Past\|Typo=Yes\|VerbForm=Fin`, `Mood=Imp\|POS=AUX\|VerbForm=Inf`, `Abbr=Yes\|Mood=Imp\|POS=VERB\|VerbForm=Fin`, `Abbr=Yes\|Case=Nom\|POS=PRON\|Person=2\|PronType=Prs`, `POS=VERB\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Mood=Ind\|POS=AUX\|Tense=Past\|Typo=Yes\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `POS=VERB\|Typo=Yes\|VerbForm=Ger`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Abbr=Yes\|POS=PRON`, `Abbr=Yes\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Typo=Yes`, `Abbr=Yes\|Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `det:predet`, `discourse`, `expl`, `fixed`, `flat`, `flat:foreign`, `goeswith`, `iobj`, `list`, `mark`, `nmod`, `nmod:npmod`, `nmod:poss`, `nmod:tmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:npmod`, `obl:tmod`, `orphan`, `parataxis`, `punct`, `reparandum`, `vocative`, `xcomp` | | 
**`experimental_edit_tree_lemmatizer`** | `0`, `2`, `4`, `6`, `8`, `10`, `12`, `13`, `15`, `17`, `19`, `21`, `23`, `26`, `28`, `29`, `30`, `32`, `34`, `36`, `39`, `42`, `43`, `45`, `47`, `49`, `51`, `53`, `55`, `57`, `59`, `61`, `62`, `64`, `67`, `69`, `71`, `73`, `75`, `77`, `79`, `81`, `83`, `85`, `87`, `1`, `89`, `90`, `92`, `94`, `95`, `97`, `99`, `101`, `105`, `106`, `108`, `110`, `111`, `112`, `113`, `115`, `117`, `119`, `121`, `122`, `124`, `125`, `126`, `127`, `128`, `129`, `130`, `132`, `133`, `136`, `137`, `138`, `139`, `142`, `143`, `145`, `150`, `153`, `156`, `157`, `159`, `162`, `163`, `164`, `167`, `169`, `171`, `174`, `176`, `177`, `179`, `182`, `184`, `187`, `189`, `191`, `193`, `194`, `197`, `198`, `201`, `203`, `204`, `208`, `210`, `211`, `213`, `214`, `215`, `217`, `220`, `221`, `224`, `225`, `227`, `229`, `231`, `233`, `235`, `236`, `239`, `241`, `242`, `244`, `246`, `247`, `248`, `249`, `250`, `251`, `252`, `254`, `256`, `258`, `259`, `261`, `263`, `264`, `265`, `266`, `269`, `270`, `272`, `273`, `274`, `276`, `277`, `278`, `281`, `283`, `72`, `285`, `287`, `288`, `291`, `292`, `293`, `296`, `297`, `298`, `299`, `300`, `301`, `302`, `303`, `304`, `305`, `306`, `307`, `308`, `309`, `310`, `311`, `315`, `316`, `317`, `318`, `319`, `320`, `322`, `88`, `324`, `327`, `328`, `332`, `336`, `337`, `338`, `340`, `341`, `342`, `343`, `344`, `347`, `349`, `350`, `351`, `352`, `353`, `354`, `356`, `357`, `358`, `360`, `361`, `362`, `363`, `364`, `365`, `366`, `367`, `369`, `373`, `375`, `376`, `377`, `378`, `379`, `144`, `381`, `383`, `384`, `386`, `387`, `389`, `390`, `393`, `394`, `396`, `397`, `398`, `399`, `402`, `405`, `407`, `408`, `410`, `411`, `412`, `413`, `414`, `416`, `418`, `419`, `421`, `422`, `423`, `424`, `426`, `428`, `429`, `430`, `432`, `434`, `436`, `437`, `438`, `441`, `442`, `443`, `444`, `445`, `446`, `447`, `260`, `448`, `452`, `453`, `454`, `455`, `456`, `457`, `458`, `460`, `461`, `462`, `463`, `464`, `465`, `466`, `467`, `409`, 
`468`, `469`, `470`, `471`, `472`, `473`, `476`, `477`, `481`, `484`, `486`, `487`, `488`, `491`, `492`, `493`, `494`, `495`, `496`, `497`, `498`, `499`, `500`, `503`, `504`, `506`, `507`, `508`, `509`, `511`, `512`, `513`, `514`, `515`, `516`, `517`, `518`, `519`, `107`, `520`, `521`, `522`, `523`, `524`, `525`, `526`, `527`, `528`, `529`, `531`, `533`, `534`, `537`, `538`, `542`, `543`, `544`, `545`, `546`, `547`, `548`, `549`, `550`, `553`, `554`, `557`, `558`, `560`, `561`, `564`, `565`, `566`, `567`, `568`, `569`, `570`, `571`, `572`, `573`, `574`, `575`, `576`, `577`, `578`, `579`, `580`, `581`, `582`, `583`, `584`, `586`, `587`, `588`, `589`, `590`, `591`, `592`, `594`, `595`, `76`, `596`, `597`, `598`, `600`, `601`, `602`, `149`, `603`, `604`, `605`, `606`, `607`, `608`, `609`, `490`, `610`, `611`, `96`, `255`, `614`, `617`, `619`, `620`, `621`, `622`, `623`, `624`, `626`, `627`, `628`, `630`, `632`, `633`, `635`, `638`, `639`, `640`, `641`, `644`, `647`, `650`, `654`, `657`, `659`, `173`, `661`, `662`, `663`, `664`, `668`, `669`, `670`, `671`, `673`, `676`, `677`, `678`, `680`, `682`, `158`, `91`, `683`, `684`, `685`, `686`, `687`, `688`, `689`, `690`, `691`, `692`, `693`, `695`, `697`, `699`, `700`, `701`, `183`, `702`, `703`, `704`, `706`, `707`, `709`, `711`, `713`, `485`, `714`, `716`, `717`, `718`, `719`, `720`, `721`, `722`, `723`, `724`, `726`, `727`, `728`, `729`, `730`, `731`, `732`, `733`, `734`, `735`, `736`, `737`, `738`, `739`, `741`, `742`, `744`, `745`, `746`, `748`, `749`, `752`, `753`, `754`, `755`, `756`, `757`, `759`, `760`, `762`, `763`, `764`, `765`, `768`, `769`, `772`, `774`, `775`, `776`, `777`, `781`, `782`, `783`, `784`, `785`, `786`, `787`, `788`, `789`, `78`, `791`, `794`, `795`, `796`, `798`, `800`, `801`, `802`, `803`, `804`, `805`, `806`, `807`, `808`, `809`, `810`, `811`, `812`, `813`, `814`, `815`, `816`, `817`, `818`, `819`, `820`, `822`, `823`, `824`, `825`, `826`, `827`, `828`, `829`, `830`, `131`, `831`, `631`, `832`, 
`833`, `834`, `838`, `839`, `841`, `842`, `843`, `844`, `845`, `846`, `847`, `849`, `792`, `850`, `851`, `852`, `853`, `856`, `857`, `858`, `859`, `860`, `861`, `862`, `864`, `865`, `715`, `866`, `867`, `868`, `869`, `870`, `871`, `872`, `873`, `877`, `878`, `879`, `881`, `882`, `883`, `885`, `886`, `887`, `888`, `848`, `889`, `890`, `891`, `892`, `893`, `894`, `895`, `896`, `900`, `901`, `902`, `903`, `905`, `907`, `908`, `911`, `912`, `913`, `914`, `918`, `919`, `920`, `923`, `924`, `925`, `926`, `927`, `928`, `929`, `930`, `931`, `932`, `933`, `52`, `934`, `935`, `937`, `939`, `941`, `943`, `944`, `945`, `946`, `947`, `950`, `951`, `952`, `954`, `955`, `956`, `957`, `961`, `962`, `963`, `964`, `965`, `966`, `967`, `968`, `969`, `970`, `971`, `972`, `973`, `974`, `975`, `976`, `977`, `374`, `978`, `979`, `980`, `982`, `983`, `986`, `987`, `988`, `989`, `990`, `991`, `992`, `993`, `994`, `995`, `996`, `998`, `1000`, `1001`, `1002`, `1003`, `1004`, `1005`, `1006`, `1007`, `1008`, `1009`, `1012`, `1016`, `1020`, `1021`, `1023`, `1024`, `1025`, `1031`, `1032`, `1033`, `1034`, `1035`, `1036`, `1037`, `1038`, `1039`, `1041`, `1042`, `1043`, `1044`, `1045`, `1046`, `1047`, `1048`, `1049`, `1050`, `1051`, `1052`, `1053`, `1054`, `1055`, `1056`, `1057`, `1058`, `1059`, `1060`, `1061`, `1062`, `1063`, `1064`, `1065`, `642`, `1066`, `1067`, `1068`, `1069`, `1071`, `1072`, `1073`, `1074`, `1079`, `1080`, `1081`, `1082`, `1083`, `1085`, `1087`, `1088`, `1089`, `1090`, `559`, `1092`, `1093`, `1094`, `1096`, `1097`, `1098`, `1101`, `1102`, `1103`, `1104`, `1105`, `1106`, `1107`, `1109`, `1110`, `1112`, `1113`, `1114`, `1115`, `1116`, `1117`, `1118`, `1119`, `1122`, `1123`, `1124`, `1126`, `1127`, `1128`, `1129`, `1130`, `1132`, `1134`, `1137`, `1138`, `1140`, `1141`, `1142`, `1143`, `1144`, `1145`, `1146`, `1147`, `1150`, `1152`, `1161`, `1162`, `1163`, `1164`, `1165`, `1169`, `1170`, `1172`, `1173`, `1174`, `1175`, `1176`, `1177`, `1178`, `1181`, `1182`, `1183`, `1186`, 
`1187`, `1188`, `1190`, `1191`, `1192`, `1111`, `1193`, `1194`, `1195`, `1196`, `1198`, `1200`, `1201`, `1202`, `1203`, `1204`, `1208`, `1211`, `1213`, `1215`, `1216`, `1217`, `1218`, `1219`, `1221`, `1222`, `1223`, `1224`, `1225`, `1226`, `1227`, `1230`, `1231`, `1232`, `1234`, `1235`, `1249`, `1250`, `1252`, `1253`, `1254`, `1255`, `1257`, `1258`, `1260`, `1262`, `1263`, `1264`, `1265`, `1266`, `1267`, `1269`, `1272`, `7`, `1274`, `1276`, `1277`, `1278`, `1280`, `1282`, `1283`, `1284`, `1285`, `1286`, `1287`, `1289`, `1290`, `1291`, `1293`, `1295`, `1298`, `1302`, `1303`, `1311`, `1312`, `1313`, `1314`, `1316`, `1318`, `1317`, `1320`, `1322`, `1323`, `192`, `1324`, `1326`, `1327`, `234`, `1329`, `1330`, `1331`, `1332`, `747`, `1333`, `1334`, `1335`, `1336`, `1337`, `1339`, `1340`, `1341`, `1342`, `1344`, `1346`, `1350`, `1351`, `1352`, `1355`, `1357`, `1358`, `1360`, `1361`, `1362`, `1363`, `1364`, `1365`, `1367`, `1369`, `1370`, `1371`, `1372`, `1373`, `1374`, `1375`, `1376`, `1378`, `1380`, `1382`, `1384`, `1385`, `1386`, `1389`, `1390`, `1391`, `1392`, `1393`, `1394`, `1395`, `1396`, `1397`, `1399`, `1401`, `1402`, `1403`, `1404`, `1405`, `1406`, `1407`, `1408`, `1409`, `1410`, `1411`, `1412`, `1413`, `1414`, `1416`, `1418`, `1419`, `1420`, `1421`, `1422`, `188`, `1423`, `1424`, `1425`, `1426`, `1428`, `1429`, `1430`, `1431`, `1432`, `1433`, `1434`, `1435`, `148`, `1436`, `1439`, `1440`, `1441`, `1442`, `1443`, `1444`, `1445`, `1446`, `1447`, `1448`, `1449`, `1450`, `1451`, `1452`, `1453`, `1454`, `1455`, `1456`, `1457`, `1458`, `1459`, `1460`, `1461`, `1462`, `1463`, `1464`, `1466`, `1467`, `1468`, `1469`, `1470`, `1471`, `1472`, `1474`, `1475`, `1478`, `1481`, `1484`, `1486`, `1488`, `1489`, `1473`, `1490`, `1492`, `1493`, `1494`, `1495`, `1496`, `1497`, `1498`, `1499`, `1500`, `1501`, `1502`, `1503`, `1504`, `1505`, `44`, `1506`, `1511`, `1513`, `1515`, `1517`, `1518`, `1522`, `1523`, `1525`, `1528`, `1530`, `1531`, `1532`, `1534`, `1536`, `1537`, `1538`, 
`1539`, `1540`, `1541`, `1543`, `1546`, `1547`, `1548`, `1549`, `1551`, `1552`, `1555`, `1556`, `1557`, `1558`, `1559`, `1560`, `1561`, `1562`, `1563`, `1564`, `1565`, `1566`, `1567`, `1568`, `1569`, `1570`, `1571`, `1572`, `1573`, `1574`, `1575`, `1576`, `1577`, `1578`, `1579`, `1580`, `1581`, `1582`, `1583`, `1584`, `1585`, `1586`, `1588`, `1590`, `1591`, `1592`, `1594`, `1597`, `1598`, `1599`, `1601`, `168`, `1602`, `1603`, `1605`, `1607`, `1608`, `1611`, `1612`, `1613`, `1614`, `1615`, `1616`, `1617`, `1618`, `1619`, `1620`, `1621`, `1622`, `1623`, `1624`, `1625`, `1626`, `1627`, `1628`, `1629`, `1630`, `1632`, `1554`, `1633`, `1634`, `1635`, `1636`, `1637`, `1638`, `1639`, `1642`, `1647`, `1648`, `1649`, `1651`, `1653`, `1654`, `1655`, `1657`, `1658`, `1659`, `1660`, `1661`, `1662`, `1663`, `1664`, `1665`, `1666`, `1667`, `1668`, `1669`, `1670`, `1671`, `1672`, `1673`, `1674`, `1675`, `1676`, `1677`, `1678`, `1679`, `1680`, `1681`, `1682`, `1683`, `1684`, `1685`, `1686`, `1687`, `1688`, `1689`, `1690`, `1691`, `1692`, `1693`, `1694`, `1695`, `1696`, `1697`, `1698`, `1699`, `1700`, `1701`, `1702`, `1704`, `1705`, `1706`, `1707`, `1708`, `1709`, `1710`, `1711`, `1712`, `1713`, `1714`, `1715`, `1716`, `1717`, `1718`, `1719`, `1720`, `1721`, `1722`, `1723`, `1724`, `1725`, `1726`, `1727`, `1730`, `1732`, `1734`, `1735`, `1736`, `1737`, `1738`, `1740`, `1742`, `1743`, `1744`, `1745`, `1746`, `1747`, `1748`, `1749`, `1750`, `1751`, `1754`, `1755`, `1756`, `1758`, `1760`, `1761`, `1762`, `1763`, `1766`, `1767`, `1768`, `1769`, `1770`, `1772`, `1775`, `1778`, `1779`, `1784`, `1787`, `1788`, `1789`, `1790`, `1791`, `1793`, `1795`, `1796`, `1798`, `1800`, `1804`, `1805`, `1806`, `1807`, `1808`, `1809`, `1810`, `1811`, `1812`, `1813`, `1814`, `1815`, `1816`, `1818`, `1821`, `1822`, `1823`, `1824`, `1825`, `1826`, `1827`, `1828`, `1831`, `1832`, `1833`, `1834`, `1835`, `1836`, `1837`, `1838`, `1839`, `1840`, `1841`, `1842`, `1843`, `1844`, `1846`, `1847`, `1848`, `1849`, 
`1850`, `1851`, `1852`, `1853`, `1855`, `1857`, `1858`, `1859`, `1860`, `1861`, `1862`, `1863`, `1866`, `1867`, `1868`, `1869`, `1872`, `1873`, `1876`, `1877`, `1878`, `1879`, `1880`, `1881`, `1883`, `1884`, `1886`, `1887`, `1888`, `1893`, `1752`, `1896`, `1897`, `1899`, `1900`, `1901`, `1906`, `1907`, `1908`, `1910`, `1911`, `1912`, `1913`, `1916`, `1917`, `1918`, `1919`, `1920`, `1922`, `1923`, `1925`, `1926`, `1927`, `1928`, `1929`, `1930`, `1931`, `1932`, `1933`, `1120`, `1934`, `1935`, `1936`, `1937`, `1938`, `1939`, `1940`, `1941`, `1942`, `1943`, `1944`, `1945`, `1946`, `1947`, `1948`, `1949`, `1950`, `1951`, `1952`, `1953`, `1954`, `1955`, `1956`, `1957`, `1958`, `1959`, `1961`, `1962`, `1963`, `1964`, `1965`, `1966`, `1967`, `1968`, `1969`, `1970`, `1971`, `1972`, `1973`, `1974`, `1975`, `1976`, `1977`, `1978`, `1979`, `1982`, `1985`, `1987`, `1988`, `1989`, `1990`, `1992`, `1994`, `1995`, `1996`, `1997`, `1998`, `1999`, `2000`, `2003`, `2006`, `152`, `2007`, `2009`, `2010`, `2011`, `2012`, `2013`, `2014`, `2015`, `2016`, `2017`, `2019`, `2020`, `2021`, `2022`, `2023`, `2024`, `2025`, `2026`, `2029`, `2030`, `2031`, `2032`, `2033`, `2034`, `2035`, `2037`, `2038`, `2039`, `2040`, `2041`, `2042`, `2043`, `2044`, `2045`, `2047` | </details>
9d9941784030528181fae6bf872bd0d3
cc-by-sa-4.0
['spacy', 'token-classification']
false
Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.15 |
| `TOKEN_P` | 99.18 |
| `TOKEN_R` | 99.11 |
| `TOKEN_ACC` | 99.83 |
| `SENTS_F` | 90.62 |
| `SENTS_P` | 90.99 |
| `SENTS_R` | 90.26 |
| `TAG_ACC` | 96.36 |
| `POS_ACC` | 96.94 |
| `MORPH_ACC` | 96.91 |
| `DEP_UAS` | 91.90 |
| `DEP_LAS` | 89.42 |
| `LEMMA_ACC` | 97.36 |
e00955f4287807abac2e5366ed7cbd78
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/domain_transfer_general-massive_cooking-roberta-large-v1-5-4

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
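The contrastive fine-tuning step works by expanding the few labeled examples into many sentence pairs: same-label pairs act as positives and different-label pairs as negatives. A toy sketch of that pair construction (the texts and labels are invented, and this is not the `setfit` API itself):

```python
from itertools import combinations

def contrastive_pairs(examples):
    """examples: list of (text, label). Pair label 1 = same class, 0 = different."""
    return [(ta, tb, int(la == lb))
            for (ta, la), (tb, lb) in combinations(examples, 2)]

few_shot = [
    ("how long should I boil pasta", "cooking"),
    ("what temperature roasts a chicken", "cooking"),
    ("freeze my credit card", "banking"),
]
pairs = contrastive_pairs(few_shot)
print(len(pairs))                # 3 pairs from 3 examples
print(sum(p[2] for p in pairs))  # 1 positive pair
```

Quadratic pair growth is what lets SetFit squeeze a usable contrastive signal out of only a handful of labeled examples per class.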
e84908186ddae762069e44baa4f51d6c
mit
['generated_from_trainer']
false
training

This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the cynthiachan/FeedRef_10pct dataset. It achieves the following results on the evaluation set:
- Loss: 0.0810
- Attackid Precision: 1.0
- Attackid Recall: 1.0
- Attackid F1: 1.0
- Attackid Number: 6
- Cve Precision: 1.0
- Cve Recall: 1.0
- Cve F1: 1.0
- Cve Number: 11
- Defenderthreat Precision: 0.0
- Defenderthreat Recall: 0.0
- Defenderthreat F1: 0.0
- Defenderthreat Number: 2
- Domain Precision: 1.0
- Domain Recall: 0.9565
- Domain F1: 0.9778
- Domain Number: 23
- Email Precision: 1.0
- Email Recall: 1.0
- Email F1: 1.0
- Email Number: 3
- Filepath Precision: 0.8841
- Filepath Recall: 0.8788
- Filepath F1: 0.8815
- Filepath Number: 165
- Hostname Precision: 1.0
- Hostname Recall: 1.0
- Hostname F1: 1.0
- Hostname Number: 12
- Ipv4 Precision: 1.0
- Ipv4 Recall: 1.0
- Ipv4 F1: 1.0
- Ipv4 Number: 12
- Md5 Precision: 0.8333
- Md5 Recall: 0.9615
- Md5 F1: 0.8929
- Md5 Number: 52
- Sha1 Precision: 0.6667
- Sha1 Recall: 0.8571
- Sha1 F1: 0.75
- Sha1 Number: 7
- Sha256 Precision: 0.9565
- Sha256 Recall: 1.0
- Sha256 F1: 0.9778
- Sha256 Number: 44
- Uri Precision: 0.0
- Uri Recall: 0.0
- Uri F1: 0.0
- Uri Number: 1
- Overall Precision: 0.9014
- Overall Recall: 0.9201
- Overall F1: 0.9107
- Overall Accuracy: 0.9851
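The overall F1 above is the harmonic mean of the reported overall precision (0.9014) and recall (0.9201), which is easy to sanity-check; small mismatches can arise because the inputs are themselves rounded:

```python
def f1(precision, recall):
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

overall = f1(0.9014, 0.9201)
print(round(overall, 4))  # 0.9107, matching the reported overall F1
```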
e1f688c67636b739ca3b3b295edb33fd
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Attackid Precision | Attackid Recall | Attackid F1 | Attackid Number | Cve Precision | Cve Recall | Cve F1 | Cve Number | Defenderthreat Precision | Defenderthreat Recall | Defenderthreat F1 | Defenderthreat Number | Domain Precision | Domain Recall | Domain F1 | Domain Number | Email Precision | Email Recall | Email F1 | Email Number | Filepath Precision | Filepath Recall | Filepath F1 | Filepath Number | Hostname Precision | Hostname Recall | Hostname F1 | Hostname Number | Ipv4 Precision | Ipv4 Recall | Ipv4 F1 | Ipv4 Number | Md5 Precision | Md5 Recall | Md5 F1 | Md5 Number | Sha1 Precision | Sha1 Recall | Sha1 F1 | Sha1 Number | Sha256 Precision | Sha256 Recall | Sha256 F1 | Sha256 Number | Uri Precision | Uri Recall | Uri F1 | Uri Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:-------------:|:----------:|:------:|:----------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:----------------:|:-------------:|:---------:|:-------------:|:---------------:|:------------:|:--------:|:------------:|:------------------:|:---------------:|:-----------:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:--------------:|:-----------:|:-------:|:-----------:|:----------------:|:-------------:|:---------:|:-------------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.3797 | 0.37 | 500 | 0.1998 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.0286 | 0.0435 | 0.0345 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.5108 | 0.7152 | 0.5960 | 165 | 0.1774 | 0.9167 | 0.2973 | 12 | 0.4 | 0.5 | 
0.4444 | 12 | 0.3194 | 0.4423 | 0.3710 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.4588 | 0.8864 | 0.6047 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.3875 | 0.5858 | 0.4664 | 0.9593 |
| 0.1713 | 0.75 | 1000 | 0.1619 | 0.6 | 0.5 | 0.5455 | 6 | 0.5 | 0.6364 | 0.56 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.6957 | 0.6957 | 0.6957 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.6879 | 0.6545 | 0.6708 | 165 | 0.5217 | 1.0 | 0.6857 | 12 | 0.5714 | 1.0 | 0.7273 | 12 | 0.6667 | 0.8846 | 0.7603 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7692 | 0.9091 | 0.8333 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.6685 | 0.7219 | 0.6942 | 0.9664 |
| 0.1152 | 1.12 | 1500 | 0.1096 | 0.8333 | 0.8333 | 0.8333 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.7826 | 0.7826 | 0.7826 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.7202 | 0.8424 | 0.7765 | 165 | 1.0 | 1.0 | 1.0 | 12 | 0.4444 | 1.0 | 0.6154 | 12 | 0.6944 | 0.9615 | 0.8065 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8723 | 0.9318 | 0.9011 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7312 | 0.8609 | 0.7908 | 0.9751 |
| 0.1089 | 1.5 | 2000 | 0.1243 | 1.0 | 1.0 | 1.0 | 6 | 0.9167 | 1.0 | 0.9565 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.9048 | 0.8261 | 0.8636 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8011 | 0.8788 | 0.8382 | 165 | 0.6667 | 1.0 | 0.8 | 12 | 0.9091 | 0.8333 | 0.8696 | 12 | 0.7812 | 0.9615 | 0.8621 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7857 | 1.0 | 0.88 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8065 | 0.8876 | 0.8451 | 0.9750 |
| 0.0947 | 1.87 | 2500 | 0.0913 | 0.75 | 1.0 | 0.8571 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.8462 | 0.9565 | 0.8980 | 23 | 0.3333 | 0.6667 | 0.4444 | 3 | 0.8035 | 0.8424 | 0.8225 | 165 | 0.6 | 1.0 | 0.7500 | 12 | 1.0 | 1.0 | 1.0 | 12 | 0.7969 | 0.9808 | 0.8793 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8302 | 1.0 | 0.9072 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7952 | 0.8846 | 0.8375 | 0.9792 |
| 0.0629 | 2.25 | 3000 | 0.0940 | 1.0 | 0.8333 | 0.9091 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.9565 | 0.9565 | 0.9565 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8671 | 0.8303 | 0.8483 | 165 | 1.0 | 1.0 | 1.0 | 12 | 1.0 | 1.0 | 1.0 | 12 | 0.9273 | 0.9808 | 0.9533 | 52 | 0.25 | 0.1429 | 0.1818 | 7 | 0.8776 | 0.9773 | 0.9247 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8946 | 0.8787 | 0.8866 | 0.9825 |
| 0.0442 | 2.62 | 3500 | 0.1012 | 1.0 | 1.0 | 1.0 | 6 | 0.9167 | 1.0 | 0.9565 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.9091 | 0.8696 | 0.8889 | 23 | 0.75 | 1.0 | 0.8571 | 3 | 0.8182 | 0.8727 | 0.8446 | 165 | 1.0 | 1.0 | 1.0 | 12 | 1.0 | 1.0 | 1.0 | 12 | 0.92 | 0.8846 | 0.9020 | 52 | 0.5 | 1.0 | 0.6667 | 7 | 0.9565 | 1.0 | 0.9778 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8616 | 0.9024 | 0.8815 | 0.9818 |
| 0.0401 | 3.0 | 4000 | 0.0810 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 1.0 | 0.9565 | 0.9778 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8841 | 0.8788 | 0.8815 | 165 | 1.0 | 1.0 | 1.0 | 12 | 1.0 | 1.0 | 1.0 | 12 | 0.8333 | 0.9615 | 0.8929 | 52 | 0.6667 | 0.8571 | 0.75 | 7 | 0.9565 | 1.0 | 0.9778 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.9014 | 0.9201 | 0.9107 | 0.9851 |
bc497eca89ab6baac8ac9ed877e5519c
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
Seek.art MEGA is a general-use "anything" model that significantly improves on Stable Diffusion 1.5 across dozens of styles. Created by Coreco at [seek.art](https://seek.art/). This model was trained on nearly 10k high-quality public domain digital artworks with the goal of improving output quality across the board. We find the model to be highly flexible in its ability to mix various styles, subjects, and details. We recommend resolutions above 640px in one or both dimensions for best results. You can try this model and several others for free at [seek.art](https://seek.art/). We also recommend an inference tool supporting prompt weighting and high-resolution optimization / fixing for best results. We suggest [InvokeAI](https://github.com/invoke-ai/InvokeAI) as a sensibly licensed and fully featured open-source inference tool.
cfbac0ff022042c9af780ac315961fe7
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
Examples <img src="https://huggingface.co/coreco/seek.art_MEGA/resolve/main/examples.png" style="max-width: 800px;" width="100%"/> The above example images including the prompts and all relevant settings are available [here](https://seek.art/explore/search?collection=6112a64d-bd8b-4043-8d96-88c7cfa65c43). Additionally, search thousands of high quality prompts on [seek.art](https://seek.art/) for free.
36952d6c8c00eade67297cc3c8af2fd4
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
Use Restrictions

You agree not to use the Model or Derivatives of the Model:

- for the commercial purpose of hosted content generation (inference) without the express written permission of seek.art. Model output for personal use carries no such commercial restriction.
- In any way that violates any applicable national, federal, state, local or international law or regulation;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate personal identifiable information that can be used to harm an individual;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
- To provide medical advice and medical results interpretation;
- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
99bf23039f6a4adc1c6eeefafdfb9b48
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3186
- Accuracy: 0.87
- F1: 0.8770
6ab7a559704280833d85db28e9d8e1ad
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-cola-custom-tokenizer

This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: nan
35fdcc857a46fd63e14ce3e97adad7d3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.2575 | 0.47 | 500 | 6.4792 |
| 6.4145 | 0.94 | 1000 | 6.4699 |
| 6.2252 | 1.4 | 1500 | 6.5489 |
| 6.0413 | 1.87 | 2000 | 6.3427 |
| 5.8394 | 2.34 | 2500 | 6.2134 |
| 5.825 | 2.81 | 3000 | nan |
| 5.8071 | 3.27 | 3500 | 6.1627 |
| 5.6601 | 3.74 | 4000 | 6.0835 |
| 5.686 | 4.21 | 4500 | 6.0319 |
| 5.6029 | 4.68 | 5000 | 5.9500 |
| 5.5236 | 5.14 | 5500 | 5.9621 |
| 5.586 | 5.61 | 6000 | 5.8955 |
| 5.5582 | 6.08 | 6500 | 6.0435 |
| 5.412 | 6.55 | 7000 | 6.0175 |
| 5.397 | 7.02 | 7500 | nan |
571ee2d5fad1590493f176cfb32540bc
apache-2.0
['generated_from_trainer']
false
recipe-lr8e06-wd0.1-bs32

This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2752
- Rmse: 0.5246
- Mse: 0.2752
- Mae: 0.4184
bf1db6ce786e714aa85791eab0b52059
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2773 | 0.5266 | 0.2773 | 0.4297 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4144 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5252 | 0.2758 | 0.4233 |
| 0.2705 | 6.0 | 3738 | 0.2752 | 0.5246 | 0.2752 | 0.4184 |
9eeba128cb74a443d75c5a73d4628b10
mit
[]
false
Basic use

```python
import cv2
import numpy as np
import onnxruntime as rt
from huggingface_hub import hf_hub_download

tagger_model_path = hf_hub_download(repo_id="skytnt/deepdanbooru_onnx", filename="deepdanbooru.onnx")
tagger_model = rt.InferenceSession(tagger_model_path, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
tagger_model_meta = tagger_model.get_modelmeta().custom_metadata_map
tagger_tags = eval(tagger_model_meta['tags'])


def tagger_predict(image, score_threshold):
    # Resize so the longer side is 512 and pad the shorter side to 512
    s = 512
    h, w = image.shape[:-1]
    h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
    ph, pw = s - h, s - w
    image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA)
    image = cv2.copyMakeBorder(image, ph // 2, ph - ph // 2, pw // 2, pw - pw // 2, cv2.BORDER_REPLICATE)
    image = image.astype(np.float32) / 255
    image = image[np.newaxis, :]  # add batch dimension
    probs = tagger_model.run(None, {"input_1": image})[0][0]
    probs = probs.astype(np.float32)
    res = []
    for prob, label in zip(probs.tolist(), tagger_tags):
        if prob < score_threshold:
            continue
        res.append(label)
    return res


img = cv2.imread("test.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
tags = tagger_predict(img, 0.5)
print(tags)
```
5a7ad9900be699424bbad62b29fbf6c1
mit
[]
false
Multi-gpu batch process

```python
import cv2
import torch
import os
import numpy as np
import onnxruntime as rt
from huggingface_hub import hf_hub_download
from torch.utils.data import DataLoader, Dataset
from PIL import Image
from tqdm import tqdm
from threading import Thread


class MyDataset(Dataset):
    def __init__(self, image_list):
        self.image_list = image_list

    def __len__(self):
        length = len(self.image_list)
        return length

    def __getitem__(self, index):
        image = Image.open(self.image_list[index]).convert("RGB")
        image = np.asarray(image)
        s = 512
        h, w = image.shape[:-1]
        h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
        ph, pw = s - h, s - w
        image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA)
        image = cv2.copyMakeBorder(image, ph // 2, ph - ph // 2, pw // 2, pw - pw // 2, cv2.BORDER_REPLICATE)
        image = image.astype(np.float32) / 255
        image = torch.from_numpy(image)
        idx = torch.tensor([index], dtype=torch.int32)
        return image, idx


def get_images(path):
    def file_ext(fname):
        return os.path.splitext(fname)[1].lower()

    all_files = {
        os.path.relpath(os.path.join(root, fname), path)
        for root, _dirs, files in os.walk(path)
        for fname in files
    }
    all_images = sorted(
        os.path.join(path, fname)
        for fname in all_files
        if file_ext(fname) in [".png", ".jpg", ".jpeg"]
    )
    print(len(all_images))
    return all_images


def process(all_images, batch_size=8, score_threshold=0.35):
    predictions = {}

    def work_fn(images, device_id):
        dataset = MyDataset(images)
        dataloader = DataLoader(
            dataset,
            batch_size=batch_size,
            shuffle=False,
            persistent_workers=True,
            num_workers=4,
            pin_memory=True,
        )
        for data in tqdm(dataloader):
            image, idxs = data
            image = image.numpy()
            probs = tagger_model[device_id].run(None, {"input_1": image})[0]
            probs = probs.astype(np.float32)
            bs = probs.shape[0]
            for i in range(bs):
                tags = []
                for prob, label in zip(probs[i].tolist(), tagger_tags):
                    if prob > score_threshold:
                        tags.append((label, prob))
                predictions[images[idxs[i].item()]] = tags

    gpu_num = len(tagger_model)
    image_num = (len(all_images) // gpu_num) + 1
    ts = [Thread(target=work_fn, args=(all_images[i * image_num:(i + 1) * image_num], i)) for i in range(gpu_num)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return predictions


gpu_num = 4
batch_size = 8
tagger_model_path = hf_hub_download(repo_id="skytnt/deepdanbooru_onnx", filename="deepdanbooru.onnx")
tagger_model = [
    rt.InferenceSession(tagger_model_path, providers=['CUDAExecutionProvider'], provider_options=[{'device_id': i}])
    for i in range(gpu_num)]
tagger_model_meta = tagger_model[0].get_modelmeta().custom_metadata_map
tagger_tags = eval(tagger_model_meta['tags'])
all_images = get_images("./data")
predictions = process(all_images, batch_size)
```
3c195da5f31fbc70bf70d5cde7ab7077
cc-by-4.0
['generated_from_trainer']
false
bert-large-uncased-whole-word-masking-squad2-with-ner-Pwhatisthe-conll2003-with-neg-with-repeat This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets.
e8c814fbe080c367ccaecf4134cf489a
mit
['generated_from_keras_callback']
false
Deep98/Cardinal__Catholicism_-clustered

This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.3075
- Train End Logits Accuracy: 0.8958
- Train Start Logits Accuracy: 0.9306
- Validation Loss: 1.3105
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 0.5
- Epoch: 0
7517080fd13eeff1d9cfbe9a990e711c
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3075 | 0.8958 | 0.9306 | 1.3105 | 0.75 | 0.5 | 0 |
2fc57774eec76c9db241f4f0cbe13eb9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4795 | 1.28 | 100 | 2.2135 |
| 2.0935 | 2.56 | 200 | 2.1722 |
| 1.9961 | 3.84 | 300 | 2.1639 |
| 1.9455 | 5.13 | 400 | 2.1605 |
| 1.9083 | 6.41 | 500 | 2.1609 |
f467718346daa905d424b92185770e5b
apache-2.0
['translation']
false
alv-eng

* source group: Atlantic-Congo languages
* target group: English
* OPUS readme: [alv-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md)
* model: transformer
* source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.eval.txt)
70a90f4759a4aa49d42e52652e02a42b
apache-2.0
['translation']
false
Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 6.3 | 0.328 |
| Tatoeba-test.ful-eng.ful.eng | 0.4 | 0.108 |
| Tatoeba-test.ibo-eng.ibo.eng | 4.5 | 0.196 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.511 |
| Tatoeba-test.lin-eng.lin.eng | 2.8 | 0.213 |
| Tatoeba-test.lug-eng.lug.eng | 3.4 | 0.140 |
| Tatoeba-test.multi.eng | 20.9 | 0.376 |
| Tatoeba-test.nya-eng.nya.eng | 38.7 | 0.492 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.417 |
| Tatoeba-test.sag-eng.sag.eng | 5.5 | 0.177 |
| Tatoeba-test.sna-eng.sna.eng | 26.9 | 0.412 |
| Tatoeba-test.swa-eng.swa.eng | 4.9 | 0.196 |
| Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.147 |
| Tatoeba-test.tso-eng.tso.eng | 76.7 | 0.957 |
| Tatoeba-test.umb-eng.umb.eng | 4.0 | 0.195 |
| Tatoeba-test.wol-eng.wol.eng | 3.7 | 0.170 |
| Tatoeba-test.xho-eng.xho.eng | 38.9 | 0.556 |
| Tatoeba-test.yor-eng.yor.eng | 25.1 | 0.412 |
| Tatoeba-test.zul-eng.zul.eng | 46.1 | 0.623 |
c1a18ef877178e69130079b93c2e5714
apache-2.0
['translation']
false
System Info:

- hf_name: alv-eng
- source_languages: alv
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']
- src_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt
- src_alpha3: alv
- tgt_alpha3: eng
- short_pair: alv-en
- chrF2_score: 0.376
- bleu: 20.9
- brevity_penalty: 1.0
- ref_len: 15208.0
- src_name: Atlantic-Congo languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: alv
- tgt_alpha2: en
- prefer_old: False
- long_pair: alv-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
3e90861073d525f5d4a901585256010e
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
XLS-R-300M - Maltese

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset. It achieves the following results on the evaluation set:
- Loss: 0.1895
- Wer: 0.1984
4b1b9ee22d1a325107510bbbe70a190b
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
bf172781988ee67a582da6686aa6d150
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4219 | 3.6 | 400 | 3.3127 | 1.0 |
| 3.0399 | 7.21 | 800 | 3.0330 | 1.0 |
| 1.5756 | 10.81 | 1200 | 0.6108 | 0.5724 |
| 1.0995 | 14.41 | 1600 | 0.3091 | 0.3154 |
| 0.9639 | 18.02 | 2000 | 0.2596 | 0.2841 |
| 0.9032 | 21.62 | 2400 | 0.2270 | 0.2514 |
| 0.8145 | 25.23 | 2800 | 0.2172 | 0.2483 |
| 0.7845 | 28.83 | 3200 | 0.2084 | 0.2333 |
| 0.7694 | 32.43 | 3600 | 0.1974 | 0.2234 |
| 0.7333 | 36.04 | 4000 | 0.2020 | 0.2185 |
| 0.693 | 39.64 | 4400 | 0.1947 | 0.2148 |
| 0.6802 | 43.24 | 4800 | 0.1960 | 0.2102 |
| 0.667 | 46.85 | 5200 | 0.1904 | 0.2072 |
| 0.6486 | 50.45 | 5600 | 0.1881 | 0.2009 |
| 0.6339 | 54.05 | 6000 | 0.1877 | 0.1989 |
| 0.6254 | 57.66 | 6400 | 0.1893 | 0.2003 |
93e83430d51c802070ff70141786f7c7
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`

```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config mt --split test
```
2e45083d411f27c17b66f9e6358173bb
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
Inference With LM

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm"

sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "mt", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
print(transcription)
```
8be2e1c13bb8cf9547713e045df93636
apache-2.0
[]
false
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**. The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia), and finally fine-tuned on [Web Questions (WQ)](https://huggingface.co/datasets/web_questions).

**Note**: The model was fine-tuned on 90% of the train splits of [Web Questions (WQ)](https://huggingface.co/datasets/web_questions) for 20k steps and validated on the held-out 10% of the train split.

Other community Checkpoints: [here](https://huggingface.co/models?search=ssm)

Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/2002.08910)

Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
d53e78c4fca89315351c12e19c5f85cb
apache-2.0
[]
false
Results on Web Questions - Test Set

|Id | link | Exact Match |
|---|---|---|
|**T5-11b**|**https://huggingface.co/google/t5-11b-ssm-wqo**|**40.8**|
|T5-xxl|https://huggingface.co/google/t5-xxl-ssm-wqo|42.8|
4da0b998a87b6a35139c63787ea5706c
apache-2.0
[]
false
Usage

The model can be used as follows for **closed book question answering**:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-11b-ssm-wqo")
t5_tok = AutoTokenizer.from_pretrained("google/t5-11b-ssm-wqo")

input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]

print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
57b99b1f760a283352e9ff59fbd3fc4e
apache-2.0
['generated_from_trainer']
false
t5-base-fine-tuned-for-Punctuation-Restoration

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1097
3915932ee702e96cdf8b0162a2910dec
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Base Yue

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 11.0 yue dataset. It achieves the following results on the evaluation set:
- Loss: 0.3671
- Wer: 69.5864
1d001d6268ececb049b2eac726460b05
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
12572fafd413de9304b09512ce34db1e
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0998 | 2.78 | 500 | 0.3500 | 71.4517 |
| 0.0085 | 5.56 | 1000 | 0.3671 | 69.5864 |
74f65a675e07800ae4e391c4d0e5a19a
apache-2.0
['translation']
false
opus-mt-lt-de

* source languages: lt
* target languages: de
* OPUS readme: [lt-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lt-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/lt-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-de/opus-2020-01-21.eval.txt)
218a7c0a60567760459bd8485a0e3bc6
apache-2.0
['generated_from_trainer']
false
distilbert_model_fine_tuned_unlabeled_all

This model is a fine-tuned version of [nouman-10/distilbert_model_fine_tuned_unlabeled_all](https://huggingface.co/nouman-10/distilbert_model_fine_tuned_unlabeled_all) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1708
- Accuracy: 0.95
3bce88dba3602ecb989d69239e42cd7b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1329 | 1.0 | 875 | 0.1708 | 0.95 |
291e9e4cd6514d9b0b29e64f12143044
mit
['music']
false
Model description TunesFormer is a Transformer-based melody generation system trained on 285,449 melodies with musical forms (represented by control codes), where all scores are represented in ABC notation. It was introduced in the paper [TunesFormer: Forming Tunes with Control Codes](https://arxiv.org/abs/2301.02884) by Wu et al. The code is released in [this repository](https://github.com/sander-wood/tunesformer), and the dataset is released in [huggingface](https://huggingface.co/datasets/sander-wood/abc_cc). By utilizing specific symbols commonly found in ABC notation to indicate section boundaries, TunesFormer can understand and generate melodies with given musical forms based on control codes. The checkpoint released here is TunesFormer-GP (Global Placement), where all the control codes are placed at the beginning of the ABC notation.
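As a rough sketch of the control-code layout (the exact grammar is defined in the paper; the reading below — one `[SECS_n]` code, then for each section any `[SIM_k]` codes relating it to earlier sections followed by its `[BARS_b]` code — is our assumption based on the global-placement examples, and this helper is illustrative only, not part of the released code):

```python
def make_control_codes(sections):
    """Assemble a TunesFormer-style control-code string.

    `sections` is a list of (similarities, bars) pairs, where `similarities`
    lists assumed similarity levels of this section to earlier sections.
    """
    codes = [f"[SECS_{len(sections)}]"]
    for sims, bars in sections:
        codes.extend(f"[SIM_{s}]" for s in sims)
        codes.append(f"[BARS_{bars}]")
    return "".join(codes)

# A three-section tune of four bars per section:
print(make_control_codes([([], 4), ([6], 4), ([10, 6], 4)]))
# [SECS_3][BARS_4][SIM_6][BARS_4][SIM_10][SIM_6][BARS_4]
```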
a532ef858970707c70dbbb4a3da92596
mit
['music']
false
Intended uses & limitations You can use this model for melody generation conditioned on musical forms. All scores generated by this model can be written on one stave (for vocal solo or instrumental solo) in standard classical notation, and are in a variety of styles, e.g., blues, classical, folk, jazz, pop, and world music. The generated tunes are in ABC notation, and can be converted to sheet music or audio using [this website](https://ldzhangyx.github.io/abc/), or [this software](https://sourceforge.net/projects/easyabc/). TunesFormer supports the generation of up to 8 sections, and up to 32 bars per section. In addition, although TunesFormer mostly generates music correctly according to the control codes, due to the random nature of sampling, the musical structure generated by the model occasionally does not match that specified by the control codes when more than 6 sections are generated, or when more than 17 bars are generated for a single section. For more information, please check [our paper](https://arxiv.org/abs/2301.02884).
1fcaab7caefeb0607f34524ca88b8764
mit
['music']
false
How to use

1. Install dependencies for the code released in [this repository](https://github.com/sander-wood/tunesformer):
```
torch         1.9.1+cu111
samplings     0.1.7
transformers  4.18.0
```

2. Set the `control_codes` and `prompt` in the script `run_inference.py` for conditional music generation.
```
control_codes = "[SECS_3][BARS_4][SIM_6][BARS_4][SIM_10][SIM_6][BARS_4]"
prompt = """L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"""
```
For TunesFormer, the input is a concatenation of `control_codes` and `prompt`. Both `control_codes` and `prompt` are optional. However, if you need to set the prompt, you must set the control codes.

3. Run the script `run_inference.py`. When running a script for the first time, the downloaded files will be cached for future reuse.
```
python run_inference.py -num_tunes 3 -max_length 1024 -top_p 0.9 -temperature 1.0 -seed 1
```

4. Enjoy tunes in the folder `output_tunes`! If you want to convert these ABC tunes to sheet music or audio, please refer to `Intended uses & limitations`.
```
X:1
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" G G"F" A A |"G" G G"C" E2 |
"G" F F"C" E E |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]

X:2
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" E E"F" F F |"C" G G"F" A2 |
"G7" F F"C" E E |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]

X:3
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" G G"F" A A |"C" G G"F" F2 |
"C" E E"G" D D |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]
```
692c3c65c37920508412b8de02403398
mit
['music']
false
Usage

```
optional arguments:
  -h, --help                show this help message and exit
  -num_tunes NUM_TUNES      the number of independently computed returned tunes
  -max_length MAX_LENGTH    integer to define the maximum length in tokens of each tune
  -top_p TOP_P              float to define the tokens that are within the sample operation of text generation
  -temperature TEMPERATURE  the temperature of the sampling operation
  -seed SEED                seed for randomstate
```
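The option list above can be reproduced with a small `argparse` parser — a sketch of what `run_inference.py` presumably sets up; the defaults here are assumptions taken from the example invocation in "How to use", not read from the released script:

```python
import argparse

# Illustrative sketch of the CLI described above (defaults are assumed).
parser = argparse.ArgumentParser()
parser.add_argument("-num_tunes", type=int, default=3,
                    help="the number of independently computed returned tunes")
parser.add_argument("-max_length", type=int, default=1024,
                    help="integer to define the maximum length in tokens of each tune")
parser.add_argument("-top_p", type=float, default=0.9,
                    help="float to define the tokens that are within the sample operation of text generation")
parser.add_argument("-temperature", type=float, default=1.0,
                    help="the temperature of the sampling operation")
parser.add_argument("-seed", type=int, default=1, help="seed for randomstate")

# Parse an example command line, overriding only top_p
args = parser.parse_args(["-top_p", "0.8"])
print(args.num_tunes, args.top_p)
```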
1bce271f305012f521c419f09540afe9
mit
['music']
false
BibTeX entry and citation info

```bibtex
@misc{https://doi.org/10.48550/arxiv.2301.02884,
  doi = {10.48550/ARXIV.2301.02884},
  url = {https://arxiv.org/abs/2301.02884},
  author = {Wu, Shangda and Sun, Maosong},
  keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
  title = {TunesFormer: Forming Tunes with Control Codes},
  publisher = {arXiv},
  year = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
fc9df816fbc5b857c71460b30517c0cd
apache-2.0
['generated_from_trainer']
false
summarise_v2

This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.3235
- Rouge2 Precision: 0.018
- Rouge2 Recall: 0.0916
- Rouge2 Fmeasure: 0.0292
959d60496a83d2b61654a6fde00edc3c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 3.1721 | 0.08 | 10 | 2.7742 | 0.0107 | 0.0671 | 0.0178 |
| 3.0802 | 0.16 | 20 | 2.7914 | 0.0111 | 0.0878 | 0.019 |
| 3.0795 | 0.24 | 30 | 2.6954 | 0.0094 | 0.076 | 0.0157 |
| 2.5806 | 0.32 | 40 | 2.6587 | 0.0028 | 0.0271 | 0.0046 |
| 2.6553 | 0.4 | 50 | 2.5958 | 0.0084 | 0.0566 | 0.0143 |
| 2.689 | 0.48 | 60 | 2.4857 | 0.0089 | 0.0733 | 0.015 |
| 2.6642 | 0.56 | 70 | 2.4205 | 0.0069 | 0.0478 | 0.0116 |
| 2.3768 | 0.64 | 80 | 2.3754 | 0.0127 | 0.0795 | 0.0215 |
| 2.1949 | 0.72 | 90 | 2.3752 | 0.0155 | 0.1013 | 0.0258 |
| 2.3257 | 0.8 | 100 | 2.3509 | 0.0155 | 0.1011 | 0.0261 |
| 2.4053 | 0.88 | 110 | 2.3261 | 0.015 | 0.0901 | 0.0246 |
| 2.9896 | 0.96 | 120 | 2.3235 | 0.018 | 0.0916 | 0.0292 |
d74b89aab792305e535d3979b3f6af07
apache-2.0
['translation', 'generated_from_trainer']
false
marian-finetuned-kde4-en-to-fr

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on the kde4 dataset. It achieves the following results on the evaluation set:
- eval_loss: 1.2473
- eval_bleu: 41.4902
- eval_runtime: 1405.0341
- eval_samples_per_second: 15.699
- eval_steps_per_second: 0.246
- step: 0
4861981d7281e2e5d51ff90505ae94e7
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
mk-walkcycle Dreambooth model trained by spooncats with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/spooncats/mk-walkcycle/resolve/main/sample_images/00001-1893850525-spiderman_wal___.png)
5a183d25d02a7576bfa7667c0a842c70
mit
['text-classification', 'pytorch', 'transformers']
false
Multi2ConvAI-Corona: finetuned Bert for English

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: English (en)
- model type: finetuned Bert
d2c7a625b9cb82071c553a87ecd8f1ab
mit
['text-classification', 'pytorch', 'transformers']
false
Run with Huggingface Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-en-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-en-bert")
```
4752cb4598c44ddcf134f8b2609e8cfa
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
ka-rina Dreambooth model trained by cdefghijkl with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
2a2b89fbbb213fb5f8295d8b7c95e393
apache-2.0
[]
false
Graphcore/gpt2-medium-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
dd7a1cb3fc8d8ce164e119a9141b6736
apache-2.0
[]
false
Model description GPT2 is a large transformer-based language model. It is built using transformer decoder blocks, whereas BERT uses transformer encoder blocks. It adds layer normalisation to the input of each sub-block, similar to pre-activation residual networks, plus an additional final layer normalisation. Paper link: [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
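The pre-activation pattern described above can be sketched in plain Python: each sub-block normalizes its input *before* applying the sub-layer, then adds the result back through a residual connection. This is an illustrative toy (no learned scale/bias, dummy sub-layer), not GPT-2's actual implementation:

```python
import math

def layer_norm(x, eps=1e-5):
    # Normalize a vector to zero mean and unit variance (no learned gain/bias here).
    mean = sum(x) / len(x)
    var = sum((xi - mean) ** 2 for xi in x) / len(x)
    return [(xi - mean) / math.sqrt(var + eps) for xi in x]

def pre_ln_block(x, sublayer):
    # GPT-2 style: layer norm on the sub-block *input*, then a residual add.
    return [xi + yi for xi, yi in zip(x, sublayer(layer_norm(x)))]

hidden = [0.5, -1.0, 2.0, 0.1]
out = pre_ln_block(hidden, lambda v: v)  # identity sub-layer, just for illustration
```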
b4321bf3c011daad61a5f5a7c85ac4d5
apache-2.0
[]
false
Intended uses & limitations This model contains just the `IPUConfig` files for running the [HuggingFace/gpt2-medium](https://huggingface.co/gpt2-medium) model on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.**
fff87196f9047ebf37eaca5c7a42266e
apache-2.0
['zero-shot-classification', 'nli', 'pytorch']
false
Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA *Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).
ccedcf3c587c11b08532ebf7d03a3226
apache-2.0
['zero-shot-classification', 'nli', 'pytorch']
false
This pipeline (`transformers.ZeroShotClassificationPipeline`) lets you make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html). In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
6976af4baaa671bb06ef0669243d827a
apache-2.0
['zero-shot-classification', 'nli', 'pytorch']
false
Usage
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Recognai/zeroshot_selectra_medium")
classifier(
    "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
    candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
    hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
 'labels': ['sociedad', 'cultura', 'salud', 'economia', 'deportes'],
 'scores': [0.3711881935596466, 0.25650349259376526, 0.17355826497077942, 0.1641489565372467, 0.03460107371211052]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
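Under the hood, the pipeline turns each candidate label into an NLI hypothesis via `hypothesis_template` and normalizes the per-label entailment scores. A rough sketch with made-up scores (the real pipeline runs the NLI model to obtain them):

```python
import math

def build_hypotheses(labels, template="Este ejemplo es {}."):
    # One NLI hypothesis per candidate label, filled into the template.
    return [template.format(label) for label in labels]

def normalize_entailment(scores):
    # Softmax over the per-label entailment scores.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

hypotheses = build_hypotheses(["cultura", "sociedad"])
probs = normalize_entailment([0.3, 1.9])  # hypothetical entailment scores
```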
4fa4674d29778507206a9ae567421aad
apache-2.0
['zero-shot-classification', 'nli', 'pytorch']
false
Metrics | Model | Params | XNLI (acc) | \*MLSUM (acc) | | --- | --- | --- | --- | | [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 | | [zs SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium) | 41M | **0.807** | **0.589** | | zs SELECTRA small | **22M** | 0.795 | 0.446 | \*evaluated with zero-shot learning (ZSL) - **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion. - **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
7697252457742a23fef3c982554f336b
apache-2.0
['zero-shot-classification', 'nli', 'pytorch']
false
Authors - David Fidalgo ([GitHub](https://github.com/dcfidalgo)) - Daniel Vila ([GitHub](https://github.com/dvsrepo)) - Francisco Aranda ([GitHub](https://github.com/frascuchon)) - Javier Lopez ([GitHub](https://github.com/javispp))
516a899c9abba7470c035def205f1fb9
apache-2.0
['generated_from_trainer']
false
![SGH logo.png](https://s3.amazonaws.com/moonup/production/uploads/1667382308985-631feef1124782a19eff4243.png) This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the multi_news dataset. It achieves the following results on the evaluation set:
- Loss: 2.3650
- Rouge1 Precision: 0.4673
- Rouge1 Recall: 0.4135
- Rouge1 Fmeasure: 0.4263
- Rouge2 Precision: 0.1579
- Rouge2 Recall: 0.1426
- Rouge2 Fmeasure: 0.1458
- Rougel Precision: 0.2245
- Rougel Recall: 0.2008
- Rougel Fmeasure: 0.2061
- Rougelsum Precision: 0.2245
- Rougelsum Recall: 0.2008
- Rougelsum Fmeasure: 0.2061
583cfe83dd8b92a10ad1b5cdc4a40bc5
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
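For reference, the listed total_train_batch_size follows directly from the per-device batch size and gradient accumulation:

```python
# Gradient accumulation trades memory for effective batch size: gradients from
# several small forward/backward passes are summed before one optimizer step.
train_batch_size = 4
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 16
```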
aa5b933f541001e09fd9c5625a46af06
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|
| 2.8095 | 0.16 | 10 | 2.5393 | 0.287 | 0.5358 | 0.3674 | 0.1023 | 0.1917 | 0.1311 | 0.1374 | 0.2615 | 0.1771 | 0.1374 | 0.2615 | 0.1771 |
| 2.6056 | 0.32 | 20 | 2.4752 | 0.5005 | 0.3264 | 0.3811 | 0.1663 | 0.1054 | 0.1249 | 0.2582 | 0.1667 | 0.1957 | 0.2582 | 0.1667 | 0.1957 |
| 2.5943 | 0.48 | 30 | 2.4422 | 0.4615 | 0.3833 | 0.4047 | 0.1473 | 0.1273 | 0.1321 | 0.2242 | 0.1885 | 0.1981 | 0.2242 | 0.1885 | 0.1981 |
| 2.4842 | 0.64 | 40 | 2.4186 | 0.4675 | 0.3829 | 0.4081 | 0.1581 | 0.1294 | 0.1384 | 0.2286 | 0.187 | 0.1995 | 0.2286 | 0.187 | 0.1995 |
| 2.4454 | 0.8 | 50 | 2.3990 | 0.467 | 0.408 | 0.4222 | 0.1633 | 0.1429 | 0.1477 | 0.2294 | 0.2008 | 0.2076 | 0.2294 | 0.2008 | 0.2076 |
| 2.3622 | 0.96 | 60 | 2.3857 | 0.4567 | 0.3898 | 0.41 | 0.1433 | 0.1233 | 0.1295 | 0.2205 | 0.1876 | 0.1976 | 0.2205 | 0.1876 | 0.1976 |
| 2.4034 | 1.13 | 70 | 2.3835 | 0.4515 | 0.4304 | 0.4294 | 0.1526 | 0.1479 | 0.1459 | 0.2183 | 0.209 | 0.2078 | 0.2183 | 0.209 | 0.2078 |
| 2.2612 | 1.29 | 80 | 2.3804 | 0.455 | 0.4193 | 0.4236 | 0.1518 | 0.1429 | 0.1427 | 0.2177 | 0.2025 | 0.2037 | 0.2177 | 0.2025 | 0.2037 |
| 2.2563 | 1.45 | 90 | 2.3768 | 0.4821 | 0.391 | 0.4196 | 0.1652 | 0.1357 | 0.144 | 0.2385 | 0.1929 | 0.2069 | 0.2385 | 0.1929 | 0.2069 |
| 2.243 | 1.61 | 100 | 2.3768 | 0.4546 | 0.4093 | 0.4161 | 0.1552 | 0.1402 | 0.1422 | 0.2248 | 0.2016 | 0.2052 | 0.2248 | 0.2016 | 0.2052 |
| 2.2505 | 1.77 | 110 | 2.3670 | 0.4625 | 0.4189 | 0.4262 | 0.1606 | 0.1485 | 0.1493 | 0.2301 | 0.2098 | 0.2119 | 0.2301 | 0.2098 | 0.2119 |
| 2.2453 | 1.93 | 120 | 2.3650 | 0.4673 | 0.4135 | 0.4263 | 0.1579 | 0.1426 | 0.1458 | 0.2245 | 0.2008 | 0.2061 | 0.2245 | 0.2008 | 0.2061 |
b79fcfb9e0bd6a04d76bdbcec22f73a0
mit
['generated_from_keras_callback']
false
xenergy/gpt2-indo This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 2.3370
- Validation Loss: 1.8387
- Epoch: 0
a8eb58da60be2d5b96edd9cba7ec3ef2
mit
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 39182, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
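The `WarmUp`-wrapped `PolynomialDecay` above amounts to a linear ramp over the first 1,000 steps followed by a linear (power=1.0) decay to zero. A simplified sketch of that schedule (the exact step offsets in the Keras/Transformers implementation may differ slightly):

```python
def learning_rate(step, init_lr=5e-05, warmup_steps=1000, decay_steps=39182):
    # Linear warmup from 0 to init_lr, then polynomial decay with power=1.0
    # (i.e. linear) down to end_learning_rate=0.0.
    if step < warmup_steps:
        return init_lr * step / warmup_steps
    progress = min(step - warmup_steps, decay_steps) / decay_steps
    return init_lr * (1.0 - progress)
```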
cf62bd1d11aa23930d4378bee1259754
apache-2.0
[]
false
DistilBERT base multilingual model Spanish subset (cased) This model is the Spanish extract of `distilbert-base-multilingual-cased` (https://huggingface.co/distilbert-base-multilingual-cased), a distilled version of the [BERT base multilingual model](bert-base-multilingual-cased). This model is cased: it does make a difference between english and English. It uses the extraction method proposed by Geotrend, described in https://github.com/Geotrend-research/smaller-transformers. The resulting model has the same architecture as DistilmBERT: 6 layers, 768 dimensions and 12 heads, with a total of **63M parameters** (compared to 134M parameters for DistilmBERT). The goal of this model is to further reduce the size of the `distilbert-base-multilingual` multilingual model by selecting only the most frequent tokens for Spanish, reducing the size of the embedding layer. For more details, see the paper from the Geotrend team: Load What You Need: Smaller Versions of Multilingual BERT.
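The token-selection idea can be sketched as: count token frequencies in a target-language corpus, keep the top-k tokens, and slice the matching rows out of the embedding matrix. The function below is an illustrative toy with made-up data, not Geotrend's actual code:

```python
from collections import Counter

def reduce_embeddings(corpus_tokens, embeddings, keep):
    # corpus_tokens: tokenized Spanish text; embeddings: {token: embedding row}.
    counts = Counter(corpus_tokens)
    # Keep only the most frequent tokens that exist in the original vocabulary.
    kept = [tok for tok, _ in counts.most_common(keep) if tok in embeddings]
    new_vocab = {tok: i for i, tok in enumerate(kept)}
    new_matrix = [embeddings[tok] for tok in kept]  # sliced embedding layer
    return new_vocab, new_matrix

vocab, matrix = reduce_embeddings(
    ["el", "el", "la", "de", "de", "de"],
    {"el": [0.1, 0.2], "la": [0.3, 0.4], "de": [0.5, 0.6]},
    keep=2,
)
```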
94d3507a701ae7975be28fa2793c98fa
mit
['generated_from_trainer']
false
bert-base-german-cased-finetuned-200labels This model is a fine-tuned version of [ogimgio/bert-base-german-cased-finetuned-7labels](https://huggingface.co/ogimgio/bert-base-german-cased-finetuned-7labels) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0744
- Micro f1: 0.0894
- Macro f1: 0.0538
bc3688ea91084750f1be603eb08229f7
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 50
bb86234d42e8ec360b3e1fbbf41c9d16
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.8041 | 1.0 | 1380 | 0.7312 | 0.0422 | 0.0413 | | 0.605 | 2.0 | 2760 | 0.5440 | 0.0436 | 0.0423 | | 0.4455 | 3.0 | 4140 | 0.4026 | 0.0478 | 0.0449 | | 0.3317 | 4.0 | 5520 | 0.3070 | 0.0574 | 0.0516 | | 0.2553 | 5.0 | 6900 | 0.2432 | 0.0682 | 0.0599 | | 0.2041 | 6.0 | 8280 | 0.1982 | 0.0759 | 0.0657 | | 0.167 | 7.0 | 9660 | 0.1653 | 0.0798 | 0.0677 | | 0.1403 | 8.0 | 11040 | 0.1417 | 0.0839 | 0.0693 | | 0.1222 | 9.0 | 12420 | 0.1249 | 0.0865 | 0.0695 | | 0.109 | 10.0 | 13800 | 0.1132 | 0.0880 | 0.0684 | | 0.0999 | 11.0 | 15180 | 0.1052 | 0.0874 | 0.0661 | | 0.0941 | 12.0 | 16560 | 0.0994 | 0.0878 | 0.0655 | | 0.089 | 13.0 | 17940 | 0.0951 | 0.0876 | 0.0640 | | 0.0854 | 14.0 | 19320 | 0.0917 | 0.0888 | 0.0638 | | 0.0831 | 15.0 | 20700 | 0.0890 | 0.0889 | 0.0626 | | 0.0804 | 16.0 | 22080 | 0.0869 | 0.0890 | 0.0616 | | 0.0788 | 17.0 | 23460 | 0.0851 | 0.0890 | 0.0606 | | 0.077 | 18.0 | 24840 | 0.0835 | 0.0894 | 0.0599 | | 0.0759 | 19.0 | 26220 | 0.0822 | 0.0894 | 0.0593 | | 0.0745 | 20.0 | 27600 | 0.0811 | 0.0896 | 0.0588 | | 0.0735 | 21.0 | 28980 | 0.0800 | 0.0890 | 0.0573 | | 0.0728 | 22.0 | 30360 | 0.0791 | 0.0888 | 0.0564 | | 0.0716 | 23.0 | 31740 | 0.0783 | 0.0895 | 0.0559 | | 0.0709 | 24.0 | 33120 | 0.0775 | 0.0900 | 0.0556 | | 0.0698 | 25.0 | 34500 | 0.0768 | 0.0896 | 0.0550 | | 0.0694 | 26.0 | 35880 | 0.0761 | 0.0897 | 0.0548 | | 0.069 | 27.0 | 37260 | 0.0755 | 0.0892 | 0.0545 | | 0.0684 | 28.0 | 38640 | 0.0750 | 0.0893 | 0.0541 | | 0.0679 | 29.0 | 40020 | 0.0744 | 0.0894 | 0.0538 |
395419d467d03bceaed0470ff1dcb43c
apache-2.0
['tapex', 'table-question-answering']
false
TAPEX-large model fine-tuned on WikiSQL. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining). To load it and run inference, you can do the following:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
import pandas as pd

tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-wikisql")
model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large-finetuned-wikisql")
```
dbeeae73b0ecd3bb987f1a75da5d08e1
apache-2.0
['tapex', 'table-question-answering']
false
Define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py
```python
linearizer = IndexedRowTableLinearize()
linear_table = linearizer.process_table(table_dict)
```
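For illustration, the indexed-row linearization that `IndexedRowTableLinearize` performs can be approximated in a few lines ("col : h1 | h2 row 1 : c1 | c2 ..."); this is a hypothetical re-implementation sketch, not the original class:

```python
def linearize_table(table):
    # table = {"header": [...], "rows": [[...], ...]}
    parts = ["col : " + " | ".join(table["header"])]
    for i, row in enumerate(table["rows"], start=1):
        # Rows are indexed starting from 1, cells separated by " | ".
        parts.append(f"row {i} : " + " | ".join(str(c) for c in row))
    return " ".join(parts)
```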
7f17565bf056a092b4a3c20118959320
apache-2.0
['translation']
false
opus-mt-en-ti
* source languages: en
* target languages: ti
* OPUS readme: [en-ti](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ti/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.eval.txt)
baf14978f443d3c922d5764339d59d89
apache-2.0
['generated_from_keras_callback']
false
Imene/vit-base-patch16-224-wi2 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.3098
- Train Accuracy: 0.9821
- Train Top-5-accuracy: 0.9971
- Validation Loss: 3.0737
- Validation Accuracy: 0.2491
- Validation Top-5-accuracy: 0.4476
- Epoch: 9
0db9cca5a50850ef67e4a08a817530a2
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0003, 'decay_steps': 1750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
f641c81ee013a5d760d2ad5f12841050
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train Accuracy | Train Top-5-accuracy | Validation Loss | Validation Accuracy | Validation Top-5-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 4.4859 | 0.0195 | 0.0579 | 4.2995 | 0.0368 | 0.0865 | 0 | | 4.1729 | 0.0355 | 0.0987 | 4.0916 | 0.0472 | 0.1266 | 1 | | 3.9541 | 0.0666 | 0.1641 | 3.8050 | 0.0781 | 0.2035 | 2 | | 3.5823 | 0.1247 | 0.2615 | 3.4015 | 0.1429 | 0.2950 | 3 | | 3.0156 | 0.1913 | 0.3987 | 3.0598 | 0.1880 | 0.3916 | 4 | | 2.4618 | 0.3077 | 0.5572 | 2.9869 | 0.2056 | 0.4129 | 5 | | 1.8979 | 0.4541 | 0.7165 | 2.9507 | 0.2298 | 0.4425 | 6 | | 1.2075 | 0.6914 | 0.8886 | 3.0106 | 0.2394 | 0.4425 | 7 | | 0.6026 | 0.9097 | 0.9810 | 3.0739 | 0.2428 | 0.4413 | 8 | | 0.3098 | 0.9821 | 0.9971 | 3.0737 | 0.2491 | 0.4476 | 9 |
326dd20a79b70b132a6f969444539c90
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large v2 Italian This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 it dataset. It achieves the following results on the evaluation set:
- Loss: 0.1332
- Wer: 4.5576
3cdaebb28f1682ebf60252e5bb190e4f
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
2ef0c3b1896ffc951e4d2e4f12149eb3
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1684 | 0.17 | 1000 | 0.1620 | 6.4620 | | 0.1174 | 0.33 | 2000 | 0.1418 | 5.5663 | | 0.069 | 1.1 | 3000 | 0.1400 | 5.2865 | | 0.0649 | 1.27 | 4000 | 0.1315 | 4.8932 | | 0.0334 | 2.04 | 5000 | 0.1368 | 4.6845 | | 0.037 | 2.21 | 6000 | 0.1332 | 4.5576 |
e6437d3e71087e0df5bdd2ab9d2aad17
apache-2.0
['generated_from_trainer']
false
bert-base-uncased.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_44 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
9b6e9285cfcdc8169a9a67492e7499a3
apache-2.0
['generated_from_trainer']
false
amazon_sentiment_sample_of_1900_with_summary This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1062
- Accuracy: 0.9581
- F1: 0.9579
2343cd1c226b692e83887b3ca837e87b
apache-2.0
['t5', 'text2text-generation', 'seq2seq']
false
Description [megagonlabs/t5-base-japanese-web](https://huggingface.co/megagonlabs/t5-base-japanese-web) is a T5 (Text-to-Text Transfer Transformer) model pre-trained on Japanese web texts. Training code is [available on GitHub](https://github.com/megagonlabs/t5-japanese). The vocabulary size of this model is 32K. [An 8K version is also available](https://huggingface.co/megagonlabs/t5-base-japanese-web-8k).
927df8a0a2d3aa336da9fb6c705383a1
apache-2.0
['t5', 'text2text-generation', 'seq2seq']
false
Corpora We used the following corpora for pre-training:
- Japanese in [mC4/3.0.1](https://huggingface.co/datasets/mc4) (we used the [Tensorflow native format](https://github.com/allenai/allennlp/discussions/5056))
  - 87,425,304 pages
  - 782 GB in TFRecord format
- [Japanese](https://www.tensorflow.org/datasets/catalog/wiki40b)
b9cedd1813dacd85a21cb0bbb5a2e39c
apache-2.0
['t5', 'text2text-generation', 'seq2seq']
false
Tokenizer We used Japanese Wikipedia to train [SentencePiece](https://github.com/google/sentencepiece).
- Vocabulary size: 32,000
- [Byte-fallback](https://github.com/google/sentencepiece/releases/tag/v0.1.9): Enabled
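Byte-fallback means characters absent from the vocabulary are decomposed into `<0xNN>` byte tokens instead of mapping to a single unknown token. A toy sketch of the idea (real SentencePiece operates on learned subword pieces, not single characters):

```python
def tokenize_with_byte_fallback(text, vocab):
    # Characters in the vocabulary pass through; anything else falls back
    # to its UTF-8 bytes, emitted as <0xNN> tokens.
    tokens = []
    for ch in text:
        if ch in vocab:
            tokens.append(ch)
        else:
            tokens.extend(f"<0x{b:02X}>" for b in ch.encode("utf-8"))
    return tokens
```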
800eec45d1cad2077606ab5f34c9157d
apache-2.0
['t5', 'text2text-generation', 'seq2seq']
false
Parameters
- T5 model: [models/t5.1.1.base.gin](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/gin/models/t5.1.1.base.gin)
- Training steps: 1,000,000
It took about 126 hours on a TPU v3-8.
096e5e8b9330f430fa914a6d15eff080
apache-2.0
['t5', 'text2text-generation', 'seq2seq']
false
Related models
- [Japanese T5 pretrained model (sonoisa/t5-base-japanese)](https://huggingface.co/sonoisa/t5-base-japanese)
- [Japanese T5 pretrained model (sonoisa/t5-base-japanese-mC4-Wikipedia)](https://huggingface.co/sonoisa/t5-base-japanese-mC4-Wikipedia)
d9d2cc4029519bb199d67087bf43f676
apache-2.0
['t5', 'text2text-generation', 'seq2seq']
false
Citations
- mC4 Contains information from `mC4` which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).
```bibtex
@article{2019t5,
  author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  journal = {arXiv e-prints},
  year = {2019},
  archivePrefix = {arXiv},
  eprint = {1910.10683},
}
```
- wiki40b
```bibtex
@inproceedings{49029,
  title = {Wiki-40B: Multilingual Language Model Dataset},
  author = {Mandy Guo and Zihang Dai and Denny Vrandecic and Rami Al-Rfou},
  year = {2020},
  booktitle = {LREC 2020}
}
```
522184b88ecaaf474b36820d590d44cf
mit
['generated_from_keras_callback']
false
roberta-base-finetuned-unlabeled_all This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 2.3581
- Validation Loss: 2.1388
- Epoch: 0
4e790851d667b143fd86e58dc26a655b
mit
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4659, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
62debe2b91af3186d1a518d9d6d7e67f
apache-2.0
['translation']
false
opus-mt-ts-es
* source languages: ts
* target languages: es
* OPUS readme: [ts-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.eval.txt)
ca4d1c50ffedb69c1eb73ec5e40c8798
unknown
[]
false
She left him the way they all left him, in self-defense: the emotional dependence that had bound her so many times was finally overcome by her survival instinct.
- You're a whore, you want to leave me for that guy you sleep with whenever we argue.
- Sweetheart, I'm not with anyone but you, and you can't treat me like this; I'm the only one who understands that you do it because of your disorder.
- I don't have any disorder, you're the one who makes me sick with your indecision; you already know you're the love of my life and that I would do anything, anything, not to lose you, I'm warning you.
1fd11fad20b412448aaadf50e4905181
apache-2.0
['translation']
false
opus-mt-nso-sv
* source languages: nso
* target languages: sv
* OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt)
43e0d56edf77bc9de5e789b3dc148992