license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['Twitter', 'Multilingual']
false
Citation If you use TwHIN-BERT or our datasets in your work, please cite the following: ```bib @article{zhang2022twhin, title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations}, author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed}, journal={arXiv preprint arXiv:2209.07562}, year={2022} } ```
faa8a5f55dbdc2cd4affdc895c99606c
apache-2.0
['roberta-wwm']
false
Using Huggingface-Transformers Built on [Huggingface-Transformers](https://github.com/huggingface/transformers), the models above can be loaded easily. ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("MODEL_NAME") model = BertModel.from_pretrained("MODEL_NAME") ``` **Note: all models in this directory must be loaded with BertTokenizer and BertModel — do not use RobertaTokenizer/RobertaModel!** The `MODEL_NAME` values are listed below: | Model | MODEL_NAME | | - | - | | fin-roberta-wwm | wangfan/jdt-fin-roberta-wwm |
8b2eff3b5ec315500be38085246af16e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 282 | 3.3270 | 17.3937 | 4.0098 | 13.0087 | 15.3801 | 18.984 |
4e04de5c9f7aa7a38b6400bbdd0be5b6
mit
['spacy', 'token-classification']
false
uk_core_news_md Ukrainian pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `uk_core_news_md` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | floret (50000, 300) | | **Sources** | [Ukr-Synth (e5d9eaf3)](https://huggingface.co/datasets/ukr-models/Ukr-Synth) (Volodymyr Kurnosov)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) |
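A minimal usage sketch for a packaged spaCy pipeline like the one described above. It assumes the `uk_core_news_md` package has been installed (e.g. with `pip install` from the spaCy release wheel), in which case `spacy.load("uk_core_news_md")` returns the full pipeline; the runnable part below falls back to a blank Ukrainian pipeline, which exercises only the tokenizer the model builds on:

```python
import spacy

# With the trained package installed, the full pipeline would be:
#   nlp = spacy.load("uk_core_news_md")
#   doc = nlp("Київ — столиця України.")
#   ents = [(e.text, e.label_) for e in doc.ents]  # labels: LOC, ORG, PER
#
# Without it, a blank Ukrainian pipeline still demonstrates tokenization:
nlp = spacy.blank("uk")
doc = nlp("Київ — столиця України.")
print([t.text for t in doc])
```

The blank pipeline has no tagger, parser, or NER, so `doc.ents` stays empty; those components only come with the trained package.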
0872e5c78a6ccd522f92859352b54849
mit
['spacy', 'token-classification']
false
Label Scheme <details> <summary>View label scheme (1211 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `POS=CCONJ`, `Degree=Cmp\|POS=ADV`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=ADV\|PronType=Rel`, `POS=PART`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `POS=ADV`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|POS=ADP`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Loc\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Nom\|NumType=Card\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Loc\|Number=Plur\|POS=ADJ`, `POS=SCONJ`, 
`Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf`, `Degree=Pos\|POS=ADV`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Person=0\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Tot`, `POS=PART\|Polarity=Neg`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT\|PunctType=Quot`, `POS=PUNCT\|PunctType=Dash`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `POS=ADV\|PronType=Dem`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|POS=ADP`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=X`, 
`Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|POS=ADP`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Number=Ptan\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|POS=PRON\|PronType=Neg`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=SPACE`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, 
`Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|NumType=Card\|POS=DET\|PronType=Dem`, `Animacy=Anim\|Case=Gen\|Number=Ptan\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=NOUN`, `Abbr=Yes\|Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Animacy=Anim\|Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Loc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=ADJ`, 
`Case=Gen\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Loc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Acc\|NumType=Card\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Abbr=Yes\|Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Degree=Abs\|POS=ADV`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|Variant=Short`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Hyph=Yes\|POS=ADJ\|Variant=Short`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Degree=Sup\|POS=ADV`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Rel`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|POS=AUX\|VerbForm=Inf`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Number=Plur\|POS=PROPN\|Uninflect=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=INTJ`, `Case=Acc\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, 
`Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Rel`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=X\|Uninflect=Yes`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Case=Loc\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Aspect=Perf\|Case=Loc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Animacy=Anim\|Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, 
`Animacy=Anim\|Case=Acc\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Number=Ptan\|POS=NOUN`, `Case=Gen\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Case=Nom\|NumType=Card\|POS=NUM`, `POS=SYM`, `Case=Loc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|NumType=Card\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Gen\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Aspect=Perf\|Case=Ins\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Dat\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, 
`Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Nom\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Gen\|NumType=Card\|POS=NUM`, `Case=Ins\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, 
`Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Loc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Abbr=Yes\|Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Hyph=Yes\|POS=ADJ`, `POS=ADV\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Voc\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=ADV\|PronType=Neg`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Rel`, `Animacy=Anim\|Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Variant=Short`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Animacy=Anim\|Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PART\|PartType=Conseq`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, 
`Animacy=Anim\|Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|NumType=Card\|POS=DET\|PronType=Ind`, `Mood=Cnd\|POS=AUX`, `Abbr=Yes\|Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Nom\|Number=Ptan\|POS=NOUN`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Case=Dat\|POS=ADP`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Case=Loc\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|POS=PRON\|PronType=Ind`, 
`Abbr=Yes\|Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=NOUN\|Uninflect=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg\|Variant=Short`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=X`, `Case=Nom\|Gender=Masc\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Animacy=Inan\|Case=Ins\|Number=Ptan\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=NOUN\|Uninflect=Yes`, `POS=ADV\|PronType=Int`, `Aspect=Imp\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Conv`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Acc\|NumType=Card\|Number=Plur\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=PROPN\|Uninflect=Yes`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Number=Ptan\|POS=PROPN\|Uninflect=Yes`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Loc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Masc\|NumType=Card\|POS=NUM`, 
`Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Nom\|POS=PRON\|PronType=Neg`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Number=Ptan\|POS=PROPN\|Uninflect=Yes`, `Aspect=Imp\|Case=Ins\|Number=Plur\|POS=ADJ\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Animacy=Anim\|Case=Acc\|Number=Ptan\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|NumType=Card\|POS=NUM`, `Case=Ins\|Gender=Masc\|NumType=Card\|POS=NUM`, `Case=Acc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, 
`Case=Ins\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Abbr=Yes\|Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Anim\|Animacy[gram]=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Gen\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Animacy=Inan\|Case=Loc\|Number=Ptan\|POS=PROPN`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Nom\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Animacy=Inan\|Case=Gen\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, _(truncated: full list in pipeline meta)_ | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advcl:sp`, `advcl:svc`, `advmod`, `advmod:det`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `det:numgov`, `discourse`, `expl`, `fixed`, `flat:abs`, `flat:foreign`, `flat:name`, `flat:range`, `flat:repeat`, `flat:sibl`, `flat:title`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `nummod:gov`, `obj`, `obl`, `orphan`, `parataxis`, `parataxis:discourse`, `punct`, `vocative`, `xcomp`, `xcomp:sp` | | **`ner`** | `LOC`, `ORG`, `PER` | </details>
f2e43c3b09ca479ecabdbbb82a7cbe6c
mit
['spacy', 'token-classification']
false
Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.99 | | `TOKEN_P` | 99.99 | | `TOKEN_R` | 99.97 | | `TOKEN_F` | 99.98 | | `POS_ACC` | 98.19 | | `MORPH_ACC` | 95.19 | | `MORPH_MICRO_P` | 97.85 | | `MORPH_MICRO_R` | 97.15 | | `MORPH_MICRO_F` | 97.50 | | `SENTS_P` | 94.18 | | `SENTS_R` | 90.60 | | `SENTS_F` | 92.36 | | `DEP_UAS` | 93.85 | | `DEP_LAS` | 91.76 | | `TAG_ACC` | 98.19 | | `LEMMA_ACC` | 0.00 | | `ENTS_P` | 87.49 | | `ENTS_R` | 87.82 | | `ENTS_F` | 87.66 |
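The F-scores in the table above are the harmonic mean of the corresponding precision and recall. A quick sanity check against the reported values (a generic helper of ours, not part of the pipeline):

```python
def f_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# SENTS_F from SENTS_P and SENTS_R above
print(round(f_score(94.18, 90.60), 2))  # → 92.36, matching the table
```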
4de7b8a92425fe3ad7332f74bea4a1a4
mit
['question-generation']
false
T5 for question-generation This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens. You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example `<hl> 42 <hl> is the answer to life, the universe and everything. </s>` For more details see [this](https://github.com/patil-suraj/question_generation) repo.
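The highlighting convention described above can be applied programmatically; a minimal sketch (the helper name is ours, not part of the linked repo):

```python
def make_qg_input(context: str, answer: str) -> str:
    """Wrap the first occurrence of the answer span in <hl> tokens and append </s>."""
    start = context.index(answer)
    end = start + len(answer)
    return f"{context[:start]}<hl> {answer} <hl>{context[end:]} </s>"

print(make_qg_input("42 is the answer to life, the universe and everything.", "42"))
# → <hl> 42 <hl> is the answer to life, the universe and everything. </s>
```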
b07d1f1a4d6b1db4f6dd9e0135ed3f48
apache-2.0
['generated_from_trainer']
false
try_connll-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0596 - Precision: 0.9283 - Recall: 0.9372 - F1: 0.9328 - Accuracy: 0.9841
4f301ecaf236b940098aca17fb2e4d22
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2383 | 1.0 | 878 | 0.0691 | 0.9139 | 0.9239 | 0.9189 | 0.9810 | | 0.0497 | 2.0 | 1756 | 0.0607 | 0.9200 | 0.9343 | 0.9271 | 0.9833 | | 0.0303 | 3.0 | 2634 | 0.0596 | 0.9283 | 0.9372 | 0.9328 | 0.9841 |
f914c4cc2a364cf8c7fd84c9f973289a
apache-2.0
['generated_from_trainer', 'summarization']
false
mt5-small-finetuned-arxiv-cs This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on a subset of the arxiv dataset. It achieves the following results on the evaluation set: - Loss: 1.6922 - Rouge1: 0.7734 - Rouge2: 0.2865 - Rougel: 0.6665 - Rougelsum: 0.6743
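The Rouge1 score above measures unigram overlap between generated and reference summaries. A minimal illustration of the underlying idea (a simplified sketch, not the exact `rouge_score` implementation used for evaluation):

```python
def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    # clipped counts: each reference occurrence can be matched at most once
    overlap = sum(min(cand.count(w), ref.count(w)) for w in set(cand))
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f("the cat sat", "the cat sat on the mat"), 3))  # → 0.667
```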
08b50d74ce896a7d2475d483802288e6
apache-2.0
['generated_from_trainer', 'summarization']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 14.0947 | 1.0 | 500 | 2.7666 | 1.2101 | 0.459 | 1.1426 | 1.1385 | | 2.8524 | 2.0 | 1000 | 1.8208 | 0.0 | 0.0 | 0.0 | 0.0 | | 2.2623 | 3.0 | 1500 | 1.6922 | 0.7734 | 0.2865 | 0.6665 | 0.6743 |
7ad434adc1ec3e256770068c1072d2c5
apache-2.0
['StableDiffusion', 'Warhammer', 'wh40k']
false
StableDiffusion model trained on Sororitas Sisters of Battle dataset Use token whsororitas for Sororitas Use token whinsignia for Insignia-themed items - Samples ![](002894.cefd6aa7.3328309311.png) ![](002779.09ad1707.2535063938.png) ![](002795.595afbdc.1309005523.png) ![](002902.1a772dce.3328309311.png) ![](003036.9b1585a3.71377978.png) ![](003039.e6481d2b.71377978.png) ![](003040.ea9c2949.71377978.png) ![](003042.24418fca.1336165508.png) ![](003473.efafacf2.3715296471.png) ![](003475.5b900bb1.3715296471.png) ![](002805.d088ab5c.3987490540.png)
ee74b1f78f4f5cc1497ed3e150c6cb30
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-2 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1358 - F1: 0.8638
a5fa8b3c7d8daef1c493cb26f81c2eb8
mit
['generated_from_trainer']
false
farsi_lastname_classifier_2 This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0370 - Pearson: 0.9361
210068d646baf28b3824dd55cc5ea2de
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 1.0 | 12 | 0.2937 | 0.7153 | | No log | 2.0 | 24 | 0.1063 | 0.8056 | | No log | 3.0 | 36 | 0.0530 | 0.9110 | | No log | 4.0 | 48 | 0.0446 | 0.9272 | | No log | 5.0 | 60 | 0.0445 | 0.9250 | | No log | 6.0 | 72 | 0.0528 | 0.9096 | | No log | 7.0 | 84 | 0.0407 | 0.9318 | | No log | 8.0 | 96 | 0.0344 | 0.9350 | | No log | 9.0 | 108 | 0.0378 | 0.9359 | | No log | 10.0 | 120 | 0.0370 | 0.9361 |
743765178b450ce1b32f498b27db72fd
apache-2.0
['tabular-classification', 'baseline-trainer']
false
Baseline Model trained on titanic_traink4m62li8 to apply classification on survived **Metrics of the best model:** accuracy 0.975294 average_precision 0.983664 roc_auc 0.987422 recall_macro 0.971786 f1_macro 0.973370 Name: MultinomialNB(), dtype: float64 **See model plot below:** <style>
ebb4e53c666fd7587190425a5bfb6981
apache-2.0
['tabular-classification', 'baseline-trainer']
false
x27;,EasyPreprocessor(types= continuous dirty_float ... free_string useless passenger_id True False ... False False pclass False False ... False False name False False ... True False sex False False ... False False age True False ... False False sibsp False False ... False False parch False False ... False False ticket False False ... True False fare True False ... False False cabin False False ... True False embarked False False ... False False boat False False ... False False body True False ... False False home.dest False False ... True False[14 rows x 7 columns])),(&
7571c7486007c98ef7fdcb02378d3e64
apache-2.0
['tabular-classification', 'baseline-trainer']
false
x27;, MultinomialNB())]))])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" ><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">EasyPreprocessor</label><div class="sk-toggleable__content"><pre>EasyPreprocessor(types= continuous dirty_float ... free_string useless passenger_id True False ... False False pclass False False ... False False name False False ... True False sex False False ... False False age True False ... False False sibsp False False ... False False parch False False ... False False ticket False False ... True False fare True False ... False False cabin False False ... True False embarked False False ... False False boat False False ... False False body True False ... False False home.dest False False ... True False[14 rows x 7 columns])</pre></div></div></div><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-3" type="checkbox" ><label for="sk-estimator-id-3" class="sk-toggleable__label sk-toggleable__label-arrow">pipeline: Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[(&
a8806286ba5a996d63956c7bc02784a4
apache-2.0
['summarization', 'generated_from_trainer']
false
bart-base-finetuned-samsum-v2 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.5326 - Rouge1: 47.3928 - Rouge2: 24.0713 - Rougel: 40.029 - Rougelsum: 43.6252 - Gen Len: 17.8154
25e421d5f709be9d64fcf09ce6d1da86
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP
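The `total_train_batch_size` above is derived rather than set directly: it is the per-device batch size multiplied by the gradient accumulation steps. A quick check of the listed values:

```python
train_batch_size = 4
gradient_accumulation_steps = 2

# optimizer steps are taken every `gradient_accumulation_steps` forward passes,
# so the effective (total) batch size is the product of the two
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 8, matching the value reported above
```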
be2946bd81aa611daa7293d453b01792
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:| | 1.59 | 1.0 | 1841 | 1.5326 | 47.3928 | 24.0713 | 40.029 | 43.6252 | 17.8154 |
e4e322c928ab591a6f82579961132232
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_stsb_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.1255 - Pearson: nan - Spearmanr: nan - Combined Score: nan
047ac08d3a787f15b0d370683f6a5376
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 4.2655 | 1.0 | 23 | 3.2719 | 0.0074 | 0.0048 | 0.0061 | | 3.8876 | 2.0 | 46 | 3.0839 | -0.0416 | -0.0423 | -0.0420 | | 3.6577 | 3.0 | 69 | 2.8849 | nan | nan | nan | | 3.4237 | 4.0 | 92 | 2.6822 | 0.0011 | 0.0035 | 0.0023 | | 3.1879 | 5.0 | 115 | 2.4766 | nan | nan | nan | | 2.9317 | 6.0 | 138 | 2.2745 | 0.0091 | 0.0098 | 0.0094 | | 2.6928 | 7.0 | 161 | 2.0801 | 0.0173 | 0.0165 | 0.0169 | | 2.4619 | 8.0 | 184 | 1.8985 | -0.0019 | -0.0026 | -0.0023 | | 2.2395 | 9.0 | 207 | 1.7302 | nan | nan | nan | | 2.0254 | 10.0 | 230 | 1.5798 | nan | nan | nan | | 1.8258 | 11.0 | 253 | 1.4485 | nan | nan | nan | | 1.6552 | 12.0 | 276 | 1.3382 | -0.0040 | -0.0043 | -0.0041 | | 1.511 | 13.0 | 299 | 1.2493 | -0.0376 | -0.0378 | -0.0377 | | 1.3781 | 14.0 | 322 | 1.1843 | nan | nan | nan | | 1.2754 | 15.0 | 345 | 1.1427 | nan | nan | nan | | 1.193 | 16.0 | 368 | 1.1255 | nan | nan | nan | | 1.1427 | 17.0 | 391 | 1.1320 | 0.0123 | 0.0102 | 0.0113 | | 1.1061 | 18.0 | 414 | 1.1565 | 0.0412 | 0.0370 | 0.0391 | | 1.0979 | 19.0 | 437 | 1.1724 | nan | nan | nan | | 1.0972 | 20.0 | 460 | 1.1748 | 0.0246 | 0.0255 | 0.0251 | | 1.0882 | 21.0 | 483 | 1.1792 | nan | nan | nan |
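The `nan` entries above typically appear when the model emits a (near-)constant prediction: Pearson's r divides by the standard deviation of the predictions, which is then zero. A dependency-free illustration (our own sketch, not the evaluation code used here):

```python
import math

def pearson(xs, ys):
    """Pearson correlation; nan when either input has zero variance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return float("nan")  # undefined for constant inputs
    return cov / (sx * sy)

print(pearson([2.5, 2.5, 2.5], [1.0, 3.0, 5.0]))  # → nan (constant predictions)
```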
7ab01239a8b03e836761d5bc10afb99b
apache-2.0
['generated_from_trainer']
false
bart-paraphrase-finetuned-xsum-v3 This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3377 - Rouge1: 99.9461 - Rouge2: 72.6619 - Rougel: 99.9461 - Rougelsum: 99.9461 - Gen Len: 9.0396
3fb701702bd83d6145f001687c8ae045
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 139 | 0.3653 | 96.4972 | 70.8271 | 96.5252 | 96.5085 | 9.7158 | | No log | 2.0 | 278 | 0.6624 | 98.3228 | 72.2829 | 98.2598 | 98.2519 | 9.0612 | | No log | 3.0 | 417 | 0.2880 | 98.2415 | 72.36 | 98.249 | 98.2271 | 9.4496 | | 0.5019 | 4.0 | 556 | 0.4188 | 98.1123 | 70.8536 | 98.0746 | 98.0465 | 9.4065 | | 0.5019 | 5.0 | 695 | 0.3718 | 98.8882 | 72.6619 | 98.8997 | 98.8882 | 10.7842 | | 0.5019 | 6.0 | 834 | 0.4442 | 99.6076 | 72.6619 | 99.6076 | 99.598 | 9.0647 | | 0.5019 | 7.0 | 973 | 0.2681 | 99.6076 | 72.6619 | 99.598 | 99.598 | 9.1403 | | 0.2751 | 8.0 | 1112 | 0.3577 | 99.2479 | 72.6619 | 99.2536 | 99.2383 | 9.0612 | | 0.2751 | 9.0 | 1251 | 0.2481 | 98.8785 | 72.6394 | 98.8882 | 98.8882 | 9.7914 | | 0.2751 | 10.0 | 1390 | 0.2339 | 99.6076 | 72.6619 | 99.6076 | 99.6076 | 9.1942 | | 0.2051 | 11.0 | 1529 | 0.2472 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.2338 | | 0.2051 | 12.0 | 1668 | 0.3948 | 99.6076 | 72.6619 | 99.598 | 99.598 | 9.0468 | | 0.2051 | 13.0 | 1807 | 0.4756 | 99.6076 | 72.6619 | 99.6076 | 99.6076 | 9.0576 | | 0.2051 | 14.0 | 1946 | 0.3543 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1544 | 15.0 | 2085 | 0.2828 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0576 | | 0.1544 | 16.0 | 2224 | 0.2456 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.1079 | | 0.1544 | 17.0 | 2363 | 0.2227 | 99.9461 | 72.6394 | 99.9461 | 99.9461 | 9.5072 | | 0.1285 | 18.0 | 2502 | 0.3490 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1285 | 19.0 | 2641 | 0.3736 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 | | 0.1285 | 20.0 | 2780 | 0.3377 | 99.9461 | 72.6619 | 99.9461 | 99.9461 | 9.0396 |
24514f77585562245822f9e4e500d687
apache-2.0
['generated_from_trainer']
false
nyaszzzz This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.4490
066760623fe8fe208f1e4712cffd2428
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.5
3a8abd5dc53faf45786898142df0af97
mit
['timelms', 'twitter']
false
Twitter June 2022 (RoBERTa-base, 154M) This is a RoBERTa-base model trained on 153.86M tweets until the end of June 2022 (15M tweets increment). More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829). Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms). For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms).
e998e6b5784f72bd6fd8d46f26bba5e0
mit
['timelms', 'twitter']
false
Example Masked Language Model ```python from transformers import pipeline, AutoTokenizer MODEL = "cardiffnlp/twitter-roberta-base-mar2022-15M-incr" fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL) tokenizer = AutoTokenizer.from_pretrained(MODEL) def preprocess(text): # replace user handles and links, matching the preprocessing used at training time new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) def pprint(candidates, n): for i in range(n): token = tokenizer.decode(candidates[i]['token']) score = candidates[i]['score'] print("%d) %.5f %s" % (i+1, score, token)) texts = [ "So glad I'm <mask> vaccinated.", "I keep forgetting to bring a <mask>.", "Looking forward to watching <mask> Game tonight!", ] for text in texts: t = preprocess(text) print(f"{'-'*30}\n{t}") candidates = fill_mask(t) pprint(candidates, 5) ``` Output: ``` ------------------------------ So glad I'm <mask> vaccinated. 1) 0.35668 not 2) 0.27636 fully 3) 0.18418 getting 4) 0.03197 still 5) 0.02259 triple ------------------------------ I keep forgetting to bring a <mask>. 1) 0.04261 book 2) 0.04233 backpack 3) 0.04161 charger 4) 0.03892 mask 5) 0.03636 lighter ------------------------------ Looking forward to watching <mask> Game tonight! 1) 0.55292 the 2) 0.17813 The 3) 0.03052 this 4) 0.01565 Championship 5) 0.01391 End ```
c7ccf2871e41477e3ec46ba966121547
mit
['timelms', 'twitter']
false
```python from transformers import AutoTokenizer, AutoModel from scipy.spatial.distance import cosine from collections import Counter import numpy as np def get_embedding(text): # naive approach for demonstration; assumes the preprocess() helper from this card's other examples text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() return np.mean(features[0], axis=0) MODEL = "cardiffnlp/twitter-roberta-base-mar2022-15M-incr" tokenizer = AutoTokenizer.from_pretrained(MODEL) model = AutoModel.from_pretrained(MODEL) query = "The book was awesome" tweets = ["I just ordered fried chicken 🐣", "The movie was great", "What time is the next game?", "Just finished reading 'Embeddings in NLP'"] sims = Counter() for tweet in tweets: sim = 1 - cosine(get_embedding(query), get_embedding(tweet)) sims[tweet] = sim print('Most similar to: ', query) print(f"{'-'*30}") for idx, (tweet, sim) in enumerate(sims.most_common()): print("%d) %.5f %s" % (idx+1, sim, tweet)) ``` Output: ``` Most similar to: The book was awesome ------------------------------ 1) 0.98951 The movie was great 2) 0.96042 Just finished reading 'Embeddings in NLP' 3) 0.95454 I just ordered fried chicken 🐣 4) 0.95148 What time is the next game? ```
f844a5dcf0934772e4b19b288ad38172
mit
['timelms', 'twitter']
false
Example Feature Extraction ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np MODEL = "cardiffnlp/twitter-roberta-base-mar2022-15M-incr" tokenizer = AutoTokenizer.from_pretrained(MODEL) text = "Good night 😊" text = preprocess(text) ```
248d8ad9ce04be4095e1edac0e73ed51
apache-2.0
['translation']
false
opus-mt-gil-es * source languages: gil * target languages: es * OPUS readme: [gil-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.eval.txt)
0121184aba41a8921a0f99751b759661
mit
['generated_from_trainer']
false
xlnet-base-cased-finetuned-wnli This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6874 - Accuracy: 0.5634
e4d3b996ed9dac0f0044363df8fbe334
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.7209 | 0.5352 | | No log | 2.0 | 80 | 0.6874 | 0.5634 | | No log | 3.0 | 120 | 0.6908 | 0.5634 | | No log | 4.0 | 160 | 0.6987 | 0.4930 | | No log | 5.0 | 200 | 0.6952 | 0.5634 |
805158bceeee3c21b30f648012ff1095
apache-2.0
['automatic-speech-recognition', 'ja']
false
exp_w2v2t_ja_wav2vec2_s727 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
9ee90e349823e4f27f1b8b23bb574e1f
apache-2.0
['generated_from_keras_callback']
false
Lunage/my_distilbert-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.6915 - Validation Loss: 3.4024 - Epoch: 0
ca89e9ac9b102fb5162f45a14c927279
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -843, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
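The schedule above wraps a linear `PolynomialDecay` (power 1.0) inside a 1000-step `WarmUp`: the learning rate ramps linearly from 0 to 2e-05 over the warmup, then decays linearly toward 0. A dependency-free sketch of that curve (our simplification of the Keras classes; `decay_steps=2000` is illustrative, since the serialized value above is negative):

```python
def lr_at(step, initial_lr=2e-05, warmup_steps=1000, decay_steps=2000, end_lr=0.0):
    """Linear warmup followed by linear (power=1.0) decay to end_lr."""
    if step < warmup_steps:
        return initial_lr * step / warmup_steps
    # progress through the decay phase, clamped at 1.0
    progress = min((step - warmup_steps) / decay_steps, 1.0)
    return initial_lr + (end_lr - initial_lr) * progress

print(lr_at(500))   # halfway through warmup → 1e-05
print(lr_at(1000))  # warmup complete → 2e-05
```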
a89056e40fe9970928a9f2503c265d63
apache-2.0
['pytorch', 'causal-lm']
false
By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) * language: el * licence: apache-2.0 * dataset: ~23.4 GB of Greek corpora * model: GPT2 (12-layer, 768-hidden, 12-heads, 117M parameters. OpenAI GPT-2 English model, finetuned for the Greek language) * pre-processing: tokenization + BPE segmentation * metrics: perplexity
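Perplexity, the metric listed above, is the exponential of the mean per-token cross-entropy loss; a one-line check:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity is exp of the mean per-token cross-entropy (in nats)."""
    return math.exp(cross_entropy_loss)

print(round(perplexity(3.0), 2))  # → 20.09
```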
5a9377ec3d33848aaaf425f7f52069b7
apache-2.0
['pytorch', 'causal-lm']
false
Model description A text generation (autoregressive) model, using Huggingface transformers and fastai based on the English GPT-2. Finetuned with gradual layer unfreezing. This is a more efficient and sustainable alternative compared to training from scratch, especially for low-resource languages. Based on the work of Thomas Dehaene (ML6) for the creation of a Dutch GPT2: https://colab.research.google.com/drive/1Y31tjMkB8TqKKFlZ5OJ9fcMp3p8suvs4?usp=sharing
4085d93fc16a03f32dadfe562eddb6bc
apache-2.0
['pytorch', 'causal-lm']
false
How to use ``` from transformers import pipeline model = "lighteternal/gpt2-finetuned-greek" generator = pipeline( 'text-generation', device=0, model=f'{model}', tokenizer=f'{model}') text = "Μια φορά κι έναν καιρό" print("\n".join([x.get("generated_text") for x in generator( text, max_length=len(text.split(" ")) + 15, do_sample=True, top_k=50, repetition_penalty=1.2, add_special_tokens=False, num_return_sequences=5, temperature=0.95, top_p=0.95)])) ```
2eb9d7d63be33635faf44368c417ae37
apache-2.0
['pytorch', 'causal-lm']
false
Training data We used a 23.4GB sample from a consolidated Greek corpus from CC100, Wikimatrix, Tatoeba, Books, SETIMES and GlobalVoices containing long sequences. This is an improved version of our GPT-2 small model (https://huggingface.co/lighteternal/gpt2-finetuned-greek-small)
0c951cceaed36c70185a94ed41d2d56a
apache-2.0
['pytorch', 'causal-lm']
false
Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call). Based on the work of Thomas Dehaene (ML6): https://blog.ml6.eu/dutch-gpt2-autoregressive-language-modelling-on-a-budget-cff3942dd020
d762c06e0ff7f8f20796dfc2692b2314
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4779 - Wer: 0.3468
dbbd97276ada1825001ed4d32c531849
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4408 | 4.0 | 500 | 1.2302 | 0.9116 | | 0.561 | 8.0 | 1000 | 0.4809 | 0.4320 | | 0.2091 | 12.0 | 1500 | 0.4285 | 0.3880 | | 0.1221 | 16.0 | 2000 | 0.4448 | 0.3665 | | 0.0858 | 20.0 | 2500 | 0.4622 | 0.3585 | | 0.0597 | 24.0 | 3000 | 0.4621 | 0.3517 | | 0.0453 | 28.0 | 3500 | 0.4779 | 0.3468 |
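The Wer column above is word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the reference length. A minimal, dependency-free sketch (evaluation normally uses a library such as `jiwer` rather than this helper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level edit distance (rolling-row Levenshtein)."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (0 if equal)
        prev = cur
    return prev[-1] / len(ref)

# one substitution ("sat"→"sit") and one deletion ("the") over 6 reference words
print(round(wer("the cat sat on the mat", "the cat sit on mat"), 3))  # → 0.333
```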
d1577e4b7d8619d44d9238848d24696c
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-transformers-github-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2348
d8e3779825db85353437050ee0106516
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0247 | 1.0 | 582 | 1.6457 | | 1.5989 | 2.0 | 1164 | 1.4157 | | 1.4449 | 3.0 | 1746 | 1.3494 | | 1.3579 | 4.0 | 2328 | 1.3774 | | 1.3039 | 5.0 | 2910 | 1.1908 | | 1.2428 | 6.0 | 3492 | 1.2780 | | 1.19 | 7.0 | 4074 | 1.2569 | | 1.1544 | 8.0 | 4656 | 1.1927 | | 1.126 | 9.0 | 5238 | 1.1703 | | 1.0893 | 10.0 | 5820 | 1.2100 | | 1.0631 | 11.0 | 6402 | 1.1988 | | 1.0417 | 12.0 | 6984 | 1.1643 | | 1.0252 | 13.0 | 7566 | 1.2202 | | 1.0101 | 14.0 | 8148 | 1.1678 | | 0.9972 | 15.0 | 8730 | 1.0999 | | 0.995 | 16.0 | 9312 | 1.2348 |
31ba48e2c5ff9cddd8761f2d0c1c1e87
apache-2.0
['generated_from_trainer']
false
whisper-small-nya This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5086 - Wer: 27.5487
510420ab9ffaa7e546251dafd5b82caf
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP
49472349f7efdf129b3b694b210447a6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2671 | 0.99 | 500 | 0.5633 | 35.9244 | | 0.1372 | 1.97 | 1000 | 0.4515 | 48.1630 | | 0.0742 | 2.96 | 1500 | 0.4474 | 32.4985 | | 0.0341 | 3.94 | 2000 | 0.4595 | 35.3574 | | 0.0191 | 4.93 | 2500 | 0.4722 | 28.2930 | | 0.0073 | 5.92 | 3000 | 0.4774 | 25.3633 | | 0.0031 | 6.9 | 3500 | 0.4875 | 25.9539 | | 0.0009 | 7.89 | 4000 | 0.4995 | 26.2611 | | 0.0012 | 8.87 | 4500 | 0.5056 | 25.1861 | | 0.0004 | 9.86 | 5000 | 0.5086 | 27.5487 |
5cadfc648261658a2f60dedb3d94dc05
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-zeroth This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the zeroth_korean_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.7052 - Wer: 0.4621
e15fba4c10f431ef918b3cb1d8c1e1b9
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 15.1763 | 1.61 | 400 | 4.6768 | 1.0 | | 3.1779 | 3.21 | 800 | 1.6680 | 0.8752 | | 1.052 | 4.82 | 1200 | 0.9580 | 0.7332 | | 0.5412 | 6.42 | 1600 | 0.7752 | 0.5993 | | 0.3281 | 8.03 | 2000 | 0.7158 | 0.5615 | | 0.2312 | 9.64 | 2400 | 0.6975 | 0.5532 | | 0.2001 | 11.24 | 2800 | 0.7489 | 0.5677 | | 0.1587 | 12.85 | 3200 | 0.6954 | 0.5267 | | 0.1321 | 14.46 | 3600 | 0.7329 | 0.5371 | | 0.1178 | 16.06 | 4000 | 0.7534 | 0.5341 | | 0.103 | 17.67 | 4400 | 0.7046 | 0.5066 | | 0.0843 | 19.28 | 4800 | 0.7507 | 0.5028 | | 0.079 | 20.88 | 5200 | 0.7137 | 0.4886 | | 0.0647 | 22.49 | 5600 | 0.7170 | 0.4855 | | 0.0565 | 24.1 | 6000 | 0.7124 | 0.4781 | | 0.0487 | 25.7 | 6400 | 0.7043 | 0.4721 | | 0.0433 | 27.31 | 6800 | 0.7128 | 0.4557 | | 0.0379 | 28.91 | 7200 | 0.7052 | 0.4621 |
e8b06ce636ac9665159ad0765669ccd5
apache-2.0
['translation']
false
opus-mt-tw-sv * source languages: tw * target languages: sv * OPUS readme: [tw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.eval.txt)
bbb2b945b65aed1f60a3e5a89cec2371
mit
['summarization', 'generated_from_trainer']
false
bart-base-cnn-xsum-wiki-swe This model is a fine-tuned version of [Gabriel/bart-base-cnn-xsum-swe](https://huggingface.co/Gabriel/bart-base-cnn-xsum-swe) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3884 - Rouge1: 26.8917 - Rouge2: 11.8254 - Rougel: 22.6089 - Rougelsum: 26.1492 - Gen Len: 19.3468
5c7421443cf2fcc18a8c690c3578b84c
mit
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 9 - mixed_precision_training: Native AMP
842b921b82614a3d9d322196018425bd
mit
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.4993 | 1.0 | 2985 | 2.3834 | 25.8959 | 10.9373 | 21.8329 | 25.2002 | 19.1416 | | 2.2397 | 2.0 | 5970 | 2.2939 | 26.1166 | 11.4087 | 22.2444 | 25.4752 | 19.2351 | | 2.0318 | 3.0 | 8955 | 2.2687 | 26.5222 | 11.6512 | 22.567 | 25.851 | 19.2384 | | 1.879 | 4.0 | 11940 | 2.2750 | 26.7637 | 11.7676 | 22.6674 | 26.0753 | 19.2622 | | 1.7532 | 5.0 | 14925 | 2.2923 | 26.8104 | 11.8724 | 22.6794 | 26.0907 | 19.3063 | | 1.6315 | 6.0 | 17910 | 2.3190 | 26.7758 | 11.7989 | 22.5925 | 26.032 | 19.3136 | | 1.5409 | 7.0 | 20895 | 2.3517 | 26.8762 | 11.8552 | 22.6694 | 26.1329 | 19.3275 | | 1.4711 | 8.0 | 23880 | 2.3679 | 26.899 | 11.9185 | 22.6764 | 26.1574 | 19.2994 | | 1.4105 | 9.0 | 26865 | 2.3884 | 26.8917 | 11.8254 | 22.6089 | 26.1492 | 19.3468 |
e9e23d0b657a563fcd2193c74a0c5a21
apache-2.0
['generated_from_keras_callback']
false
distil-bert-finetuned-log-parser-winlogbeat This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset. It achieves the following results on the evaluation set:
74f6cfc33d7f41c6d3d26e58a5cac9b9
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
0a62107be6429c66162c593c1b0de014
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
79fc5e02fc704ff04d08d29e1a9d769f
apache-2.0
['translation']
false
opus-mt-crs-en

* source languages: crs
* target languages: en
* OPUS readme: [crs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.eval.txt)
200df24f78668e65e42024dbdf814ab3
creativeml-openrail-m
['text-to-image', 'v2.1', 'Embedding']
false
TI embedding trained on 768x768 stills from 'The Transformers: The Movie' (1986). *Install by downloading the embedding and putting it in the **\embeddings** folder.* *Use the embedding's filename in your prompt to activate the style.*
![0001.png](https://s3.amazonaws.com/moonup/production/uploads/1670940426348-6364e6c712188d67e653853e.png)
![0002.png](https://s3.amazonaws.com/moonup/production/uploads/1670940426306-6364e6c712188d67e653853e.png)
![tf86movie_02.png](https://s3.amazonaws.com/moonup/production/uploads/1670940476558-6364e6c712188d67e653853e.png)
![tf86movie_03.png](https://s3.amazonaws.com/moonup/production/uploads/1670940477358-6364e6c712188d67e653853e.png)
![tf86movie_06.png](https://s3.amazonaws.com/moonup/production/uploads/1670940476971-6364e6c712188d67e653853e.png)
![tf86movie_01.png](https://s3.amazonaws.com/moonup/production/uploads/1670940548168-6364e6c712188d67e653853e.png)
![tf86movie_05.png](https://s3.amazonaws.com/moonup/production/uploads/1670940549109-6364e6c712188d67e653853e.png)
![tf86movie_09.png](https://s3.amazonaws.com/moonup/production/uploads/1670940548454-6364e6c712188d67e653853e.png)
![tf86movie_10.png](https://s3.amazonaws.com/moonup/production/uploads/1670940548713-6364e6c712188d67e653853e.png)
![tf86movie_07.png](https://s3.amazonaws.com/moonup/production/uploads/1670940668516-6364e6c712188d67e653853e.png)
All images rendered in SD v2.1
e6cfb508099d4233e2026cfc0a894ace
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 244  | 0.3302          | 0.8829    | 0.8757 | 0.8793 | 0.9140   |
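Precision, recall, and F1 as reported in the table can be sketched from raw prediction counts (a toy binary example with made-up labels, not this model's data):

```python
def precision_recall_f1(y_true, y_pred):
    """Binary precision/recall/F1 from parallel label lists (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))
```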
f3c06148108377ac325375ca4cdb18f6
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5282
- Wer: 0.3302
4b825e2f4a3d9b900e3ff046e11e29a8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5185        | 1.0   | 500   | 1.5798          | 0.9593 |
| 0.8096        | 2.01  | 1000  | 0.5024          | 0.5082 |
| 0.4196        | 3.01  | 1500  | 0.4594          | 0.4489 |
| 0.2936        | 4.02  | 2000  | 0.4104          | 0.4131 |
| 0.2215        | 5.02  | 2500  | 0.4308          | 0.4062 |
| 0.1891        | 6.02  | 3000  | 0.4242          | 0.3825 |
| 0.1626        | 7.03  | 3500  | 0.4187          | 0.3792 |
| 0.136         | 8.03  | 4000  | 0.4387          | 0.3766 |
| 0.1221        | 9.04  | 4500  | 0.4634          | 0.3832 |
| 0.1119        | 10.04 | 5000  | 0.4271          | 0.3640 |
| 0.0976        | 11.04 | 5500  | 0.4379          | 0.3701 |
| 0.0846        | 12.05 | 6000  | 0.4686          | 0.3648 |
| 0.0792        | 13.05 | 6500  | 0.4502          | 0.3595 |
| 0.0709        | 14.06 | 7000  | 0.4723          | 0.3634 |
| 0.0671        | 15.06 | 7500  | 0.4601          | 0.3577 |
| 0.058         | 16.06 | 8000  | 0.5146          | 0.3535 |
| 0.055         | 17.07 | 8500  | 0.5352          | 0.3540 |
| 0.0576        | 18.07 | 9000  | 0.5102          | 0.3469 |
| 0.0448        | 19.08 | 9500  | 0.5159          | 0.3527 |
| 0.0429        | 20.08 | 10000 | 0.5085          | 0.3538 |
| 0.0384        | 21.08 | 10500 | 0.5001          | 0.3453 |
| 0.0339        | 22.09 | 11000 | 0.5322          | 0.3460 |
| 0.032         | 23.09 | 11500 | 0.5295          | 0.3459 |
| 0.0306        | 24.1  | 12000 | 0.5285          | 0.3434 |
| 0.0268        | 25.1  | 12500 | 0.5280          | 0.3382 |
| 0.0231        | 26.1  | 13000 | 0.5259          | 0.3363 |
| 0.0242        | 27.11 | 13500 | 0.5298          | 0.3325 |
| 0.0215        | 28.11 | 14000 | 0.5350          | 0.3306 |
| 0.0226        | 29.12 | 14500 | 0.5282          | 0.3302 |
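WER in the table is word-level edit distance divided by the reference length; a minimal sketch (the run itself would have used the `wer` metric from the `datasets`/`evaluate` libraries rather than this toy implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))
```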
942f3fa38cf64232514a7fd0f13db10b
mit
['audio', 'automatic-speech-recognition', 'speech']
false
Pretrained Model Fine-tuned from the multilingual pretrained model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is available [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz. **Note: this model's output is produced without a language model, so you may observe a higher WER in some cases.**
0d52ebcf5520eefc38a915bd37b74886
mit
['audio', 'automatic-speech-recognition', 'speech']
false
Training Script Models were trained on the experimental platform set up by the Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation). If you want to explore the training logs on wandb, they are [here](https://wandb.ai/harveenchadha/tamil-finetuning-multilingual).
f8818f2a937b454c2ff962fba20ed54c
mit
['audio', 'automatic-speech-recognition', 'speech']
false
Usage The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

def parse_transcription(wav_file):
    # read the audio file
    audio_input, sample_rate = sf.read(wav_file)
    # tokenize
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # retrieve logits and take the argmax over the vocabulary
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # decode the ids to text
    transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
    print(transcription)
```
a3da51b4b8b899bdf6fc8bd33f3aa53a
mit
['audio', 'automatic-speech-recognition', 'speech']
false
```python
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
```
26706c2cd5c4d6b8c4c321ed33025b80
mit
['audio', 'automatic-speech-recognition', 'speech']
false
Evaluation The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
```
82225d2b6cd13c3e5c34a3dd3ca06bb0
mit
['audio', 'automatic-speech-recognition', 'speech']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 53.64 %
[**Colab Evaluation**](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_vakyansh_tamil_tnm_4200_evaluation_common_voice.ipynb)
8c285bb5a43963297c31b7442f978cda
mit
['bridgetower']
false
BridgeTower large-itm-mlm model The BridgeTower model was proposed in "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The model was pretrained on English text using masked language modeling (MLM) and image-text matching (ITM) objectives. It was introduced in [this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in [this repository](https://github.com/microsoft/BridgeTower). BridgeTower was accepted to [AAAI'23](https://aaai.org/Conferences/AAAI-23/).
9a0f2f6eb0cc455ec6a24c1869a43b08
mit
['bridgetower']
false
How to use Here is how to use this model to perform image and text matching:
```python
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-large-itm-mlm")
```
4863869ae62b2c35e4d883368d314fd1
mit
['bridgetower']
false
```python
# prepare inputs and score each candidate text against the image
scores = {}
for text in texts:
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, 1].item()
```
Here is how to use this model to perform masked language modeling:
```python
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> looking out of the window"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-large-itm-mlm")

# prepare inputs and decode the model's token predictions
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
print(processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()))
```
efe1753321a497dde5a710b31943c8e8
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab10 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4460
- Wer: 0.3425
7213f369dcb4b8ada1b23f25cc09b313
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9891        | 3.52  | 500  | 3.1554          | 1.0    |
| 1.71          | 7.04  | 1000 | 0.7122          | 0.5811 |
| 0.6164        | 10.56 | 1500 | 0.5149          | 0.4880 |
| 0.4188        | 14.08 | 2000 | 0.4726          | 0.4344 |
| 0.3038        | 17.61 | 2500 | 0.4765          | 0.4092 |
| 0.2312        | 21.13 | 3000 | 0.4387          | 0.3765 |
| 0.1867        | 24.65 | 3500 | 0.4411          | 0.3583 |
| 0.1582        | 28.17 | 4000 | 0.4460          | 0.3425 |
5bf55edce40fc5b60c585056bc7303ea
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-kitchen_and_dining-1000-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 4.4398
- Accuracy: 0.2308
46ce137a579c9d65addec16032ecd833
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0631        | 1.0   | 1    | 4.8365          | 0.1509   |
| 4.2899        | 2.0   | 2    | 4.6738          | 0.2041   |
| 3.7697        | 3.0   | 3    | 4.5378          | 0.2189   |
| 3.1321        | 4.0   | 4    | 4.4398          | 0.2308   |
| 2.7818        | 5.0   | 5    | 4.3885          | 0.2308   |
c20f9a48bc1f8b4ed7f523cf9ada5704
creativeml-openrail-m
['stable diffusion', 'stable diffusion diffusers', 'SlimeX']
false
**[SlimeX](https://civitai.com/models/6963/slimex) by [Zanc](https://civitai.com/user/Zanc) (owner)** **This model is intended to produce high-quality, highly detailed anime-style SFW and NSFW images.**
- **Slime** = no VAE
- **SlimeX** = VAE included
ddc52285e007145b2d9a023ace311ae3
creativeml-openrail-m
['stable diffusion', 'stable diffusion diffusers', 'SlimeX']
false
1 ![SampleImage1.png](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6648cc23-a8a2-4a3c-9bc7-3e5a68b74300/width=824)
```
masterpiece, best quality, 1girl, beautiful detailed eyes, perfect face, beautiful detailed face, looking at viewer, sigma 400mm f1.8, photo fine print, amazing sharp focus, ultra detailed, silver hair, upper body, navel, large breasts, race queen, black jacket, blue eyes, cat ears, long hair, sleepy, sweat, breathing, soft skin, indoors, afterglow
Negative prompt: (worst quality, low quality:1.4), (monochrome:1.3), (NSFW:1.4), 3d, text, frame, jpeg artifacts, grids, watermark, logo, username, text, flowers, particles, (missing fingers:1.3), bad hands,
Size: 448x576, Seed: 355678310, Model: SlimeX, Steps: 20, Sampler: DDIM, CFG scale: 8, Clip skip: 2, Model hash: f22782eb52, Hires steps: 20, Hires upscale: 1.85, Hires upscaler: Latent (nearest-exact), Denoising strength: 0.5
```
-
794f1b1f01e12a4bcac3d313ce9e256f
creativeml-openrail-m
['stable diffusion', 'stable diffusion diffusers', 'SlimeX']
false
2 ![SampleImage2.png](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5f846f8e-84bd-48e5-c846-5d9afbd53900/width=824)
```
masterpiece, best quality, izekonabe akio, 1girl
Negative prompt: (worst quality, low quality:1.4), (monochrome:1.4)
Size: 448x576, Seed: 3548745218, Model: SlimeX, Steps: 20, Sampler: DDIM, CFG scale: 8, Clip skip: 2, Model hash: f22782eb52, Hires steps: 20, Hires upscale: 1.85, Hires upscaler: Latent (nearest-exact), Denoising strength: 0.5
```
-
79ca83abe772affad2a18f9907432fbc
creativeml-openrail-m
['stable diffusion', 'stable diffusion diffusers', 'SlimeX']
false
3 ![SampleImage3.png](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a2f821eb-dffe-473c-e06f-b492e0fe2800/width=824)
```
masterpiece, best quality, ilyotaka haruhiko, solo, 1girl, solo, hair between eyes, long hair, short beard, light white purple hair, short hair, medium breasts, looking at viewer, thigh highs
Negative prompt: (worst quality, low quality:1.4), (monochrome:1.4)
Size: 448x576, Seed: 672966383, Model: SlimeX, Steps: 20, Sampler: DDIM, CFG scale: 8, Clip skip: 2, Model hash: f22782eb52, Hires steps: 20, Hires upscale: 1.85, Hires upscaler: Latent (nearest-exact), Denoising strength: 0.5
```
-
2621b3da4ba75cb80edba915d6bec8e8
apache-2.0
[]
false
Model description Skein is a series of hybrid story generation models intended for use in both text adventure writing and normal novel-style writing. The models are known to possess a strong second person bias. For inquiries, please contact the KoboldAI community. The name comes from the Integrated Development Environment for the Inform 7 programming language, which calls a dialogue tree a "skein". Inform 6 and 7 were used to create some of the interactive fiction in the dataset.
8250d82f2da946e5e3cd433b3922b4d6
apache-2.0
[]
false
Training procedure GPT-NeoX-20B-Skein was trained on a TPUv3-32 TPU pod using a heavily modified version of Ben Wang's Mesh Transformer JAX library, the original version of which was used by EleutherAI to train their GPT-J-6B model. The training hyperparameters and statistics can be found [here](https://wandb.ai/ve-forbryderne/skein-20b?workspace=user-ve-forbryderne).
9dcf0a248304dffcdbcc9e2a42d80c35
apache-2.0
[]
false
Training data The data mostly consist of light novels from the dataset of the [KoboldAI/GPT-Neo-2.7B-Horni-LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) model and assorted interactive fiction. The dataset uses `[Themes: <comma-separated list of genres>]` for tagging. For more details, consult [this document](https://wandb.ai/ve-forbryderne/skein/runs/files/files/datasets/README.txt).
50e1be38c9e266ebfca48782c25de227
apache-2.0
[]
false
Citation details The GPT-NeoX-20B model weights:
```bibtex
@inproceedings{gpt-neox-20b,
  title={{GPT-NeoX-20B}: An Open-Source Autoregressive Language Model},
  author={Black, Sid and Biderman, Stella and Hallahan, Eric and Anthony, Quentin and Gao, Leo and Golding, Laurence and He, Horace and Leahy, Connor and McDonell, Kyle and Phang, Jason and Pieler, Michael and Prashanth, USVSN Sai and Purohit, Shivanshu and Reynolds, Laria and Tow, Jonathan and Wang, Ben and Weinbach, Samuel},
  booktitle={Proceedings of the ACL Workshop on Challenges \& Perspectives in Creating Large Language Models},
  url={https://arxiv.org/abs/2204.06745},
  year={2022}
}
```
The Mesh Transformer JAX library:
```bibtex
@misc{mesh-transformer-jax,
  author = {Wang, Ben},
  title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```
3709e5126cd0e76823f9a97cedbbfc67
apache-2.0
['generated_from_trainer']
false
beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05-finetuned-FER2013-7e-05 This model is a fine-tuned version of [Celal11/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05](https://huggingface.co/Celal11/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05) on the image_folder dataset. It achieves the following results on the evaluation set:
- Loss: 0.9121
- Accuracy: 0.7116
a4da4c197e0b166401dcad9c582ceadd
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
c796eb9ea9de35158ed6dc970a710430
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4564        | 1.0   | 224  | 0.9463          | 0.7014   |
| 0.6463        | 2.0   | 448  | 0.9121          | 0.7116   |
8a83f57838bd5a8289d1a5ad3bf803c3
apache-2.0
['text-generation', 'text2text-generation', 'summarization', 'conversational']
false
MVP The MVP model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
de2109d8df6c25cb70b9f106e702e45c
apache-2.0
['text-generation', 'text2text-generation', 'summarization', 'conversational']
false
Model Description MVP is supervised pre-trained using a mixture of labeled datasets. It follows a standard Transformer encoder-decoder architecture. MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.
5863ab9dc3baf3d3b75f98c0f4f4e7eb
apache-2.0
['text-generation', 'text2text-generation', 'summarization', 'conversational']
false
Examples For summarization:
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

>>> inputs = tokenizer(
...     "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
```
For data-to-text generation:
```python
>>> from transformers import MvpTokenizerFast, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")

>>> inputs = tokenizer(
...     "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic']
```
8b6d4abf3757952fa97833539957f200
mit
['generated_from_trainer']
false
pretrained_model This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the go_emotions dataset. It achieves the following results on the evaluation set:
- Loss: 0.0568
- F1: 0.5868
- Roc Auc: 0.7616
- Accuracy: 0.4821
debe62f939a5815b57bfda3da22e956a
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1205        | 1.0   | 679  | 0.0865          | 0.5632 | 0.7347  | 0.4458   |
| 0.0859        | 2.0   | 1358 | 0.0829          | 0.5717 | 0.7378  | 0.4521   |
| 0.0727        | 3.0   | 2037 | 0.0827          | 0.5897 | 0.7523  | 0.4753   |
| 0.0629        | 4.0   | 2716 | 0.0857          | 0.5808 | 0.7535  | 0.4652   |
| 0.0568        | 5.0   | 3395 | 0.0904          | 0.5868 | 0.7616  | 0.4821   |
| 0.0423        | 6.0   | 4074 | 0.0989          | 0.5806 | 0.7682  | 0.4724   |
| 0.0344        | 7.0   | 4753 | 0.1079          | 0.5736 | 0.7657  | 0.4650   |
| 0.0296        | 8.0   | 5432 | 0.1158          | 0.5637 | 0.7649  | 0.4504   |
| 0.0206        | 9.0   | 6111 | 0.1200          | 0.5674 | 0.7689  | 0.4486   |
| 0.0177        | 10.0  | 6790 | 0.1240          | 0.5728 | 0.7737  | 0.4547   |
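Roc Auc in the table is the probability that a randomly chosen positive example is ranked above a randomly chosen negative one; a minimal sketch over binary labels and scores (a multi-label evaluation like this one would average such a score across the go_emotions classes):

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation,
    counting tied scores as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(roc_auc([1, 0, 1, 0], [0.9, 0.2, 0.6, 0.7]))
```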
e4080dd457dc66c01194b4d879b71739
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-sst2-with-unfamiliar-words This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0870
- Accuracy: 0.9866
835498389f341cf2f18b6d927a2337c7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2917        | 1.0   | 975  | 0.0703          | 0.9778   |
| 0.063         | 2.0   | 1950 | 0.0815          | 0.9821   |
| 0.0233        | 3.0   | 2925 | 0.0680          | 0.9866   |
| 0.0134        | 4.0   | 3900 | 0.0817          | 0.9866   |
| 0.0054        | 5.0   | 4875 | 0.0870          | 0.9866   |
26b09a005b7ede1de7752879a65ba92d
apache-2.0
['image-classification', 'vision']
false
Vision Transformer (large-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
d49f5ade89402359268ee15a2b572f91
apache-2.0
['image-classification', 'vision']
false
Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at the same resolution, 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
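The patching described above fixes the transformer's sequence length; a quick sketch of the arithmetic for this 224x224, 16x16-patch configuration:

```python
def vit_sequence_length(image_size=224, patch_size=16, channels=3):
    """Number of tokens the ViT encoder sees (one per patch plus [CLS]),
    and the raw patch dimension before the linear embedding."""
    patches_per_side = image_size // patch_size     # 224 / 16 = 14
    num_patches = patches_per_side ** 2             # 14 * 14 = 196
    patch_dim = patch_size * patch_size * channels  # flattened pixels per patch
    return num_patches + 1, patch_dim               # +1 for the [CLS] token

print(vit_sequence_length())  # (197, 768)
```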
bf0cdde61816e39cd42378c4e1ac4938
apache-2.0
['image-classification', 'vision']
false
How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-large-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# the model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
636f38df3105be52d7a0e0bf3302c76b
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 1.372e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3138344630
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 100
- mixed_precision_training: Native AMP
160da65a202210567ed082d99f52cce9
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1261        | 13.0  | 8619  | 3.4600          |
| 1.141         | 14.0  | 9282  | 3.4634          |
| 1.1278        | 15.0  | 9945  | 3.4665          |
| 1.1183        | 16.0  | 10608 | 3.4697          |
| 1.1048        | 17.0  | 11271 | 3.4714          |
| 1.1061        | 18.0  | 11934 | 3.4752          |
| 1.1471        | 19.0  | 12597 | 3.4773          |
| 1.1402        | 20.0  | 13260 | 3.4798          |
| 1.0847        | 21.0  | 13923 | 3.4811          |
| 1.1462        | 22.0  | 14586 | 3.4841          |
| 1.1107        | 23.0  | 15249 | 3.4852          |
| 1.1192        | 24.0  | 15912 | 3.4873          |
| 1.0868        | 25.0  | 16575 | 3.4879          |
| 1.1313        | 26.0  | 17238 | 3.4898          |
| 1.1033        | 27.0  | 17901 | 3.4915          |
| 1.1578        | 28.0  | 18564 | 3.4939          |
| 1.0987        | 29.0  | 19227 | 3.4947          |
| 1.0779        | 30.0  | 19890 | 3.4972          |
| 1.3567        | 61.0  | 20191 | 3.4576          |
| 1.3278        | 62.0  | 20522 | 3.4528          |
| 1.3292        | 63.0  | 20853 | 3.4468          |
| 1.3285        | 64.0  | 21184 | 3.4431          |
| 1.3032        | 65.0  | 21515 | 3.4370          |
| 1.318         | 66.0  | 21846 | 3.4345          |
| 1.3003        | 67.0  | 22177 | 3.4289          |
| 1.3202        | 68.0  | 22508 | 3.4274          |
| 1.2643        | 69.0  | 22839 | 3.4232          |
| 1.2862        | 70.0  | 23170 | 3.4223          |
| 1.2597        | 71.0  | 23501 | 3.4186          |
| 1.2426        | 72.0  | 23832 | 3.4176          |
| 1.2539        | 73.0  | 24163 | 3.4152          |
| 1.2604        | 74.0  | 24494 | 3.4147          |
| 1.263         | 75.0  | 24825 | 3.4128          |
| 1.2642        | 76.0  | 25156 | 3.4127          |
| 1.2694        | 77.0  | 25487 | 3.4109          |
| 1.2251        | 78.0  | 25818 | 3.4106          |
| 1.2673        | 79.0  | 26149 | 3.4097          |
| 1.233         | 80.0  | 26480 | 3.4096          |
| 1.2408        | 81.0  | 26811 | 3.4087          |
| 1.2579        | 82.0  | 27142 | 3.4088          |
| 1.2346        | 83.0  | 27473 | 3.4081          |
| 1.2298        | 84.0  | 27804 | 3.4082          |
| 1.219         | 85.0  | 28135 | 3.4079          |
| 1.2515        | 86.0  | 28466 | 3.4080          |
| 1.2316        | 87.0  | 28797 | 3.4084          |
| 1.2085        | 88.0  | 29128 | 3.4085          |
| 1.2334        | 89.0  | 29459 | 3.4085          |
| 1.2263        | 90.0  | 29790 | 3.4084          |
| 1.2312        | 91.0  | 30121 | 3.4084          |
| 1.2584        | 92.0  | 30452 | 3.4086          |
| 1.2106        | 93.0  | 30783 | 3.4089          |
| 1.2078        | 94.0  | 31114 | 3.4091          |
| 1.2329        | 95.0  | 31445 | 3.4090          |
| 1.1836        | 96.0  | 31776 | 3.4097          |
| 1.2135        | 97.0  | 32107 | 3.4097          |
| 1.2372        | 98.0  | 32438 | 3.4099          |
| 1.2163        | 99.0  | 32769 | 3.4107          |
| 1.1937        | 100.0 | 33100 | 3.4110          |
a628015c977e04fa596757236208baff
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5955        | 1.0   | 25   | 1.4376          |
| 1.4736        | 2.0   | 50   | 1.2969          |
| 1.3925        | 3.0   | 75   | 1.3163          |
abf2c1bd2984de32fa50d015c7e97e8a