license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['generated_from_trainer']
false
resnet-18-feature-extraction This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1485 - Accuracy: 0.95 - Precision: 0.9653 - Recall: 0.9789 - F1: 0.9720 - Roc Auc: 0.8505
53781a1b1c2e20a528c7c0aae58866ee
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50
6dcf0b64b109b7327170df717a37973b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:| | No log | 0.8 | 2 | 0.6232 | 0.75 | 0.9636 | 0.7465 | 0.8413 | 0.7621 | | No log | 1.8 | 4 | 0.6971 | 0.4875 | 1.0 | 0.4225 | 0.5941 | 0.7113 | | No log | 2.8 | 6 | 0.7915 | 0.2875 | 1.0 | 0.1972 | 0.3294 | 0.5986 | | No log | 3.8 | 8 | 0.8480 | 0.2875 | 1.0 | 0.1972 | 0.3294 | 0.5986 | | 0.8651 | 4.8 | 10 | 0.9094 | 0.2562 | 1.0 | 0.1620 | 0.2788 | 0.5810 | | 0.8651 | 5.8 | 12 | 0.7470 | 0.5625 | 1.0 | 0.5070 | 0.6729 | 0.7535 | | 0.8651 | 6.8 | 14 | 0.5915 | 0.85 | 1.0 | 0.8310 | 0.9077 | 0.9155 | | 0.8651 | 7.8 | 16 | 0.4817 | 0.8875 | 0.9844 | 0.8873 | 0.9333 | 0.8881 | | 0.8651 | 8.8 | 18 | 0.3455 | 0.9187 | 0.9778 | 0.9296 | 0.9531 | 0.8815 | | 0.5349 | 9.8 | 20 | 0.2966 | 0.9187 | 0.9708 | 0.9366 | 0.9534 | 0.8572 | | 0.5349 | 10.8 | 22 | 0.2347 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.5349 | 11.8 | 24 | 0.2468 | 0.9313 | 0.9645 | 0.9577 | 0.9611 | 0.8400 | | 0.5349 | 12.8 | 26 | 0.2310 | 0.9563 | 0.9720 | 0.9789 | 0.9754 | 0.8783 | | 0.5349 | 13.8 | 28 | 0.2083 | 0.9313 | 0.9580 | 0.9648 | 0.9614 | 0.8157 | | 0.3593 | 14.8 | 30 | 0.1840 | 0.9375 | 0.9521 | 0.9789 | 0.9653 | 0.7950 | | 0.3593 | 15.8 | 32 | 0.1947 | 0.9375 | 0.9648 | 0.9648 | 0.9648 | 0.8435 | | 0.3593 | 16.8 | 34 | 0.1837 | 0.9313 | 0.9517 | 0.9718 | 0.9617 | 0.7915 | | 0.3593 | 17.8 | 36 | 0.1819 | 0.9437 | 0.9524 | 0.9859 | 0.9689 | 0.7985 | | 0.3593 | 18.8 | 38 | 0.1924 | 0.9437 | 0.9650 | 0.9718 | 0.9684 | 0.8470 | | 0.2737 | 19.8 | 40 | 0.1990 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.2737 | 20.8 | 42 | 0.1759 | 0.95 | 0.9718 | 0.9718 | 0.9718 | 0.8748 | | 0.2737 | 21.8 | 44 | 0.1804 | 0.9313 | 0.9517 | 0.9718 | 0.9617 | 0.7915 | | 0.2737 | 22.8 | 46 | 0.1666 | 0.9313 | 0.9517 | 0.9718 | 0.9617 | 0.7915 | | 0.2737 | 23.8 | 48 | 0.1534 | 0.9437 | 0.9524 | 
0.9859 | 0.9689 | 0.7985 | | 0.2278 | 24.8 | 50 | 0.1612 | 0.9375 | 0.9521 | 0.9789 | 0.9653 | 0.7950 | | 0.2278 | 25.8 | 52 | 0.1535 | 0.9437 | 0.9586 | 0.9789 | 0.9686 | 0.8228 | | 0.2278 | 26.8 | 54 | 0.1568 | 0.9437 | 0.9716 | 0.9648 | 0.9682 | 0.8713 | | 0.2278 | 27.8 | 56 | 0.2107 | 0.9375 | 0.9714 | 0.9577 | 0.9645 | 0.8678 | | 0.2278 | 28.8 | 58 | 0.1592 | 0.9313 | 0.9517 | 0.9718 | 0.9617 | 0.7915 | | 0.2057 | 29.8 | 60 | 0.1557 | 0.9375 | 0.9648 | 0.9648 | 0.9648 | 0.8435 | | 0.2057 | 30.8 | 62 | 0.1714 | 0.9437 | 0.9650 | 0.9718 | 0.9684 | 0.8470 | | 0.2057 | 31.8 | 64 | 0.1571 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.2057 | 32.8 | 66 | 0.1574 | 0.9375 | 0.9583 | 0.9718 | 0.9650 | 0.8192 | | 0.2057 | 33.8 | 68 | 0.1423 | 0.9563 | 0.9720 | 0.9789 | 0.9754 | 0.8783 | | 0.2 | 34.8 | 70 | 0.1677 | 0.9437 | 0.9650 | 0.9718 | 0.9684 | 0.8470 | | 0.2 | 35.8 | 72 | 0.1560 | 0.9375 | 0.9583 | 0.9718 | 0.9650 | 0.8192 | | 0.2 | 36.8 | 74 | 0.1594 | 0.9375 | 0.9521 | 0.9789 | 0.9653 | 0.7950 | | 0.2 | 37.8 | 76 | 0.1512 | 0.9437 | 0.9586 | 0.9789 | 0.9686 | 0.8228 | | 0.2 | 38.8 | 78 | 0.1396 | 0.9563 | 0.9655 | 0.9859 | 0.9756 | 0.8541 | | 0.1838 | 39.8 | 80 | 0.1509 | 0.9375 | 0.9583 | 0.9718 | 0.9650 | 0.8192 | | 0.1838 | 40.8 | 82 | 0.1529 | 0.95 | 0.9718 | 0.9718 | 0.9718 | 0.8748 | | 0.1838 | 41.8 | 84 | 0.1506 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.1838 | 42.8 | 86 | 0.1549 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 | | 0.1838 | 43.8 | 88 | 0.1331 | 0.9563 | 0.9655 | 0.9859 | 0.9756 | 0.8541 | | 0.1872 | 44.8 | 90 | 0.1409 | 0.9437 | 0.9524 | 0.9859 | 0.9689 | 0.7985 | | 0.1872 | 45.8 | 92 | 0.1639 | 0.9375 | 0.9583 | 0.9718 | 0.9650 | 0.8192 | | 0.1872 | 46.8 | 94 | 0.1391 | 0.95 | 0.9589 | 0.9859 | 0.9722 | 0.8263 | | 0.1872 | 47.8 | 96 | 0.1436 | 0.9563 | 0.9655 | 0.9859 | 0.9756 | 0.8541 | | 0.1872 | 48.8 | 98 | 0.1442 | 0.9437 | 0.9586 | 0.9789 | 0.9686 | 0.8228 | | 0.185 | 49.8 | 100 | 0.1485 | 0.95 | 0.9653 | 0.9789 | 0.9720 | 0.8505 
|
e2ea6a9170e04d8aac89d4a93e48d579
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/massive_transport-roberta-large-v1-5-3 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer.
89e3d5b7aaca5528bfd5ea79d9ee74ab
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-big-en-es Neural machine translation model for translating from English (en) to Spanish (es). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ```
ed85026471391faa280ac02f02425a93
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model info * Release: 2022-03-13 * source language(s): eng * target language(s): spa * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opusTCv20210807+bt_transformer-big_2022-03-13.zip) * more information released models: [OPUS-MT eng-spa README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md)
e3fd1bc4a0dd3da1c88ca1d25486733a
cc-by-4.0
['translation', 'opus-mt-tc']
false
Usage A short code example: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "A wasp stung him and he had an allergic reaction.", "I love nature." ] model_name = "pytorch-models/opus-mt-tc-big-en-es" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) ```
07ff2f24813f1228d860a3101f35121f
cc-by-4.0
['translation', 'opus-mt-tc']
false
Me encanta la naturaleza. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-es") print(pipe("A wasp stung him and he had an allergic reaction.")) ```
31c71612817fa9e3e1b16843b14d01c2
cc-by-4.0
['translation', 'opus-mt-tc']
false
Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU |
e20a7123b92dbf0c7c02da0adfc0a5fd
cc-by-4.0
['translation', 'opus-mt-tc']
false
| langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-spa | tatoeba-test-v2021-08-07 | 0.73863 | 57.2 | 16583 | 134710 | | eng-spa | flores101-devtest | 0.56440 | 28.5 | 1012 | 29199 | | eng-spa | newssyscomb2009 | 0.58415 | 31.5 | 502 | 12503 | | eng-spa | news-test2008 | 0.56707 | 30.1 | 2051 | 52586 | | eng-spa | newstest2009 | 0.57836 | 30.2 | 2525 | 68111 | | eng-spa | newstest2010 | 0.62357 | 37.6 | 2489 | 65480 | | eng-spa | newstest2011 | 0.62415 | 38.9 | 3003 | 79476 | | eng-spa | newstest2012 | 0.63031 | 39.5 | 3003 | 79006 | | eng-spa | newstest2013 | 0.60354 | 35.9 | 3000 | 70528 | | eng-spa | tico19-test | 0.73554 | 53.0 | 2100 | 66563 |
a3fe0d610e1a03e2531995816142d2cd
cc-by-sa-4.0
['spacy', 'token-classification']
false
UD v2.5 benchmarking pipeline for UD_Russian-GSD | Feature | Description | | --- | --- | | **Name** | `ru_udv25_russiangsd_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) |
56c2eaf6bb05b45b0c55e71b7d7b7603
cc-by-sa-4.0
['spacy', 'token-classification']
false
Label Scheme <details> <summary>View label scheme (3014 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `!`, `&
742998dde8e51880858b383935526602
cc-by-sa-4.0
['spacy', 'token-classification']
false
39;`, `'`, `(`, `)`, `,`, `-`, `--`, `.`, `.,`, `/`, `:`, `AFX`, `APOSTROPHE`, `AWP`, `CC`, `CD`, `DT`, `FW`, `IN`, `JJ`, `JJH`, `JJL`, `JJR`, `JJRL`, `JJS`, `NEG`, `NFP`, `NN`, `NNP`, `ORD`, `PRED`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `UH`, `VB`, `VBC`, `VBG`, `VBNH`, `VBNL`, `WDT`, `WP`, `WRB`, `X`, ```` | | **`morphologizer`** | `POS=ADP`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=CCONJ`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, 
`Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `POS=PART\|Polarity=Neg`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Variant=Short`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Nom\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Card\|POS=NUM`, `Case=Nom\|NumType=Card\|POS=NUM`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET`, `POS=PART`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=ADV`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|NumType=Card\|POS=NUM`, 
`Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Pos\|POS=ADV`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|NumType=Card\|POS=NUM`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, 
`Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=NUM`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Number=Plur\|POS=ADJ\|Variant=Short`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Case=Loc\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, 
`Animacy=Anim\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Number=Plur\|POS=PRON`, `POS=SYM`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Nom\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Number=Plur\|POS=DET`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET`, `Foreign=Yes\|POS=X`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, 
`Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Anim\|Case=Nom\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Acc\|NumType=Card\|POS=NUM`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=DET`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Number=Plur\|POS=DET`, 
`Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Variant=Short`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Anim\|Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET`, `Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Variant=Short`, `Degree=Cmp\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Case=Nom\|Number=Plur\|POS=PRON`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Degree=Pos\|POS=VERB`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, 
`Case=Gen\|Number=Plur\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Ins\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Ins\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Variant=Short`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Number=Plur\|POS=PRON`, 
`Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Dat\|NumType=Card\|POS=NUM`, `POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Variant=Short`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `POS=X`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Abbr=Yes\|POS=PROPN`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, 
`Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Degree=Sup\|POS=ADV`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=NOUN`, 
`Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2`, `Case=Dat\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Foreign=Yes\|POS=NOUN`, `POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|NumType=Card\|POS=NUM`, 
`Animacy=Inan\|Case=Loc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|POS=AUX\|VerbForm=Inf`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Number=Plur\|POS=PRON`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Imp\|POS=AUX\|VerbForm=Conv`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `POS=AUX`, `Case=Dat\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Dat\|Number=Plur\|POS=PRON`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|NumType=Card\|POS=NUM`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=3`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Dat\|Number=Plur\|POS=PRON`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3`, 
`Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Dat\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|POS=VERB\|VerbForm=Conv`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|POS=ADV`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|NumType=Card\|POS=NUM`, 
`Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PART`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|NumType=Card\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, 
`Animacy=Anim\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Loc\|NumType=Card\|POS=NUM`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|Variant=Short`, `Animacy=Anim\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Anim\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|NumType=Card\|POS=NUM`, `Case=Gen\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Variant=Short\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|NumType=Card\|POS=NUM`, 
`Animacy=Inan\|Case=Ins\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=VERB`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET`, 
`Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Loc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Ins\|Number=Plur\|POS=PRON`, 
`Animacy=Anim\|Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Foreign=Yes\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, 
`Animacy=Anim\|Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Degree=Cmp\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, 
`Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1`, `Animacy=Inan\|Case=Par\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType=Card\|POS=SYM`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Mid`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|NumType=Card\|POS=PROPN`, 
`Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=2`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|POS=DET` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `goeswith`, `iobj`, `list`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `nummod:entity`, `nummod:gov`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `6`, `8`, `10`, `12`, `14`, `16`, `19`, `21`, `23`, `27`, `29`, `31`, `35`, `37`, `39`, `42`, `45`, `49`, `50`, `53`, `55`, `59`, `61`, `62`, `64`, `66`, `68`, `70`, `72`, `75`, `77`, `78`, `81`, `83`, `85`, `87`, `89`, `91`, `94`, `97`, `99`, `101`, `105`, `106`, `107`, `109`, `110`, `112`, `114`, `116`, `118`, `119`, `121`, `123`, `126`, `128`, `130`, `132`, `133`, `135`, `137`, `139`, `0`, `141`, `145`, `147`, `148`, `150`, `152`, `154`, `156`, `158`, `160`, `162`, `166`, `168`, `169`, `171`, `173`, `175`, `177`, `179`, `181`, `182`, `184`, `186`, `188`, `189`, `192`, `193`, `194`, `195`, `197`, `198`, `199`, `202`, `204`, `205`, `206`, `207`, `208`, `210`, `211`, `213`, `216`, `217`, `219`, `221`, `223`, `224`, `226`, `228`, `229`, `231`, `233`, `234`, `237`, 
`239`, `241`, `242`, `244`, `245`, `247`, `249`, `251`, `253`, `256`, `257`, `260`, `262`, `264`, `266`, `268`, `270`, `272`, `275`, `277`, `279`, `283`, `287`, `289`, `290`, `293`, `294`, `296`, `298`, `300`, `302`, `305`, `307`, `310`, `313`, `315`, `317`, `319`, `322`, `324`, `326`, `328`, `330`, `332`, `335`, `337`, `339`, `340`, `341`, `345`, `346`, `348`, `350`, `353`, `355`, `357`, `360`, `362`, `364`, `366`, `368`, `370`, `372`, `374`, `376`, `378`, `380`, `381`, `384`, `386`, `388`, `391`, `393`, `395`, `397`, `398`, `400`, `401`, `402`, `404`, `408`, `409`, `410`, `412`, `413`, `415`, `416`, `418`, `420`, `421`, `423`, `424`, `426`, `428`, `430`, `432`, `434`, `436`, `438`, `439`, `441`, `443`, `446`, `449`, `453`, `455`, `457`, `248`, `459`, `460`, `462`, `464`, `465`, `467`, `470`, `472`, `474`, `477`, `479`, `480`, `482`, `484`, `485`, `486`, `489`, `491`, `493`, `496`, `498`, `500`, `502`, `504`, `505`, `506`, `508`, `509`, `512`, `513`, `515`, `517`, `520`, `522`, `524`, `525`, `527`, `529`, `531`, `532`, `533`, `535`, `536`, `540`, `542`, `544`, `546`, `548`, `549`, `551`, `552`, `555`, `276`, `556`, `557`, `559`, `560`, `562`, `564`, `565`, `567`, `569`, `570`, `571`, `572`, `574`, `575`, `577`, `578`, `580`, `582`, `584`, `586`, `589`, `591`, `593`, `595`, `597`, `599`, `601`, `602`, `172`, `604`, `605`, `606`, `608`, `610`, `611`, `612`, `614`, `615`, `76`, `617`, `618`, `619`, `621`, `117`, `623`, `624`, `626`, `628`, `629`, `631`, `635`, `637`, `638`, `639`, `641`, `642`, `644`, `645`, `647`, `648`, `650`, `652`, `654`, `656`, `658`, `659`, `661`, `663`, `665`, `666`, `668`, `669`, `671`, `675`, `677`, `678`, `679`, `681`, `682`, `683`, `686`, `687`, `689`, `691`, `693`, `695`, `697`, `699`, `701`, `22`, `703`, `705`, `707`, `710`, `714`, `716`, `718`, `720`, `723`, `725`, `727`, `729`, `731`, `732`, `734`, `737`, `739`, `740`, `743`, `745`, `747`, `748`, `751`, `753`, `754`, `757`, `758`, `760`, `762`, `764`, `766`, `768`, `770`, `772`, `773`, 
`775`, `776`, `778`, `779`, `780`, `781`, `782`, `783`, `785`, `787`, `789`, `791`, `793`, `794`, `796`, `797`, `800`, `801`, `802`, `803`, `804`, `806`, `807`, `808`, `809`, `810`, `812`, `816`, `818`, `819`, `821`, `823`, `825`, `826`, `827`, `829`, `833`, `834`, `835`, `836`, `838`, `842`, `843`, `844`, `846`, `848`, `849`, `850`, `852`, `854`, `856`, `858`, `860`, `862`, `864`, `866`, `867`, `868`, `870`, `871`, `873`, `874`, `875`, `878`, `880`, `881`, `883`, `887`, `889`, `890`, `891`, `894`, `895`, `896`, `898`, `900`, `902`, `903`, `904`, `907`, `909`, `910`, `911`, `912`, `914`, `916`, `917`, `918`, `919`, `920`, `924`, `925`, `927`, `928`, `931`, `933`, `934`, `936`, `937`, `935`, `938`, `939`, `942`, `944`, `946`, `948`, `949`, `950`, `951`, `953`, `954`, `956`, `958`, `959`, `960`, `962`, `964`, `966`, `968`, `970`, `972`, `974`, `976`, `978`, `980`, `981`, `982`, `984`, `985`, `987`, `988`, `989`, `990`, `991`, `992`, `993`, `995`, `996`, `997`, `998`, `1000`, `1001`, `1002`, `1004`, `1006`, `1008`, `1010`, `1012`, `1013`, `1016`, `1018`, `1019`, `1021`, `1023`, `1024`, `1025`, `1028`, `1030`, `1031`, `1033`, `1034`, `1036`, `1038`, `1039`, `1040`, `1041`, `1043`, `1045`, `1046`, `1048`, `1052`, `1054`, `1055`, `1056`, `1057`, `1062`, `1064`, `1065`, `1067`, `1069`, `1070`, `1072`, `1073`, `1074`, `1075`, `1076`, `1078`, `1080`, `1081`, `1083`, `1085`, `1087`, `1088`, `1089`, `1091`, `1092`, `1093`, `1094`, `1095`, `1096`, `1097`, `1098`, `1100`, `1102`, `1104`, `1106`, `1108`, `1109`, `1110`, `1111`, `1112`, `1113`, `1116`, `1117`, `1119`, `1121`, `1123`, `1124`, `1125`, `1127`, `1129`, `1132`, `1134`, `1135`, `1138`, `1139`, `1141`, `1143`, `1144`, `1145`, `1146`, `1147`, `1149`, `1152`, `1153`, `1155`, `1156`, `1157`, `1159`, `1161`, `1163`, `1165`, `1166`, `1168`, `1169`, `1172`, `1174`, `1176`, `1177`, `1179`, `1183`, `1184`, `1185`, `1186`, `1188`, `1190`, `1193`, `1195`, `1196`, `1200`, `1203`, `1204`, `1206`, `1207`, `1208`, `1209`, `1211`, 
`1212`, `1214`, `1216`, `1217`, `1218`, `1219`, `1221`, `1223`, `1224`, `1225`, `1227`, `1228`, `1230`, `1232`, `1234`, `1237`, `1238`, `1239`, `1241`, `1243`, `1244`, `1246`, `1248`, `1249`, `1251`, `1252`, `1255`, `1257`, `1259`, `1261`, `1262`, `1263`, `1265`, `1267`, `1268`, `1269`, `1273`, `1275`, `1277`, `1279`, `1281`, `1283`, `1285`, `1287`, `1289`, `1291`, `1293`, `1295`, `1297`, `1299`, `1302`, `1305`, `1306`, `1309`, `1311`, `1312`, `1313`, `1314`, `1315`, `1317`, `1319`, `1321`, `1322`, `1325`, `1326`, `1328`, `1330`, `1331`, `1333`, `325`, `1334`, `1336`, `1338`, `1339`, `1341`, `1343`, `1346`, `1347`, `1348`, `1349`, `1350`, `1352`, `1353`, `1354`, `1355`, `1357`, `1358`, `1359`, `1361`, `1363`, `1365`, `1368`, `1370`, `1371`, `1372`, `1374`, `1376`, `1377`, `1378`, `1380`, `1382`, `1384`, `1385`, `1386`, `1388`, `1389`, `1391`, `1393`, `1395`, `1396`, `1398`, `1399`, `1402`, `1404`, `1405`, `1120`, `1406`, `1408`, `1409`, `1410`, `1412`, `1413`, `1414`, `1415`, `1417`, `1419`, `1421`, `1423`, `1425`, `1426`, `1427`, `1429`, `1431`, `1433`, `1434`, `1436`, `1438`, `1439`, `1441`, `1443`, `1444`, `1445`, `1447`, `1448`, `1449`, `1450`, `1451`, `1452`, `1454`, `1457`, `1458`, `1459`, `1461`, `1463`, `1465`, `1467`, `1468`, `1469`, `1470`, `1472`, `1475`, `1477`, `1479`, `1480`, `1481`, `1483`, `1484`, `1487`, `1489`, `1491`, `1492`, `1493`, `1496`, `1497`, `1499`, `1501`, `1502`, `1504`, `1506`, `1507`, `1508`, `1509`, `1511`, `1513`, `1515`, `1516`, `1517`, `1518`, `1519`, `1521`, `1522`, `1523`, `1525`, `1527`, `1529`, `1531`, `1532`, `1534`, `1535`, `1536`, `1537`, `1539`, `1541`, `1543`, `1545`, `1546`, `1548`, `1549`, `1550`, `1551`, `1552`, `1553`, `1555`, `1557`, `1558`, `1559`, `1560`, `1562`, `1564`, `1566`, `1567`, `1569`, `1571`, `1573`, `1575`, `1576`, `1578`, `1580`, `1581`, `1582`, `1583`, `1584`, `1585`, `1586`, `1588`, `1590`, `1592`, `1593`, `1595`, `1599`, `1601`, `1602`, `1604`, `1606`, `1610`, `1611`, `1613`, `1614`, `1616`, `1617`, 
`1618`, `1619`, `1621`, `1623`, `1624`, `1626`, `1628`, `1629`, `1631`, `1632`, `1634`, `1635`, `1636`, `1637`, `1638`, `1640`, `1642`, `1644`, `1646`, `1647`, `1649`, `1651`, `1652`, `1654`, `1655`, `1659`, `1663`, `1665`, `1666`, `1667`, `1668`, `1671`, `1672`, `1674`, `1675`, `1677`, `1679`, `1681`, `1685`, `1687`, `1688`, `1689`, `1691`, `1692`, `1695`, `1696`, `1699`, `1701`, `1702`, `1703`, `1705`, `1706`, `1709`, `1710`, `1711`, `1712`, `1714`, `1715`, `1446`, `1718`, `1720`, `1721`, `1722`, `1723`, `1725`, `1727`, `1728`, `1730`, `1732`, `1733`, `1734`, `1736`, `1738`, `1739`, `1741`, `1743`, `1745`, `1746`, `1747`, `1748`, `1749`, `1750`, `1751`, `1753`, `1754`, `1757`, `1758`, `1760`, `1761`, `1763`, `1764`, `1766`, `1767`, `1768`, `1769`, `1770`, `1772`, `1774`, `1775`, `1776`, `1778`, `1780`, `1781`, `1783`, `1785`, `1788`, `1790`, `1792`, `1793`, `1794`, `1795`, `1797`, `1798`, `1800`, `1801`, `1802`, `1804`, `1806`, `1809`, `1810`, `1812`, `1815`, `1817`, `1818`, `1819`, `1821`, `1822`, `1823`, `1824`, `1825`, `1827`, `1828`, `1829`, `1833`, `1834`, `1835`, `1836`, `1837`, `1839`, `1842`, `1844`, `1845`, `1846`, `1848`, `1850`, `1851`, `1852`, `1853`, `1854`, `1855`, `1857`, `1859`, `1862`, `1863`, `1864`, `1865`, `1866`, `1867`, `1868`, `1871`, `1873`, `1874`, `1876`, `1877`, `1879`, `1880`, `1881`, `1885`, `1886`, `1888`, `1889`, `1891`, `1893`, `1894`, `1895`, `1896`, `1898`, `1900`, `1901`, `1902`, `1903`, `1904`, `1905`, `1906`, `1908`, `1911`, `1912`, `1914`, `1916`, `1918`, `1920`, `1921`, `1923`, `1925`, `1927`, `1928`, `1929`, `1931`, `1933`, `1934`, `1935`, `1936`, `1938`, `1940`, `1941`, `1943`, `1945`, `1947`, `1948`, `1950`, `1951`, `1952`, `1954`, `1956`, `1958`, `1960`, `1961`, `1963`, `1965`, `1969`, `1970`, `1971`, `1972`, `1973`, `1974`, `1975`, `1976`, `1977`, `1978`, `1980`, `1982`, `1983`, `1985`, `1987`, `1988`, `1989`, `1990`, `1992`, `1996`, `1997`, `1998`, `1999`, `2000`, `2001`, `717`, `2002`, `2004`, `2007`, `2008`, `2010`, 
`2011`, `2012`, `2013`, `2015`, `2016`, `2018`, `2020`, `2021`, `2022`, `2024`, `2025`, `2026`, `2029`, `2031`, `2032`, `2033`, `2034`, `2036`, `855`, `2038`, `2040`, `2041`, `2042`, `2044`, `2046`, `2047`, `2048`, `2050`, `2052`, `2054`, `2058`, `2062`, `2063`, `2066`, `2068`, `2070`, `2072`, `2074`, `2075`, `2076`, `2078`, `2079`, `2080`, `2081`, `2083`, `2084`, `2085`, `2088`, `2089`, `2090`, `2091`, `2092`, `2093`, `2094`, `2096`, `2097`, `2098`, `2099`, `2101`, `2104`, `2105`, `2106`, `2107`, `2109`, `2110`, `2115`, `2117`, `2118`, `2121`, `2122`, `2123`, `2124`, `2125`, `2126`, `2127`, `2128`, `2129`, `2130`, `2131`, `2134`, `2135`, `2137`, `2138`, `630`, `2140`, `2143`, `2145`, `2147`, `2148`, `2149`, `2151`, `2152`, `2153`, `2154`, `2155`, `2156`, `2157`, `2159`, `2162`, `2164`, `2165`, `2167`, `2169`, `2170`, `2171`, `2175`, `2176`, `2180`, `2181`, `2183`, `2185`, `2187`, `2189`, `2190`, `2191`, `2194`, `2195`, `2196`, `2198`, `2200`, `2201`, `2202`, `2203`, `2205`, `2206`, `2207`, `2209`, `2211`, `2212`, `2213`, `2215`, `2217`, `2218`, `2219`, `2220`, `2222`, `2223`, `2224`, `2226`, `2228`, `2230`, `2231`, `2233`, `2235`, `2237`, `2239`, `2240`, `2241`, `2242`, `2243`, `2246`, `2247`, `2249`, `2251`, `2252`, `2253`, `2255`, `2256`, `2260`, `2261`, `2263`, `2265`, `2266`, `2267`, `2268`, `2270`, `2271`, `2273`, `2274`, `2277`, `2278`, `2280`, `2282`, `2284`, `2285`, `2287`, `2288`, `2290`, `2291`, `2292`, `2293`, `2294`, `2295`, `2297`, `2299`, `2301`, `2302`, `2303`, `2305`, `2306`, `2308`, `2310`, `2311`, `2313`, `2314`, `2315`, `2316`, `2317`, `2318`, `2321`, `2322`, `2324`, `2325`, `2327`, `2328`, `2329`, `2331`, `2332`, `2333`, `2335`, `2326`, `2336`, `2337`, `2339`, `2340`, `2342`, `2345`, `180`, `2347`, `2348`, `2349`, `2351`, `2352`, `2353`, `2354`, `2356`, `2357`, `2358`, `2360`, `2362`, `2364`, `2366`, `2368`, `2370`, `2372`, `2376`, `2377`, `2378`, `2380`, `2382`, `2383`, `2384`, `2385`, `2386`, `2388`, `2389`, `2391`, `2392`, `2393`, `2395`, 
`2397`, `2399`, `2400`, `2401`, `2403`, `2405`, `2406`, `2407`, `2409`, `2411`, `2412`, `2413`, `2414`, `2416`, `2417`, `2418`, `2419`, `2420`, `2421`, `2423`, `2424`, `2426`, `2427`, `2428`, `2429`, `2431`, `2432`, `2433`, `2434`, `2435`, `2436`, `2438`, `2439`, `2441`, `2442`, `2444`, `2445`, `2447`, `2448`, `2450`, `2451`, `2452`, `2453`, `2455`, `2456`, `2458`, `2460`, `2462`, `2463`, `2465`, `2468`, `2469`, `2470`, `2471`, `2473`, `2474`, `2476`, `2477`, `2479`, `2480`, `2482`, `2484`, `2488`, `2489`, `2493`, `2496`, `2497`, `2498`, `2499`, `2501`, `2502`, `2503`, `2504`, `2505`, `2506`, `2507`, `2509`, `2510`, `2511`, `2514`, `2517`, `2518`, `2519`, `2520`, `2522`, `2525`, `2528`, `2530`, `2531`, `2532`, `2533`, `2535`, `2536`, `2537`, `2538`, `2540`, `2542`, `2543`, `2545`, `2546`, `2547`, `2551`, `2553`, `2555`, `2557`, `2558`, `2559`, `2561`, `2562`, `2563`, `2566`, `2569`, `2571`, `2572`, `2573`, `2577`, `2578`, `2582`, `2584`, `2586`, `2587`, `2589`, `2591`, `2593`, `2594`, `2598`, `2599`, `2600`, `2601`, `2602`, `2603`, `2604`, `2605`, `2606`, `2607`, `2610`, `2611`, `2612`, `2613`, `2614`, `2616`, `2617`, `2618`, `2619`, `2621`, `2622`, `2623`, `2625`, `2626`, `2628`, `2630`, `2632`, `2633`, `2635`, `2637`, `2639`, `2640`, `2642`, `2644`, `2646`, `2647`, `2648`, `2649`, `2650`, `2652`, `2653`, `2654`, `2655`, `2656`, `2657`, `2659`, `2660`, `2661`, `2662`, `2663`, `2665`, `2666`, `2667`, `2669`, `2671`, `2672`, `2674`, `2675`, `2677`, `2678`, `2679`, `2680`, `2681`, `2683`, `2686`, `2687`, `2689`, `2691`, `2695`, `2696`, `2699`, `2701`, `2703`, `2706`, `2707`, `2708`, `2710`, `2711`, `2712`, `2714`, `2716`, `2718`, `2719`, `2720`, `2721`, `2722`, `2723`, `2725`, `2727`, `2728`, `2729`, `2730`, `2731`, `2732`, `2733`, `96`, `2735`, `2737`, `2739`, `2740`, `2741`, `2742`, `2746`, `2748`, `2750`, `2752`, `2754`, `2755`, `2756`, `2757`, `2758`, `2759`, `2761`, `2762`, `2764`, `2765`, `2767`, `2768`, `2770`, `2771`, `2773`, `2774`, `2775`, `2776`, `528`, 
`2778`, `2779`, `2781`, `2783`, `2786`, `2788`, `2789`, `2790`, `2791`, `2792`, `2793`, `2795`, `2796`, `2798`, `2799`, `2800`, `2802`, `2803`, `2805`, `2806`, `2807`, `2808`, `2810`, `2812`, `2813`, `2814`, `2815`, `2816`, `2818`, `2820`, `2821`, `2822`, `2823`, `2824`, `2826`, `2827`, `2830`, `2831`, `2833`, `2834`, `2836`, `2837`, `2838`, `2840`, `2841`, `2843`, `2844`, `2846`, `2848`, `2849`, `2850`, `2851`, `2852`, `2853`, `2855`, `2856`, `2858`, `2860`, `2861`, `2408`, `1222`, `2864`, `2865`, `2866`, `2867`, `2869`, `2870`, `2872`, `2873`, `2874`, `2875`, `2876`, `2877`, `2879`, `2880`, `2883`, `2885`, `2886`, `2887`, `2888`, `2890`, `2891`, `2892`, `2893`, `2895`, `2896`, `2897`, `2900`, `2901`, `2902`, `2903`, `2904`, `2906`, `2907`, `2908`, `2909`, `2911`, `2913`, `2914`, `2917`, `2919`, `2920`, `2921`, `2922`, `2924`, `2926`, `2927`, `2929`, `2931`, `2932`, `2934`, `2936`, `2940`, `2942`, `2944`, `2945`, `2946`, `2948`, `2950`, `2951`, `2953`, `2955`, `2956`, `2957`, `2959`, `2960`, `2962`, `2963`, `2964`, `2965`, `2966`, `2968`, `2970`, `2971`, `2972`, `2973`, `2975`, `2976`, `2978`, `2979`, `2980`, `2981`, `2982`, `2984`, `2986`, `2987`, `2989`, `2990`, `2991`, `2992`, `2993`, `2994`, `2995`, `2997`, `2999`, `3000`, `3002`, `3003`, `3004`, `3006`, `3008`, `3009`, `3010`, `3013`, `3014`, `3017`, `3019`, `3022`, `3024`, `3026`, `3027`, `3028`, `3029`, `3030`, `3031`, `3034`, `3035`, `3038`, `3039`, `3041`, `3044`, `3046`, `3047`, `3049`, `3050`, `3052`, `3054`, `3056`, `3057`, `3058`, `3059`, `3060`, `3061`, `3062`, `3063`, `3065`, `3066`, `3068`, `3070`, `3071`, `3073`, `3074`, `3075`, `3077`, `3078`, `3079`, `3080`, `3083`, `3084`, `3087`, `3089`, `3092`, `706`, `1420`, `394`, `3093`, `3094`, `3097`, `3098`, `3099`, `3101`, `616`, `3102`, `3103`, `3104`, `3105`, `3106`, `3108`, `3110`, `3111`, `3112`, `3114`, `3115`, `3118`, `3121`, `3122`, `3124`, `3125`, `3129`, `3132`, `3134`, `3136`, `3140`, `3141`, `3142`, `3144`, `3147`, `3149`, `3150`, `3151`, 
`3153`, `3155`, `3156`, `3158`, `3159`, `3160`, `3161`, `3162`, `3165`, `3166`, `3169`, `3170`, `3171`, `3173`, `3174`, `3175`, `3177`, `3178`, `3179`, `3180`, `3183`, `3184`, `3186`, `3187`, `3188`, `3189`, `3192`, `3194`, `3195`, `3197`, `3198`, `3199`, `3200`, `3202`, `3205`, `3206`, `3209`, `3210`, `3211`, `3212`, `3213`, `3215`, `3216`, `3217`, `3218`, `3219`, `3222`, `3224`, `3225`, `3226`, `3227`, `3228`, `3229`, `3231`, `201`, `3232`, `3233`, `3234`, `3235`, `3237`, `3239`, `3241`, `3242`, `3243`, `3245`, `3246`, `3247`, `3248`, `3249`, `3250`, `3253`, `3255`, `3257`, `3258`, `3259`, `3260`, `3261`, `3264`, `3265`, `3267`, `3268`, `3269`, `3270`, `3272`, `3273`, `3274`, `3275`, `3277`, `3278`, `3279`, `3280`, `3282`, `3284`, `3285`, `3286`, `3289`, `3291`, `3293`, `3295`, `3296`, `3297`, `3298`, `3299`, `3301`, `3305`, `3306`, `3307`, `3308`, `3310`, `3312`, `3313`, `3314`, `3315`, `3316`, `3317`, `3318`, `3319`, `3321`, `3322`, `3324`, `3325`, `3327`, `3328`, `3329`, `3330`, `3331`, `3333`, `3334`, `2307`, `3336`, `3338`, `3340`, `3341`, `3342`, `3344`, `3345`, `3347`, `3349`, `3350`, `3351`, `3353`, `3354`, `3355`, `3356`, `3357`, `3358`, `3359`, `3361`, `3362`, `3364`, `3369`, `3370`, `3371`, `3373`, `3375`, `3376`, `3377`, `3379`, `3381`, `3382`, `3384`, `3386`, `3388`, `3389`, `3391`, `3392`, `3394`, `3395`, `3397`, `3398`, `3400`, `3403`, `3404`, `3405`, `3407`, `3409`, `3411`, `3412`, `3413`, `3415`, `3417`, `3419`, `3421`, `3422`, `3423`, `3424`, `3425`, `3426`, `3428`, `3429`, `3432`, `3433`, `3434`, `3435`, `3436`, `3437`, `3439`, `3441`, `3443`, `3446`, `3448`, `3449`, `3450`, `3451`, `3452`, `3453`, `3454`, `3455`, `3456`, `3457`, `3458`, `3459`, `3461`, `3463`, `1044`, `3464`, `3465`, `3466`, `3467`, `3468`, `3470`, `3472`, `3473`, `3474`, `3475`, `3476`, `3477`, `3479`, `3480`, `3482`, `3483`, `3485`, `3486`, `3487`, `3489`, `3490`, `3493`, `3494`, `3495`, `3496`, `3497`, `3498`, `3500`, `3502`, `3504`, `3505`, `3506`, `3508`, `3509`, `3511`, 
`3512`, `3514`, `3515`, `3517`, `3518`, `3519`, `3521`, `3522`, `3523`, `3524`, `3525`, `3526`, `3528`, `3529`, `3531`, `3532`, `3534`, `3536`, `3537`, `3538`, `3539`, `3540`, `3541`, `3542`, `3543`, `3544`, `3545`, `3547`, `3549`, `3550`, `3551`, `3552`, `3553`, `3556`, `3558`, `3561`, `3563`, `3564`, `3566`, `3567`, `3569`, `3571`, `3574`, `3576`, `3577`, `3579`, `3581`, `3583`, `3584`, `3587`, `3588`, `3589`, `3592`, `3593`, `3594`, `3595`, `3597`, `3600`, `3601`, `3602`, `3603`, `3604`, `3605`, `3606`, `3609`, `3610`, `3612`, `3613`, `3614`, `3615`, `3616`, `3617`, `3619`, `3624`, `3627`, `3628`, `3630`, `3632`, `3634`, `3636`, `3637`, `3640`, `3642`, `3643`, `3645`, `3646`, `3648`, `3650`, `3654`, `3655`, `3656`, `3658`, `3659`, `3661`, `3663`, `3665`, `3666`, `3667`, `3669`, `3670`, `3672`, `3673`, `3674`, `3675`, `3677`, `3679`, `3680`, `3681`, `3682`, `3683`, `3684`, `3685`, `3687`, `3688`, `3689`, `3690`, `3691`, `3693`, `3696`, `3697`, `3698`, `3699`, `3700` | </details>
c53781c413a47b5c0a8a2e59cd66e60a
cc-by-sa-4.0
['spacy', 'token-classification']
false
Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.49 |
| `TOKEN_P` | 99.48 |
| `TOKEN_R` | 99.50 |
| `TOKEN_ACC` | 99.94 |
| `SENTS_F` | 96.05 |
| `SENTS_P` | 95.56 |
| `SENTS_R` | 96.55 |
| `TAG_ACC` | 96.91 |
| `POS_ACC` | 98.25 |
| `MORPH_ACC` | 94.72 |
| `DEP_UAS` | 92.10 |
| `DEP_LAS` | 88.72 |
| `LEMMA_ACC` | 94.45 |
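As a quick sanity check (not part of the original card), the `*_F` rows above are the harmonic mean (F1) of the corresponding `*_P` and `*_R` rows:

```python
def f_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

# Recompute the token and sentence F scores from their P/R columns.
print(round(f_score(99.48, 99.50), 2))  # 99.49 (TOKEN_F)
print(round(f_score(95.56, 96.55), 2))  # 96.05 (SENTS_F)
```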
69c43bcb183511c5148ede7f4d4f9b2e
cc-by-4.0
[]
false
MahaNER-BERT MahaNER-BERT is a MahaBERT (l3cube-pune/marathi-bert) model fine-tuned on L3Cube-MahaNER, a Marathi named entity recognition dataset ([dataset link](https://github.com/l3cube-pune/MarathiNLP)). More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.06029). ``` @InProceedings{litake-EtAl:2022:WILDRE6, author = {Litake, Onkar and Sabane, Maithili Ravindra and Patil, Parth Sachin and Ranade, Aparna Abhijeet and Joshi, Raviraj}, title = {L3Cube-MahaNER: A Marathi Named Entity Recognition Dataset and BERT models}, booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {29--34} } ```
a11d2efb430bc8c277de5679fa2c1744
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2723 - F1: 0.8340
2bb3d30d429bb99e69370f7d79802601
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5909 | 1.0 | 191 | 0.3404 | 0.7891 |
| 0.2594 | 2.0 | 382 | 0.2919 | 0.8152 |
| 0.1752 | 3.0 | 573 | 0.2723 | 0.8340 |
a2415f2cff8abca8702cc4dbff22e607
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2t_es_vp-sv_s44 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
884e427f00be5ac0cbc1c06a86797b37
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-mrpc-custom-tokenizer-target-glue-qnli This model is a fine-tuned version of [muhtasham/small-mlm-glue-mrpc-custom-tokenizer](https://huggingface.co/muhtasham/small-mlm-glue-mrpc-custom-tokenizer) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4493 - Accuracy: 0.7986
92d887298662f6553ede63e60651e7c7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5672 | 0.15 | 500 | 0.4950 | 0.7650 |
| 0.532 | 0.31 | 1000 | 0.4894 | 0.7710 |
| 0.5191 | 0.46 | 1500 | 0.5007 | 0.7681 |
| 0.5102 | 0.61 | 2000 | 0.4682 | 0.7873 |
| 0.5033 | 0.76 | 2500 | 0.4596 | 0.7897 |
| 0.4975 | 0.92 | 3000 | 0.4537 | 0.7908 |
| 0.4751 | 1.07 | 3500 | 0.4637 | 0.7900 |
| 0.4547 | 1.22 | 4000 | 0.5252 | 0.7717 |
| 0.45 | 1.37 | 4500 | 0.4494 | 0.8003 |
| 0.454 | 1.53 | 5000 | 0.4493 | 0.7986 |
4994e67dd09954a5b6cc8579b24d5d05
mit
['grammar-correction']
false
T5-Efficient-TINY for grammar correction This is a [T5-Efficient-TINY](https://huggingface.co/google/t5-efficient-tiny) model trained on a subset of the [C4_200M](https://ai.googleblog.com/2021/08/the-c4200m-synthetic-dataset-for.html) dataset to solve the grammar correction task in English. To introduce additional errors, random typos were added to the input sentences using the [nlpaug](https://github.com/makcedward/nlpaug) library. Since the model was trained on only one task, no prefixes are needed. The model was trained as part of a project during the [Full Stack Deep Learning](https://fullstackdeeplearning.com/course/2022/) course. An ONNX version of the model is deployed on the [site](https://edge-ai.vercel.app/models/grammar-check) and can be run directly in the browser.
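The card mentions typo injection with nlpaug but shows no code; a toy, self-contained sketch of the idea might look like the following (`introduce_typos` is a hypothetical stand-in that only illustrates the concept, not the real nlpaug augmentation):

```python
import random

# Toy sketch of the typo-injection idea described above. The real dataset
# used the nlpaug library; this hypothetical helper only shows the concept.
def introduce_typos(sentence: str, p: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    chars = list(sentence)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < p:
            # nudge the character to a neighbouring codepoint to fake a typo
            chars[i] = chr(ord(c) + (1 if c.lower() != "z" else -1))
    return "".join(chars)

noisy = introduce_typos("The quick brown fox jumps over the lazy dog", p=0.2)
print(noisy)
```

Pairs of (noisy, clean) sentences produced this way can then be fed to the model as (input, target) examples.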
69c2116b9c6dc4aef6bf417701bd87f6
mit
[]
false
black-waifu on Stable Diffusion This is the `<black-waifu>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<black-waifu> 0](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/5.jpeg) ![<black-waifu> 1](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/6.jpeg) ![<black-waifu> 2](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/15.jpeg) ![<black-waifu> 3](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/14.jpeg) ![<black-waifu> 4](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/9.jpeg) ![<black-waifu> 5](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/3.jpeg) ![<black-waifu> 6](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/0.jpeg) ![<black-waifu> 7](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/12.jpeg) ![<black-waifu> 8](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/13.jpeg) ![<black-waifu> 9](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/2.jpeg) ![<black-waifu> 10](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/10.jpeg) ![<black-waifu> 11](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/7.jpeg) ![<black-waifu> 
12](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/1.jpeg) ![<black-waifu> 13](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/11.jpeg) ![<black-waifu> 14](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/4.jpeg) ![<black-waifu> 15](https://huggingface.co/sd-concepts-library/black-waifu/resolve/main/concept_images/8.jpeg)
558e3b81997913c6323fe9afb76109d9
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0285 - Rouge1: 16.9728 - Rouge2: 8.2969 - Rougel: 16.8366 - Rougelsum: 16.851 - Gen Len: 10.1597
c1b8c889107310692b1ca32b0982eee3
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 7.1016 | 1.0 | 1209 | 3.3069 | 13.9858 | 5.8437 | 13.6053 | 13.5125 | 8.3782 |
| 3.898 | 2.0 | 2418 | 3.1567 | 16.6706 | 8.6393 | 16.2882 | 16.2249 | 9.7521 |
| 3.5915 | 3.0 | 3627 | 3.0928 | 17.111 | 8.3921 | 16.9139 | 16.7805 | 10.3445 |
| 3.4174 | 4.0 | 4836 | 3.0482 | 16.9728 | 8.3066 | 16.8868 | 16.8485 | 10.3151 |
| 3.3258 | 5.0 | 6045 | 3.0375 | 16.5972 | 8.2621 | 16.3524 | 16.3093 | 10.0672 |
| 3.2427 | 6.0 | 7254 | 3.0232 | 17.3009 | 8.6087 | 17.0782 | 17.0105 | 10.0756 |
| 3.2009 | 7.0 | 8463 | 3.0302 | 16.9284 | 8.6569 | 16.7885 | 16.7784 | 10.2143 |
| 3.1838 | 8.0 | 9672 | 3.0285 | 16.9728 | 8.2969 | 16.8366 | 16.851 | 10.1597 |
8d8733e74f0ebb82119c3d790509c663
apache-2.0
['dutch', 'english', 't5', 't5x', 'ul2', 'seq2seq']
false
ul2-base-dutch-english for Dutch and English Pretrained T5 model on Dutch and English using a UL2 (Mixture-of-Denoisers) objective. The T5 model was introduced in [this paper](https://arxiv.org/abs/1910.10683) and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer). The UL2 objective was introduced in [this paper](https://arxiv.org/abs/2205.05131) and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2). **Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice.
46228bc682e997ca4d172c39ca22bc85
apache-2.0
['dutch', 'english', 't5', 't5x', 'ul2', 'seq2seq']
false
Model description T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format. `ul2-base-dutch-english` T5 is a transformers model pretrained on a very large corpus of Dutch and English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and outputs from those texts. This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md) improvements.
db36b5fa4a52a4602e01d2291bbeb426
apache-2.0
['dutch', 'english', 't5', 't5x', 'ul2', 'seq2seq']
false
How to use

Here is how to use this model in PyTorch:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-base-dutch-english", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("yhavinga/ul2-base-dutch-english")
```

and in Flax:

```python
from transformers import T5Tokenizer, FlaxT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-base-dutch-english", use_fast=False)
model = FlaxT5ForConditionalGeneration.from_pretrained("yhavinga/ul2-base-dutch-english")
```
56c03edf426d4c0276ca25241c988fb4
apache-2.0
['dutch', 'english', 't5', 't5x', 'ul2', 'seq2seq']
false
Training data The `ul2-base-dutch-english` T5 model was pre-trained simultaneously on a combination of several datasets, including the `full_en_nl` config of the "mc4_nl_cleaned" dataset, which is a cleaned version of Common Crawl's web crawl corpus, Dutch books, the Dutch subset of Wikipedia (2022-03-20), the English subset of Wikipedia (2022-03-01), and a subset of "mc4_nl_cleaned" containing only texts from Dutch and Belgian newspapers. This last dataset is oversampled to bias the model towards descriptions of events in the Netherlands and Belgium.
d06419aecb3986f316490679da8f0504
apache-2.0
['dutch', 'english', 't5', 't5x', 'ul2', 'seq2seq']
false
Preprocessing The ul2-base-dutch-english T5 model uses a SentencePiece unigram tokenizer with a vocabulary of 32,000 tokens. The tokenizer includes the special tokens `<pad>`, `</s>`, `<unk>`, known from the original T5 paper, `[NLU]`, `[NLG]` and `[S2S]` for the MoD pre-training, and `<n>` for newline. During pre-training with the UL2 objective, input and output sequences consist of 512 consecutive tokens. The tokenizer does not lowercase texts and is therefore case-sensitive; it distinguishes between `dutch` and `Dutch`. Additionally, 100+28 extra tokens were added for pre-training tasks, resulting in a total of 32,128 tokens.
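The vocabulary accounting described above (32,000 base tokens plus 100 + 28 extras) can be sanity-checked with simple arithmetic:

```python
# Vocabulary accounting for the ul2-base-dutch-english tokenizer as described
# above: a 32,000-token SentencePiece vocabulary plus 100 + 28 extra tokens
# added for the pre-training tasks.
base_vocab = 32_000
extra_tokens = 100 + 28
total_vocab = base_vocab + extra_tokens
print(total_vocab)  # 32128
```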
6fe6625b552c8e09e658593116b1d9ff
apache-2.0
[]
false
Model Description This model is a Russian version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased). The code for the transformation process can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/spellchecker/ml_ranging/models/distilbert_base_russian_cased/distilbert_from_multilang_to_ru.ipynb). This model gives exactly the same representations as the original model, which preserves the original accuracy. There is a similar model, [Geotrend/distilbert-base-ru-cased](https://huggingface.co/Geotrend/distilbert-base-ru-cased). However, our model is derived from a slightly different approach. Instead of using Wikipedia's Russian dataset to pick the necessary tokens, we used regular expressions in this model to select only Russian tokens, punctuation marks, numbers and other service tokens. Thus, our model contains several hundred tokens that were filtered out in [Geotrend/distilbert-base-ru-cased](https://huggingface.co/Geotrend/distilbert-base-ru-cased). This model was created as part of a master's project to develop a method for correcting typos in medical histories, using BERT models to rank candidate corrections. The project is open source and can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker).
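A rough, hypothetical sketch of the regex-based token selection described above (the real filtering code lives in the linked notebook and may differ; the pattern and token list here are only illustrative):

```python
import re

# Keep tokens made of Cyrillic letters (optionally WordPiece continuations),
# digits, or punctuation, plus BERT's service tokens. Purely illustrative.
CYRILLIC_OR_PUNCT = re.compile(r"^(##)?[\u0400-\u04FF]+$|^[0-9]+$|^[.,!?;:()'\"-]+$")
SERVICE_TOKENS = {"[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"}

def keep_token(token: str) -> bool:
    """Return True if the token would survive the vocabulary filtering."""
    return token in SERVICE_TOKENS or bool(CYRILLIC_OR_PUNCT.match(token))

vocab = ["привет", "##ский", "hello", "[CLS]", "42", "!", "Wort"]
print([t for t in vocab if keep_token(t)])  # ['привет', '##ский', '[CLS]', '42', '!']
```

Applying such a filter to the multilingual vocabulary, and then copying the surviving rows of the embedding matrix, is what keeps the representations identical to the original model's.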
173b4080b4a6c765e50e8a530d4dd349
apache-2.0
[]
false
How to Get Started With the Model

You can use the model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> pipeline = pipeline('fill-mask', model='DmitryPogrebnoy/distilbert-base-russian-cased')
>>> pipeline("Я [MASK] на заводе.")
[{'score': 0.11498937010765076,
  'token': 1709,
  'token_str': 'работал',
  'sequence': 'Я работал на заводе.'},
 {'score': 0.07212855666875839,
  'token': 12375,
  'token_str': '
fcd94c03395e6303823029ef81590cab
apache-2.0
[]
false
ั€ะพัะปะฐ', 'sequence': 'ะฏั€ะพัะปะฐ ะฝะฐ ะทะฐะฒะพะดะต.'}, {'score': 0.03575785085558891, 'token': 4059, 'token_str': 'ะฝะฐั…ะพะดะธะปัั', 'sequence': 'ะฏ ะฝะฐั…ะพะดะธะปัั ะฝะฐ ะทะฐะฒะพะดะต.'}, {'score': 0.02496381290256977, 'token': 5075, 'token_str': 'ั€ะฐะฑะพั‚ะฐะตั‚', 'sequence': 'ะฏ ั€ะฐะฑะพั‚ะฐะตั‚ ะฝะฐ ะทะฐะฒะพะดะต.'}, {'score': 0.020675526931881905, 'token': 5774, 'token_str': '
ba00a6502493d2d59e36835eca1b3ee7
apache-2.0
[]
false
дро',
  'sequence': 'Ядро на заводе.'}]
```

Or you can load the model and tokenizer and do what you need to do:

```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("DmitryPogrebnoy/distilbert-base-russian-cased")
>>> model = AutoModelForMaskedLM.from_pretrained("DmitryPogrebnoy/distilbert-base-russian-cased")
```
0a04c4b85122a195607da9e022dfd483
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_vp-100k_age_teens-2_sixties-8_s304 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
6399899309e84473719c1c9482a39292
apache-2.0
['generated_from_trainer']
false
hate_trained_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8507 - F1: 0.7719
2d702c873db6ca3b76b2622c6d882ccc
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4817 | 1.0 | 563 | 0.4975 | 0.7678 |
| 0.3311 | 2.0 | 1126 | 0.4965 | 0.7773 |
| 0.2303 | 3.0 | 1689 | 0.7102 | 0.7613 |
| 0.1429 | 4.0 | 2252 | 0.8507 | 0.7719 |
88633712e0b32479654ac5a1010ea8ef
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Punjabi Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
da65dd139ad375365e1c7c1eac286e57
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "pa-IN", split="test")

processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-punjabi")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-punjabi")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
809b07cda5715807e0742ebf3db74de4
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
1cece7f7ce00f70f8d0ed25a66d9ea15
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
TODO: replace language with your {language}, *e.g.* French

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "pa-IN", split="test")
76a26734c36383f67071c1a676b48e10
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("gagan3012/wav2vec2-xlsr-punjabi")
model = Wav2Vec2ForCTC.from_pretrained("gagan3012/wav2vec2-xlsr-punjabi")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)  # resample 48 kHz Common Voice audio to 16 kHz
c93eac0c3a128d5a72b590aa58f87b03
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
2e1c11e5e10474a9e3205426a7706dcf
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 58.05 %
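The sentence normalization used earlier in this evaluation script (stripping the characters in `chars_to_ignore_regex` and lowercasing) can be illustrated in isolation; the English sample sentence here is only for readability:

```python
import re

# Same normalization idea as in the evaluation script above: strip the
# ignored punctuation characters and lowercase the sentence.
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“]'

def normalize(sentence: str) -> str:
    return re.sub(chars_to_ignore_regex, "", sentence).lower()

print(normalize("Hello, World! How are you?"))  # hello world how are you
```

This matters because the WER reference strings must match the model's lowercase, punctuation-free output format.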
f7fbf46b5dad08db5e5cb1265054123b
apache-2.0
['pytorch', 'text-generation', 'causal-lm', 'rwkv']
false
Model Description

RWKV-4 3B is a L32-D2560 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. Use https://github.com/BlinkDL/ChatRWKV to run it.

New checkpoint: RWKV-4-Pile-3B-20221110-ctx4096.pth : Fine-tuned to ctx_len = 4096

* LAMBADA ppl 5.25, acc 63.96%
* PIQA acc 74.16%
* SC2016 acc 70.71%
* Hellaswag acc_norm 59.89%
* ctx_len = 4096 n_layer = 32 n_embd = 2560

Final checkpoint: RWKV-4-Pile-3B-20221008-8023.pth : Trained on the Pile for 331B tokens.

* Pile loss 1.9469
* LAMBADA ppl 5.24, acc 63.94%
* PIQA acc 73.72%
* SC2016 acc 70.28%
* Hellaswag acc_norm 59.63%
* ctx_len = 1024 n_layer = 32 n_embd = 2560
094d3cf91aedebccde12dfe954c7ffbc
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_1600k']
false
MultiBERTs, Intermediate Checkpoint - Seed 3, Step 1600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #3, captured at step 1600k.
88a12f129afdc72fd2ce986ee8bb39f7
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_1600k']
false
How to use

Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow:

```
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_1600k')
model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_1600k')
model = BertModel.from_pretrained("google/multiberts-seed_3-step_1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
a2c943fb2b203a71cfaa7cc8a334e457
apache-2.0
['automatic-speech-recognition', 'pt']
false
exp_w2v2t_pt_vp-it_s738 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
45b9c4d9c1d46143e3a2525df0061234
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-misogyny This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7913 - Accuracy: 0.8925 - F1: 0.8280 - Precision: 0.8240 - Recall: 0.8320 - Mae: 0.1075
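Because the labels are binary, the reported MAE is simply the error rate (accuracy = 1 - MAE, matching 0.8925 and 0.1075 above). A small sketch of that relation, plus a hypothetical inference call; the full hub id is not stated on this card, so `repo_id` is a placeholder:

```python
def accuracy_from_mae(mae: float) -> float:
    # With binary 0/1 labels, MAE equals the error rate,
    # so accuracy = 1 - MAE (0.8925 = 1 - 0.1075 on this card).
    return 1.0 - mae

def classify(text: str, repo_id: str):
    # repo_id is a placeholder; heavy, downloads the checkpoint.
    from transformers import pipeline
    return pipeline("text-classification", model=repo_id)(text)
```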
745b1bb17cba293134f1ef8e8e2a69e7
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:| | 0.328 | 1.0 | 828 | 0.3477 | 0.8732 | 0.7831 | 0.8366 | 0.7359 | 0.1268 | | 0.273 | 2.0 | 1656 | 0.2921 | 0.8910 | 0.8269 | 0.8171 | 0.8369 | 0.1090 | | 0.2342 | 3.0 | 2484 | 0.3222 | 0.8834 | 0.8176 | 0.7965 | 0.8398 | 0.1166 | | 0.2132 | 4.0 | 3312 | 0.3801 | 0.8852 | 0.8223 | 0.7933 | 0.8534 | 0.1148 | | 0.1347 | 5.0 | 4140 | 0.5474 | 0.8955 | 0.8314 | 0.8346 | 0.8282 | 0.1045 | | 0.1187 | 6.0 | 4968 | 0.5853 | 0.8886 | 0.8137 | 0.8475 | 0.7825 | 0.1114 | | 0.0968 | 7.0 | 5796 | 0.6378 | 0.8916 | 0.8267 | 0.8223 | 0.8311 | 0.1084 | | 0.0533 | 8.0 | 6624 | 0.7397 | 0.8831 | 0.8191 | 0.7899 | 0.8505 | 0.1169 | | 0.06 | 9.0 | 7452 | 0.8112 | 0.8861 | 0.8224 | 0.7987 | 0.8476 | 0.1139 | | 0.0287 | 10.0 | 8280 | 0.7913 | 0.8925 | 0.8280 | 0.8240 | 0.8320 | 0.1075 |
95f2e112a94f03eaef13643e10abded2
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
`kan-bayashi/csj_asr_train_asr_transformer_raw_char_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4037458/ This model was trained by kan-bayashi using the csj/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
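A hedged loading sketch for Zenodo-imported ESPnet models via `espnet_model_zoo` (treat the exact call as an assumption for this particular checkpoint), plus a trivial helper for the record id above:

```python
def zenodo_record_id(url: str) -> str:
    # e.g. "https://zenodo.org/record/4037458/" -> "4037458"
    return url.rstrip("/").rsplit("/", 1)[-1]

def load_asr():
    # Heavy: downloads and unpacks the checkpoint.
    from espnet_model_zoo.downloader import ModelDownloader
    from espnet2.bin.asr_inference import Speech2Text
    d = ModelDownloader()
    return Speech2Text(**d.download_and_unpack(
        "kan-bayashi/csj_asr_train_asr_transformer_raw_char_sp_valid.acc.ave"))
```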
d97721381b0ac0d8c40b041e07510f26
mit
['gec']
false
Usage Install the necessary dependencies: ```bash pip3 install ctranslate2 pyonmttok ``` Simple tokenization & translation using Python: ```python import ctranslate2 import pyonmttok from huggingface_hub import snapshot_download model_dir = snapshot_download(repo_id="jordimas/gec-opennmt-english", revision="main") tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/sp_m.model") tokenized = tokenizer.tokenize("The water are hot. My friends are going to be late. Today mine mother is in Barcelona.") translator = ctranslate2.Translator(model_dir) translated = translator.translate_batch([tokenized[0]]) print(tokenizer.detokenize(translated[0][0]['tokens'])) ```
b1e025e9a25117e5aed857a16bef471a
mit
['gec']
false
Model The model has been trained on the [clang8](https://github.com/google-research-datasets/clang8) corpus for English. Details: * Model: TransformerBase * Tokenizer: SentencePiece * BLEU = 85.50
79b1607696a04398ae61cb25782bd45b
mit
['gec']
false
Papers Relevant papers: * [Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task](https://aclanthology.org/N18-1055.pdf) * [A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/pdf/2106.03830.pdf)
a49d97a24d6cd16a7e9cb75affb04027
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_xls-r_age_teens-0_sixties-10_s888 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
3733126fcb90eea38f8d2161900e97e4
apache-2.0
['stanza', 'token-classification']
false
Stanza model for Kurmanji (kmr) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo Last updated 2022-09-25 01:40:16.021
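A hedged usage sketch (not from the card) showing how a Stanza pipeline for Kurmanji is typically built; the repo naming helper reflects the `stanfordnlp/stanza-<lang>` scheme these automatically prepared cards use:

```python
def stanza_repo(lang: str) -> str:
    # Hub repos prepared by hugging_stanza.py follow this naming scheme.
    return f"stanfordnlp/stanza-{lang}"

def annotate(text: str):
    import stanza                    # heavy dependency
    stanza.download("kmr")           # fetch the Kurmanji models
    nlp = stanza.Pipeline("kmr")     # tokenization through parsing
    return nlp(text)
```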
4eef2397fe69c9e6de09bba3d223b520
apache-2.0
['generated_from_trainer']
false
legal-roberta-base-filtered-cuad This model is a fine-tuned version of [saibo/legal-roberta-base](https://huggingface.co/saibo/legal-roberta-base) on the cuad dataset. It achieves the following results on the evaluation set: - Loss: 0.0428
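Since CUAD is an extractive question-answering dataset, the checkpoint would typically be used through a QA pipeline. A hedged sketch; the full hub id is not given here, so `repo_id` is a placeholder:

```python
def extract_span(context: str, start: int, end: int) -> str:
    # Extractive QA returns character offsets into the context.
    return context[start:end]

def answer(question: str, context: str, repo_id: str):
    # repo_id is a placeholder; heavy, downloads the checkpoint.
    from transformers import pipeline
    qa = pipeline("question-answering", model=repo_id)
    return qa(question=question, context=context)
```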
d0d9cfb8e08c1c2495ea507a1ee7ccc9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 22 - eval_batch_size: 22 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
fe5016abd9096eadc3d9b22066c6d675
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0556 | 1.0 | 12279 | 0.0517 | | 0.0406 | 2.0 | 24558 | 0.0425 | | 0.0332 | 3.0 | 36837 | 0.0428 |
42108ee09a7c0eff2f970c7c0b47ee52
apache-2.0
['translation']
false
opus-mt-es-da * source languages: es * target languages: da * OPUS readme: [es-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-da/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.eval.txt)
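A hedged usage sketch (not part of the original card): loading the checkpoint with the transformers Marian classes, using the Hub naming scheme these OPUS-MT models follow:

```python
def marian_repo(src: str, tgt: str) -> str:
    # Helsinki-NLP publishes these checkpoints under this naming scheme.
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

def translate(texts):
    # Heavy: downloads tokenizer and model weights.
    from transformers import MarianMTModel, MarianTokenizer
    name = marian_repo("es", "da")
    tok = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tok(texts, return_tensors="pt", padding=True)
    return [tok.decode(t, skip_special_tokens=True)
            for t in model.generate(**batch)]
```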
361ce890a274099c5e775c5e4f696a8c
mit
['generated_from_trainer']
false
xlm-all-final This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa dataset. It achieves the following results on the evaluation set: - Loss: 0.6038
b933b62f61a0c5424d48753ad7e85fdd
mit
['generated_from_trainer']
false
roberta-base_mnli_uf_ner_1024_train_v0 This model is a fine-tuned version of [mariolinml/roberta-base_fullMnli_10_24_v0](https://huggingface.co/mariolinml/roberta-base_fullMnli_10_24_v0) on the None dataset.
2cd8076f15b706c8434379fe6341a64f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2222 - Accuracy: 0.9255 - F1: 0.9257
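A hedged inference sketch (not from the card); the model id without a namespace is a placeholder, and the helper just picks the best-scoring label from the pipeline output:

```python
def top_label(scores):
    # Pick the highest-scoring label from a list of {"label", "score"} dicts.
    return max(scores, key=lambda d: d["score"])["label"]

def classify(text: str):
    # Heavy: downloads the checkpoint; the id is a placeholder.
    from transformers import pipeline
    clf = pipeline("text-classification",
                   model="distilbert-base-uncased-finetuned-emotion",
                   top_k=None)  # return scores for every emotion label
    return top_label(clf(text)[0])
```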
12c594341ee9d288c4fb4440b382f31e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7962 | 1.0 | 250 | 0.3167 | 0.903 | 0.8984 | | 0.2475 | 2.0 | 500 | 0.2222 | 0.9255 | 0.9257 |
47e86a704b0373d1f342b5c2973038ca
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2569 - F1: 0.8254
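A hedged usage sketch for the NER checkpoint; the card omits the hub namespace, so `repo_id` is a placeholder, and the helper filters pipeline predictions by confidence:

```python
def filter_entities(entities, min_score=0.9):
    # Keep only confident predictions from the NER pipeline output.
    return [e for e in entities if e["score"] >= min_score]

def tag(text: str, repo_id: str):
    # repo_id is a placeholder; heavy, downloads the checkpoint.
    from transformers import pipeline
    ner = pipeline("token-classification", model=repo_id,
                   aggregation_strategy="simple")
    return filter_entities(ner(text))
```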
c315950da44e9adb7eb208e0ac81ba07
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 105 | 0.3244 | 0.7521 | | No log | 2.0 | 210 | 0.2719 | 0.8104 | | No log | 3.0 | 315 | 0.2569 | 0.8254 |
ed59828aa4ebd92231e0d8aa0d3659b3
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-banking-11-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7470 - Accuracy: 0.0756
e8ebce53665a2f2c06d8fcdc190cc542
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0573 - Precision: 0.9343 - Recall: 0.9495 - F1: 0.9418 - Accuracy: 0.9868
4a9f39d3e99ba62a4b44d0392ef75b3a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0854 | 1.0 | 1756 | 0.0639 | 0.9148 | 0.9329 | 0.9238 | 0.9822 | | 0.0403 | 2.0 | 3512 | 0.0542 | 0.9370 | 0.9512 | 0.9440 | 0.9866 | | 0.0204 | 3.0 | 5268 | 0.0573 | 0.9343 | 0.9495 | 0.9418 | 0.9868 |
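As a sanity check, the F1 column is the harmonic mean of the Precision and Recall columns; for the final epoch, 2 * 0.9343 * 0.9495 / (0.9343 + 0.9495) comes out to about 0.9418:

```python
def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```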
48273d0c34c6c15f0f36cca6a7bcde67
apache-2.0
['translation']
false
opus-mt-en-sw * source languages: en * target languages: sw * OPUS readme: [en-sw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.eval.txt)
583a20d5ee29b2f2be130925dd35d594
apache-2.0
['generated_from_trainer']
false
distilbert_sst2_int8_xml This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.4463 - Accuracy: 0.9037
cedd28662fd13300c28920be3f86de82
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0
14779b122a7cd4cc27787ded0c56579f
apache-2.0
['korean']
false
KoELECTRA (Base Discriminator) Pretrained ELECTRA Language Model for Korean (`koelectra-base-discriminator`) For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
e7bd3812edb0c1f168b7ceac28d85ce3
apache-2.0
['korean']
false
Load model and tokenizer ```python >>> from transformers import ElectraModel, ElectraTokenizer >>> model = ElectraModel.from_pretrained("monologg/koelectra-base-discriminator") >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator") ```
e3e2d305effaf0f00cf62ef1a06a4d3f
apache-2.0
['korean']
false
Tokenizer example ```python >>> from transformers import ElectraTokenizer >>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator") >>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]") ['[CLS]', '한국어', 'E', '
b8ba82c252e250ca7539296d4ebb007e
apache-2.0
['korean']
false
Example using ElectraForPreTraining ```python import torch from transformers import ElectraForPreTraining, ElectraTokenizer discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-discriminator") tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-discriminator") sentence = "나는 방금 밥을 먹었다." fake_sentence = "나는 내일 밥을 먹었다." fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) print(list(zip(fake_tokens, predictions.tolist()[1:-1]))) ```
20abb0cfaa7b55a31ccf66e26a1977ef
mit
[]
false
pen-ink-portraits-BenNorthen on Stable Diffusion This is the `<ink-portrait-by-BenNorthern>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<ink-portrait-by-BenNorthern> 0](https://huggingface.co/sd-concepts-library/pen-ink-portraits-bennorthen/resolve/main/concept_images/0.jpeg) ![<ink-portrait-by-BenNorthern> 1](https://huggingface.co/sd-concepts-library/pen-ink-portraits-bennorthen/resolve/main/concept_images/4.jpeg) ![<ink-portrait-by-BenNorthern> 2](https://huggingface.co/sd-concepts-library/pen-ink-portraits-bennorthen/resolve/main/concept_images/1.jpeg) ![<ink-portrait-by-BenNorthern> 3](https://huggingface.co/sd-concepts-library/pen-ink-portraits-bennorthen/resolve/main/concept_images/3.jpeg) ![<ink-portrait-by-BenNorthern> 4](https://huggingface.co/sd-concepts-library/pen-ink-portraits-bennorthen/resolve/main/concept_images/2.jpeg)
80cf13e2cda8c663fbe28251fd37efed
apache-2.0
['automatic-speech-recognition', 'collectivat/tv3_parla', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'projecte-aina/parlament_parla', 'robust-speech-event']
false
wav2vec2-xls-r-1b-ca-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
a203af1efff7ae2393fa8a5523c95725
apache-2.0
['automatic-speech-recognition', 'collectivat/tv3_parla', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'projecte-aina/parlament_parla', 'robust-speech-event']
false
Training results See the Tensorboard tab for the training profile and the evaluation results logged during training. The model was evaluated on the test splits for each of the datasets used during training.
69c1ed64a775e179d593a950bce09577
apache-2.0
['automatic-speech-recognition', 'collectivat/tv3_parla', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'projecte-aina/parlament_parla', 'robust-speech-event']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 10.0 - mixed_precision_training: Native AMP
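The total train batch size above follows from the per-device batch size and the gradient accumulation steps (assuming a single device), which a one-line helper makes explicit:

```python
def total_train_batch_size(per_device: int, grad_accum: int,
                           n_devices: int = 1) -> int:
    # Effective batch size = per-device batch * accumulation steps * devices.
    return per_device * grad_accum * n_devices
```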
f46f8d13330db34f834ce4564d409d66
apache-2.0
['text-classification', 'generic']
false
Hugging Face Transformers with Scikit-learn Classifiers 🤩🌟 This repository contains a small proof-of-concept pipeline that pairs longformer embeddings with a scikit-learn logistic regression classifier for sentiment analysis. The training leverages the language module of [whatlies](https://github.com/koaning/whatlies). See the tutorial notebook [here](https://www.kaggle.com/code/unofficialmerve/scikit-learn-with-transformers/notebook).
07a387512d7d5220fc58551e547f9c30
apache-2.0
['text-classification', 'generic']
false
Classification Report 📈 Below is the classification report 👇🏻 ``` precision recall f1-score support 0 0.85 0.89 0.87 522 1 0.89 0.85 0.87 550 accuracy 0.87 1072 macro avg 0.87 0.87 0.87 1072 weighted avg 0.87 0.87 0.87 1072 ```
0d9bd273a23761277847e8da0e7413f9
apache-2.0
['text-classification', 'generic']
false
Hyperparameters ❤️ You can find hyperparameters below 👇🏻✨ ``` {'memory': None, 'steps': [('embedding', HFTransformersLanguage(model_name_or_path='facebook/bart-base')), ('model', LogisticRegression())], 'verbose': False, 'embedding': HFTransformersLanguage(model_name_or_path='facebook/bart-base'), 'model': LogisticRegression(), 'embedding__model_name_or_path': 'facebook/bart-base', 'model__C': 1.0, 'model__class_weight': None, 'model__dual': False, 'model__fit_intercept': True, 'model__intercept_scaling': 1, 'model__l1_ratio': None, 'model__max_iter': 100, 'model__multi_class': 'auto', 'model__n_jobs': None, 'model__penalty': 'l2', 'model__random_state': None, 'model__solver': 'lbfgs', 'model__tol': 0.0001, 'model__verbose': 0, 'model__warm_start': False} ```
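A hedged sketch of how the dumped pipeline could be rebuilt, assuming the `HFTransformersLanguage` wrapper from `whatlies` named in the dump, plus a small helper over the parameter dict:

```python
def build_pipeline():
    # Heavy: HFTransformersLanguage pulls the bart-base weights on first use.
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from whatlies.language import HFTransformersLanguage
    return Pipeline([
        ("embedding", HFTransformersLanguage("facebook/bart-base")),
        ("model", LogisticRegression()),
    ])

def step_names(params: dict):
    # The hyperparameter dump stores steps as (name, estimator) pairs.
    return [name for name, _ in params["steps"]]
```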
6c55413ddf9cecfb6bc659ec21a63cb6
apache-2.0
['generated_from_trainer']
false
Tagged_One_100v5_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v5_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4636 - Precision: 0.2791 - Recall: 0.2144 - F1: 0.2425 - Accuracy: 0.8484
3c40a22c86833ca4720dade8adc44b89
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 41 | 0.5040 | 0.2172 | 0.1266 | 0.1599 | 0.8226 | | No log | 2.0 | 82 | 0.4381 | 0.2656 | 0.2154 | 0.2379 | 0.8475 | | No log | 3.0 | 123 | 0.4636 | 0.2791 | 0.2144 | 0.2425 | 0.8484 |
17836c234e2946c91f66c2a01d0def8a
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 2.7642 - Wer: 0.5894
1e7b6595846c62c3b8463ab40697eaef
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 24.5372 | 9.76 | 400 | 5.2857 | 0.9738 | | 4.3812 | 19.51 | 800 | 3.6782 | 0.7315 | | 1.624 | 29.27 | 1200 | 2.7642 | 0.5894 |
4df58bbf8e2a0d43bf7a6709b29e05f5
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Icelandic This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the samromur dataset. It achieves the following results on the evaluation set: - Loss: 0.2613 - Wer: 23.0409
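The WER column measures the word-level edit distance between reference and hypothesis transcripts, reported here on a 0-100 scale (so 23.04 corresponds to a rate of about 0.23). A minimal sketch of the metric:

```python
def wer(reference: str, hypothesis: str) -> float:
    # Word error rate: word-level Levenshtein distance / reference length.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```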
3b9bcf75eb52f71ae7e3fc43788d83e0
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3551 | 0.18 | 1000 | 0.4322 | 35.0421 | | 0.2541 | 0.36 | 2000 | 0.3249 | 27.4721 | | 0.231 | 0.53 | 3000 | 0.2781 | 24.2234 | | 0.2277 | 0.71 | 4000 | 0.2613 | 23.0409 |
36734b77c0a145b581e60c1d4aa1eaa4
apache-2.0
['generated_from_trainer']
false
recipe-lr8e06-wd0.01-bs16 This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2795 - Rmse: 0.5286 - Mse: 0.2795 - Mae: 0.4342
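The RMSE above is simply the square root of the reported MSE (0.5286 is approximately sqrt(0.2795)), which can be checked directly:

```python
import math

def rmse_from_mse(mse: float) -> float:
    # RMSE is the square root of MSE.
    return math.sqrt(mse)
```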
2d7993d116b56978d548e08b4e0aeb30
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-06 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10
83efd68195f689f1513f84c6a1d29c25
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2767 | 1.0 | 1245 | 0.2745 | 0.5239 | 0.2745 | 0.4140 | | 0.2741 | 2.0 | 2490 | 0.2760 | 0.5254 | 0.2760 | 0.4222 | | 0.2729 | 3.0 | 3735 | 0.2795 | 0.5286 | 0.2795 | 0.4342 |
5bcd6b5fb34b0bbf123e91f56964e50d
apache-2.0
[]
false
Intended uses & limitations You can classify whether an input tweet (or any other statement) about COVID-19 and its vaccines is `true`, `false`, or `misleading`. Note that the model was trained on data collected up to May 2020, so the most recent information may not be reflected.
563a6deb9d0016e90f0c07db843b4459
apache-2.0
[]
false
How to use You can use this model directly on this page or using `transformers` in python. - Load pipeline and implement with input sequence ```python from transformers import pipeline pipe = pipeline("sentiment-analysis", model = "ans/vaccinating-covid-tweets") seq = "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic." pipe(seq) ``` - Expected output ```python [ { "label": "false", "score": 0.07972867041826248 }, { "label": "misleading", "score": 0.019911376759409904 }, { "label": "true", "score": 0.9003599882125854 } ] ``` - `true` examples ```python "By the end of 2020, several vaccines had become available for use in different parts of the world." "Vaccines to prevent SARS-CoV-2 infection are considered the most promising approach for curbing the pandemic." "RNA vaccines were the first vaccines for SARS-CoV-2 to be produced and represent an entirely new vaccine approach." ``` - `false` examples ```python "COVID-19 vaccine caused new strain in UK." ```
ae0f7f6cc9651e1f7c6e5070769ff155