license stringlengths 2 30 | tags stringlengths 2 513 | is_nc bool 1 class | readme_section stringlengths 201 597k | hash stringlengths 32 32 |
|---|---|---|---|---|
mit | [] | false | xmod-base X-MOD is a multilingual masked language model trained on filtered CommonCrawl data containing 81 languages. It was introduced in the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) (Pfeiffer et al., NAACL 2022) and first released in [this repository](https://github.com/facebookresearch/fairseq/tree/main/examples/xmod). Because it has been pre-trained with language-specific modular components (_language adapters_), X-MOD differs from previous multilingual models like [XLM-R](https://huggingface.co/xlm-roberta-base). For fine-tuning, the language adapters in each transformer layer are frozen. | 10087fc523b53eda46b8a9c5ff2afb5d |
mit | [] | false | Tokenizer This model reuses the tokenizer of [XLM-R](https://huggingface.co/xlm-roberta-base), so you can load the tokenizer as follows: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base") ``` | 9a03cee46fd11452b128ad829eae3060 |
mit | [] | false | Input Language Because this model uses language adapters, you need to specify the language of your input so that the correct adapter can be activated: ```python from transformers import XmodModel model = XmodModel.from_pretrained("jvamvas/xmod-base") model.set_default_language("en_XX") ``` A list of the language adapters in this model can be found at the bottom of this model card. | 136cfbe2e769d836e3c842c49910c188 |
mit | [] | false | Fine-tuning In the experiments in the original paper, the embedding layer and the language adapters are frozen during fine-tuning. A method for doing this is provided in the code: ```python model.freeze_embeddings_and_language_adapters() ``` | 469747a35f7647d6de90b16997337189 |
mit | [] | false | Bias, Risks, and Limitations Please refer to the model card of [XLM-R](https://huggingface.co/xlm-roberta-base), because X-MOD has a similar architecture and has been trained on similar training data. | 8a1f25a00474c5269ed0d034b6c3b75a |
mit | [] | false | Citation **BibTeX:** ```bibtex @inproceedings{pfeiffer-etal-2022-lifting, title = "Lifting the Curse of Multilinguality by Pre-training Modular Transformers", author = "Pfeiffer, Jonas and Goyal, Naman and Lin, Xi and Li, Xian and Cross, James and Riedel, Sebastian and Artetxe, Mikel", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.255", doi = "10.18653/v1/2022.naacl-main.255", pages = "3479--3495" } ``` | 508696f68585180f1ef60960d4c1fe46 |
mit | [] | false | Languages This model contains the following language adapters: | lang_id (Adapter index) | Language code | Language | |-------------------------|---------------|-----------------------| | 0 | en_XX | English | | 1 | id_ID | Indonesian | | 2 | vi_VN | Vietnamese | | 3 | ru_RU | Russian | | 4 | fa_IR | Persian | | 5 | sv_SE | Swedish | | 6 | ja_XX | Japanese | | 7 | fr_XX | French | | 8 | de_DE | German | | 9 | ro_RO | Romanian | | 10 | ko_KR | Korean | | 11 | hu_HU | Hungarian | | 12 | es_XX | Spanish | | 13 | fi_FI | Finnish | | 14 | uk_UA | Ukrainian | | 15 | da_DK | Danish | | 16 | pt_XX | Portuguese | | 17 | no_XX | Norwegian | | 18 | th_TH | Thai | | 19 | pl_PL | Polish | | 20 | bg_BG | Bulgarian | | 21 | nl_XX | Dutch | | 22 | zh_CN | Chinese (simplified) | | 23 | he_IL | Hebrew | | 24 | el_GR | Greek | | 25 | it_IT | Italian | | 26 | sk_SK | Slovak | | 27 | hr_HR | Croatian | | 28 | tr_TR | Turkish | | 29 | ar_AR | Arabic | | 30 | cs_CZ | Czech | | 31 | lt_LT | Lithuanian | | 32 | hi_IN | Hindi | | 33 | zh_TW | Chinese (traditional) | | 34 | ca_ES | Catalan | | 35 | ms_MY | Malay | | 36 | sl_SI | Slovenian | | 37 | lv_LV | Latvian | | 38 | ta_IN | Tamil | | 39 | bn_IN | Bengali | | 40 | et_EE | Estonian | | 41 | az_AZ | Azerbaijani | | 42 | sq_AL | Albanian | | 43 | sr_RS | Serbian | | 44 | kk_KZ | Kazakh | | 45 | ka_GE | Georgian | | 46 | tl_XX | Tagalog | | 47 | ur_PK | Urdu | | 48 | is_IS | Icelandic | | 49 | hy_AM | Armenian | | 50 | ml_IN | Malayalam | | 51 | mk_MK | Macedonian | | 52 | be_BY | Belarusian | | 53 | la_VA | Latin | | 54 | te_IN | Telugu | | 55 | eu_ES | Basque | | 56 | gl_ES | Galician | | 57 | mn_MN | Mongolian | | 58 | kn_IN | Kannada | | 59 | ne_NP | Nepali | | 60 | sw_KE | Swahili | | 61 | si_LK | Sinhala | | 62 | mr_IN | Marathi | | 63 | af_ZA | Afrikaans | | 64 | gu_IN | Gujarati | | 65 | cy_GB | Welsh | | 66 | eo_EO | Esperanto | | 67 | km_KH | Central Khmer | | 68 | ky_KG | Kirghiz | | 69 | uz_UZ | Uzbek | | 70 | 
ps_AF | Pashto | | 71 | pa_IN | Punjabi | | 72 | ga_IE | Irish | | 73 | ha_NG | Hausa | | 74 | am_ET | Amharic | | 75 | lo_LA | Lao | | 76 | ku_TR | Kurdish | | 77 | so_SO | Somali | | 78 | my_MM | Burmese | | 79 | or_IN | Oriya | | 80 | sa_IN | Sanskrit | | 5e87081e343cd0757ff931e37b38fc87 |
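Before calling `model.set_default_language(...)`, it can help to validate a language code against the adapter table above. A minimal sketch, where the `LANG_ADAPTERS` dict and `adapter_index` helper are illustrative (only a small subset of the 81 entries is shown):

```python
# Subset of the adapter table above, for illustration only.
LANG_ADAPTERS = {
    "en_XX": 0, "id_ID": 1, "vi_VN": 2, "ru_RU": 3,
    "de_DE": 8, "es_XX": 12, "sa_IN": 80,
}

def adapter_index(code: str) -> int:
    """Return the lang_id (adapter index) for a language code, or raise KeyError."""
    if code not in LANG_ADAPTERS:
        raise KeyError(f"No adapter for language code {code!r}")
    return LANG_ADAPTERS[code]

print(adapter_index("de_DE"))  # 8 (German)
```

The returned index matches the `lang_id` column; the code itself is what `set_default_language` expects.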
apache-2.0 | ['translation'] | false | opus-mt-en-ha * source languages: en * target languages: ha * OPUS readme: [en-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ha/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.eval.txt) | 7e1a6f190fc4a902d04d51922bd18840 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-ner_only_actions This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.0931 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.9844 | ceefab12cd7e90c63a3140c8ad86982a |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 15 | 0.0949 | 0.0 | 0.0 | 0.0 | 0.9844 | | No log | 2.0 | 30 | 0.0951 | 0.0 | 0.0 | 0.0 | 0.9844 | | No log | 3.0 | 45 | 0.0931 | 0.0 | 0.0 | 0.0 | 0.9844 | | 9c4d05d83e9d00a710411219c6fb43c5 |
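The all-zero precision/recall/F1 alongside 98.44% accuracy is the classic signature of a model that only predicts the majority "O" tag on an imbalanced tag set. A self-contained sketch with toy labels (not the actual evaluation data) showing how the two metrics diverge:

```python
# Toy token-level tags: 98 "O" tags and 2 entity tags; the model predicts all "O".
gold = ["O"] * 98 + ["B-ACT", "I-ACT"]
pred = ["O"] * 100

# Token accuracy is dominated by the majority class.
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Entity-level counts: nothing predicted, so nothing recovered.
tp = sum(1 for g, p in zip(gold, pred) if g != "O" and p == g)
fp = sum(1 for g, p in zip(gold, pred) if p != "O" and p != g)
fn = sum(1 for g, p in zip(gold, pred) if g != "O" and p != g)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0

print(accuracy, precision, recall)  # 0.98 0.0 0.0
```

High accuracy with zero F1 therefore says little about entity recognition quality here.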
mit | ['spacy', 'token-classification'] | false | --- tags: - spacy - token-classification language: - en model-index: - name: en_ner_fashion results: - task: name: NER type: token-classification metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F Score type: f_score value: 0.0 --- | Feature | Description | | --- | --- | | **Name** | `en_ner_fashion` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.0,<3.2.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | | 95d25c0322c1730a75c6e20ba4db58a7 |
afl-3.0 | [] | false | A MacBERTh model fine-tuned on SQuAD_v2. Hopefully, this will allow the model to perform well on QA tasks on historical texts.
Fine-tuning parameters:
```python
from transformers import TrainingArguments, SchedulerType

training_args = TrainingArguments(
output_dir="./results",
evaluation_strategy="epoch",
learning_rate=3e-5,
per_device_train_batch_size=64,
per_device_eval_batch_size=64,
num_train_epochs=2,
weight_decay=0.01,
lr_scheduler_type=SchedulerType.LINEAR,
warmup_ratio=0.2
)
```
Evaluation metrics on the validation set of SQuAD_v2:
```
{'exact': 49.49886296639434, 'f1': 53.9199170778635, 'total': 11873, 'HasAns_exact': 60.08771929824562, 'HasAns_f1': 68.94250598270429, 'HasAns_total': 5928, 'NoAns_exact': 38.940285954583686, 'NoAns_f1': 38.940285954583686, 'NoAns_total': 5945, 'best_exact': 50.5095595047587, 'best_exact_thresh': 0.0, 'best_f1': 51.75825524534494, 'best_f1_thresh': 0.0}
``` | 9d0630a328b3d42877619246bfa5097a |
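The overall `exact` score in SQuAD v2 is the example-weighted average of the HasAns and NoAns subsets; a quick arithmetic check against the numbers above:

```python
# Subset scores and sizes, taken from the evaluation output above.
has_ans_exact, has_ans_total = 60.08771929824562, 5928
no_ans_exact, no_ans_total = 38.940285954583686, 5945

overall = (has_ans_exact * has_ans_total + no_ans_exact * no_ans_total) / (
    has_ans_total + no_ans_total
)
print(overall)  # ~49.4989, matching the reported 'exact' of 49.49886296639434
```

The same weighted combination applies to the `f1` field.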
other | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers'] | false | If you want to use dreamlike models on your website/app/etc., check the license at the bottom first! Use the same prompts as you would for SD 1.5. Add **dreamlikeart** if the artstyle is too weak. Non-square aspect ratios work better for some prompts. If you want a portrait photo, try using a 2:3 or a 9:16 aspect ratio. If you want a landscape photo, try using a 3:2 or a 16:9 aspect ratio. Use slightly higher resolution for better results: 640x640px, 512x768px, 768x512px, etc. | 5fa8d8ef405db086c7764d2f4d7486df |
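The resolution advice above can be captured in a small helper that snaps a target aspect ratio to dimensions divisible by 64, which Stable Diffusion pipelines generally expect; `snap` and `snap_dims` are illustrative helpers, not part of any library:

```python
def snap(x: float, multiple: int = 64) -> int:
    """Round to the nearest multiple of 64 (SD-friendly dimensions)."""
    return max(multiple, round(x / multiple) * multiple)

def snap_dims(ratio_w: int, ratio_h: int, short_side: int = 512) -> tuple[int, int]:
    """(width, height) for a w:h aspect ratio with the shorter side fixed."""
    if ratio_w <= ratio_h:  # portrait or square
        return short_side, snap(short_side * ratio_h / ratio_w)
    return snap(short_side * ratio_w / ratio_h), short_side

print(snap_dims(2, 3))  # (512, 768) -- the portrait size suggested above
print(snap_dims(3, 2))  # (768, 512) -- the landscape size suggested above
```

For 2:3 and 3:2 this reproduces the 512x768 / 768x512 suggestions from the card.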
other | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers'] | false | We've just released Dreamlike Photoreal 2.0, check it out! [https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0) <img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview1.jpg" style="max-width: 400px;" width="100%"/> | 7c53bdd1b8d52cd9e003bfbbe85db109 |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2772 - F1: 0.8368 | 497cc75869294c1e5bd670e1b60e283d |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.581 | 1.0 | 191 | 0.3798 | 0.7573 | | 0.2625 | 2.0 | 382 | 0.2806 | 0.8260 | | 0.1748 | 3.0 | 573 | 0.2772 | 0.8368 | | 2188e4594ff0b4ae2f7fb70fcb57a2cb |
mit | ['generated_from_trainer'] | false | codeparrot-ds-sample-gpt-small-10epoch This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0943 | 4df350a7432117ad9ca5afc3c459f87b |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.29 | 0.94 | 1000 | 2.8452 | | 2.3155 | 1.88 | 2000 | 2.3659 | | 1.8817 | 2.82 | 3000 | 2.2085 | | 1.6245 | 3.77 | 4000 | 2.1260 | | 1.4314 | 4.71 | 5000 | 2.0705 | | 1.2698 | 5.65 | 6000 | 2.0603 | | 1.1281 | 6.59 | 7000 | 2.0599 | | 1.0108 | 7.53 | 8000 | 2.0769 | | 0.9167 | 8.47 | 9000 | 2.0870 | | 0.8551 | 9.42 | 10000 | 2.0943 | | 8a9608688e275236959e20b10bead08c |
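The validation loss above bottoms out around step 7000 and then rises while training loss keeps falling, the usual overfitting signature. A sketch of selecting the best checkpoint from such a log (the `log` list simply transcribes the table):

```python
# (step, validation_loss) pairs transcribed from the training-results table.
log = [
    (1000, 2.8452), (2000, 2.3659), (3000, 2.2085), (4000, 2.1260),
    (5000, 2.0705), (6000, 2.0603), (7000, 2.0599), (8000, 2.0769),
    (9000, 2.0870), (10000, 2.0943),
]
best_step, best_loss = min(log, key=lambda t: t[1])
print(best_step, best_loss)  # 7000 2.0599
```

In `transformers`, the same effect is what `load_best_model_at_end=True` in `TrainingArguments` automates.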
apache-2.0 | [] | false | This model is used to detect **Offensive Content** in **Tamil Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Tamil (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss. This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. A Genetic-Algorithm-based ensemble of test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model - 0.76, ensemble - 0.78) | 2853f2b40f88dc58e2e0d2c034da9c99 |
apache-2.0 | ['generated_from_trainer'] | false | finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3062 - Accuracy: 0.8833 - F1: 0.8852 | 3c2e96fb2d2cf9846c85c7a4753a06e7 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | aimersd2-5 Dreambooth model trained by Allenbv with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:  | bce8b9fb9f5795deee374f1080f76c9a |
mit | [] | false | Model Details **Model Description:** `openai-gpt` is a transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies. - **Developed by:** Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. See [associated research paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) and [GitHub repo](https://github.com/openai/finetune-transformer-lm) for model developers and contributors. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [MIT License](https://github.com/openai/finetune-transformer-lm/blob/master/LICENSE) - **Related Models:** [GPT2](https://huggingface.co/gpt2), [GPT2-Medium](https://huggingface.co/gpt2-medium), [GPT2-Large](https://huggingface.co/gpt2-large) and [GPT2-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) - [OpenAI Blog Post](https://openai.com/blog/language-unsupervised/) - [GitHub Repo](https://github.com/openai/finetune-transformer-lm) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt | 9d074d0b34703f49d43e88a5b952465f |
mit | [] | false | How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='openai-gpt') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model,'he said, when i was finished.'ah well,'said the man,'that's"}, {'generated_text': 'Hello, I\'m a language model, " she said. \n she reached the bottom of the shaft and leaned a little further out. it was'}, {'generated_text': 'Hello, I\'m a language model, " she laughed. " we call that a\'white girl.\'or as we are called by the'}, {'generated_text': 'Hello, I\'m a language model, " said mr pin. " an\'the ones with the funny hats don\'t. " the rest of'}, {'generated_text': 'Hello, I\'m a language model, was\'ere \'bout to do some more dancin \', " he said, then his voice lowered to'}] ``` Here is how to use this model in PyTorch: ```python from transformers import OpenAIGPTTokenizer, OpenAIGPTModel import torch tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt") model = OpenAIGPTModel.from_pretrained("openai-gpt") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` and in TensorFlow: ```python from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt") model = TFOpenAIGPTModel.from_pretrained("openai-gpt") inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") outputs = model(inputs) last_hidden_states = outputs.last_hidden_state ``` | eb53c64f5cba31b1df55706a5c30036c |
mit | [] | false | Downstream Use Potential downstream uses of this model include tasks that leverage language models. In the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf), the model developers discuss evaluations of the model for tasks including natural language inference (NLI), question answering, semantic similarity, and text classification. | 20c821eeb6e1d1135171aa40b1395e44 |
mit | [] | false | Misuse and Out-of-scope Use The model was not trained to produce factual or true representations of people or events; using the model to generate such content is therefore out of scope for its abilities. | 15c5ce032d748f21453f8a5432ea4518 |
mit | [] | false | Biases **CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by this model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='openai-gpt') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a teacher for the college he'}, {'generated_text': 'The man worked as a janitor at the club.'}, {'generated_text': 'The man worked as a bodyguard in america. the'}, {'generated_text': 'The man worked as a clerk for one of the'}, {'generated_text': 'The man worked as a nurse, but there was'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a medical intern but is a'}, {'generated_text': 'The woman worked as a midwife, i know that'}, {'generated_text': 'The woman worked as a prostitute in a sex club'}, {'generated_text': 'The woman worked as a secretary for one of the'}, {'generated_text': 'The woman worked as a nurse, but she had'}] ``` This bias may also affect fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. | 49c655d2ab5c0035e36b91abec38a7bb |
mit | [] | false | Risks and Limitations The model developers also wrote in a [blog post](https://openai.com/blog/language-unsupervised/) about risks and limitations of the model, including: > - **Compute Requirements:** Many previous approaches to NLP tasks train relatively small models on a single GPU from scratch. Our approach requires an expensive pre-training step - 1 month on 8 GPUs. Luckily, this only has to be done once and we’re releasing our model so others can avoid it. It is also a large model (in comparison to prior work) and consequently uses more compute and memory — we used a 37-layer (12 block) Transformer architecture, and we train on sequences of up to 512 tokens. Most experiments were conducted on 4 and 8 GPU systems. The model does fine-tune to new tasks very quickly which helps mitigate the additional resource requirements. > - **The limits and bias of learning about the world through text:** Books and text readily available on the internet do not contain complete or even accurate information about the world. Recent work ([Lucy and Gauthier, 2017](https://arxiv.org/abs/1705.11168)) has shown that certain kinds of information are difficult to learn via just text and other work ([Gururangan et al., 2018](https://arxiv.org/abs/1803.02324)) has shown that models learn and exploit biases in data distributions. > - **Still brittle generalization:** Although our approach improves performance across a broad range of tasks, current deep learning NLP models still exhibit surprising and counterintuitive behavior - especially when evaluated in a systematic, adversarial, or out-of-distribution way. Our approach is not immune to these issues, though we have observed some indications of progress. Our approach shows improved lexical robustness over previous purely neural approaches to textual entailment. On the dataset introduced in Glockner et al. 
(2018) our model achieves 83.75%, performing similarly to KIM, which incorporates external knowledge via WordNet. | 7fdcac8867c2e937b9fbca726bc08615 |
mit | [] | false | Training Data The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): > We use the BooksCorpus dataset ([Zhu et al., 2015](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zhu_Aligning_Books_and_ICCV_2015_paper.pdf)) for training the language model. It contains over 7,000 unique unpublished books from a variety of genres including Adventure, Fantasy, and Romance. Crucially, it contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information. | 258ede76ee1ac8fe39f2411e5441b83e |
mit | [] | false | Training Procedure The model developers [write](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf): > Our model largely follows the original transformer work [62]. We trained a 12-layer decoder-only transformer with masked self-attention heads (768 dimensional states and 12 attention heads). For the position-wise feed-forward networks, we used 3072 dimensional inner states. We used the Adam optimization scheme [27] with a max learning rate of 2.5e-4. The learning rate was increased linearly from zero over the first 2000 updates and annealed to 0 using a cosine schedule. We train for 100 epochs on minibatches of 64 randomly sampled, contiguous sequences of 512 tokens. Since layernorm [2] is used extensively throughout the model, a simple weight initialization of N (0, 0.02) was sufficient. We used a bytepair encoding (BPE) vocabulary with 40,000 merges [53] and residual, embedding, and attention dropouts with a rate of 0.1 for regularization. We also employed a modified version of L2 regularization proposed in [37], with w = 0.01 on all non bias or gain weights. For the activation function, we used the Gaussian Error Linear Unit (GELU) [18]. We used learned position embeddings instead of the sinusoidal version proposed in the original work. We use the ftfy library2 to clean the raw text in BooksCorpus, standardize some punctuation and whitespace, and use the spaCy tokenizer. See the paper for further details and links to citations. | 9481329227891c35ffe39bf3412fa8e9 |
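The quoted schedule (linear warmup to 2.5e-4 over the first 2000 updates, then cosine annealing to 0) can be sketched as follows; the total number of updates is not stated in the excerpt, so `total_steps` here is an assumed placeholder:

```python
import math

def lr_at(step: int, total_steps: int, max_lr: float = 2.5e-4, warmup: int = 2000) -> float:
    """Linear warmup to max_lr, then cosine annealing to 0, per the quoted setup."""
    if step < warmup:
        return max_lr * step / warmup
    progress = (step - warmup) / (total_steps - warmup)  # 0 at end of warmup, 1 at end
    return 0.5 * max_lr * (1.0 + math.cos(math.pi * progress))

# Starts at 0, peaks at max_lr when warmup ends, decays to 0 at the final step.
print(lr_at(0, 100_000), lr_at(2000, 100_000), lr_at(100_000, 100_000))
```

This is the same shape as the `cosine` scheduler with warmup available in modern training libraries.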
mit | [] | false | Evaluation The following evaluation information is extracted from the [associated blog post](https://openai.com/blog/language-unsupervised/). See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for further details. | 3658044e1baf2cbc2bee6a3961efd97e |
mit | [] | false | Testing Data, Factors and Metrics The model developers report that the model was evaluated on the following tasks and datasets using the listed metrics: - **Task:** Textual Entailment - **Datasets:** [SNLI](https://huggingface.co/datasets/snli), [MNLI Matched](https://huggingface.co/datasets/glue), [MNLI Mismatched](https://huggingface.co/datasets/glue), [SciTail](https://huggingface.co/datasets/scitail), [QNLI](https://huggingface.co/datasets/glue), [RTE](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Semantic Similarity - **Datasets:** [STS-B](https://huggingface.co/datasets/glue), [QQP](https://huggingface.co/datasets/glue), [MRPC](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Reading Comprehension - **Datasets:** [RACE](https://huggingface.co/datasets/race) - **Metrics:** Accuracy - **Task:** Commonsense Reasoning - **Datasets:** [ROCStories](https://huggingface.co/datasets/story_cloze), [COPA](https://huggingface.co/datasets/xcopa) - **Metrics:** Accuracy - **Task:** Sentiment Analysis - **Datasets:** [SST-2](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Linguistic Acceptability - **Datasets:** [CoLA](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy - **Task:** Multi Task Benchmark - **Datasets:** [GLUE](https://huggingface.co/datasets/glue) - **Metrics:** Accuracy | c909e7ac2974a3b273aff92bdef71c65 |
mit | [] | false | Results The model achieves the following results without any fine-tuning (zero-shot): | Task | TE | TE | TE |TE | TE | TE | SS | SS | SS | RC | CR | CR | SA | LA | MTB | |:--------:|:--:|:----------:|:-------------:|:-----:|:----:|:---:|:---:|:---:|:--:|:----:|:--------:|:----:|:----:|:----:|:----:| | Dataset |SNLI|MNLI Matched|MNLI Mismatched|SciTail| QNLI | RTE |STS-B| QQP |MRPC|RACE |ROCStories|COPA | SST-2| CoLA | GLUE | | |89.9| 82.1 | 81.4 |88.3 | 88.1 | 56.0|82.0 | 70.3|82.3|59.0 | 86.5 | 78.6 | 91.3 | 45.4 | 72.8 | | f0f5b98c0507428fa0ca704a12f106e0 |
mit | [] | false | Environmental Impact The model developers [report that](https://openai.com/blog/language-unsupervised/): > The total compute used to train this model was 0.96 petaflop days (pfs-days). > 8 P600 GPU's * 30 days * 12 TFLOPS/GPU * 0.33 utilization = .96 pfs-days Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). | 108f298c33d8fe99bbd0de13ed61d594 |
mit | [] | false | - **Hardware Type:** 8 P600 GPUs - **Hours used:** 720 hours (30 days) - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown | bad83151fb1a84400051b01f4229cccc |
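The quoted compute estimate is straightforward arithmetic (a pfs-day is one petaflop/s sustained for one day); redoing it with the stated factors lands just under the quoted 0.96, since 0.33 is itself a rounded utilization figure:

```python
# Factors taken from the quote above.
gpus = 8
days = 30
tflops_per_gpu = 12   # peak throughput per P600
utilization = 0.33

pfs_days = gpus * days * tflops_per_gpu * utilization / 1000  # TFLOPS -> PFLOPS
print(round(pfs_days, 2))  # 0.95, i.e. roughly the 0.96 pfs-days quoted
```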
mit | [] | false | Technical Specifications See the [associated paper](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. | b02dcd4f56b78f5e778f7cc82a6318a7 |
mit | [] | false | Citation Information ```bibtex @article{radford2018improving, title={Improving language understanding by generative pre-training}, author={Radford, Alec and Narasimhan, Karthik and Salimans, Tim and Sutskever, Ilya and others}, year={2018}, publisher={OpenAI} } ``` APA: *Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.* | efd14988f8dbf95e6db1ec3504346fcb |
cc-by-sa-4.0 | ['spacy', 'token-classification'] | false | UD v2.5 benchmarking pipeline for UD_Portuguese-Bosque | Feature | Description | | --- | --- | | **Name** | `pt_udv25_portuguesebosque_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | | 524717dbbcbe52e6cfc064386c473f2c |
cc-by-sa-4.0 | ['spacy', 'token-classification'] | false | Label Scheme <details> <summary>View label scheme (2079 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ADJ`, `ADP`, `ADP_ADV`, `ADP_DET`, `ADP_NUM`, `ADP_PRON`, `ADP_PROPN`, `ADV`, `ADV_PRON`, `ADV_PROPN`, `AUX`, `AUX_PRON`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PART_NOUN`, `PRON`, `PRON_PRON`, `PROPN`, `PROPN_DET`, `PROPN_PROPN`, `PUNCT`, `SCONJ`, `SCONJ_DET`, `SCONJ_PRON`, `SYM`, `VERB`, `VERB_PRON`, `X` | | **`morphologizer`** | `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Def\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=PUNCT`, `NumType=Card\|POS=NUM`, `POS=ADV`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `POS=ADP`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, `POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Gender=Unsp\|Number=Sing\|POS=PROPN`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|Polarity=Neg`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=X`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Unsp\|Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Number=Plur\|POS=AUX\|Person=3\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, 
`Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Art`, `POS=VERB\|VerbForm=Part`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `NumType=Ord\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Unsp\|Number=Plur\|POS=ADJ`, 
`Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Gender=Masc\|NumType=Mult\|Number=Sing\|POS=NUM`, `Gender=Unsp\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Unsp\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=PROPN\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `POS=AUX\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, 
`Case=Nom\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Unsp\|Number=Sing\|POS=PRON\|PronType=Rel`, `Number=Sing\|POS=DET\|PronType=Art`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Definite=Def\|POS=SCONJ\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Unsp\|POS=PRON\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=AUX`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `POS=INTJ`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=PRON\|PronType=Int`, 
`Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Acc\|Gender=Unsp\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Unsp\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Unsp\|POS=VERB\|PronType=Prs\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, 
`Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=AUX\|Person=3\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PART`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Unsp\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=ADV`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, 
`Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Number=Plur\|POS=VERB\|Person=1\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Gender=Masc\|POS=ADJ`, `POS=NOUN`, `POS=AUX\|VerbForm=Ger`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Unsp\|Number=Plur\|POS=PROPN`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, 
`Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Gender=Unsp\|POS=PRON\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Prs`, `Gender=Unsp\|Number=Unsp\|POS=PRON\|PronType=Rel`, `POS=VERB\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=PROPN\|PronType=Art`, `Case=Dat\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, 
`Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pqp\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=AUX\|Person=1\|Tense=Past`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Gender=Unsp\|Number=Unsp\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADV\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Gender=Unsp\|Number=Unsp\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=DET`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Unsp\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=X`, 
`Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=SCONJ`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Gender=Fem\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `POS=ADP\|PronType=Dem`, `Definite=Def\|Gender=Fem\|POS=ADP\|PronType=Art`, `POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=ADP\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `POS=DET`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, 
`Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Art`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=AUX\|Person=1\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Ind`, `Case=Dat\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=VERB\|Person=2\|PronType=Prs\|VerbForm=Inf`, `Gender=Unsp\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem,Masc\|Number=Sing\|POS=PROPN`, `Gender=Unsp\|Number=Unsp\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=NUM`, `POS=PRON\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Dem`, `POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=X`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|NumType=Sets\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=NOUN`, 
`Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Unsp\|POS=AUX\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Unsp\|Number=Plur\|POS=PRON\|PronType=Int`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=SCONJ\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Prs`, `Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Number=Sing\|POS=VERB`, `Number=Sing\|POS=DET`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `NumType=Mult\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Unsp\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, 
`Case=Acc\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Unsp\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Masc\|Number=Sing\|POS=ADV\|Polarity=Neg`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Unsp\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Unsp\|POS=NOUN`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=NOUN`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=SCONJ\|PronType=Art`, `POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=ADV\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Dat\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Gender=Unsp\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Unsp\|Number=Sing\|POS=DET\|PronType=Rel`, `Gender=Fem\|Number=Plur\|POS=VERB`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, 
`Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Gender=Unsp\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `NumType=Range\|POS=NUM`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Gender=Unsp\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Dat\|Gender=Masc\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Number=Unsp\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=SCONJ\|PronType=Dem`, `NumType=Frac\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, 
`Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Ind`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADV\|PronType=Rel`, `Case=Acc\|POS=VERB\|PronType=Prs\|VerbForm=Ger`, `Mood=Cnd\|POS=VERB\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf\|Voice=Pass`, `POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Unsp\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Number=Sing\|POS=X`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=ADV\|PronType=Int`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `POS=VERB`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, 
`Case=Acc\|Gender=Unsp\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Gender=Fem,Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Unsp\|Number=Unsp\|POS=ADV\|PronType=Int`, `Gender=Unsp\|Number=Sing\|POS=ADV\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `POS=PROPN`, `Case=Acc\|Gender=Masc\|POS=AUX\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Fem\|POS=AUX\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Fem\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Unsp\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=X`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Gender=Fem\|POS=DET\|PronType=Art`, `Gender=Unsp\|Number=Sing\|POS=ADV`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Unsp\|POS=PRON\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB`, 
`Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PROPN\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=VERB`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pqp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADV\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `POS=DET\|PronType=Ind`, `POS=SCONJ\|VerbForm=Ger`, `Mood=Cnd\|Number=Sing\|POS=VERB\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=VERB`, `Mood=Sub\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|POS=PRON\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|POS=PROPN`, `Gender=Fem\|Number=Plur\|POS=DET`, `NumType=Ord\|POS=NUM`, `POS=DET\|PronType=Int`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `POS=PART`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Inf`, 
`NumType=Card\|POS=ADP`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=SCONJ\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Art`, `Case=Dat\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `dislocated`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `3`, `4`, `6`, `8`, `9`, `11`, `13`, `15`, `17`, `20`, `22`, `24`, `14`, `7`, `26`, `28`, `30`, `32`, `34`, `36`, `38`, `40`, `42`, `44`, `45`, `48`, `53`, `54`, `55`, `57`, `58`, `60`, `62`, `65`, `66`, `67`, `70`, `72`, `74`, `76`, `79`, `83`, `85`, `87`, `89`, `91`, `95`, `99`, `101`, `102`, `104`, `106`, `108`, `110`, `113`, `115`, `117`, `119`, `120`, `122`, `124`, `125`, `126`, `128`, `130`, `132`, `134`, `136`, `138`, `141`, `142`, `144`, `147`, `150`, `152`, `154`, `155`, `159`, `162`, `163`, `165`, `166`, `169`, `171`, `172`, `174`, `175`, `178`, `180`, `181`, `184`, `186`, `189`, `191`, `193`, `195`, `198`, `200`, `111`, `202`, `204`, `207`, `209`, `212`, `214`, `216`, `218`, `220`, `221`, `223`, `224`, `226`, `228`, `230`, `232`, `234`, `236`, `239`, 
`242`, `244`, `245`, `246`, `247`, `249`, `251`, `252`, `253`, `256`, `257`, `259`, `261`, `263`, `267`, `269`, `270`, `271`, `273`, `277`, `278`, `281`, `282`, `283`, `285`, `286`, `288`, `289`, `290`, `292`, `293`, `295`, `297`, `298`, `300`, `302`, `303`, `305`, `307`, `309`, `310`, `311`, `313`, `314`, `316`, `319`, `168`, `322`, `323`, `326`, `327`, `329`, `331`, `333`, `335`, `336`, `338`, `341`, `343`, `345`, `347`, `348`, `350`, `351`, `354`, `356`, `359`, `361`, `363`, `364`, `365`, `366`, `367`, `369`, `373`, `376`, `378`, `379`, `380`, `381`, `383`, `384`, `386`, `389`, `392`, `394`, `395`, `396`, `398`, `400`, `403`, `405`, `407`, `409`, `410`, `412`, `415`, `416`, `417`, `418`, `419`, `420`, `422`, `424`, `429`, `431`, `432`, `438`, `439`, `441`, `442`, `445`, `448`, `449`, `450`, `452`, `454`, `457`, `458`, `461`, `463`, `465`, `468`, `469`, `470`, `473`, `475`, `477`, `478`, `481`, `484`, `485`, `486`, `488`, `491`, `495`, `497`, `499`, `503`, `506`, `507`, `508`, `509`, `510`, `511`, `513`, `514`, `516`, `517`, `519`, `521`, `522`, `523`, `525`, `528`, `530`, `533`, `534`, `536`, `538`, `540`, `541`, `542`, `544`, `545`, `547`, `549`, `551`, `552`, `554`, `555`, `558`, `559`, `560`, `562`, `563`, `565`, `566`, `570`, `572`, `579`, `582`, `583`, `585`, `586`, `587`, `590`, `592`, `594`, `595`, `597`, `599`, `601`, `603`, `606`, `608`, `609`, `611`, `612`, `614`, `615`, `616`, `619`, `621`, `622`, `625`, `626`, `627`, `629`, `630`, `631`, `633`, `634`, `637`, `638`, `639`, `640`, `642`, `644`, `646`, `647`, `652`, `653`, `656`, `657`, `659`, `660`, `661`, `664`, `666`, `669`, `671`, `672`, `673`, `674`, `675`, `677`, `678`, `680`, `682`, `685`, `687`, `689`, `691`, `692`, `693`, `695`, `699`, `701`, `702`, `703`, `706`, `707`, `709`, `710`, `711`, `712`, `714`, `716`, `718`, `719`, `720`, `721`, `724`, `725`, `729`, `730`, `732`, `735`, `738`, `740`, `742`, `744`, `746`, `749`, `750`, `751`, `754`, `756`, `760`, `762`, `767`, `769`, `771`, `774`, 
`776`, `778`, `780`, `781`, `784`, `785`, `787`, `788`, `789`, `791`, `793`, `794`, `795`, `798`, `800`, `801`, `803`, `804`, `806`, `808`, `810`, `811`, `812`, `814`, `816`, `819`, `820`, `823`, `824`, `825`, `828`, `829`, `832`, `833`, `835`, `836`, `839`, `840`, `844`, `845`, `847`, `850`, `851`, `853`, `854`, `855`, `858`, `861`, `862`, `863`, `865`, `868`, `871`, `873`, `875`, `877`, `879`, `880`, `881`, `882`, `883`, `884`, `885`, `887`, `889`, `892`, `894`, `895`, `537`, `896`, `898`, `899`, `902`, `904`, `905`, `908`, `909`, `912`, `914`, `916`, `917`, `920`, `921`, `922`, `924`, `925`, `928`, `929`, `930`, `931`, `933`, `936`, `939`, `940`, `942`, `943`, `945`, `948`, `949`, `951`, `953`, `956`, `957`, `960`, `961`, `963`, `964`, `965`, `966`, `969`, `970`, `971`, `973`, `976`, `977`, `979`, `981`, `983`, `985`, `987`, `988`, `990`, `991`, `993`, `994`, `995`, `996`, `997`, `998`, `1000`, `1001`, `1004`, `1006`, `1007`, `1009`, `1011`, `1013`, `1014`, `1015`, `1019`, `1021`, `1023`, `1025`, `1026`, `1029`, `1030`, `1033`, `1034`, `1036`, `1037`, `1039`, `1041`, `1042`, `1044`, `1046`, `1048`, `1050`, `1051`, `1054`, `1056`, `1057`, `1059`, `1061`, `1062`, `1064`, `1066`, `1067`, `1068`, `1069`, `1071`, `1072`, `1073`, `1074`, `1075`, `1077`, `1078`, `1079`, `1081`, `1083`, `1084`, `1085`, `1086`, `1088`, `1089`, `1092`, `1093`, `1097`, `1100`, `1101`, `1103`, `1104`, `1106`, `1108`, `1110`, `1114`, `1115`, `1117`, `1118`, `1119`, `1121`, `1123`, `1124`, `1126`, `1127`, `1128`, `1130`, `1133`, `1135`, `1136`, `1140`, `1143`, `1146`, `1148`, `1149`, `1151`, `1152`, `1155`, `1157`, `1158`, `1160`, `1163`, `1164`, `1165`, `1167`, `1168`, `1170`, `1172`, `1176`, `1177`, `1178`, `1180`, `1182`, `1184`, `1186`, `1187`, `1189`, `1190`, `1193`, `1196`, `1198`, `1202`, `1203`, `1204`, `1205`, `1206`, `1207`, `1208`, `1209`, `1210`, `1211`, `1214`, `1215`, `1216`, `1218`, `1219`, `1220`, `1221`, `1223`, `1225`, `1226`, `1228`, `1229`, `1230`, `1233`, `1234`, `1236`, 
`1237`, `1238`, `1239`, `1240`, `1242`, `1244`, `1247`, `1248`, `1249`, `1250`, `1251`, `1254`, `1256`, `1257`, `1258`, `1260`, `1262`, `1263`, `1266`, `1271`, `1272`, `1273`, `1274`, `1275`, `1277`, `1278`, `1279`, `1280`, `1283`, `1285`, `1287`, `1288`, `1290`, `1293`, `1294`, `1296`, `1299`, `1301`, `1302`, `1304`, `1307`, `1308`, `1309`, `1311`, `1312`, `1314`, `1315`, `1317`, `1320`, `1322`, `1324`, `1325`, `1326`, `1329`, `1330`, `1332`, `1333`, `1334`, `1336`, `1338`, `1339`, `1340`, `1341`, `1344`, `1345`, `1346`, `1348`, `1350`, `1351`, `1352`, `1354`, `1356`, `1358`, `1359`, `1360`, `1361`, `1362`, `1363`, `1367`, `1370`, `1371`, `1373`, `1375`, `1377`, `1378`, `1379`, `1381`, `1382`, `1383`, `1385`, `1386`, `1388`, `1389`, `1393`, `1395`, `1399`, `1401`, `1402`, `1403`, `1404`, `1405`, `1407`, `1408`, `1411`, `1413`, `1417`, `1418`, `1419`, `1420`, `1421`, `1423`, `1424`, `1425`, `1429`, `1430`, `1431`, `1433`, `1434`, `1436`, `1437`, `1438`, `1439`, `1442`, `1444`, `1446`, `1447`, `1449`, `1451`, `1453`, `1454`, `1455`, `1458`, `1461`, `1463`, `1464`, `1465`, `1467`, `1468`, `1469`, `1470`, `1471`, `1473`, `1476`, `1477`, `1478`, `1479`, `1482`, `1483`, `1484`, `1489`, `1491`, `1492`, `1494`, `1496`, `1497`, `1499`, `1502`, `1504`, `1505`, `1506`, `1507`, `1508`, `1509`, `1511`, `1514`, `1515`, `1517`, `1520`, `1521`, `1524`, `1525`, `1528`, `1529`, `1530`, `1532`, `1533`, `1534`, `1536`, `1538`, `1539`, `1541`, `1543`, `1544`, `1545`, `1546`, `1547`, `1548`, `1552`, `1556`, `1558`, `1560`, `1562`, `1563`, `1566`, `1567`, `1569`, `1570`, `1572`, `1574`, `1577`, `761`, `1579`, `1583`, `1585`, `1586`, `1587`, `1590`, `1592`, `1593`, `1595`, `1596`, `1597`, `1599`, `1603`, `1605`, `1607`, `1609`, `1610`, `1612`, `1614`, `1615`, `1617`, `1618`, `1620`, `1621`, `1622`, `1625`, `1627`, `1629`, `1630`, `1631`, `1633`, `1634`, `1636`, `1637`, `1638`, `1640`, `1641`, `1643`, `1644`, `1646`, `1647`, `1648`, `1651`, `1652`, `1657`, `1658`, `1659`, `1661`, `1662`, 
`1663`, `1664`, `1666`, `1669`, `1672`, `1673`, `1675`, `1676`, `1677`, `1679`, `1682`, `1684`, `1409`, `1685`, `1686`, `1687`, `1688`, `1690`, `1692`, `1693`, `1694`, `1695`, `1697`, `1699`, `1700`, `1704`, `1707`, `1708`, `1709`, `1711`, `1712`, `1715`, `1716`, `1717`, `1718`, `1719`, `1721`, `1722`, `1723`, `1725`, `1726`, `1729`, `1730`, `1732`, `1733`, `1734`, `1735`, `1737`, `1738`, `1741`, `1743`, `1744`, `1746`, `1747`, `1748`, `1750`, `1752`, `1754`, `1755`, `1756`, `1758`, `1759`, `1760`, `1762`, `1765`, `1766`, `1768`, `1769`, `1770`, `1773`, `1774`, `1775`, `1777`, `1778`, `1781`, `1782`, `1783`, `1785`, `1786`, `1787`, `219`, `1788`, `1789`, `1791`, `1792`, `1793`, `1795`, `1799`, `1800`, `1801`, `1802`, `1803`, `1805`, `1806`, `1808`, `1809`, `1811`, `1812`, `1814`, `1815`, `1816`, `1821`, `1823`, `1824`, `1825`, `1826`, `1829`, `1830`, `1831`, `1832`, `1833`, `1835`, `1838`, `1839`, `1840`, `1842`, `1843`, `1845`, `1846`, `1848`, `1849`, `1850`, `1851`, `1855`, `1856`, `1857`, `1859`, `1861`, `1862`, `1864`, `1866`, `1867`, `1869`, `421`, `1870`, `1872`, `1873`, `1874`, `1875`, `1878`, `1879`, `1880`, `1882`, `1883`, `1884`, `1885`, `1888`, `1891`, `1894`, `1895`, `1898`, `1901`, `1903`, `1904`, `1906`, `1907`, `1910`, `1912`, `1915`, `1917`, `1918`, `1920`, `1921`, `1922`, `1924`, `1926`, `1927`, `1930`, `1932`, `1933`, `1936`, `1938`, `1940`, `1941`, `1942`, `1943`, `1945`, `1947`, `1949`, `1951`, `1952`, `1953`, `1954`, `1956`, `1957`, `1958`, `1960`, `1961`, `1963`, `1964`, `1966`, `1968`, `1971`, `1973`, `1974`, `1975`, `1977`, `1979`, `1981`, `1983`, `1985`, `1986`, `1987`, `1988`, `792`, `1990`, `790`, `1992`, `1994`, `1996`, `1998`, `1999`, `2000`, `2001`, `2002`, `2003`, `2005`, `2006`, `2008`, `2010`, `2011`, `2012`, `2014`, `2016`, `2017`, `2018`, `2019`, `2021`, `2022`, `2023`, `2024`, `2025`, `2026`, `2028`, `2029`, `2031`, `2034`, `2036`, `2038`, `2041`, `2042`, `2044`, `2045`, `2046`, `2050`, `2051`, `2052`, `2055`, `2056`, `2057`, 
`2059`, `2060`, `2061`, `2062`, `2064`, `2066`, `2068`, `2069`, `2070`, `2072`, `2073`, `2075`, `2076`, `2078`, `2079`, `2081`, `2083`, `2084`, `2086`, `2088`, `2089`, `2091`, `2093`, `2095`, `2097`, `2098`, `2099`, `2101`, `2102`, `2103`, `2104`, `2106`, `2107`, `2108`, `2109`, `2110`, `2111`, `2112`, `2114`, `2116`, `2117`, `2118`, `2119`, `2120`, `2121`, `2122`, `2123`, `2124`, `2125`, `2126`, `2127`, `1584`, `2128`, `2130`, `2131`, `2132`, `2134`, `2137`, `2138`, `2139`, `2141`, `2144`, `2145`, `2146`, `2147`, `2150`, `2151`, `2153`, `2154`, `2155`, `2156`, `2157`, `2159`, `2160`, `2161`, `2163`, `2164`, `2165`, `2166`, `2167`, `2168`, `2169`, `2170`, `2173`, `2174`, `2175`, `2176`, `2177`, `2179`, `2182`, `2185`, `2187`, `2188`, `2189`, `2191`, `2193`, `2194`, `2195`, `2196`, `2197`, `2198`, `2200`, `2202`, `2203`, `2204`, `2205`, `2206`, `2207`, `2208`, `2209`, `2210`, `2211`, `2212`, `2213`, `2216`, `2217`, `2219`, `2221`, `2224`, `2227`, `2229`, `2230`, `2232`, `2233`, `2234`, `2235`, `2237`, `2239`, `2240`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2248`, `2249`, `2250`, `2251`, `2253`, `2254`, `2255`, `2257`, `2258`, `2260`, `2261`, `2262`, `2263`, `2264`, `2265`, `2266`, `2267`, `2268`, `2269`, `2270`, `2271`, `2272`, `2273`, `2274`, `2275`, `2277`, `2278`, `2281`, `2282`, `2283`, `2284`, `2285`, `2287`, `2288`, `2290`, `2291`, `2292`, `2293`, `2294`, `2297`, `2298`, `2299`, `2300`, `2302`, `2304`, `2305`, `2307`, `2308`, `2309`, `2310`, `2312`, `2313`, `2314`, `2315`, `2316`, `2317`, `2318`, `2319`, `2321`, `2322`, `2323`, `2327`, `2329`, `2331`, `2333`, `2335`, `2337`, `2338`, `2339`, `2341`, `2342`, `2343`, `2346`, `2348`, `2349`, `2350`, `2351`, `2352`, `2353`, `37`, `2354`, `2355`, `2357`, `2358`, `2359`, `2360`, `2361`, `2362`, `2364`, `2365`, `2367`, `2368`, `2369`, `2370`, `2372`, `2375`, `2376`, `2378`, `2379`, `2380`, `2381`, `2382`, `2383`, `2384`, `2385`, `2386`, `2389`, `2390`, `2392`, `2393`, `2394`, `2395`, `2398`, `2399`, `2400`, 
`2402`, `2403`, `2405`, `2406`, `2407`, `2408`, `2409`, `2410`, `2413`, `2415`, `2416`, `2417`, `2418`, `2419`, `2420`, `2422`, `2424`, `2427`, `2428`, `2429`, `2430`, `2431`, `2432`, `2433`, `2435`, `2437`, `1962`, `2438`, `2439`, `2440`, `2442`, `2443`, `2444`, `2445` | </details> | ed49b2eae1889ff38bb5bc6e6c0a05de |
cc-by-sa-4.0 | ['spacy', 'token-classification'] | false | Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.92 | | `TOKEN_P` | 99.93 | | `TOKEN_R` | 99.91 | | `TOKEN_ACC` | 99.99 | | `SENTS_F` | 95.82 | | `SENTS_P` | 95.40 | | `SENTS_R` | 96.25 | | `TAG_ACC` | 98.09 | | `POS_ACC` | 98.14 | | `MORPH_ACC` | 97.34 | | `DEP_UAS` | 93.85 | | `DEP_LAS` | 91.19 | | `LEMMA_ACC` | 98.00 | | f0a779fbfce7b63f36328c9df1b7a60f |
mit | [] | false | model by martinma This is a Stable Diffusion model fine-tuned on the hockey player concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks hockey** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: | b1bc4e81dc5b0d17e88fd0b431896e88 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2_murad This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the cvbn dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2006 - eval_wer: 0.2084 - eval_runtime: 556.4634 - eval_samples_per_second: 8.985 - eval_steps_per_second: 0.562 - epoch: 12.32 - step: 28800 | a5dbe0d538553da9eee9c63c3b2203b8 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2206 - Accuracy: 0.9255 - F1: 0.9254 | d2a86e875aef1395f2f07153db8ecc54 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8523 | 1.0 | 250 | 0.3186 | 0.908 | 0.9064 | | 0.247 | 2.0 | 500 | 0.2206 | 0.9255 | 0.9254 | | c8ce967ddf95f8978dafca0ad155ba04 |
apache-2.0 | ['generated_from_trainer'] | false | finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3045 - Accuracy: 0.88 - F1: 0.8831 | 36a8fc0db0f89f1deb0b29aea725c6dd |
cc-by-4.0 | ['question generation'] | false | Model Card of `research-backup/t5-small-squadshifts-vanilla-nyt-qg` This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: nyt) via [`lmqg`](https://github.com/asahi417/lm-question-generation). | ceecdd0102f2c332ab0e1b7411f3dff3 |
cc-by-4.0 | ['question generation'] | false | Overview - **Language model:** [t5-small](https://huggingface.co/t5-small) - **Language:** en - **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (nyt) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) | ab3354ebaed0e19dd65023b5be9dc605 |
cc-by-4.0 | ['question generation'] | false | model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "research-backup/t5-small-squadshifts-vanilla-nyt-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` | 497c623e5d7db4847b75b1009e8c6181 |
cc-by-4.0 | ['question generation'] | false | Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-squadshifts-vanilla-nyt-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:---------------------------------------------------------------------------| | BERTScore | 49.72 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_1 | 4.84 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_2 | 1.7 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_3 | 0.82 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_4 | 0.47 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | METEOR | 4.08 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | MoverScore | 48.95 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | ROUGE_L | 3.43 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | b2823035c837401b732cdf8897da3418 |
cc-by-4.0 | ['question generation'] | false | Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squadshifts - dataset_name: nyt - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-small - max_length: 512 - max_length_output: 32 - epoch: 1 - batch: 32 - lr: 1e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-squadshifts-vanilla-nyt-qg/raw/main/trainer_config.json). | 327566553f68c6d73aabbdd045336efc |
apache-2.0 | ['generated_from_trainer'] | false | t5-small-finetuned-en-to-it This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the ccmatrix dataset. It achieves the following results on the evaluation set: - Loss: 2.2698 - Bleu: 7.3298 - Gen Len: 62.3753 | c9ad2c22cd5b505fa7d7bbc3fbdf6edf |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 - mixed_precision_training: Native AMP | 091d2a08de0f049ed8a3d02d90130c83 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 125 | 3.0010 | 2.7294 | 56.4513 | | No log | 2.0 | 250 | 2.8999 | 2.3228 | 81.4993 | | No log | 3.0 | 375 | 2.8281 | 2.3065 | 92.3353 | | 3.3202 | 4.0 | 500 | 2.7722 | 2.5982 | 91.8093 | | 3.3202 | 5.0 | 625 | 2.7254 | 2.9279 | 89.0907 | | 3.3202 | 6.0 | 750 | 2.6839 | 3.0747 | 89.2827 | | 3.3202 | 7.0 | 875 | 2.6470 | 3.207 | 87.948 | | 3.0355 | 8.0 | 1000 | 2.6132 | 3.355 | 85.2487 | | 3.0355 | 9.0 | 1125 | 2.5835 | 3.8401 | 80.578 | | 3.0355 | 10.0 | 1250 | 2.5552 | 4.2905 | 75.818 | | 3.0355 | 11.0 | 1375 | 2.5323 | 4.3866 | 75.2433 | | 2.8903 | 12.0 | 1500 | 2.5079 | 4.5687 | 74.906 | | 2.8903 | 13.0 | 1625 | 2.4881 | 4.7844 | 71.5773 | | 2.8903 | 14.0 | 1750 | 2.4668 | 4.876 | 71.68 | | 2.8903 | 15.0 | 1875 | 2.4485 | 5.1292 | 70.118 | | 2.7891 | 16.0 | 2000 | 2.4322 | 5.3297 | 68.894 | | 2.7891 | 17.0 | 2125 | 2.4161 | 5.555 | 68.2293 | | 2.7891 | 18.0 | 2250 | 2.4010 | 5.7113 | 67.2907 | | 2.7891 | 19.0 | 2375 | 2.3892 | 5.9105 | 66.6287 | | 2.713 | 20.0 | 2500 | 2.3756 | 6.0057 | 66.112 | | 2.713 | 21.0 | 2625 | 2.3643 | 6.3118 | 64.6193 | | 2.713 | 22.0 | 2750 | 2.3533 | 6.476 | 64.31 | | 2.713 | 23.0 | 2875 | 2.3432 | 6.7102 | 63.5467 | | 2.6584 | 24.0 | 3000 | 2.3342 | 6.7604 | 63.6567 | | 2.6584 | 25.0 | 3125 | 2.3253 | 6.8418 | 63.6573 | | 2.6584 | 26.0 | 3250 | 2.3180 | 6.9165 | 63.5893 | | 2.6584 | 27.0 | 3375 | 2.3120 | 7.0217 | 63.1033 | | 2.616 | 28.0 | 3500 | 2.3056 | 6.9148 | 63.598 | | 2.616 | 29.0 | 3625 | 2.2987 | 6.9961 | 63.6267 | | 2.616 | 30.0 | 3750 | 2.2935 | 7.2238 | 62.8373 | | 2.616 | 31.0 | 3875 | 2.2892 | 7.1906 | 62.7793 | | 2.587 | 32.0 | 4000 | 2.2849 | 7.2052 | 63.126 | | 2.587 | 33.0 | 4125 | 2.2815 | 7.3272 | 62.526 | | 2.587 | 34.0 | 4250 | 2.2782 | 7.3603 | 62.4313 | | 2.587 | 35.0 | 4375 | 2.2756 | 
7.3072 | 62.6307 | | 2.5673 | 36.0 | 4500 | 2.2737 | 7.3586 | 62.1633 | | 2.5673 | 37.0 | 4625 | 2.2718 | 7.3485 | 62.358 | | 2.5673 | 38.0 | 4750 | 2.2707 | 7.3406 | 62.298 | | 2.5673 | 39.0 | 4875 | 2.2700 | 7.3233 | 62.42 | | 2.5591 | 40.0 | 5000 | 2.2698 | 7.3298 | 62.3753 | | 51055764461071245bf0f65dfbedb6f8 |
apache-2.0 | ['grammatical error correction', 'text2text', 't5'] | false | This model is an implementation of the paper [A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/pdf/2106.03830.pdf) from Google, which reports the state-of-the-art score on the task of Grammatical Error Correction (GEC). We implement the T5-small version, with the F_0.5 score reported in the paper (60.70). To effectively use the "Hosted inference API", write "gec: [YOUR SENTENCE HERE]". In order to use the model, look at the following snippet: ```python from transformers import T5ForConditionalGeneration, T5Tokenizer model = T5ForConditionalGeneration.from_pretrained("Unbabel/gec-t5_small") tokenizer = T5Tokenizer.from_pretrained('t5-small') sentence = "I like to swimming" tokenized_sentence = tokenizer('gec: ' + sentence, max_length=128, truncation=True, padding='max_length', return_tensors='pt') corrected_sentence = tokenizer.decode( model.generate( input_ids = tokenized_sentence.input_ids, attention_mask = tokenized_sentence.attention_mask, max_length=128, num_beams=5, early_stopping=True, )[0], skip_special_tokens=True, clean_up_tokenization_spaces=True ) print(corrected_sentence) ``` | 803e21a04f45d4c0dbc24ee0eb6efd57 |
apache-2.0 | ['setfit', 'sentence-transformers', 'text-classification'] | false | fathyshalab/massive_social-roberta-large-v1-2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. | 8c7d35a37c38f50acfb0323892a32538 |
apache-2.0 | ['AnimeGanv2'] | false | Model Description Transforming photos of real-world scenes into anime-style images is a meaningful and challenging task in terms of computer vision and artistic style transfer. AnimeGANv2_Paprika Made by Asher Chan. The official code is available [here](https://github.com/TachibanaYoshino/AnimeGANv2) | 3fc4a85187a23c36d5786832896ca139 |
cc-by-4.0 | ['espnet', 'image-to-text', 'ocr', 'handwriting-recognition'] | false | Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 2169367022b8939d22005e8cf45a65bb20bc0768 pip install -e . cd egs2/iam/ocr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/iam_handwriting_ocr ``` <!-- Generated by scripts/utils/show_asr_result.sh --> | aae1b9b012a20ae4c73a7527c3c53e7e |
cc-by-4.0 | ['espnet', 'image-to-text', 'ocr', 'handwriting-recognition'] | false | Environments - date: `Mon Nov 7 13:40:17 EST 2022` - python version: `3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0]` - espnet version: `espnet 202209` - pytorch version: `pytorch 1.10.0` - Git hash: `2169367022b8939d22005e8cf45a65bb20bc0768` - Commit date: `Thu Nov 3 20:38:03 2022 -0400` | 686c2578a447f47c334f602bfecfae56 |
cc-by-4.0 | ['espnet', 'image-to-text', 'ocr', 'handwriting-recognition'] | false | ASR config <details><summary>expand</summary> ``` config: conf/train_asr_conformer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_extracted_en_char ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 35197 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 200 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 64 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_extracted_en_char/train/speech_shape - exp/asr_stats_extracted_en_char/train/text_shape.char valid_shape_file: - exp/asr_stats_extracted_en_char/valid/speech_shape - exp/asr_stats_extracted_en_char/valid/text_shape.char batch_type: folded valid_batch_type: null fold_length: - 800 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - 
dump/extracted/train/feats.scp - speech - kaldi_ark - - dump/extracted/train/text - text - text valid_data_path_and_name_and_type: - - dump/extracted/valid/feats.scp - speech - kaldi_ark - - dump/extracted/valid/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - <space> - e - t - a - o - n - i - r - s - h - l - d - c - u - m - f - p - g - y - w - b - . - ',' - v - k - '-' - T - '''' - M - I - A - '"' - S - P - H - B - C - W - N - G - x - R - E - L - F - '0' - D - '1' - j - O - q - U - K - '!' - '3' - '9' - ( - z - ) - ':' - V - ; - '5' - '2' - J - '8' - Y - '4' - '6' - '?' - ' | 0c35d706dc0741a3b5793e3be49b5cf8 |
cc-by-4.0 | ['espnet', 'image-to-text', 'ocr', 'handwriting-recognition'] | false | ' - '&' - '7' - / - '*' - Q - X - Z - + - <sos/eos> init: xavier_uniform input_size: 100 ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: null frontend_conf: {} specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_extracted_en_char/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 1024 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: '202209' distributed: true ``` </details> | 3b9aec3849d12bbe0197657134b956cd |
other | [] | false | Tile and Grout Cleaning Richardson TX https://carpetcleaning-richardson.com/tile-and-grout-cleaning.html (972) 454-9815 We have a Cheap Tile Cleaning service that brightens your floor and gives your home a clean look if you've been putting off cleaning your tiles because of the cost. Carpet cleaning in Richardson, Texas, doesn't just clean carpets. We cover everything when it comes to cleaning your home, from your ducts and vents to your tile and grout. | 0d32ea38f1078b6de6558586093c65cb |
apache-2.0 | ['speech'] | false | Wav2Vec2-Base [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec | 9214c6d0d92df21b208f5ba5917a9aff |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper Base Pashto - Augmented This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the google/fleurs dataset. It achieves the following results on the evaluation set: - Loss: 0.7901 - Wer: 59.6482 - Cer: 27.0947 | ecfa1679aebe2d48f5af505aff0ca087 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 30 - training_steps: 600 - mixed_precision_training: Native AMP | ac8ff79c6263c60176a8e5a8f4b842e1 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 1.1215 | 2.38 | 100 | 0.9444 | 68.3354 | 30.2694 | | 0.8268 | 4.75 | 200 | 0.8267 | 63.2440 | 28.2636 | | 0.6912 | 7.14 | 300 | 0.7959 | 62.2443 | 28.2123 | | 0.5725 | 9.52 | 400 | 0.7896 | 60.5859 | 27.6920 | | 0.5231 | 11.89 | 500 | 0.7884 | 59.8574 | 27.1273 | | 0.4752 | 14.28 | 600 | 0.7901 | 59.6482 | 27.0947 | | 530a491db9b9936a28b2a1ab16c469d9 |
apache-2.0 | ['summarization', 'generated_from_trainer'] | false | mt5-small-finetuned-17jan-1 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6637 - Rouge1: 8.3942 - Rouge2: 0.8333 - Rougel: 8.2847 - Rougelsum: 8.3183 | f230a38f96a37f27969fde29645e1670 |
apache-2.0 | ['summarization', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 | 471c40d78b7dce992b37423bebfae045 |
apache-2.0 | ['summarization', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 11.5311 | 1.0 | 60 | 3.3693 | 3.5755 | 0.6 | 3.6 | 3.5118 | | 4.9804 | 2.0 | 120 | 2.9852 | 5.1928 | 0.9667 | 5.205 | 5.1941 | | 4.0171 | 3.0 | 180 | 2.8622 | 5.8468 | 0.5889 | 5.9029 | 5.8766 | | 3.7179 | 4.0 | 240 | 2.7056 | 8.4114 | 0.5 | 8.5056 | 8.4553 | | 3.514 | 5.0 | 300 | 2.7171 | 9.3353 | 0.8333 | 9.2709 | 9.3029 | | 3.4154 | 6.0 | 360 | 2.7082 | 8.6179 | 0.4167 | 8.5622 | 8.5483 | | 3.3356 | 7.0 | 420 | 2.6801 | 8.3942 | 0.8333 | 8.2847 | 8.3183 | | 3.3008 | 8.0 | 480 | 2.6757 | 8.2384 | 0.4167 | 8.1169 | 8.1087 | | 3.2493 | 9.0 | 540 | 2.6646 | 8.2384 | 0.4167 | 8.1169 | 8.1087 | | 3.2307 | 10.0 | 600 | 2.6637 | 8.3942 | 0.8333 | 8.2847 | 8.3183 | | 1eaad733559fe4d0868f5bee892c2054 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6145, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 | 26067036ffa26ccf1a2a92f57741fe12 |
other | [] | false | Training data The training data contains around 2500 ebooks in various genres (the "Pike" dataset), a CYOA dataset called "CYS" and 50 Asian "Light Novels" (the "Manga-v1" dataset). Most parts of the dataset have been prepended with the following text: `[Genre: <genre1>, <genre2>]` This dataset has been cleaned in the same way as fairseq-dense-13B-Nerys-v2 | 64fef09f71bd8032a7f727b67e25a2e4 |
other | [] | false | How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/OPT-2.7B-Nerys-v2') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` | 0ccdd645758ce7ccf4405cc614ae1860 |
apache-2.0 | ['summarisation', 'generated_from_trainer'] | false | distilbart-xsum-6-6-finetuned-bbc-news This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2624 - Rouge1: 62.1927 - Rouge2: 54.4754 - Rougel: 55.868 - Rougelsum: 60.9345 | ec2d71192437af3cfbd5b178c1ccfefe |
apache-2.0 | ['summarisation', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 | 8b5445c35611dfe70066dfaee7778afa |
apache-2.0 | ['summarisation', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.4213 | 1.0 | 445 | 0.2005 | 59.4886 | 51.7791 | 53.5126 | 58.3405 | | 0.1355 | 2.0 | 890 | 0.1887 | 61.7474 | 54.2823 | 55.7324 | 60.5787 | | 0.0891 | 3.0 | 1335 | 0.1932 | 61.1312 | 53.103 | 54.6992 | 59.8923 | | 0.0571 | 4.0 | 1780 | 0.2141 | 60.8797 | 52.6195 | 54.4402 | 59.5298 | | 0.0375 | 5.0 | 2225 | 0.2318 | 61.7875 | 53.8753 | 55.5068 | 60.5448 | | 0.0251 | 6.0 | 2670 | 0.2484 | 62.3535 | 54.6029 | 56.2804 | 61.031 | | 0.0175 | 7.0 | 3115 | 0.2542 | 61.6351 | 53.8248 | 55.6399 | 60.3765 | | 0.0133 | 8.0 | 3560 | 0.2624 | 62.1927 | 54.4754 | 55.868 | 60.9345 | | 0d1c0c9e12d439d6e6b844d80ac9b2ba |
cc-by-4.0 | ['seq2seq'] | false | 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴 This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be finetuned on a specific task before being used for anything. The following settings were used in training: ```bash ./run_t5_mlm_flax_streaming.py \ --output_dir="./" \ --model_type="t5" \ --config_name="./" \ --tokenizer_name="./" \ --dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \ --max_seq_length="512" \ --weight_decay="0.01" \ --per_device_train_batch_size="32" \ --per_device_eval_batch_size="32" \ --learning_rate="8e-3" \ --warmup_steps="0" \ --overwrite_output_dir \ --cache_dir /mnt/disks/flaxdisk/cache/ \ --num_train_epochs="5" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --logging_steps="500" \ --num_train_steps="1000000" \ --num_eval_samples="5000" \ --save_steps="5000" \ --eval_steps="5000" \ --preprocessing_num_workers 96 \ --adafactor \ --push_to_hub ``` | 8648c7de347a15bccdcf244a824330fb
afl-3.0 | [] | false | This model is used for detecting **abusive speech** in **Bengali**. It is finetuned from the MuRIL model on a Bengali abusive speech dataset. The model was trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive) LABEL_0 :-> Normal LABEL_1 :-> Abusive | 031d752e3cd008929dfe85f1bb61188d
apache-2.0 | ['generated_from_trainer'] | false | vit-base-patch16-224-in21k-finetuned-cifar10 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.2564 - Accuracy: 0.9788 | 8940d4467de32a0f039ad8cff624b51b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4291 | 1.0 | 390 | 0.2564 | 0.9788 | | fee5bce1a9431c251f49b505d30c8f36 |
openrail | [] | false | Generating unprompted oracle characters using the oracle dataset.  | 772068e2d9d9308297cfd32fe0a57c49 |
cc-by-sa-4.0 | ['spacy', 'token-classification'] | false | ro_core_news_md Romanian pipeline optimized for CPU. Components: tok2vec, tagger, parser, lemmatizer (trainable_lemmatizer), senter, ner, attribute_ruler. | Feature | Description | | --- | --- | | **Name** | `ro_core_news_md` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` | | **Vectors** | 500000 keys, 20000 unique vectors (300 dimensions) | | **Sources** | [UD Romanian RRT v2.8](https://github.com/UniversalDependencies/UD_Romanian-RRT) (Barbu Mititelu, Verginica; Irimia, Elena; Perez, Cenel-Augusto; Ion, Radu; Simionescu, Radu; Popel, Martin)<br />[RONEC - the Romanian Named Entity Corpus (ca9ce460)](https://github.com/dumitrescustefan/ronec) (Dumitrescu, Stefan Daniel; Avram, Andrei-Marius; Morogan, Luciana; Toma; Stefan)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | | 6cca6bb64e353ebffaa087a45d3218ba |
cc-by-sa-4.0 | ['spacy', 'token-classification'] | false | Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.80 | | `TOKEN_P` | 99.67 | | `TOKEN_R` | 99.57 | | `TOKEN_F` | 99.59 | | `TAG_ACC` | 96.29 | | `SENTS_P` | 96.14 | | `SENTS_R` | 96.01 | | `SENTS_F` | 96.07 | | `DEP_UAS` | 88.56 | | `DEP_LAS` | 83.41 | | `LEMMA_ACC` | 95.32 | | `POS_ACC` | 93.68 | | `MORPH_ACC` | 94.78 | | `MORPH_MICRO_P` | 98.74 | | `MORPH_MICRO_R` | 95.62 | | `MORPH_MICRO_F` | 96.89 | | `ENTS_P` | 74.87 | | `ENTS_R` | 76.22 | | `ENTS_F` | 75.54 | | 361836d68de6a94a0e214a0288b4e727 |
apache-2.0 | ['generated_from_trainer'] | false | cvt-13-384-22k-fv-finetuned-memes This model is a fine-tuned version of [microsoft/cvt-13-384-22k](https://huggingface.co/microsoft/cvt-13-384-22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5761 - Accuracy: 0.8315 - Precision: 0.8302 - Recall: 0.8315 - F1: 0.8292 | bb6c158d6bca29b69aafdfa19206bfc1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.3821 | 0.99 | 20 | 1.2780 | 0.4969 | 0.5083 | 0.4969 | 0.4458 | | 1.0785 | 1.99 | 40 | 0.8633 | 0.6669 | 0.6658 | 0.6669 | 0.6500 | | 0.8862 | 2.99 | 60 | 0.7110 | 0.7218 | 0.7258 | 0.7218 | 0.7013 | | 0.665 | 3.99 | 80 | 0.5515 | 0.8045 | 0.8137 | 0.8045 | 0.8050 | | 0.6056 | 4.99 | 100 | 0.5956 | 0.7960 | 0.8041 | 0.7960 | 0.7846 | | 0.4779 | 5.99 | 120 | 0.6229 | 0.7937 | 0.7945 | 0.7937 | 0.7857 | | 0.4554 | 6.99 | 140 | 0.5355 | 0.8099 | 0.8126 | 0.8099 | 0.8086 | | 0.4249 | 7.99 | 160 | 0.5447 | 0.8269 | 0.8275 | 0.8269 | 0.8236 | | 0.4313 | 8.99 | 180 | 0.5530 | 0.8153 | 0.8140 | 0.8153 | 0.8132 | | 0.423 | 9.99 | 200 | 0.5346 | 0.8238 | 0.8230 | 0.8238 | 0.8223 | | 0.3997 | 10.99 | 220 | 0.5413 | 0.8338 | 0.8347 | 0.8338 | 0.8338 | | 0.4095 | 11.99 | 240 | 0.5999 | 0.8207 | 0.8231 | 0.8207 | 0.8177 | | 0.3979 | 12.99 | 260 | 0.5632 | 0.8284 | 0.8255 | 0.8284 | 0.8250 | | 0.3408 | 13.99 | 280 | 0.5725 | 0.8207 | 0.8198 | 0.8207 | 0.8196 | | 0.3828 | 14.99 | 300 | 0.5631 | 0.8277 | 0.8258 | 0.8277 | 0.8260 | | 0.3595 | 15.99 | 320 | 0.6005 | 0.8308 | 0.8297 | 0.8308 | 0.8275 | | 0.3789 | 16.99 | 340 | 0.5840 | 0.8300 | 0.8271 | 0.8300 | 0.8273 | | 0.3545 | 17.99 | 360 | 0.5983 | 0.8246 | 0.8226 | 0.8246 | 0.8222 | | 0.3472 | 18.99 | 380 | 0.5795 | 0.8416 | 0.8382 | 0.8416 | 0.8390 | | 0.355 | 19.99 | 400 | 0.5761 | 0.8315 | 0.8302 | 0.8315 | 0.8292 | | 2087242c47b9856920b708318310c57b |
apache-2.0 | ['bert-large-portuguese-cased', 'semantic role labeling', 'finetuned', 'dependency parsing'] | false | Model description This model is the [`neuralmind/bert-large-portuguese-cased`](https://huggingface.co/neuralmind/bert-large-portuguese-cased) fine-tuned first on the Universal Dependencies Portuguese dataset and then fine-tuned on the PropBank.Br data. This model is part of a project that also produced the following models: * [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base) * [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large) * [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base) * [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large) * [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base) * [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base) * [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large) * [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base) * [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base) * [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large) * [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base) * [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large) * [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large) * [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large) For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). | 4a4577a059ab2db54a4b7fdce207b91e
apache-2.0 | ['bert-large-portuguese-cased', 'semantic role labeling', 'finetuned', 'dependency parsing'] | false | How to use To use the transformers portion of this model: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("liaad/ud_srl-pt_bertimbau-large") model = AutoModel.from_pretrained("liaad/ud_srl-pt_bertimbau-large") ``` To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). | a7930e465f37300bb390843911465469 |
apache-2.0 | ['bert-large-portuguese-cased', 'semantic role labeling', 'finetuned', 'dependency parsing'] | false | Training procedure The model was trained on the Universal Dependencies Portuguese dataset; then on the CoNLL formatted OntoNotes v5.0; then on Portuguese semantic role labeling data (PropBank.Br) using 10-fold Cross-Validation. The 10 resulting models were tested on the folds as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt). | 61b243644e778bda4554ae9594738a8d |
apache-2.0 | ['bert-large-portuguese-cased', 'semantic role labeling', 'finetuned', 'dependency parsing'] | false | Eval results | Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) | | --------------- | ------ | ----- | | `srl-pt_bertimbau-base` | 76.30 | 73.33 | | `srl-pt_bertimbau-large` | 77.42 | 74.85 | | `srl-pt_xlmr-base` | 75.22 | 72.82 | | `srl-pt_xlmr-large` | 77.59 | 73.84 | | `srl-pt_mbert-base` | 72.76 | 66.89 | | `srl-en_xlmr-base` | 66.59 | 65.24 | | `srl-en_xlmr-large` | 67.60 | 64.94 | | `srl-en_mbert-base` | 63.07 | 58.56 | | `srl-enpt_xlmr-base` | 76.50 | 73.74 | | `srl-enpt_xlmr-large` | **78.22** | 74.55 | | `srl-enpt_mbert-base` | 74.88 | 69.19 | | `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 | | `ud_srl-pt_xlmr-large` | 77.69 | 74.91 | | `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** | | 97398747a7213ebf5342b21cfa981b3e |
mit | [] | false | canary cap on Stable Diffusion This is the `<canary-cap>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`:      | b2c8b7d3c4a68d8327ad7eb179e3a6c8 |
apache-2.0 | ['automatic-speech-recognition', 'en'] | false | exp_w2v2r_en_xls-r_accent_us-5_england-5_s69 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 3e01b99753eb7a92427c76f77e27cfd4 |
apache-2.0 | ['generated_from_trainer'] | false | trialzz This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1097 | 381fd74aa522c628ab9947a7d1beb544 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 113 | 2.2090 | | No log | 2.0 | 226 | 2.1168 | | No log | 3.0 | 339 | 2.1097 | | 8da9ebb9ba2f7ecfa09a81cd75b22538 |
mit | [] | false | huayecai820 greyscale on Stable Diffusion This is the `<huayecaigreyscale-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:      | 06bd56b5844c889bcf592b35a0c93569 |
mit | [] | false | model by kellempxt This is the Stable Diffusion model fine-tuned on the evangelion mech unit 01 concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **rendering of sks evangelion mech** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept:         | 5258185bc7c32bf58615163638d7883a
apache-2.0 | ['generated_from_trainer'] | false | HateXplain-top10-majority-annotator This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2282 - Accuracy: 0.6493 | 8a749ff3a3eca2c142fb6e03a3582454 |
apache-2.0 | [] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 4 - gradient_accumulation_steps: 20 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 | 9fb5eabc20102b208e8e23228f09d89d
mit | ['translation', 'generated_from_trainer'] | false | m2m100_418M-fr This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.7021 - Bleu: 51.1340 | 28675cddbe0689f2e5f2530d2fc35267 |
mit | ['translation', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.749 | 1.0 | 23645 | 0.7021 | 51.1344 | | 1f96bf4cfb8935d984c6792e3e028a81 |
apache-2.0 | ['generated_from_keras_callback'] | false | georgivelkov/bert-finetuned-squad_v2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Squad_v2 dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5716 - Validation Loss: 0.0 - Epoch: 4 | a2e76fec8729bf25a592f26f745b9cfc |
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20585, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | 1f2d34697128de3010244bc55040bc95 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.3480 | 0.0 | 0 | | 0.8160 | 0.0 | 1 | | 0.6012 | 0.0 | 2 | | 0.5722 | 0.0 | 3 | | 0.5716 | 0.0 | 4 | | 2ed26fbe5fb3073f01f73c965cb1b46d |
mit | [] | false | KOJIMA Ayami on Stable Diffusion This is the `<KOJIMA>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:      | 37d6b2e91c4b72a76366112169847c92 |