Dataset columns:

| column | type |
|:--|:--|
| license | string (2–30 chars) |
| tags | string (2–513 chars) |
| is_nc | bool (1 class) |
| readme_section | string (201–597k chars) |
| hash | string (32 chars) |
apache-2.0
['generated_from_trainer', 'irish']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 63 | 0.4902 | 0.5579 | 0.5269 | 0.5420 | 0.8458 |
| No log | 2.0 | 126 | 0.3227 | 0.7169 | 0.7417 | 0.7291 | 0.8991 |
| No log | 3.0 | 189 | 0.2720 | 0.7895 | 0.7839 | 0.7867 | 0.9186 |
| No log | 4.0 | 252 | 0.2585 | 0.8128 | 0.8296 | 0.8211 | 0.9264 |
| No log | 5.0 | 315 | 0.2468 | 0.8191 | 0.8363 | 0.8276 | 0.9307 |
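The F1 column is the harmonic mean of the Precision and Recall columns, which makes the table easy to sanity-check. A quick sketch, using the epoch-5 row above:

```python
# F1 is the harmonic mean of precision and recall:
# F1 = 2 * P * R / (P + R)
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Epoch-5 row of the table above
p, r = 0.8191, 0.8363
print(round(f1_score(p, r), 4))  # 0.8276, matching the reported F1
```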
9f7696b640a309c784a238a9d1079678
creativeml-openrail-m
['text-to-image']
false
I'm a digital artist learning to work with these new tools; this is my first style model. I'm on Instagram: @ashenhard84 and Twitter: @ashenhard. This model was trained on 85 images for 8500 steps at 1e-6 in Shivam Shrirao's Google Colab. I think the potential of this model lies in merging it with others. The token is **Ashenhard style**.

**Generated by the model without merge:**

![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/modelalone1.png)
![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/modelalone2.png)
![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/modelalone3.png)
![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/modelalone4.png)

**Generated by the model merged with (A) Anything V3 at 0.4 - (B) Ashenhard:**

![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/modelanything1.png)
![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/modelanything2.png)

**Testing Img2Img with the model+anything:**

![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/modelanythingimg2img.png)

**Generated by the model merged with (A) Ashenhard at 0.4 - (B) F222:**

![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/ModelF222.png)
![Samples](https://huggingface.co/Ashenhard/Ashenhard-style/resolve/main/ModelF222-2.png)
0bf13fe0d3c505b60146ba6d05ea488c
mit
['generated_from_trainer']
false
bert-base-french-europeana-cased-squad-fr

This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.7031
a336fe4ddb9c3c25a39086ef1966dc35
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9069 | 1.0 | 3539 | 1.7853 |
| 1.6263 | 2.0 | 7078 | 1.7031 |
b280f6c1d97a29bc0c3c347977c65e8d
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased_fold_2_ternary

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.5810
- F1: 0.7620
ce14f94d65bff938ffc2b59f5252e996
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 294 | 0.5886 | 0.7239 |
| 0.557 | 2.0 | 588 | 0.5085 | 0.7524 |
| 0.557 | 3.0 | 882 | 0.6332 | 0.7530 |
| 0.2456 | 4.0 | 1176 | 0.8749 | 0.7161 |
| 0.2456 | 5.0 | 1470 | 1.0601 | 0.7371 |
| 0.1112 | 6.0 | 1764 | 1.1885 | 0.7451 |
| 0.0484 | 7.0 | 2058 | 1.3027 | 0.7240 |
| 0.0484 | 8.0 | 2352 | 1.4647 | 0.7259 |
| 0.0259 | 9.0 | 2646 | 1.4476 | 0.7322 |
| 0.0259 | 10.0 | 2940 | 1.4826 | 0.7388 |
| 0.0164 | 11.0 | 3234 | 1.5869 | 0.7333 |
| 0.0109 | 12.0 | 3528 | 1.5954 | 0.7539 |
| 0.0109 | 13.0 | 3822 | 1.5810 | 0.7620 |
| 0.0082 | 14.0 | 4116 | 1.7165 | 0.7335 |
| 0.0082 | 15.0 | 4410 | 1.8152 | 0.7414 |
| 0.004 | 16.0 | 4704 | 1.7411 | 0.7474 |
| 0.004 | 17.0 | 4998 | 1.8692 | 0.7355 |
| 0.0034 | 18.0 | 5292 | 1.8727 | 0.7303 |
| 0.0009 | 19.0 | 5586 | 1.9813 | 0.7305 |
| 0.0009 | 20.0 | 5880 | 1.9764 | 0.7391 |
| 0.0012 | 21.0 | 6174 | 2.0170 | 0.7291 |
| 0.0012 | 22.0 | 6468 | 2.0240 | 0.7391 |
| 0.0004 | 23.0 | 6762 | 2.0311 | 0.7352 |
| 0.0014 | 24.0 | 7056 | 2.0174 | 0.7334 |
| 0.0014 | 25.0 | 7350 | 2.0282 | 0.7381 |
7a9237e504f3be05d8b05fcb10af42fd
mit
[]
false
A fine-tuned ruBERT based on sberbank-ai/ruBert-base. Sample size: 4. Epochs: 4.

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="Den4ikAI/rubert_large_squad_2",
    tokenizer="Den4ikAI/rubert_large_squad_2"
)

predictions = qa_pipeline({
    'context': "Пушкин родился 6 июля 1799 года",
    'question': "Когда родился Пушкин?"
})

print(predictions)
```
1ad602024b069804f267131bfc47a8cc
cc
['generated_from_trainer']
false
racism-finetuned-detests

This model is a fine-tuned version of [davidmasip/racism](https://huggingface.co/davidmasip/racism) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.0150
- Accuracy: 0.8560
a4c4e05546843890efd3a6aa795991fd
cc
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2659 | 1.0 | 153 | 0.3250 | 0.8429 |
| 0.1191 | 2.0 | 306 | 0.5344 | 0.8380 |
| 0.0074 | 3.0 | 459 | 0.8188 | 0.8396 |
| 0.0001 | 4.0 | 612 | 0.9264 | 0.8462 |
| 0.0001 | 5.0 | 765 | 0.9551 | 0.8462 |
| 0.0001 | 6.0 | 918 | 0.9771 | 0.8527 |
| 0.0001 | 7.0 | 1071 | 0.9937 | 0.8527 |
| 0.0001 | 8.0 | 1224 | 1.0054 | 0.8560 |
| 0.0 | 9.0 | 1377 | 1.0126 | 0.8560 |
| 0.0001 | 10.0 | 1530 | 1.0150 | 0.8560 |
2689292ad9df756fb2b07255d33fe1fa
creativeml-openrail-m
[]
false
Model info
---

This is a DreamBooth model trained with the [FloralMarble](https://huggingface.co/datasets/spaablauw/FloralMarble_dataset) dataset on top of Stable Diffusion 1.5; all credits to [spaablauw](https://huggingface.co/spaablauw) for the original images. I left several models uploaded: all the intermediate steps plus two anime models that I merged into. I would recommend trying [the 4000 steps model](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/FloralMarble_step_4000.ckpt) or the [7000 steps one](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/FloralMarble_step_7000.ckpt); it depends a bit on what you want, and I had really good results with both. For img2img the 7000 step version is better.

[Download Eimis Merge](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/EimisAnimeDiffusion_1-0v_0-FloralMarble_step_3000.safetensors)

[Download Anything Merge](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/Anything-V3.0_0-FloralMarble_step_3000_1.safetensors)

Use whatever VAE you want.
---
bd8f6dddb3e1f341aee1390d8a095607
creativeml-openrail-m
[]
false
Examples, download images to get prompts from exif data

![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0002-3659297088.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0004-3659297088.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0012-2092274985.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0013-2092274985.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0023-774684095.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0046-4269222975.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0055-2404365075.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/xy_grid-0003-3279396972.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/xy_grid-0004-1720742584.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/xy_grid-0006-1034387134.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0072-2870034878.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0071-2870034878.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0069-2870034878.png)
![comparison_image](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/grid-0004-1540360593.png)
---
c94641fac44e3040562d0eca86f21ffd
creativeml-openrail-m
[]
false
Tag list

[Get the tag list the images had here](https://huggingface.co/N75242/FloralMarbles_Model/resolve/main/tags.txt)

I used "flrmrbl" as a unique token, so it should activate the model's training data. "floral marble" is also present in all images, but it's more generic, so probably less powerful. As an alternative, use "in the style of flrmrbl" or "flrmrbl style". Have fun!
9758668dd5349f0d177042c295f1d858
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-wikitext2

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.9947
4d45eae81f36fef427b3e6a9d600b574
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 285 | 2.0524 |
| 2.2183 | 2.0 | 570 | 1.9742 |
| 2.2183 | 3.0 | 855 | 1.9947 |
2c1cc72ad89286f0193e0d9246f9e337
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
The 'Unico' models

<img src=https://i.imgur.com/5KfDOik.png width=100% height=100%>

Unico is a series of custom mixed models, based on the Inizio Unico and AbyssOrange2 models with U-Net merging, and supports the .safetensors format only. [WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)-friendly.
b14d1dfa198f670536313e2a42509f34
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Summary

This model repository currently includes 5 main models:

1. | Model: A | Model: B | Merge Weight | Base alpha | Merge Name |
   | --- | --- | --- | --- | --- |
   | [Inizio Fantasma+Inizio Inseguitore+Inizio Foschia](https://huggingface.co/Cinnamomo/inizio) | [Inizio Replicante+Inizio Skinjob+Inizio Deckard](https://huggingface.co/Cinnamomo/inizio) | weighted, M=0.66666666+M=0.66666666 | N/A | *Unico* |

   Unico is another form of [Inizio Unico](https://huggingface.co/Cinnamomo/inizio).

2. | Model: A | Model: B | Merge Weight | Base alpha | Merge Name |
   | --- | --- | --- | --- | --- |
   | [Inizio Unico](https://huggingface.co/Cinnamomo/inizio) | [AbyssOrange2 SFW](https://huggingface.co/WarriorMama777/OrangeMixs) | weighted, M=0.75 | N/A | *Unico Arancia* |

   Unico Arancia ('Orange🍊') is the model closest to AbyssOrange2 SFW. Anime~Semi-realistic.

3. | Model: A | Model: B | U-Net Merge Weight | Base alpha | Merge Name |
   | --- | --- | --- | --- | --- |
   | Unico Arancia | [Openniji](https://huggingface.co/Korakoe/OpenNiji) | 1,1,1,1,0,0,1,1,0,0,0,1,0,0,0,0,1,1,1,0,0,0,0,1,1 | 0 | *Unico Bergamotto* |

   Unico Bergamotto ('Bergamot🍊') is a significantly improved version of Unico Arancia for lighting and hand details. Anime~Semi-realistic.

4. | Model: A | Model: B | U-Net Merge Weight | Base alpha | Merge Name |
   | --- | --- | --- | --- | --- |
   | Unico Vaniglia | [Openniji](https://huggingface.co/Korakoe/OpenNiji) | 1,1,1,1,0,0,1,1,0,0,0,1,0,0,0,0,1,1,1,0,0,0,0,1,1 | 0 | *Unico Vaniglia 1.5* |

   Unico Vaniglia ('Vanilla🍦') 1.5 is a significantly improved version of Unico Vaniglia for lighting and hand details. Anime~Semi-realistic.

- NOTE: The other models have been moved to the legacy folder.
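For readers unfamiliar with checkpoint merging, the weighted merges above blend every tensor of the two models linearly; a minimal sketch, with toy lists standing in for real torch state-dict tensors (block-weighted U-Net merges simply use a different M per U-Net block):

```python
# Weighted merge: each parameter of the result is (1 - M) * A + M * B.
# M = 0.75 is the weight used for Unico Arancia above.
def merge_weighted(params_a, params_b, m):
    return [(1 - m) * a + m * b for a, b in zip(params_a, params_b)]

layer_a = [0.2, -0.4, 1.0]   # toy weights from model A
layer_b = [0.6, 0.0, -1.0]   # toy weights from model B
merged = merge_weighted(layer_a, layer_b, 0.75)
print(merged)
```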
59898ad0608113e8de9a932099708783
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Basic prompts for anime

```
"txt2img/Prompt/value": "(best quality, extreme intricate detailed, octane render, very delicate cinematic light, colourful), (/*place tags*/), (solo girl/*character tags*/), (/*pose tags*/), (big breasts, big pelvis, slim waist, long legs, best ratio four finger and one thumb, /*body tags*/), (beautiful eyes and smooth radiant face, bishoujo), (/*colour of hair tag*/ hair, /*colour of eyes*/ eyes, thick lips, lip gloss), (/*clothing tags*/)",
"txt2img/Negative prompt/value": "(nsfw, worst quality, low quality:1.4), (greyscale), (fingers(missing, fused, interlocked, abnormal, too many, bad anatomy, fused, fusion, lose, bad detailed, mutated), digit(extra, fewer), hands(greater than 4 fingers, less than 4 fingers, cropped, mutated):1.4), (fat, chubby, curvy, watermark, signature:1.4), (3d, realistic)"
```

- Variational Autoencoder (VAE): [SD MSE 840k.vae.safetensors](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors)
- Clip Skip: 2
- Resolution: 1024x576 w/ HighRes. Fix
- HighRes. Fix: R-ESRGAN General WDN 4xV3; upscale by 1.25
3e59ef300ea3513f0a35243089cf2a5b
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
3

> <img src=https://i.imgur.com/aYIyVFJ.png width=100% height=100%>
>
> <img src=https://i.imgur.com/pKNd2XO.png width=100% height=100%>
>
> <img src=https://i.imgur.com/GknH4e0.png width=100% height=100%>
>
> <img src=https://i.imgur.com/rVblL4d.png width=100% height=100%>
>
> ▲ Unico Arancia
>
> <img src=https://i.imgur.com/8vCjbUK.png width=100% height=100%>
>
> <img src=https://i.imgur.com/HKvXAFx.png width=100% height=100%>
>
> ▲ Unico Bergamotto
c5bd055452454ec4e206ce6b4c0f5de1
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
License Information

This model follows the Creative ML Open RAIL-M: [Stable Diffusion License](https://huggingface.co/spaces/CompVis/stable-diffusion-license). That said, you may use it however you want; I don't like setting such restrictions.
4173c7de29d1580465d5bf675b2788b8
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
false
LoRA text2image fine-tuning - https://huggingface.co/erkam/sd-pokemon-model-lora

These are LoRA adaptation weights for https://huggingface.co/erkam/sd-pokemon-model-lora. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)
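As a rough sketch of what LoRA adaptation weights are (illustrative only, not the diffusers implementation): each adapted layer behaves like W + (alpha/r) * B @ A, where A and B are small low-rank matrices holding the trained deltas:

```python
# LoRA: the effective weight is W + scale * B @ A, with rank r much
# smaller than the layer dimensions and scale = alpha / r.
def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, alpha, r):
    delta = matmul(B, A)  # full-size update built from two small factors
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]      # toy 2x2 base weight
A = [[1.0, 2.0]]                  # r=1, in=2
B = [[0.5], [0.25]]               # out=2, r=1
adapted = apply_lora(W, A, B, alpha=1.0, r=1)
print(adapted)
```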
2d88b13845904000637fddfb0bfcd8a1
mit
['generated_from_trainer']
false
TExAS-SQuAD-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-de dataset. It achieves the following results on the evaluation set:
- Exact match: 61.45%
- F1-score: 66.12%
5605357bb7f16f1c2846a825d6e471e4
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
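The total_train_batch_size above follows directly from the other values; a one-line check:

```python
# effective batch = per-device batch x gradient accumulation steps
# (x number of devices, which is 1 here)
train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the value listed above
```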
cae079940de05f71ea0344a8c9d7a21c
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8084 | 1.0 | 4233 | 1.5897 |
| 1.5696 | 2.0 | 8466 | 1.5478 |
| 1.4196 | 3.0 | 12699 | 1.5754 |
b1ad4e2ea7b3e7c93e4a5e9d82a047b1
apache-2.0
['generated_from_trainer']
false
swin-tiny-patch4-window7-224-finetuned-image_quality

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset. It achieves the following results on the evaluation set:
- Loss: 0.5242
- Accuracy: 0.9091
fa59963d6f27b330222f535e109dd859
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6762 | 0.6364 |
| No log | 2.0 | 2 | 0.6309 | 0.7273 |
| No log | 3.0 | 3 | 0.6095 | 0.6364 |
| No log | 4.0 | 4 | 0.5775 | 0.6364 |
| No log | 5.0 | 5 | 0.5443 | 0.8182 |
| No log | 6.0 | 6 | 0.5242 | 0.9091 |
| No log | 7.0 | 7 | 0.5149 | 0.8182 |
| No log | 8.0 | 8 | 0.5094 | 0.8182 |
| No log | 9.0 | 9 | 0.5038 | 0.8182 |
| 0.4095 | 10.0 | 10 | 0.4992 | 0.8182 |
a562ef62f095dac8a1052bbb1434c456
mit
['generated_from_trainer']
false
smalldata-microsoft-deberta-base-mnli-eng-only-sentiment-single-finetuned-memes

This model is a fine-tuned version of [jayantapaul888/twitter-data-microsoft-deberta-base-mnli-sentiment-finetuned-memes](https://huggingface.co/jayantapaul888/twitter-data-microsoft-deberta-base-mnli-sentiment-finetuned-memes) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7400
- Accuracy: 0.8816
- Precision: 0.8946
- Recall: 0.8937
- F1: 0.8937
5ab1981f6a2f3e9ebd0768da915fff75
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
6d00e7e5d15474f3656bee96ed8ccbc0
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 378 | 0.2962 | 0.8764 | 0.8917 | 0.8881 | 0.8884 |
| 0.3387 | 2.0 | 756 | 0.2803 | 0.8831 | 0.8950 | 0.8942 | 0.8946 |
| 0.1693 | 3.0 | 1134 | 0.4289 | 0.8764 | 0.8912 | 0.8892 | 0.8886 |
| 0.0772 | 4.0 | 1512 | 0.5436 | 0.8690 | 0.8822 | 0.8823 | 0.8822 |
| 0.0772 | 5.0 | 1890 | 0.6566 | 0.8831 | 0.8960 | 0.8947 | 0.8949 |
| 0.024 | 6.0 | 2268 | 0.7400 | 0.8816 | 0.8946 | 0.8937 | 0.8937 |
e193c728b05987a39de7a8e0eb6270d8
apache-2.0
['generated_from_trainer']
false
bigbird-pegasus-large-arxiv-finetuned-pubmed

This model is a fine-tuned version of [google/bigbird-pegasus-large-arxiv](https://huggingface.co/google/bigbird-pegasus-large-arxiv) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set:
- Loss: 1.6049
- Rouge1: 45.4807
- Rouge2: 20.0199
- Rougel: 28.3621
- Rougelsum: 41.4618
- Gen Len: 219.144
f357f4bcebede468a2c5f86056460d3e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.594 | 1.0 | 500 | 1.9879 | 33.6364 | 13.5074 | 21.4286 | 29.7158 | 189.014 |
| 1.9146 | 2.0 | 1000 | 1.6494 | 44.0056 | 19.0069 | 27.5142 | 40.0492 | 210.528 |
| 1.7378 | 3.0 | 1500 | 1.6213 | 44.7071 | 19.3559 | 27.6806 | 40.6124 | 213.596 |
| 1.692 | 4.0 | 2000 | 1.6081 | 45.1505 | 19.7355 | 28.06 | 41.0108 | 213.674 |
| 1.6656 | 5.0 | 2500 | 1.6049 | 45.4807 | 20.0199 | 28.3621 | 41.4618 | 219.144 |
52ba76791e1b251fd6feee95b6449f3a
apache-2.0
['generated_from_trainer']
false
opus-mt-en-ru-finetuned_v2

This model is a fine-tuned version of [kazandaev/opus-mt-en-ru-finetuned_v2](https://huggingface.co/kazandaev/opus-mt-en-ru-finetuned_v2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.8471
- Bleu: 37.5148
- Gen Len: 29.8495
0399a4751985b8602744348e5afa0231
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 49
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
1006af7d06e3c0cc3071c8617648bb50
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.7688 | 1.0 | 50906 | 0.8533 | 37.1941 | 29.8644 |
| 0.764 | 2.0 | 101812 | 0.8504 | 37.1506 | 29.8481 |
| 0.7637 | 3.0 | 152718 | 0.8485 | 37.3499 | 29.7743 |
| 0.7593 | 4.0 | 203624 | 0.8477 | 37.4428 | 29.8165 |
| 0.7579 | 5.0 | 254530 | 0.8471 | 37.5148 | 29.8495 |
0b785f4e20bb5e9f37115ae68a9cf912
apache-2.0
['generated_from_trainer']
false
Graphcore/bert-large-uncased

Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency for training and running models on Graphcore's IPUs - a completely new kind of massively parallel processor built to accelerate machine intelligence. Learn more about training Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).

Through Hugging Face Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files that make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset, and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
a4a519322cf3371c06a3c00e79eb06c6
apache-2.0
['generated_from_trainer']
false
Model description

BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as sequence classification, named entity recognition, question answering, multiple choice and masked LM.

It was trained with two pretraining objectives: masked language modelling (MLM) and next sentence prediction (NSP). First, MLM differs from traditional language modelling, which sees words one after another: it lets the model learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations. This pre-trained representation reduces the need for engineering task-specific architectures, and BERT achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks.
3dae4bc7650bc84d1255c72093fa0637
apache-2.0
['generated_from_trainer']
false
Intended uses & limitations

This model is a pre-trained BERT-Large trained in two phases on the [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) and [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) datasets.
6b828f1b45299ed4cca47fb3a0ff0394
apache-2.0
['generated_from_trainer']
false
Training and evaluation data

Trained on Wikipedia datasets:
- [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128)
- [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512)
31ee396f128142777447cafb970a014d
apache-2.0
['generated_from_trainer']
false
Training procedure

Trained with the MLM and NSP pre-training scheme from [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962). Trained on 64 Graphcore Mk2 IPUs using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore).

Command lines:

Phase 1:
```
python examples/language-modeling/run_pretraining.py \
  --config_name bert-large-uncased \
  --tokenizer_name bert-large-uncased \
  --ipu_config_name Graphcore/bert-large-ipu \
  --dataset_name Graphcore/wikipedia-bert-128 \
  --do_train \
  --logging_steps 5 \
  --max_seq_length 128 \
  --max_steps 10550 \
  --is_already_preprocessed \
  --dataloader_num_workers 64 \
  --dataloader_mode async_rebatched \
  --lamb \
  --lamb_no_bias_correction \
  --per_device_train_batch_size 8 \
  --gradient_accumulation_steps 512 \
  --pod_type pod64 \
  --learning_rate 0.006 \
  --lr_scheduler_type linear \
  --loss_scaling 32768 \
  --weight_decay 0.01 \
  --warmup_ratio 0.28 \
  --config_overrides "layer_norm_eps=0.001" \
  --ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \
  --output_dir output-pretrain-bert-large-phase1
```

Phase 2:
```
python examples/language-modeling/run_pretraining.py \
  --config_name bert-large-uncased \
  --tokenizer_name bert-large-uncased \
  --model_name_or_path ./output-pretrain-bert-large-phase1 \
  --ipu_config_name Graphcore/bert-large-ipu \
  --dataset_name Graphcore/wikipedia-bert-512 \
  --do_train \
  --logging_steps 5 \
  --max_seq_length 512 \
  --max_steps 2038 \
  --is_already_preprocessed \
  --dataloader_num_workers 96 \
  --dataloader_mode async_rebatched \
  --lamb \
  --lamb_no_bias_correction \
  --per_device_train_batch_size 2 \
  --gradient_accumulation_steps 512 \
  --pod_type pod64 \
  --learning_rate 0.002828 \
  --lr_scheduler_type linear \
  --loss_scaling 16384 \
  --weight_decay 0.01 \
  --warmup_ratio 0.128 \
  --config_overrides "layer_norm_eps=0.001" \
  --ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \
  --output_dir output-pretrain-bert-large-phase2
```
5c4d346abb841f97db8011ecd88c9df9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during phase 1 training:
- learning_rate: 0.006
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 65536
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.28
- training_steps: 10550
- training precision: Mixed Precision

The following hyperparameters were used during phase 2 training:
- learning_rate: 0.002828
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 512
- total_train_batch_size: 16384
- total_eval_batch_size: 512
- optimizer: LAMB
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.128
- training_steps: 2038
- training precision: Mixed Precision
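The total batch sizes above imply a data-parallel replication factor that the lists do not state explicitly; a quick check, assuming total = per-device batch x gradient accumulation x replicas:

```python
# Solve for the implied number of model replicas in each phase:
# replicas = total_train_batch_size / (per-device batch x grad accum)
phase1_replicas = 65536 // (8 * 512)
phase2_replicas = 16384 // (2 * 512)
print(phase1_replicas, phase2_replicas)  # 16 16
```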
b59433a90d38c335245aa0cc6bc94b8e
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Stable_Diffusion-trained-on-YUJIRO-HANMA-images(Baki-anime)-Fun-project

Model trained by nicky007, on the YUJIRO HANMA character from the Baki the Grappler anime. It's just a fun project because I was bored. Try prompts like: **'yujiro hanma clay statue'**, **'yujiro hanma laughing and angry pose'**, **'yujiro hanma posing very angry'**, etc. Or you can try your own unique text. **Enjoy, have a wonderful day!!**
bceb499921cd1db32b5be14ca824b08f
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
`kan-bayashi/libritts_tts_train_gst+xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`

♻️ Imported from https://zenodo.org/record/4409702/

This model was trained by kan-bayashi using the libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
73e64409f2a66b8765d58ff0bed59206
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2t_es_wavlm_s26

Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
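Since the model expects 16 kHz input, audio at other sampling rates needs resampling first. A minimal linear-interpolation sketch of the idea (a real pipeline would use librosa or torchaudio instead):

```python
# Naive linear-interpolation resampler: map each output sample position
# back into the source signal and blend the two neighbouring samples.
def resample(samples, src_rate, dst_rate):
    n_out = round(len(samples) * dst_rate / src_rate)
    step = src_rate / dst_rate
    out = []
    for i in range(n_out):
        pos = i * step
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

one_second_at_44100 = [0.0] * 44100
print(len(resample(one_second_at_44100, 44100, 16000)))  # 16000
```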
5bd6b41d29df2c2ad9fe713773df2e65
mit
['generated_from_trainer']
false
IndoBERT-exam-qa

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.8274
f8e4bbcbb3279a91819ea8f91bf04589
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.395 | 1.0 | 8202 | 1.3536 |
| 1.1534 | 2.0 | 16404 | 1.4040 |
| 1.3661 | 3.0 | 24606 | 1.8274 |
bfbcc0dfa2373dc548e0dde8d4a84755
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_vp-sv_s149

Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
eaa360cf55913d72e6cf05ec406ed346
mit
['music', 'audio', 'audio-to-audio', 'SFI']
false
Sampling-frequency-independent (SFI) Conv-TasNet trained with the MUSDB18-HQ dataset for music source separation

This model was proposed in [our IEEE/ACM Trans. ASLP paper](https://doi.org/10.1109/TASLP.2022.3203907) and works well at untrained sampling frequencies by using sampling-frequency-independent convolutional layers with the time-domain filter design. The latent analog filter is a modulated Gaussian filter. It was trained by Tomohiko Nakamura using [the codebase](https://github.com/TomohikoNakamura/sfi_convtasnet). This model was trained with 32 kHz-sampled data but works well at untrained sampling frequencies (e.g., 8 and 16 kHz).
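A rough sketch of the core idea (parameter names are illustrative, not taken from the paper's code): a continuous-time filter prototype, here a modulated Gaussian, is discretized at whatever rate the input uses, so the same learned analog parameters yield a convolution kernel for any sampling frequency:

```python
import math

# h(t) = exp(-t^2 / (2 * sigma^2)) * cos(2 * pi * f * t), sampled at fs.
# sigma and f would be learned; fs comes from the input audio.
def modulated_gaussian_taps(sigma, freq_hz, fs_hz, n_taps):
    center = (n_taps - 1) / 2
    taps = []
    for n in range(n_taps):
        t = (n - center) / fs_hz  # tap time in seconds at this rate
        taps.append(math.exp(-t * t / (2 * sigma ** 2)) *
                    math.cos(2 * math.pi * freq_hz * t))
    return taps

# Same analog parameters, two different sampling frequencies:
taps_16k = modulated_gaussian_taps(sigma=1e-4, freq_hz=1000.0, fs_hz=16000.0, n_taps=9)
taps_32k = modulated_gaussian_taps(sigma=1e-4, freq_hz=1000.0, fs_hz=32000.0, n_taps=17)
```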
9c6d6b3ef53f9b3bc7b93db0316baf0f
mit
['music', 'audio', 'audio-to-audio', 'SFI']
false
Citation Please cite the following paper. ``` @article{KSaito2022IEEEACMTASLP, author={Saito, Koichi and Nakamura, Tomohiko and Yatabe, Kohei and Saruwatari, Hiroshi}, journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing}, title = {Sampling-frequency-independent convolutional layer and its application to audio source separation}, year=2022, month=sep, volume=30, pages={2928--2943}, doi={10.1109/TASLP.2022.3203907}, } ```
31d0fc042541d75ed15abbf3bc8ac3f6
apache-2.0
['pytorch', 'causal-lm']
false
GPT-sl-base

This model is a Slovene GPT model, based on the [bigscience workshop](https://github.com/bigscience-workshop/Megatron-DeepSpeed) fork of Megatron. GPT-sl-base was trained on large Slovene corpora: Gigafida, KAS, slWaC, and MaCoCu.
7bb095b4a59288d873c62c87b9a9a7df
apache-2.0
['pytorch', 'causal-lm']
false
Model architecture

GPT-sl-base has about 110 million parameters. It consists of 12 transformer layers with a hidden dimension of 768 and 16 attention heads, and can process sequences up to 1024 tokens long. The tokenizer was trained on a smaller subset of the corpora and has a vocabulary of 60k tokens.
d18c1fc8286ee7a7541b1cb03f38b113
apache-2.0
['pytorch', 'causal-lm']
false
Training

The model was trained for about 20 epochs, a total of 390k steps or 102B tokens seen during training.

| Step | Validation Perplexity |
|:------:|:---------------------:|
| 50000 | 26.801 |
| 100000 | 25.574 |
| 150000 | 24.773 |
| 200000 | 24.099 |
| 250000 | 23.336 |
| 300000 | 22.607 |
| 350000 | 22.329 |
| 390000 | 22.293 |
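Validation perplexity is the exponential of the per-token cross-entropy loss, so the table above corresponds to a loss falling from roughly 3.29 to 3.10 nats:

```python
import math

# loss = ln(perplexity); first and last rows of the table above
print(round(math.log(26.801), 3))  # 3.288
print(round(math.log(22.293), 3))  # 3.104
```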
5d063fc6caa968197dfbb071d8b664b1
apache-2.0
['generated_from_trainer']
false
wav2vec_mle

This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 4.3076
- Wer: 1.0
585dae6aedc102eed16bcea1c8a80a6d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 60
6acf05b51ef5d9f6dede5aa4f42f50b2
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 7.3604 | 3.33 | 30 | 4.4612 | 1.0 |
| 4.502 | 6.67 | 60 | 4.5906 | 1.0 |
| 4.2842 | 10.0 | 90 | 4.4217 | 1.0 |
| 4.3833 | 13.33 | 120 | 4.3967 | 1.0 |
| 4.2631 | 16.67 | 150 | 4.3469 | 1.0 |
| 4.3357 | 20.0 | 180 | 4.3372 | 1.0 |
| 4.3941 | 23.33 | 210 | 4.3187 | 1.0 |
| 4.393 | 26.67 | 240 | 4.2981 | 1.0 |
| 4.3619 | 30.0 | 270 | 4.3049 | 1.0 |
| 4.3849 | 33.33 | 300 | 4.3138 | 1.0 |
| 4.3186 | 36.67 | 330 | 4.3123 | 1.0 |
| 4.3196 | 40.0 | 360 | 4.3097 | 1.0 |
| 4.3212 | 43.33 | 390 | 4.3279 | 1.0 |
| 4.3108 | 46.67 | 420 | 4.3249 | 1.0 |
| 4.3112 | 50.0 | 450 | 4.3093 | 1.0 |
| 4.2994 | 53.33 | 480 | 4.3198 | 1.0 |
| 4.2958 | 56.67 | 510 | 4.3071 | 1.0 |
| 4.2905 | 60.0 | 540 | 4.3076 | 1.0 |
0550f18faf00033f2fa75771ac3ac74a
apache-2.0
['generated_from_trainer']
false
bert-base-cased-finetuned-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6212
2812c02f934e5f4f528fb00f6fb4392b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8335 | 1.0 | 2393 | 1.7164 | | 1.738 | 2.0 | 4786 | 1.6589 | | 1.7029 | 3.0 | 7179 | 1.6216 |
0b59f87d0d48225eb928954b990b32ee
apache-2.0
['automatic-speech-recognition', 'fr', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Fine-tuned XLS-R 1B model for speech recognition in French Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on French using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [MediaSpeech](https://www.openslr.org/108/), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
eea62cff6c83df56893bd2a5807e3ac7
apache-2.0
['automatic-speech-recognition', 'fr', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Usage

Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```

Writing your own inference script:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-french"
SAMPLES = 10

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing: read the audio files as 16 kHz arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
dd59694b6863e38c354ada62415a01f2
apache-2.0
['automatic-speech-recognition', 'fr', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`:

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-french --dataset mozilla-foundation/common_voice_8_0 --config fr --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`:

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-french --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
858b441a8bf33ea8ad5d13d7657624a7
apache-2.0
['automatic-speech-recognition', 'fr', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Citation

If you want to cite this model you can use this:

```bibtex
@misc{grosman2021xlsr-1b-french,
  title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {F}rench},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-french}},
  year={2022}
}
```
a27dc5b7a411108d994dc7f342ab0b4d
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2176 - Accuracy: 0.927 - F1: 0.9273
eaffb0afebb9a042f3097f4a640deac8
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8252 | 1.0 | 250 | 0.3121 | 0.916 | 0.9140 | | 0.2471 | 2.0 | 500 | 0.2176 | 0.927 | 0.9273 |
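The F1 column is the harmonic mean of precision and recall; a minimal sketch of the binary-case formula (the input values below are illustrative, not from this run):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values only:
print(round(f1_score(0.92, 0.93), 4))  # 0.925
```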
c15e42c174b46351d4ba6dac70e6b07a
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
maherkou Dreambooth model trained by cobraxx with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
8a11af9f11529e59362619b32c344143
apache-2.0
['automatic-speech-recognition', 'et']
false
exp_w2v2t_et_vp-sv_s445 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
c0376d42ed928e64ce0fb27ceffb8778
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-0']
false
MultiBERTs Seed 0 Checkpoint 20k (uncased) Intermediate checkpoint 20k of the MultiBERTs seed 0 (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-0](https://hf.co/multiberts-seed-0). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
9626e7919e2d0dab6044125fac823bce
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-0']
false
How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-20k')
model = BertModel.from_pretrained("multiberts-seed-0-20k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
be65f4b3172f6da930335c346ce9a25c
other
[]
false
Carpet Cleaning Arlington TX https://carpetcleaning-arlington-tx.com/ (817) 381-5072 At Rug Cleaning Plano in TX we also have a truck-mounted carpet cleaning system. These mobile vehicles carry a powerhouse of equipment on board, so they can complete any job properly. Whether it is a small home, a large house, or a huge industrial complex, the task is never too big or too tough.
38f4b3195e2fd34f0eb18f48a68ff654
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1392 - F1: 0.8649
a9be0cc80cb8bcf5f05e3d3092b057c2
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
159e005398625df0ecb297a8673d4f8c
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1616 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1419 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1392 | 0.8649 |
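The step counts are consistent with the hyperparameters above: 525 optimizer steps per epoch over 3 epochs ends at step 1575. A quick check (the training-set size of 12,580 examples is an assumption inferred from the step count, not stated in the card):

```python
import math

train_batch_size = 24
num_epochs = 3
n_train_examples = 12_580  # assumption: inferred from 525 steps/epoch

steps_per_epoch = math.ceil(n_train_examples / train_batch_size)
total_steps = steps_per_epoch * num_epochs
print(steps_per_epoch, total_steps)  # 525 1575
```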
9fa2cdd5e6f397fa72ff91f8161b74be
apache-2.0
['generated_from_trainer']
false
hubert_zeroth_gpu_freeze This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the zeroth_korean_asr dataset. It achieves the following results on the evaluation set: - Loss: 4.8310 - Wer: 1.0
03b1615f6ce50db84e4b1352e8ca209b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP
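With `lr_scheduler_type: linear` and 500 warmup steps, the learning rate ramps from 0 to 3e-4 over the first 500 steps and then decays linearly to 0. A minimal sketch of that schedule (the total step count used below is approximated from the training log, not stated explicitly):

```python
def linear_schedule_with_warmup(step: int, warmup_steps: int,
                                total_steps: int, base_lr: float) -> float:
    """Transformers-style linear schedule: ramp up, then decay to 0."""
    if step < warmup_steps:
        return base_lr * (step / warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

base_lr, warmup, total = 3e-4, 500, 20_900  # total steps approximated from the log
print(linear_schedule_with_warmup(250, warmup, total, base_lr))    # halfway up the ramp
print(linear_schedule_with_warmup(500, warmup, total, base_lr))    # peak: 0.0003
print(linear_schedule_with_warmup(total, warmup, total, base_lr))  # 0.0
```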
06be123ba5b4ccde81892df05bf7ee58
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:---:| | 26.2877 | 0.14 | 100 | 10.6810 | 1.0 | | 6.4696 | 0.29 | 200 | 4.8799 | 1.0 | | 4.841 | 0.43 | 300 | 4.8521 | 1.0 | | 4.8366 | 0.57 | 400 | 4.8736 | 1.0 | | 4.8311 | 0.72 | 500 | 4.8559 | 1.0 | | 4.8383 | 0.86 | 600 | 4.8601 | 1.0 | | 4.8288 | 1.01 | 700 | 4.8474 | 1.0 | | 4.8283 | 1.15 | 800 | 4.8436 | 1.0 | | 4.8283 | 1.29 | 900 | 4.8440 | 1.0 | | 4.8299 | 1.44 | 1000 | 4.8518 | 1.0 | | 4.8274 | 1.58 | 1100 | 4.8406 | 1.0 | | 4.8308 | 1.72 | 1200 | 4.8384 | 1.0 | | 4.8316 | 1.87 | 1300 | 4.8427 | 1.0 | | 4.8298 | 2.01 | 1400 | 4.8423 | 1.0 | | 4.8291 | 2.16 | 1500 | 4.8481 | 1.0 | | 4.8326 | 2.3 | 1600 | 4.8426 | 1.0 | | 4.83 | 2.44 | 1700 | 4.8362 | 1.0 | | 4.8286 | 2.59 | 1800 | 4.8424 | 1.0 | | 4.8269 | 2.73 | 1900 | 4.8362 | 1.0 | | 4.8234 | 2.87 | 2000 | 4.8452 | 1.0 | | 4.8179 | 3.02 | 2100 | 4.8416 | 1.0 | | 4.825 | 3.16 | 2200 | 4.8519 | 1.0 | | 4.8185 | 3.3 | 2300 | 4.8384 | 1.0 | | 4.827 | 3.45 | 2400 | 4.8519 | 1.0 | | 4.8316 | 3.59 | 2500 | 4.8467 | 1.0 | | 4.825 | 3.74 | 2600 | 4.8465 | 1.0 | | 4.8246 | 3.88 | 2700 | 4.8422 | 1.0 | | 4.8228 | 4.02 | 2800 | 4.8326 | 1.0 | | 4.8277 | 4.17 | 2900 | 4.8353 | 1.0 | | 4.822 | 4.31 | 3000 | 4.8349 | 1.0 | | 4.82 | 4.45 | 3100 | 4.8395 | 1.0 | | 4.8252 | 4.6 | 3200 | 4.8350 | 1.0 | | 4.8283 | 4.74 | 3300 | 4.8377 | 1.0 | | 4.8229 | 4.89 | 3400 | 4.8344 | 1.0 | | 4.8264 | 5.03 | 3500 | 4.8352 | 1.0 | | 4.8237 | 5.17 | 3600 | 4.8337 | 1.0 | | 4.8271 | 5.32 | 3700 | 4.8385 | 1.0 | | 4.8332 | 5.46 | 3800 | 4.8392 | 1.0 | | 4.8189 | 5.6 | 3900 | 4.8353 | 1.0 | | 4.8209 | 5.75 | 4000 | 4.8355 | 1.0 | | 4.8179 | 5.89 | 4100 | 4.8297 | 1.0 | | 4.821 | 6.03 | 4200 | 4.8505 | 1.0 | | 4.8243 | 6.18 | 4300 | 4.8371 | 1.0 | | 4.8224 | 6.32 | 4400 | 4.8378 | 1.0 | | 4.8261 | 6.47 | 4500 | 4.8368 | 1.0 | | 4.8233 | 6.61 | 4600 | 4.8326 | 1.0 | | 4.8252 | 6.75 | 4700 | 4.8364 | 1.0 | | 4.8247 
| 6.9 | 4800 | 4.8438 | 1.0 | | 4.8139 | 7.04 | 4900 | 4.8435 | 1.0 | | 4.8204 | 7.18 | 5000 | 4.8398 | 1.0 | | 4.8197 | 7.33 | 5100 | 4.8382 | 1.0 | | 4.82 | 7.47 | 5200 | 4.8371 | 1.0 | | 4.8266 | 7.61 | 5300 | 4.8431 | 1.0 | | 4.826 | 7.76 | 5400 | 4.8390 | 1.0 | | 4.8216 | 7.9 | 5500 | 4.8381 | 1.0 | | 4.82 | 8.05 | 5600 | 4.8339 | 1.0 | | 4.8281 | 8.19 | 5700 | 4.8316 | 1.0 | | 4.8246 | 8.33 | 5800 | 4.8361 | 1.0 | | 4.8169 | 8.48 | 5900 | 4.8338 | 1.0 | | 4.8175 | 8.62 | 6000 | 4.8341 | 1.0 | | 4.8283 | 8.76 | 6100 | 4.8358 | 1.0 | | 4.8232 | 8.91 | 6200 | 4.8356 | 1.0 | | 4.8193 | 9.05 | 6300 | 4.8325 | 1.0 | | 4.8146 | 9.2 | 6400 | 4.8297 | 1.0 | | 4.8207 | 9.34 | 6500 | 4.8283 | 1.0 | | 4.8221 | 9.48 | 6600 | 4.8334 | 1.0 | | 4.8229 | 9.63 | 6700 | 4.8308 | 1.0 | | 4.8239 | 9.77 | 6800 | 4.8352 | 1.0 | | 4.8245 | 9.91 | 6900 | 4.8314 | 1.0 | | 4.8173 | 10.06 | 7000 | 4.8300 | 1.0 | | 4.8189 | 10.2 | 7100 | 4.8341 | 1.0 | | 4.8209 | 10.34 | 7200 | 4.8287 | 1.0 | | 4.823 | 10.49 | 7300 | 4.8320 | 1.0 | | 4.8226 | 10.63 | 7400 | 4.8273 | 1.0 | | 4.8241 | 10.78 | 7500 | 4.8308 | 1.0 | | 4.8177 | 10.92 | 7600 | 4.8316 | 1.0 | | 4.8235 | 11.06 | 7700 | 4.8274 | 1.0 | | 4.8188 | 11.21 | 7800 | 4.8290 | 1.0 | | 4.8183 | 11.35 | 7900 | 4.8355 | 1.0 | | 4.8226 | 11.49 | 8000 | 4.8312 | 1.0 | | 4.8209 | 11.64 | 8100 | 4.8307 | 1.0 | | 4.8208 | 11.78 | 8200 | 4.8300 | 1.0 | | 4.8221 | 11.93 | 8300 | 4.8281 | 1.0 | | 4.82 | 12.07 | 8400 | 4.8306 | 1.0 | | 4.8199 | 12.21 | 8500 | 4.8343 | 1.0 | | 4.8212 | 12.36 | 8600 | 4.8314 | 1.0 | | 4.8212 | 12.5 | 8700 | 4.8309 | 1.0 | | 4.8228 | 12.64 | 8800 | 4.8310 | 1.0 | | 4.8225 | 12.79 | 8900 | 4.8325 | 1.0 | | 4.8146 | 12.93 | 9000 | 4.8364 | 1.0 | | 4.8174 | 13.07 | 9100 | 4.8328 | 1.0 | | 4.816 | 13.22 | 9200 | 4.8338 | 1.0 | | 4.822 | 13.36 | 9300 | 4.8378 | 1.0 | | 4.8253 | 13.51 | 9400 | 4.8411 | 1.0 | | 4.8173 | 13.65 | 9500 | 4.8379 | 1.0 | | 4.8227 | 13.79 | 9600 | 4.8374 | 1.0 | | 4.8138 | 13.94 | 9700 | 4.8372 | 
1.0 | | 4.8191 | 14.08 | 9800 | 4.8327 | 1.0 | | 4.8259 | 14.22 | 9900 | 4.8335 | 1.0 | | 4.8098 | 14.37 | 10000 | 4.8301 | 1.0 | | 4.8248 | 14.51 | 10100 | 4.8315 | 1.0 | | 4.8199 | 14.66 | 10200 | 4.8304 | 1.0 | | 4.8202 | 14.8 | 10300 | 4.8312 | 1.0 | | 4.8159 | 14.94 | 10400 | 4.8316 | 1.0 | | 4.8181 | 15.09 | 10500 | 4.8306 | 1.0 | | 4.8217 | 15.23 | 10600 | 4.8350 | 1.0 | | 4.8095 | 15.37 | 10700 | 4.8328 | 1.0 | | 4.8249 | 15.52 | 10800 | 4.8329 | 1.0 | | 4.8178 | 15.66 | 10900 | 4.8355 | 1.0 | | 4.8192 | 15.8 | 11000 | 4.8342 | 1.0 | | 4.8249 | 15.95 | 11100 | 4.8366 | 1.0 | | 4.8096 | 16.09 | 11200 | 4.8385 | 1.0 | | 4.8196 | 16.24 | 11300 | 4.8390 | 1.0 | | 4.8271 | 16.38 | 11400 | 4.8352 | 1.0 | | 4.8166 | 16.52 | 11500 | 4.8371 | 1.0 | | 4.8206 | 16.67 | 11600 | 4.8348 | 1.0 | | 4.817 | 16.81 | 11700 | 4.8347 | 1.0 | | 4.8165 | 16.95 | 11800 | 4.8386 | 1.0 | | 4.8159 | 17.1 | 11900 | 4.8376 | 1.0 | | 4.8202 | 17.24 | 12000 | 4.8374 | 1.0 | | 4.8157 | 17.39 | 12100 | 4.8370 | 1.0 | | 4.8175 | 17.53 | 12200 | 4.8405 | 1.0 | | 4.8189 | 17.67 | 12300 | 4.8321 | 1.0 | | 4.8167 | 17.82 | 12400 | 4.8322 | 1.0 | | 4.8229 | 17.96 | 12500 | 4.8353 | 1.0 | | 4.8179 | 18.1 | 12600 | 4.8322 | 1.0 | | 4.8183 | 18.25 | 12700 | 4.8379 | 1.0 | | 4.8151 | 18.39 | 12800 | 4.8375 | 1.0 | | 4.8211 | 18.53 | 12900 | 4.8355 | 1.0 | | 4.8241 | 18.68 | 13000 | 4.8352 | 1.0 | | 4.8185 | 18.82 | 13100 | 4.8350 | 1.0 | | 4.8175 | 18.97 | 13200 | 4.8352 | 1.0 | | 4.8094 | 19.11 | 13300 | 4.8337 | 1.0 | | 4.8149 | 19.25 | 13400 | 4.8344 | 1.0 | | 4.8131 | 19.4 | 13500 | 4.8386 | 1.0 | | 4.8227 | 19.54 | 13600 | 4.8350 | 1.0 | | 4.8175 | 19.68 | 13700 | 4.8325 | 1.0 | | 4.8204 | 19.83 | 13800 | 4.8344 | 1.0 | | 4.8228 | 19.97 | 13900 | 4.8322 | 1.0 | | 4.8177 | 20.11 | 14000 | 4.8365 | 1.0 | | 4.824 | 20.26 | 14100 | 4.8338 | 1.0 | | 4.8151 | 20.4 | 14200 | 4.8342 | 1.0 | | 4.8189 | 20.55 | 14300 | 4.8339 | 1.0 | | 4.8115 | 20.69 | 14400 | 4.8325 | 1.0 | | 4.8162 | 20.83 | 14500 | 
4.8291 | 1.0 | | 4.8182 | 20.98 | 14600 | 4.8321 | 1.0 | | 4.8189 | 21.12 | 14700 | 4.8314 | 1.0 | | 4.8123 | 21.26 | 14800 | 4.8318 | 1.0 | | 4.8165 | 21.41 | 14900 | 4.8320 | 1.0 | | 4.8247 | 21.55 | 15000 | 4.8315 | 1.0 | | 4.8165 | 21.7 | 15100 | 4.8311 | 1.0 | | 4.8151 | 21.84 | 15200 | 4.8352 | 1.0 | | 4.8234 | 21.98 | 15300 | 4.8298 | 1.0 | | 4.8136 | 22.13 | 15400 | 4.8282 | 1.0 | | 4.8179 | 22.27 | 15500 | 4.8297 | 1.0 | | 4.8128 | 22.41 | 15600 | 4.8307 | 1.0 | | 4.8216 | 22.56 | 15700 | 4.8290 | 1.0 | | 4.8177 | 22.7 | 15800 | 4.8286 | 1.0 | | 4.8209 | 22.84 | 15900 | 4.8311 | 1.0 | | 4.8183 | 22.99 | 16000 | 4.8276 | 1.0 | | 4.8135 | 23.13 | 16100 | 4.8284 | 1.0 | | 4.8116 | 23.28 | 16200 | 4.8279 | 1.0 | | 4.8161 | 23.42 | 16300 | 4.8291 | 1.0 | | 4.8202 | 23.56 | 16400 | 4.8292 | 1.0 | | 4.8199 | 23.71 | 16500 | 4.8298 | 1.0 | | 4.8203 | 23.85 | 16600 | 4.8293 | 1.0 | | 4.8177 | 23.99 | 16700 | 4.8286 | 1.0 | | 4.8153 | 24.14 | 16800 | 4.8273 | 1.0 | | 4.8202 | 24.28 | 16900 | 4.8260 | 1.0 | | 4.8189 | 24.43 | 17000 | 4.8289 | 1.0 | | 4.8219 | 24.57 | 17100 | 4.8279 | 1.0 | | 4.8148 | 24.71 | 17200 | 4.8284 | 1.0 | | 4.8113 | 24.86 | 17300 | 4.8286 | 1.0 | | 4.8133 | 25.0 | 17400 | 4.8299 | 1.0 | | 4.8164 | 25.14 | 17500 | 4.8309 | 1.0 | | 4.8231 | 25.29 | 17600 | 4.8279 | 1.0 | | 4.8135 | 25.43 | 17700 | 4.8296 | 1.0 | | 4.8118 | 25.57 | 17800 | 4.8293 | 1.0 | | 4.8139 | 25.72 | 17900 | 4.8279 | 1.0 | | 4.8144 | 25.86 | 18000 | 4.8281 | 1.0 | | 4.8207 | 26.01 | 18100 | 4.8284 | 1.0 | | 4.8096 | 26.15 | 18200 | 4.8285 | 1.0 | | 4.8177 | 26.29 | 18300 | 4.8275 | 1.0 | | 4.8221 | 26.44 | 18400 | 4.8288 | 1.0 | | 4.8147 | 26.58 | 18500 | 4.8281 | 1.0 | | 4.8148 | 26.72 | 18600 | 4.8281 | 1.0 | | 4.819 | 26.87 | 18700 | 4.8282 | 1.0 | | 4.8138 | 27.01 | 18800 | 4.8297 | 1.0 | | 4.8094 | 27.16 | 18900 | 4.8291 | 1.0 | | 4.8236 | 27.3 | 19000 | 4.8288 | 1.0 | | 4.8208 | 27.44 | 19100 | 4.8292 | 1.0 | | 4.816 | 27.59 | 19200 | 4.8279 | 1.0 | | 4.8103 | 27.73 
| 19300 | 4.8290 | 1.0 | | 4.8152 | 27.87 | 19400 | 4.8296 | 1.0 | | 4.8158 | 28.02 | 19500 | 4.8304 | 1.0 | | 4.8122 | 28.16 | 19600 | 4.8293 | 1.0 | | 4.8199 | 28.3 | 19700 | 4.8293 | 1.0 | | 4.8185 | 28.45 | 19800 | 4.8287 | 1.0 | | 4.8198 | 28.59 | 19900 | 4.8294 | 1.0 | | 4.8102 | 28.74 | 20000 | 4.8291 | 1.0 | | 4.8168 | 28.88 | 20100 | 4.8290 | 1.0 | | 4.8117 | 29.02 | 20200 | 4.8303 | 1.0 | | 4.8156 | 29.17 | 20300 | 4.8295 | 1.0 | | 4.8127 | 29.31 | 20400 | 4.8298 | 1.0 | | 4.8193 | 29.45 | 20500 | 4.8301 | 1.0 | | 4.8174 | 29.6 | 20600 | 4.8301 | 1.0 | | 4.8167 | 29.74 | 20700 | 4.8301 | 1.0 | | 4.8137 | 29.89 | 20800 | 4.8310 | 1.0 |
d8013c2a2d73809a88aea08b4d671f7e
cc-by-sa-4.0
['spacy', 'token-classification']
false
hr_core_news_md Croatian pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner. | Feature | Description | | --- | --- | | **Name** | `hr_core_news_md` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` | | **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` | | **Vectors** | floret (50000, 300) | | **Sources** | [Training corpus hr500k 1.0](http://hdl.handle.net/11356/1183) (Ljubešić, Nikola ; Agić, Željko ; Klubička, Filip ; Batanović, Vuk and Erjavec, Tomaž)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) |
8e739de53baf781f760b9d528f75500f
cc-by-sa-4.0
['spacy', 'token-classification']
false
Label Scheme <details> <summary>View label scheme (1518 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `Agcfpay`, `Agcfpdy`, `Agcfpgy`, `Agcfpiy`, `Agcfply`, `Agcfpny`, `Agcfsay`, `Agcfsdy`, `Agcfsgy`, `Agcfsiy`, `Agcfsly`, `Agcfsny`, `Agcmpay`, `Agcmpgy`, `Agcmpiy`, `Agcmpny`, `Agcmsany`, `Agcmsay`, `Agcmsayn`, `Agcmsdy`, `Agcmsgy`, `Agcmsiy`, `Agcmsly`, `Agcmsny`, `Agcnpay`, `Agcnpdy`, `Agcnpgy`, `Agcnpny`, `Agcnsay`, `Agcnsdy`, `Agcnsgy`, `Agcnsiy`, `Agcnsly`, `Agcnsny`, `Agpfpay`, `Agpfpdy`, `Agpfpgy`, `Agpfpiy`, `Agpfply`, `Agpfpny`, `Agpfsay`, `Agpfsdy`, `Agpfsgy`, `Agpfsin`, `Agpfsiy`, `Agpfsly`, `Agpfsny`, `Agpfsvy`, `Agpmpay`, `Agpmpdy`, `Agpmpgy`, `Agpmpiy`, `Agpmply`, `Agpmpny`, `Agpmsan`, `Agpmsann`, `Agpmsany`, `Agpmsay`, `Agpmsayn`, `Agpmsayy`, `Agpmsdy`, `Agpmsgn`, `Agpmsgy`, `Agpmsiy`, `Agpmsln`, `Agpmsly`, `Agpmsnn`, `Agpmsny`, `Agpmsvy`, `Agpnpay`, `Agpnpdy`, `Agpnpgy`, `Agpnpiy`, `Agpnply`, `Agpnpny`, `Agpnsay`, `Agpnsdy`, `Agpnsgn`, `Agpnsgy`, `Agpnsiy`, `Agpnsln`, `Agpnsly`, `Agpnsny`, `Agsfpay`, `Agsfpdy`, `Agsfpgy`, `Agsfpiy`, `Agsfply`, `Agsfpny`, `Agsfsay`, `Agsfsdy`, `Agsfsgy`, `Agsfsiy`, `Agsfsly`, `Agsfsny`, `Agsmpay`, `Agsmpdy`, `Agsmpgy`, `Agsmpiy`, `Agsmply`, `Agsmpny`, `Agsmsany`, `Agsmsayn`, `Agsmsayy`, `Agsmsdy`, `Agsmsgy`, `Agsmsiy`, `Agsmsly`, `Agsmsny`, `Agsnpay`, `Agsnpgy`, `Agsnply`, `Agsnpny`, `Agsnsay`, `Agsnsdy`, `Agsnsgy`, `Agsnsiy`, `Agsnsly`, `Agsnsny`, `Appfpay`, `Appfpdy`, `Appfpgy`, `Appfpiy`, `Appfply`, `Appfpny`, `Appfsay`, `Appfsgy`, `Appfsiy`, `Appfsly`, `Appfsny`, `Appmpay`, `Appmpdy`, `Appmpgy`, `Appmpiy`, `Appmply`, `Appmpny`, `Appmsann`, `Appmsany`, `Appmsayn`, `Appmsayy`, `Appmsdy`, `Appmsgn`, `Appmsgy`, `Appmsiy`, `Appmsly`, `Appmsnn`, `Appmsny`, `Appnpay`, `Appnpdy`, `Appnpgy`, `Appnpiy`, `Appnply`, `Appnpny`, `Appnsay`, `Appnsgy`, `Appnsly`, `Appnsny`, `Aspfpay`, `Aspfpgy`, `Aspfpiy`, `Aspfply`, `Aspfpny`, `Aspfsay`, `Aspfsdy`, `Aspfsgy`, `Aspfsly`, `Aspfsny`, 
`Aspmpay`, `Aspmpgy`, `Aspmply`, `Aspmpny`, `Aspmsayn`, `Aspmsayy`, `Aspmsdn`, `Aspmsdy`, `Aspmsgn`, `Aspmsgy`, `Aspmsiy`, `Aspmsln`, `Aspmsly`, `Aspmsnn`, `Aspnpay`, `Aspnpgy`, `Aspnpny`, `Aspnsay`, `Aspnsgn`, `Aspnsgy`, `Aspnsln`, `Aspnsly`, `Aspnsny`, `Cc`, `Cs`, `I`, `Mdc`, `Mdm`, `Mdo`, `Mds`, `Mlc`, `Mlc--g`, `Mlc--i`, `Mlc--l`, `Mlcf-a`, `Mlcf-d`, `Mlcf-g`, `Mlcf-n`, `Mlcfsa`, `Mlcfsd`, `Mlcfsg`, `Mlcfsi`, `Mlcfsl`, `Mlcfsn`, `Mlcm-a`, `Mlcm-g`, `Mlcm-l`, `Mlcm-n`, `Mlcmpn`, `Mlcmsan`, `Mlcmsay`, `Mlcmsg`, `Mlcmsi`, `Mlcmsl`, `Mlcmsn`, `Mlcn-n`, `Mlcnsa`, `Mlcnsg`, `Mlcnsn`, `Mlofpa`, `Mlofpd`, `Mlofpg`, `Mlofpi`, `Mlofpl`, `Mlofpn`, `Mlofsa`, `Mlofsd`, `Mlofsg`, `Mlofsi`, `Mlofsl`, `Mlofsn`, `Mlompa`, `Mlompd`, `Mlompg`, `Mlompi`, `Mlompl`, `Mlompn`, `Mlomsan`, `Mlomsay`, `Mlomsd`, `Mlomsg`, `Mlomsi`, `Mlomsl`, `Mlomsn`, `Mlonpa`, `Mlonpg`, `Mlonpl`, `Mlonpn`, `Mlonsa`, `Mlonsd`, `Mlonsg`, `Mlonsi`, `Mlonsl`, `Mlonsn`, `Mls`, `Mlsf-a`, `Mlsf-g`, `Mlsf-i`, `Mlsf-l`, `Mlsf-n`, `Mlsm-a`, `Mlsm-g`, `Mlsm-l`, `Mlsm-n`, `Mlsmpn`, `Mlsn-n`, `Mrc`, `Mro`, `Ncfpa`, `Ncfpd`, `Ncfpg`, `Ncfpi`, `Ncfpl`, `Ncfpn`, `Ncfpv`, `Ncfsa`, `Ncfsd`, `Ncfsg`, `Ncfsi`, `Ncfsl`, `Ncfsn`, `Ncfsv`, `Ncmpa`, `Ncmpd`, `Ncmpg`, `Ncmpi`, `Ncmpl`, `Ncmpn`, `Ncmpv`, `Ncmsan`, `Ncmsay`, `Ncmsd`, `Ncmsg`, `Ncmsi`, `Ncmsl`, `Ncmsn`, `Ncmsv`, `Ncnpa`, `Ncnpd`, `Ncnpg`, `Ncnpi`, `Ncnpl`, `Ncnpn`, `Ncnsa`, `Ncnsd`, `Ncnsg`, `Ncnsi`, `Ncnsl`, `Ncnsn`, `Ncnsv`, `Npfpa`, `Npfpg`, `Npfpl`, `Npfpn`, `Npfsa`, `Npfsd`, `Npfsg`, `Npfsi`, `Npfsl`, `Npfsn`, `Npmpa`, `Npmpd`, `Npmpg`, `Npmpi`, `Npmpl`, `Npmpn`, `Npmsan`, `Npmsay`, `Npmsd`, `Npmsg`, `Npmsi`, `Npmsl`, `Npmsn`, `Npmsv`, `Npnpg`, `Npnpn`, `Npnsa`, `Npnsd`, `Npnsg`, `Npnsi`, `Npnsl`, `Npnsn`, `Pd-fpa`, `Pd-fpd`, `Pd-fpg`, `Pd-fpi`, `Pd-fpl`, `Pd-fpn`, `Pd-fsa`, `Pd-fsd`, `Pd-fsg`, `Pd-fsi`, `Pd-fsl`, `Pd-fsn`, `Pd-mpa`, `Pd-mpd`, `Pd-mpg`, `Pd-mpi`, `Pd-mpl`, `Pd-mpn`, `Pd-msan`, `Pd-msay`, `Pd-msd`, `Pd-msg`, `Pd-msi`, `Pd-msl`, `Pd-msn`, 
`Pd-npa`, `Pd-npg`, `Pd-npi`, `Pd-npn`, `Pd-nsa`, `Pd-nsd`, `Pd-nsg`, `Pd-nsi`, `Pd-nsl`, `Pd-nsn`, `Pi-fpa`, `Pi-fpd`, `Pi-fpg`, `Pi-fpi`, `Pi-fpl`, `Pi-fpn`, `Pi-fsa`, `Pi-fsd`, `Pi-fsg`, `Pi-fsi`, `Pi-fsl`, `Pi-fsn`, `Pi-mpa`, `Pi-mpd`, `Pi-mpg`, `Pi-mpi`, `Pi-mpl`, `Pi-mpn`, `Pi-msan`, `Pi-msay`, `Pi-msd`, `Pi-msg`, `Pi-msi`, `Pi-msl`, `Pi-msn`, `Pi-npa`, `Pi-npd`, `Pi-npg`, `Pi-npi`, `Pi-npl`, `Pi-npn`, `Pi-nsa`, `Pi-nsd`, `Pi-nsg`, `Pi-nsi`, `Pi-nsl`, `Pi-nsn`, `Pi3m-a`, `Pi3m-d`, `Pi3m-g`, `Pi3m-i`, `Pi3m-n`, `Pi3n-a`, `Pi3n-d`, `Pi3n-g`, `Pi3n-i`, `Pi3n-l`, `Pi3n-n`, `Pp1-pa`, `Pp1-pd`, `Pp1-pg`, `Pp1-pi`, `Pp1-pl`, `Pp1-pn`, `Pp1-sa`, `Pp1-sd`, `Pp1-sg`, `Pp1-si`, `Pp1-sl`, `Pp1-sn`, `Pp2-pa`, `Pp2-pd`, `Pp2-pl`, `Pp2-pn`, `Pp2-sa`, `Pp2-sd`, `Pp2-sg`, `Pp2-sl`, `Pp2-sn`, `Pp3-pa`, `Pp3-pd`, `Pp3-pg`, `Pp3-pi`, `Pp3-pl`, `Pp3fpn`, `Pp3fsa`, `Pp3fsd`, `Pp3fsg`, `Pp3fsi`, `Pp3fsl`, `Pp3fsn`, `Pp3mpn`, `Pp3msa`, `Pp3msd`, `Pp3msg`, `Pp3msi`, `Pp3msl`, `Pp3msn`, `Pp3npn`, `Pp3nsa`, `Pp3nsi`, `Pp3nsn`, `Pq-fpa`, `Pq-fpn`, `Pq-fsa`, `Pq-fsi`, `Pq-fsl`, `Pq-fsn`, `Pq-mpn`, `Pq-msn`, `Pq-nsn`, `Pq3m-d`, `Pq3m-n`, `Pq3n-a`, `Pq3n-l`, `Pq3n-n`, `Ps1fpa`, `Ps1fpg`, `Ps1fpl`, `Ps1fpn`, `Ps1fsa`, `Ps1fsd`, `Ps1fsg`, `Ps1fsi`, `Ps1fsl`, `Ps1fsn`, _(truncated: full list in pipeline meta)_ | | **`morphologizer`** | `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Ins\|POS=ADP`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Degree=Pos\|POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=PART`, 
`Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=CCONJ`, `Case=Gen\|POS=ADP`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PART\|Polarity=Neg`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|POS=ADP`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `NumType=Ord\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `NumType=Card\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, 
`Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Degree=Pos\|POS=ADV\|PronType=Int,Rel`, `Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, 
`Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=X`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=ADV\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, 
`Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=NOUN`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=SPACE`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Ins\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, 
`Case=Dat\|POS=ADP`, `Degree=Sup\|POS=ADV`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `POS=ADV\|Tense=Pres\|VerbForm=Conv`, `Case=Ins\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `NumType=Mult\|POS=NUM`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `POS=ADV\|Tense=Past\|VerbForm=Conv`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, 
`Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Degree=Pos\|POS=ADV\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Gen\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, 
`Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Mult\|POS=SYM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, 
`Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, 
`Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Case=Dat\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, 
`Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, _(truncated: full list in pipeline meta)_ | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:emph`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl:pv`, `fixed`, `flat`, `flat:foreign`, `goeswith`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` | | **`ner`** | `DERIV_PER`, `LOC`, `MISC`, `ORG`, `PER` | </details>
6dbe796a75709b4f9aa1ee03b9858f55
cc-by-sa-4.0
['spacy', 'token-classification']
false
Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.89 |
| `TOKEN_P` | 97.28 |
| `TOKEN_R` | 98.71 |
| `TOKEN_F` | 97.99 |
| `TAG_ACC` | 91.69 |
| `POS_ACC` | 97.33 |
| `MORPH_ACC` | 92.31 |
| `MORPH_MICRO_P` | 95.98 |
| `MORPH_MICRO_R` | 95.56 |
| `MORPH_MICRO_F` | 95.77 |
| `SENTS_P` | 95.12 |
| `SENTS_R` | 93.41 |
| `SENTS_F` | 94.25 |
| `DEP_UAS` | 86.45 |
| `DEP_LAS` | 80.05 |
| `LEMMA_ACC` | 92.81 |
| `ENTS_P` | 82.44 |
| `ENTS_R` | 81.34 |
| `ENTS_F` | 81.89 |
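As a quick sanity check, the F-scores in the table are harmonic means of the corresponding precision/recall pairs — a minimal sketch recomputing two of them:

```python
# F1 is the harmonic mean of precision (P) and recall (R): F = 2PR / (P + R)
def f_score(p, r):
    return 2 * p * r / (p + r)

sents_f = f_score(95.12, 93.41)  # from SENTS_P and SENTS_R
ents_f = f_score(82.44, 81.34)   # from ENTS_P and ENTS_R

# Matches SENTS_F = 94.25 and ENTS_F = 81.89 up to rounding of the inputs
assert abs(sents_f - 94.25) < 0.05
assert abs(ents_f - 81.89) < 0.05
```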
1393036ec579daa390cc006a36263523
apache-2.0
['generated_from_trainer']
false
bert-finetuned-math-prob-classification This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on part of the [competition_math dataset](https://huggingface.co/datasets/competition_math). Specifically, it was trained as a multi-class classification model on the problem text. The problem types (labels) used here are "Counting & Probability", "Prealgebra", "Algebra", "Number Theory", "Geometry", "Intermediate Algebra", and "Precalculus".
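The seven problem types are exposed through the model config's `id2label` mapping; a minimal sketch of what that mapping looks like (the index order shown here is an assumption — check the model's `config.json` for the actual one):

```python
# Hypothetical id2label mapping for the 7 problem types (index order is an assumption)
id2label = {
    0: "Algebra",
    1: "Counting & Probability",
    2: "Geometry",
    3: "Intermediate Algebra",
    4: "Number Theory",
    5: "Prealgebra",
    6: "Precalculus",
}

# The classification head therefore produces 7 logits, one per problem type
assert len(id2label) == 7
```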
1c543a8cdb4c600ac394e01155e8ee09
apache-2.0
['generated_from_trainer']
false
Model description See the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model for more details. The only architectural modification made was to the classification head. Here, 7 classes were used.
05fc719acc2f1cf4245ba38b7442ed09
apache-2.0
['generated_from_trainer']
false
Training and evaluation data The `problem` field of [competition_math dataset](https://huggingface.co/datasets/competition_math) was used for training and evaluation input data. The target data was taken from the `type` field.
7438d60f78b68b7152348113ca436f32
apache-2.0
['generated_from_trainer']
false
Training results This fine-tuned model achieves the following result on the problem type competition math test set:

```
                        precision    recall  f1-score   support

               Algebra       0.78      0.79      0.79      1187
Counting & Probability       0.75      0.81      0.78       474
              Geometry       0.76      0.83      0.79       479
  Intermediate Algebra       0.86      0.84      0.85       903
         Number Theory       0.79      0.82      0.80       540
            Prealgebra       0.66      0.61      0.63       871
           Precalculus       0.95      0.89      0.92       546

              accuracy                           0.79      5000
             macro avg       0.79      0.80      0.79      5000
          weighted avg       0.79      0.79      0.79      5000
```
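The `weighted avg` row can be recovered from the per-class scores and supports — a short check using the f1-score column of the report:

```python
# Per-class (f1-score, support) pairs taken from the report above
per_class = [
    (0.79, 1187),  # Algebra
    (0.78, 474),   # Counting & Probability
    (0.79, 479),   # Geometry
    (0.85, 903),   # Intermediate Algebra
    (0.80, 540),   # Number Theory
    (0.63, 871),   # Prealgebra
    (0.92, 546),   # Precalculus
]

total = sum(s for _, s in per_class)
weighted_f1 = sum(f * s for f, s in per_class) / total

assert total == 5000                       # supports sum to the test-set size
assert abs(weighted_f1 - 0.79) < 0.01      # matches the reported weighted avg
```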
84c33b9f60651bc452944b5fb20a0f3a
apache-2.0
['automatic-speech-recognition', 'ja']
false
exp_w2v2t_ja_xlsr-53_s109 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
97208a9f84dc2373dc9e379c866012a9
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-TPU-cv-fine-tune-2 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-TPU-cv-fine-tune](https://huggingface.co/jiobiala24/wav2vec2-base-TPU-cv-fine-tune) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 1.6051
- Wer: 0.5484
50f47094351b05c3365eaef4e9c8d67c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.522 | 6.45 | 400 | 1.2550 | 0.5649 |
| 0.2874 | 12.9 | 800 | 1.4235 | 0.6054 |
| 0.152 | 19.35 | 1200 | 1.5743 | 0.5806 |
| 0.0857 | 25.8 | 1600 | 1.6051 | 0.5484 |
6a6c4a8e6812c3cbd1c589843a1cdef1
apache-2.0
['sagemaker', 'roberta-bne', 'TextClassification', 'SentimentAnalysis']
false
**A finetuned model for Sentiment analysis in Spanish**

This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container. The base model is **RoBERTa-base-bne**, a RoBERTa base model pre-trained on the largest Spanish corpus known to date, with a total of 570 GB. It was trained by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html).

**RoBERTa BNE Citation**

Check out the paper for all the details: https://arxiv.org/abs/2107.07253

```
@article{gutierrezfandino2022,
  author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
  title = {MarIA: Spanish Language Models},
  journal = {Procesamiento del Lenguaje Natural},
  volume = {68},
  number = {0},
  year = {2022},
  issn = {1989-7553},
  url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
  pages = {39--60}
}
```
faedcbea49f20af28154d97dc8888427
apache-2.0
['sagemaker', 'roberta-bne', 'TextClassification', 'SentimentAnalysis']
false
Dataset The dataset is a collection of about 50,000 movie reviews in Spanish. The dataset is balanced and provides every review in English, in Spanish, and the label in both languages.

Sizes of the datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
8fb459d41b46bf01132a8815cf60e4a5
apache-2.0
['sagemaker', 'roberta-bne', 'TextClassification', 'SentimentAnalysis']
false
Hyperparameters

{
  "epochs": "4",
  "train_batch_size": "32",
  "eval_batch_size": "8",
  "fp16": "true",
  "learning_rate": "3e-05",
  "model_name": "\"PlanTL-GOB-ES/roberta-base-bne\"",
  "sagemaker_container_log_level": "20",
  "sagemaker_program": "\"train.py\""
}
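SageMaker hands these hyperparameters to the entry point (train.py) as strings, which is why `model_name` carries escaped quotes. A hedged sketch of how a training script might coerce them back to native types (the `coerce` helper is hypothetical, not part of the actual training script):

```python
import json

# String-valued hyperparameters as SageMaker delivers them (subset shown)
raw = {
    "epochs": "4",
    "train_batch_size": "32",
    "fp16": "true",
    "learning_rate": "3e-05",
    "model_name": "\"PlanTL-GOB-ES/roberta-base-bne\"",
}

def coerce(value: str):
    """Best-effort conversion of a string hyperparameter to a native type."""
    try:
        return json.loads(value)   # handles ints, floats, booleans, quoted strings
    except json.JSONDecodeError:
        return value               # leave plain strings untouched

params = {k: coerce(v) for k, v in raw.items()}
```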
2eb42166cddecaecfbc67c0e4c2b1e3a
apache-2.0
['sagemaker', 'roberta-bne', 'TextClassification', 'SentimentAnalysis']
false
Usage for Sentiment Analysis

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("edumunozsala/roberta_bne_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/roberta_bne_sentiment_analysis_es")

text = "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"

input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```

Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
f01dff99a6fec0dcfed25098b99905ad
apache-2.0
['generated_from_trainer']
false
bert-base-casedepoch3_sexist_baseline_with_reddit_and_gabfortest

This model is a fine-tuned version of [Wiebke/bert-base-casedepoch3_sexist_baseline_with_reddit_and_gab](https://huggingface.co/Wiebke/bert-base-casedepoch3_sexist_baseline_with_reddit_and_gab) on an unknown dataset.
0ad22be1544517ede6f43d3feaa85b48
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-devops1-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.9870
- Precision: 0.0572
- Recall: 0.2689
- F1: 0.0944
- Accuracy: 0.7842
e3e5d9e5d2383626180643168029e73c
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 72   | 0.6027          | 0.0484    | 0.2269 | 0.0798 | 0.7861   |
| No log        | 2.0   | 144  | 0.8631          | 0.0573    | 0.2857 | 0.0955 | 0.7771   |
| No log        | 3.0   | 216  | 0.9870          | 0.0572    | 0.2689 | 0.0944 | 0.7842   |
7361f2d5c5a78e35350cbbdc220f9d10
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-53-Enlgish-FT-ASCEND-colab

This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the ascend dataset.
3e588a6b492d5b31a02e003b40a41668
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10000
- total_train_batch_size: 160000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
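Note that total_train_batch_size is train_batch_size × gradient_accumulation_steps (16 × 10000 = 160000). A toy sketch of how accumulation makes many micro-batches act like one large batch (illustrative only; the function and names are made up, not from the training script):

```python
def train_loop(micro_batch_grads, accum_steps):
    """Accumulate per-micro-batch gradients; return the updates applied.

    Each gradient is scaled by 1/accum_steps so the accumulated sum equals
    the mean gradient over the large effective batch (assuming equal-sized
    micro-batches). The optimizer only steps once per accum_steps batches.
    """
    updates, running = [], 0.0
    for step, g in enumerate(micro_batch_grads, start=1):
        running += g / accum_steps
        if step % accum_steps == 0:
            updates.append(running)  # optimizer.step() + zero_grad() would go here
            running = 0.0
    return updates
```

With these settings, each optimizer step therefore sees an effective batch of 160,000 examples.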
756f5b85a7d94f1b0e137687709b641c
apache-2.0
['generated_from_trainer', 'translation']
false
mt-uk-sv-finetuned

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-uk-sv](https://huggingface.co/Helsinki-NLP/opus-mt-uk-sv) on the None dataset. It achieves the following results on the evaluation set:
- eval_loss: 1.4210
- eval_bleu: 40.6634
- eval_runtime: 966.5303
- eval_samples_per_second: 18.744
- eval_steps_per_second: 4.687
- epoch: 6.0
- step: 40764
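eval_bleu is a BLEU score. A simplified, unsmoothed sentence-level sketch of the metric (the reported figure comes from the trainer's metric, most likely sacreBLEU, which differs in tokenization, smoothing, and corpus-level aggregation):

```python
import math
from collections import Counter

def bleu(hypothesis, reference, max_n=4):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty, scaled to 0-100. No smoothing."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(1, sum(hyp_ngrams.values()))
        if clipped == 0:
            return 0.0  # unsmoothed BLEU collapses if any precision is zero
        log_precisions.append(math.log(clipped / total))
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return 100 * bp * math.exp(sum(log_precisions) / max_n)
```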
ccccb602425889b3dfe0c3c68ccf510f
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
glpn-nyu-finetuned-diode-221116-110652

This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset. It achieves the following results on the evaluation set:
- Loss: 0.4018
- Mae: 0.3272
- Rmse: 0.4546
- Abs Rel: 0.3934
- Log Mae: 0.1380
- Log Rmse: 0.1907
- Delta1: 0.4598
- Delta2: 0.7659
- Delta3: 0.9082
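These are the usual monocular depth-estimation metrics. A rough sketch, assuming the standard definitions (e.g. Delta1 is the fraction of pixels whose prediction/ground-truth ratio stays within 1.25); the evaluation script's exact masking and clipping may differ:

```python
import math

def depth_metrics(pred, gt):
    """Common depth metrics over flat lists of positive depth values."""
    n = len(gt)
    mae = sum(abs(p - g) for p, g in zip(pred, gt)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / n)
    abs_rel = sum(abs(p - g) / g for p, g in zip(pred, gt)) / n
    # Threshold accuracies: ratio of each pixel's worse-direction error
    ratios = [max(p / g, g / p) for p, g in zip(pred, gt)]
    delta1 = sum(r < 1.25 for r in ratios) / n
    delta2 = sum(r < 1.25 ** 2 for r in ratios) / n
    delta3 = sum(r < 1.25 ** 3 for r in ratios) / n
    return mae, rmse, abs_rel, delta1, delta2, delta3
```

Under these definitions, Delta1 = 0.4598 means roughly 46% of pixels are within a factor of 1.25 of the ground truth.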
4aa30bbee55e5ee52ffc74d74fe8a80b
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
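With lr_scheduler_type linear and warmup_ratio 0.1, the learning-rate multiplier ramps from 0 to 1 over the first 10% of optimizer steps, then decays linearly to 0. A sketch matching the usual 🤗 Transformers linear schedule (the step counts below are assumptions for illustration, not read from the training log):

```python
def linear_schedule(step, warmup_steps, total_steps):
    """LR multiplier: linear warm-up to 1.0, then linear decay to 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total_steps = 720                        # e.g. 10 epochs x 72 steps per epoch
warmup_steps = int(0.1 * total_steps)    # warmup_ratio 0.1 -> 72 steps
peak_lr = 1e-05
lr_at = lambda s: peak_lr * linear_schedule(s, warmup_steps, total_steps)
```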
fa7c81b3d1b23ecc45818a82261e5ace
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mae    | Rmse   | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 1.3984        | 1.0   | 72   | 1.1606          | 3.2154 | 3.2710 | 4.6927  | 0.6627  | 0.7082   | 0.0    | 0.0053 | 0.0893 |
| 0.8305        | 2.0   | 144  | 0.5445          | 0.6035 | 0.8404 | 0.8013  | 0.2102  | 0.2726   | 0.2747 | 0.5358 | 0.7609 |
| 0.4601        | 3.0   | 216  | 0.4484          | 0.4041 | 0.5376 | 0.5417  | 0.1617  | 0.2188   | 0.3771 | 0.6932 | 0.8692 |
| 0.4211        | 4.0   | 288  | 0.4251          | 0.3634 | 0.4914 | 0.4800  | 0.1499  | 0.2069   | 0.4136 | 0.7270 | 0.8931 |
| 0.4162        | 5.0   | 360  | 0.4170          | 0.3537 | 0.4833 | 0.4483  | 0.1455  | 0.2005   | 0.4303 | 0.7444 | 0.8992 |
| 0.3776        | 6.0   | 432  | 0.4115          | 0.3491 | 0.4692 | 0.4558  | 0.1449  | 0.1999   | 0.4281 | 0.7471 | 0.9018 |
| 0.3729        | 7.0   | 504  | 0.4058          | 0.3337 | 0.4590 | 0.4135  | 0.1396  | 0.1935   | 0.4517 | 0.7652 | 0.9072 |
| 0.3235        | 8.0   | 576  | 0.4035          | 0.3304 | 0.4602 | 0.4043  | 0.1383  | 0.1929   | 0.4613 | 0.7679 | 0.9073 |
| 0.3382        | 9.0   | 648  | 0.3990          | 0.3254 | 0.4546 | 0.3937  | 0.1365  | 0.1900   | 0.4671 | 0.7717 | 0.9102 |
| 0.3265        | 10.0  | 720  | 0.4018          | 0.3272 | 0.4546 | 0.3934  | 0.1380  | 0.1907   | 0.4598 | 0.7659 | 0.9082 |
fcf95d8e91f4258b6912b8eaad791d69
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_logit_kd_pretrain_qnli

This model is a fine-tuned version of [gokuls/distilbert_add_pre-training-complete](https://huggingface.co/gokuls/distilbert_add_pre-training-complete) on the GLUE QNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.3579
- Accuracy: 0.6522
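The "logit_kd" in the model name suggests logit-level knowledge distillation. As an illustration only (the actual teacher model, temperature, and loss weighting are not documented here), the classic temperature-scaled KD loss from Hinton et al. (2015) looks like:

```python
import math

def softmax(logits, T=1.0):
    """Softmax over temperature-scaled logits."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradients keep the same magnitude as the hard-label loss."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

In practice this term is typically mixed with the ordinary cross-entropy on the gold labels.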
45d298afd83616259b7d2239e982e9dc
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4059        | 1.0   | 410  | 0.4016          | 0.5585   |
| 0.3907        | 2.0   | 820  | 0.3735          | 0.6094   |
| 0.3715        | 3.0   | 1230 | 0.3602          | 0.6480   |
| 0.352         | 4.0   | 1640 | 0.3579          | 0.6522   |
| 0.3314        | 5.0   | 2050 | 0.3626          | 0.6670   |
| 0.309         | 6.0   | 2460 | 0.3650          | 0.6776   |
| 0.2865        | 7.0   | 2870 | 0.3799          | 0.6776   |
| 0.2679        | 8.0   | 3280 | 0.3817          | 0.6903   |
| 0.2525        | 9.0   | 3690 | 0.3942          | 0.6822   |
6a4bcf1bc3e301aec9bf90abdd07c735
apache-2.0
['generated_from_trainer']
false
finetuning-misinfo-model-700-Zhaohui-1_misinfo

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5343
- Accuracy: 0.8571
- F1: 0.8571
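Accuracy and F1 coincide at 0.8571 here, which can happen on a reasonably balanced binary task. A minimal sketch of the binary metrics, assuming standard definitions (the averaging mode actually used for F1 is not documented in this card):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary precision, recall, F1, and accuracy over label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return precision, recall, f1, accuracy
```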
6b2bd0c7205b0f29af64a013b08b1249
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
252971fdcb9d997184021080ab6d9a03
apache-2.0
['whisper-event']
false
Whisper Kannada Tiny

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on Kannada data drawn from multiple publicly available ASR corpora. It was fine-tuned as part of the Whisper fine-tuning sprint.
260fcb33c08587018a64f64810e60d93