Dataset columns:

| Column | Type |
| --- | --- |
| license | string (2-30 chars) |
| tags | string (2-513 chars) |
| is_nc | bool (1 class) |
| readme_section | string (201 chars - 597k) |
| hash | string (32 chars) |
bsd-3-clause
['pytorch-lightning', 'audio-to-audio']
false
BibTeX entry and citation info

```bibtex
@inproceedings{lee21nuwave,
  author={Junhyeok Lee and Seungu Han},
  title={{NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={1634--1638},
  doi={10.21437/Interspeech.2021-36}
}
```
f7a2083f576477f77ba200541a7dcfef
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
glpn-kitti-finetuned-diode-221214-123047

This model is a fine-tuned version of [vinvino02/glpn-kitti](https://huggingface.co/vinvino02/glpn-kitti) on the diode-subset dataset. It achieves the following results on the evaluation set:
- Loss: 0.3497
- Mae: 0.2847
- Rmse: 0.3977
- Abs Rel: 0.3477
- Log Mae: 0.1203
- Log Rmse: 0.1726
- Delta1: 0.5217
- Delta2: 0.8246
- Delta3: 0.9436
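For a quick smoke test, a GLPN fine-tune like this can be driven through the `depth-estimation` pipeline. A minimal sketch, assuming the checkpoint is published under the hypothetical hub id `your-user/glpn-kitti-finetuned-diode-221214-123047` and that a local `room.jpg` exists:

```python
from transformers import pipeline
from PIL import Image

# Hypothetical hub id -- substitute the actual path of this checkpoint.
estimator = pipeline("depth-estimation", model="your-user/glpn-kitti-finetuned-diode-221214-123047")

image = Image.open("room.jpg")  # any RGB image
result = estimator(image)

# result["depth"] is a PIL image of the predicted depth map;
# result["predicted_depth"] is the raw tensor.
result["depth"].save("room_depth.png")
```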
bb5026d01e1722bb09dfea743268f016
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 25
- mixed_precision_training: Native AMP
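For reference, these settings map one-to-one onto 🤗 `TrainingArguments`; a minimal sketch (the `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="glpn-kitti-finetuned-diode",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=48,
    seed=2022,
    lr_scheduler_type="linear",
    warmup_ratio=0.15,
    num_train_epochs=25,
    fp16=True,  # "Native AMP" mixed-precision training
)
```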
ae5d3fdbae4568b773a7dddf9df324f6
apache-2.0
['vision', 'depth-estimation', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 0.6103 | 1.0 | 72 | 0.4449 | 0.3914 | 0.5513 | 0.4625 | 0.1615 | 0.2186 | 0.3918 | 0.6910 | 0.8549 |
| 0.3762 | 2.0 | 144 | 0.4095 | 0.3583 | 0.4876 | 0.4281 | 0.1505 | 0.2015 | 0.4065 | 0.7121 | 0.8901 |
| 0.341 | 3.0 | 216 | 0.3768 | 0.3046 | 0.4061 | 0.4016 | 0.1313 | 0.1840 | 0.4757 | 0.7938 | 0.9309 |
| 0.291 | 4.0 | 288 | 0.3853 | 0.3227 | 0.4495 | 0.3724 | 0.1360 | 0.1869 | 0.4646 | 0.7680 | 0.9127 |
| 0.2861 | 5.0 | 360 | 0.3786 | 0.3151 | 0.4257 | 0.4065 | 0.1344 | 0.1876 | 0.4597 | 0.7785 | 0.9329 |
| 0.2539 | 6.0 | 432 | 0.3687 | 0.3158 | 0.4546 | 0.3329 | 0.1316 | 0.1821 | 0.4732 | 0.7869 | 0.9138 |
| 0.2199 | 7.0 | 504 | 0.3705 | 0.3122 | 0.4479 | 0.3378 | 0.1312 | 0.1820 | 0.4784 | 0.7888 | 0.9189 |
| 0.1728 | 8.0 | 576 | 0.3578 | 0.2895 | 0.4008 | 0.3675 | 0.1235 | 0.1766 | 0.5101 | 0.8178 | 0.9420 |
| 0.1877 | 9.0 | 648 | 0.3589 | 0.2846 | 0.3846 | 0.3721 | 0.1235 | 0.1764 | 0.5144 | 0.8170 | 0.9403 |
| 0.1541 | 10.0 | 720 | 0.3521 | 0.2831 | 0.3997 | 0.3283 | 0.1201 | 0.1712 | 0.5241 | 0.8260 | 0.9422 |
| 0.1414 | 11.0 | 792 | 0.3460 | 0.2735 | 0.3772 | 0.3419 | 0.1173 | 0.1691 | 0.5409 | 0.8360 | 0.9469 |
| 0.1643 | 12.0 | 864 | 0.3530 | 0.2878 | 0.4100 | 0.3313 | 0.1214 | 0.1736 | 0.5249 | 0.8214 | 0.9344 |
| 0.1724 | 13.0 | 936 | 0.3606 | 0.2995 | 0.4249 | 0.3459 | 0.1255 | 0.1775 | 0.5057 | 0.8069 | 0.9323 |
| 0.1514 | 14.0 | 1008 | 0.3477 | 0.2832 | 0.3881 | 0.3596 | 0.1206 | 0.1726 | 0.5174 | 0.8253 | 0.9437 |
| 0.1535 | 15.0 | 1080 | 0.3535 | 0.2961 | 0.4242 | 0.3412 | 0.1231 | 0.1753 | 0.5186 | 0.8080 | 0.9332 |
| 0.1233 | 16.0 | 1152 | 0.3508 | 0.2896 | 0.4104 | 0.3391 | 0.1213 | 0.1727 | 0.5225 | 0.8165 | 0.9398 |
| 0.116 | 17.0 | 1224 | 0.3519 | 0.2874 | 0.3989 | 0.3533 | 0.1215 | 0.1731 | 0.5200 | 0.8179 | 0.9407 |
| 0.1532 | 18.0 | 1296 | 0.3532 | 0.2965 | 0.4200 | 0.3459 | 0.1236 | 0.1747 | 0.5147 | 0.8035 | 0.9353 |
| 0.1179 | 19.0 | 1368 | 0.3497 | 0.2828 | 0.3896 | 0.3557 | 0.1204 | 0.1728 | 0.5200 | 0.8260 | 0.9457 |
| 0.1326 | 20.0 | 1440 | 0.3467 | 0.2787 | 0.3848 | 0.3475 | 0.1185 | 0.1704 | 0.5257 | 0.8330 | 0.9479 |
| 0.1069 | 21.0 | 1512 | 0.3471 | 0.2807 | 0.3922 | 0.3418 | 0.1187 | 0.1707 | 0.5288 | 0.8297 | 0.9452 |
| 0.1049 | 22.0 | 1584 | 0.3474 | 0.2864 | 0.4048 | 0.3387 | 0.1199 | 0.1717 | 0.5227 | 0.8251 | 0.9428 |
| 0.103 | 23.0 | 1656 | 0.3483 | 0.2840 | 0.3991 | 0.3416 | 0.1196 | 0.1717 | 0.5254 | 0.8269 | 0.9431 |
| 0.1184 | 24.0 | 1728 | 0.3473 | 0.2839 | 0.3960 | 0.3450 | 0.1198 | 0.1717 | 0.5223 | 0.8251 | 0.9443 |
| 0.1258 | 25.0 | 1800 | 0.3497 | 0.2847 | 0.3977 | 0.3477 | 0.1203 | 0.1726 | 0.5217 | 0.8246 | 0.9436 |
88f40a2416d8d25c70901c6957df5715
mit
[]
false
flash base + globalpointer

```
04/08/2022 10:53:34 - INFO - __main__ - ADDRESS = Score(f1=0.607703, precision=0.64939, recall=0.571046, tp=213, pred=328, gold=373)
04/08/2022 10:53:34 - INFO - __main__ - BOOK = Score(f1=0.8125, precision=0.873134, recall=0.75974, tp=117, pred=134, gold=154)
04/08/2022 10:53:34 - INFO - __main__ - COMPANY = Score(f1=0.818304, precision=0.832877, recall=0.804233, tp=304, pred=365, gold=378)
04/08/2022 10:53:34 - INFO - __main__ - GAME = Score(f1=0.854305, precision=0.834951, recall=0.874576, tp=258, pred=309, gold=295)
04/08/2022 10:53:34 - INFO - __main__ - GOVERNMENT = Score(f1=0.823529, precision=0.775, recall=0.878543, tp=217, pred=280, gold=247)
04/08/2022 10:53:34 - INFO - __main__ - MOVIE = Score(f1=0.810997, precision=0.842857, recall=0.781457, tp=118, pred=140, gold=151)
04/08/2022 10:53:34 - INFO - __main__ - NAME = Score(f1=0.874042, precision=0.890625, recall=0.858065, tp=399, pred=448, gold=465)
04/08/2022 10:53:34 - INFO - __main__ - ORGANIZATION = Score(f1=0.813986, precision=0.836207, recall=0.792916, tp=291, pred=348, gold=367)
04/08/2022 10:53:34 - INFO - __main__ - POSITION = Score(f1=0.78478, precision=0.808824, recall=0.762125, tp=330, pred=408, gold=433)
04/08/2022 10:53:34 - INFO - __main__ - SCENE = Score(f1=0.683805, precision=0.738889, recall=0.636364, tp=133, pred=180, gold=209)
04/08/2022 10:53:34 - INFO - __main__ - micro_f1 = Score(f1=0.79175, precision=0.809524, recall=0.77474, tp=2380, pred=2940, gold=3072)
04/08/2022 10:53:34 - INFO - __main__ - macro_f1 = Score(f1=0.788395, precision=0.808275, recall=0.771906, tp=0, pred=0, gold=0)
04/08/2022 10:53:34 - INFO - __main__ - mean_f1 = 0.790072
```
c2b6dabec2729789f8bd67b3c16529ff
mit
[]
false
flash base + softmax

```
04/08/2022 11:10:44 - INFO - __main__ - ADDRESS = Score(f1=0.568987, precision=0.522422, recall=0.624665, tp=233, pred=446, gold=373)
04/08/2022 11:10:44 - INFO - __main__ - BOOK = Score(f1=0.750789, precision=0.730061, recall=0.772727, tp=119, pred=163, gold=154)
04/08/2022 11:10:44 - INFO - __main__ - COMPANY = Score(f1=0.75528, precision=0.711944, recall=0.804233, tp=304, pred=427, gold=378)
04/08/2022 11:10:44 - INFO - __main__ - GAME = Score(f1=0.811502, precision=0.767372, recall=0.861017, tp=254, pred=331, gold=295)
04/08/2022 11:10:44 - INFO - __main__ - GOVERNMENT = Score(f1=0.738636, precision=0.69395, recall=0.789474, tp=195, pred=281, gold=247)
04/08/2022 11:10:44 - INFO - __main__ - MOVIE = Score(f1=0.74359, precision=0.720497, recall=0.768212, tp=116, pred=161, gold=151)
04/08/2022 11:10:44 - INFO - __main__ - NAME = Score(f1=0.831967, precision=0.794521, recall=0.873118, tp=406, pred=511, gold=465)
04/08/2022 11:10:44 - INFO - __main__ - ORGANIZATION = Score(f1=0.754054, precision=0.747989, recall=0.760218, tp=279, pred=373, gold=367)
04/08/2022 11:10:44 - INFO - __main__ - POSITION = Score(f1=0.742729, precision=0.720174, recall=0.766744, tp=332, pred=461, gold=433)
04/08/2022 11:10:44 - INFO - __main__ - SCENE = Score(f1=0.628842, precision=0.621495, recall=0.636364, tp=133, pred=214, gold=209)
04/08/2022 11:10:44 - INFO - __main__ - micro_f1 = Score(f1=0.736335, precision=0.703979, recall=0.77181, tp=2371, pred=3368, gold=3072)
04/08/2022 11:10:44 - INFO - __main__ - macro_f1 = Score(f1=0.732638, precision=0.703043, recall=0.765677, tp=0, pred=0, gold=0)
04/08/2022 11:10:44 - INFO - __main__ - mean_f1 = 0.734486
```
0a4418a50db64cdf6192061e88906b8c
mit
[]
false
bert base + globalpointer

```
04/08/2022 11:22:48 - INFO - __main__ - ADDRESS = Score(f1=0.641558, precision=0.622166, recall=0.662198, tp=247, pred=397, gold=373)
04/08/2022 11:22:48 - INFO - __main__ - BOOK = Score(f1=0.813115, precision=0.821192, recall=0.805195, tp=124, pred=151, gold=154)
04/08/2022 11:22:48 - INFO - __main__ - COMPANY = Score(f1=0.823684, precision=0.819372, recall=0.828042, tp=313, pred=382, gold=378)
04/08/2022 11:22:48 - INFO - __main__ - GAME = Score(f1=0.841762, precision=0.811321, recall=0.874576, tp=258, pred=318, gold=295)
04/08/2022 11:22:48 - INFO - __main__ - GOVERNMENT = Score(f1=0.827324, precision=0.778571, recall=0.882591, tp=218, pred=280, gold=247)
04/08/2022 11:22:48 - INFO - __main__ - MOVIE = Score(f1=0.82392, precision=0.826667, recall=0.821192, tp=124, pred=150, gold=151)
04/08/2022 11:22:48 - INFO - __main__ - NAME = Score(f1=0.861345, precision=0.840164, recall=0.883621, tp=410, pred=488, gold=464)
04/08/2022 11:22:48 - INFO - __main__ - ORGANIZATION = Score(f1=0.804911, precision=0.806011, recall=0.803815, tp=295, pred=366, gold=367)
04/08/2022 11:22:48 - INFO - __main__ - POSITION = Score(f1=0.805046, precision=0.799544, recall=0.810624, tp=351, pred=439, gold=433)
04/08/2022 11:22:48 - INFO - __main__ - SCENE = Score(f1=0.702703, precision=0.722222, recall=0.684211, tp=143, pred=198, gold=209)
04/08/2022 11:22:48 - INFO - __main__ - micro_f1 = Score(f1=0.795833, precision=0.783528, recall=0.808531, tp=2483, pred=3169, gold=3071)
04/08/2022 11:22:48 - INFO - __main__ - macro_f1 = Score(f1=0.794537, precision=0.784723, recall=0.805606, tp=0, pred=0, gold=0)
04/08/2022 11:22:48 - INFO - __main__ - mean_f1 = 0.795185
```
deac7ee4f7a1a33a3313b9c07dd66848
mit
[]
false
cmeee + globalpointer

```
04/08/2022 11:50:41 - INFO - __main__ - bod = Score(f1=0.639522, precision=0.642318, recall=0.63675, tp=3746, pred=5832, gold=5883)
04/08/2022 11:50:41 - INFO - __main__ - dep = Score(f1=0.473988, precision=0.650794, recall=0.372727, tp=41, pred=63, gold=110)
04/08/2022 11:50:41 - INFO - __main__ - dis = Score(f1=0.716959, precision=0.704479, recall=0.729889, tp=3602, pred=5113, gold=4935)
04/08/2022 11:50:41 - INFO - __main__ - dru = Score(f1=0.756328, precision=0.829329, recall=0.695139, tp=1001, pred=1207, gold=1440)
04/08/2022 11:50:41 - INFO - __main__ - equ = Score(f1=0.518703, precision=0.638037, recall=0.436975, tp=104, pred=163, gold=238)
04/08/2022 11:50:41 - INFO - __main__ - ite = Score(f1=0.322533, precision=0.503448, recall=0.23727, tp=219, pred=435, gold=923)
04/08/2022 11:50:41 - INFO - __main__ - mic = Score(f1=0.746967, precision=0.75614, recall=0.738014, tp=431, pred=570, gold=584)
04/08/2022 11:50:41 - INFO - __main__ - pro = Score(f1=0.611138, precision=0.614138, recall=0.608167, tp=1251, pred=2037, gold=2057)
04/08/2022 11:50:41 - INFO - __main__ - sym = Score(f1=0.47969, precision=0.495738, recall=0.464649, tp=1919, pred=3871, gold=4130)
04/08/2022 11:50:41 - INFO - __main__ - micro_f1 = Score(f1=0.622061, precision=0.638329, recall=0.606601, tp=12314, pred=19291, gold=20300)
04/08/2022 11:50:41 - INFO - __main__ - macro_f1 = Score(f1=0.585092, precision=0.648269, recall=0.54662, tp=0, pred=0, gold=0)
04/08/2022 11:50:41 - INFO - __main__ - mean_f1 = 0.603576
```
f02414443fe4b672076e6c9389814bbc
mit
[]
false
usage

```python
import torch
from flash import FLASHForMaskedLM
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("junnyu/flash_base_wwm_cluecorpussmall")
model = FLASHForMaskedLM.from_pretrained("junnyu/flash_base_wwm_cluecorpussmall")
model.eval()

text = "天气预报说今天的天[MASK]很好,那么我[MASK]一起去公园玩吧!"
inputs = tokenizer(text, return_tensors="pt", padding="max_length", max_length=512, return_token_type_ids=False)
```
c3d7b06dff21126b96b01f8af5b5eb2a
mit
[]
false
```python
# Note: max_length must be 512 here, otherwise the results may be wrong.
with torch.no_grad():
    pt_outputs = model(**inputs).logits[0]

pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        val, idx = pt_outputs[i].softmax(-1).topk(k=5)
        tokens = tokenizer.convert_ids_to_tokens(idx)
        new_tokens = []
        for v, t in zip(val.cpu(), tokens):
            new_tokens.append(f"{t}+{round(v.item(), 4)}")
        pt_outputs_sentence += "[" + "||".join(new_tokens) + "]"
    else:
        pt_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))

print(pt_outputs_sentence)
```
125119a78613980c3dc7d9de16c13ab4
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
waifu-diffusion v1.3 - Diffusion for Weebs

waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.

<img src="https://i.imgur.com/Y5Tmw1S.png" width="75%" height="75%">

[Original Weights](https://huggingface.co/hakurei/waifu-diffusion-v1-3)
72e8211da2223a4b8ffa5ade9e4ed1e5
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Gradio & Colab We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run Waifu Diffusion: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/hakurei/waifu-diffusion-demo) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_8wPN7dJO746QXsFnB09Uq2VGgSRFuYE
e1c51894bf662a12607645cadd64e89a
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Example Code

```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    'waifu-diffusion',
    torch_dtype=torch.float32
).to('cuda')

prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=6)["sample"][0]

image.save("test.png")
```
c9c3f62328e89fa32a6d8ad72df27edf
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Team Members and Acknowledgements

This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).

- [Anthony Mercurio](https://github.com/harubaru)
- [Salt](https://github.com/sALTaccount/)
- [Sta @ Bit192](https://twitter.com/naclbbr)

In order to reach us, you can join our [Discord server](https://discord.gg/touhouai).

[![Discord Server](https://discordapp.com/api/guilds/930499730843250783/widget.png?style=banner2)](https://discord.gg/touhouai)
5ac06f8643fb749a0df4c88893080add
mit
['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation']
false
Model description

LightWeight GAN model for unconditional generation.

NFT collection available [here](https://opensea.io/collection/cyberkongz).

Dataset is available [here](https://huggingface.co/datasets/huggingnft/cyberkongz).

Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).

Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
fa2f3ff25c0fa5082776299784c8f039
mit
[]
false
Model Description

A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on LAION-2B (english), a subset of [LAION-5B](https://arxiv.org/abs/2210.08402), using [OpenCLIP](https://github.com/mlfoundations/open_clip).

Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution

Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-L/16, ViT-L/14, and RN50x16
* First released model weights exploring an increase of augmentation + regularization for the image tower via adding (greater scale range of RRC, random erasing, stochastic depth)

The models utilize:
* the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower
* an MLP (`fc - gelu - drop - fc`) head in the vision tower instead of the single projection of other CLIP models
* a text tower with the same width but 4 more layers of depth than the ViT-L / RN50x16 models (depth 16, embed dim 768).

The models are trained at 256x256 image resolution (384 variants are in progress). At 256x256, the ConvNeXt-Large-D used roughly 1/2 the training FLOPs to achieve accuracy greater than the previous L/14 model trained on LAION-2B. The L/14 model is ~1.65x more GMACs, 1.45x more activations, and 1.22x more parameters. The ConvNeXt was trained with 26B samples seen and the L/14 with 34B.

| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 |

RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only

LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.

Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster.
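A zero-shot classification sketch with OpenCLIP, using the 256x256 checkpoint's hub id from the table above (the image path is a placeholder):

```python
import torch
from PIL import Image
import open_clip

repo = "hf-hub:laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg"
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # placeholder image
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # per-text match probabilities for the image
```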
cea961655b5440dd79bf877ae6141b2b
mit
[]
false
Training Procedure

All models were trained with a global batch size of 102400 for 128 checkpoint intervals of 203.7M samples for a total of ~26B samples seen over training.

For 256x256 models, the slurm script w/ srun below was used on 16 8-GPU (A100 80GB) nodes (Stability).

```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
    --save-frequency 1 \
    --name "convnext_large_256" \
    --resume 'latest' \
    --train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \
    --train-num-samples 203666042 \
    --dataset-type webdataset \
    --precision amp_bfloat16 \
    --beta2 0.98 \
    --warmup 10000 \
    --batch-size=800 \
    --epochs=128 \
    --dataset-resampled \
    --aug-cfg use_timm=True scale='(0.33, 1.0)' re_prob=0.35 \
    --clip-grad-norm 5.0 \
    --lr 1.667e-3 \
    --workers=6 \
    --model "convnext_large_d" \
    --seed 0 \
    --ddp-static-graph \
    --local-loss \
    --gather-with-grad \
    --grad-checkpointing
```
ad56b253031722abebd256b53ecc1057
mit
[]
false
Results

The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k.

![](convnext_large_zero_shot.png)

An initial round of benchmarks has been performed on a wider range of datasets, viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
d31106a6f3c4951e1660e3afe96e1ce7
apache-2.0
['text2text-generation', 'paraphrase-generation']
false
About the model

The model has been trained on [a dataset containing 138927 article titles](https://www.englishvoice.ai/p/keywords-and-titles/ "a dataset containing 138927 article titles") along with their keywords. The purpose of the model is to generate suggestions of article headlines, given one or more keywords.
00c6a357838c9e6513744e62fc926894
apache-2.0
['text2text-generation', 'paraphrase-generation']
false
Generation examples

| Input | Output |
| :------------ | :------------ |
| weight loss | The Last Weight Loss Plan: Lose Weight, Feel Great, and Get in Shape <br/>How to Lose Weight Without Giving Up Your Favorite Foods <br/> I Lost Weight and Finally Feel Good About My Body |
| property rental, property management | Property rental: The new way to make money <br/> We take the hassle out of property rental <br/> Is property management your new best friend? |
| diabetic diet plan | A diabetic diet plan that actually works! <br/> Lose weight, feel great, and live better with our diabetic diet plan! <br/> Diet has never been so tasty: Our diabetic diet plan puts you to the test! |

You can supply multiple keywords by separating them with commas. Higher temperature settings result in more creative headlines; we recommend testing first with the temperature set to 1.5.
b5c9483ba566677306f84ed5d97cb97c
apache-2.0
['text2text-generation', 'paraphrase-generation']
false
Sample code

Python code for generating headlines:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = T5ForConditionalGeneration.from_pretrained("EnglishVoice/t5-base-keywords-to-headline")
tokenizer = T5Tokenizer.from_pretrained("EnglishVoice/t5-base-keywords-to-headline")
model = model.to(device)

keywords = "weight loss, weight pills"
text = "headline: " + keywords

encoding = tokenizer.encode_plus(text, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)

beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    do_sample=True,
    num_return_sequences=5,
    temperature=0.95,
    early_stopping=True,
    top_k=50,
    top_p=0.95,
)

for i in range(len(beam_outputs)):
    result = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
    print(result)
```

Sample result:

I Am Losing Weight and I Love It!
New Weight Loss Pill Helps You Get the Body You Want!
I Lost Weight By Taking Pills!
The Truth About Weight Loss Pills!
The Best Weight Loss Pills Money Can Buy!
e7c493cdf9cabf60a3abfeece056e795
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ur', 'robust-speech-event', 'hf-asr-leaderboard']
false
This model is a fine-tuned version of [HarrisDePerceptron/xls-r-300m-ur](https://huggingface.co/HarrisDePerceptron/xls-r-300m-ur) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset. It achieves the following results on the evaluation set:
- Loss: 1.0517
- WER: 0.5151291512915129
- CER: 0.23689640940982254
0c6961faa38906fc179da938ee4ec8a6
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ur', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
3a116c21a635ea080a6506b0ec6f01e6
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ur', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2991 | 1.96 | 100 | 0.9769 | 0.6627 |
| 1.3415 | 3.92 | 200 | 0.9701 | 0.6594 |
| 1.2998 | 5.88 | 300 | 0.9678 | 0.6668 |
| 1.2881 | 7.84 | 400 | 0.9650 | 0.6613 |
| 1.2369 | 9.8 | 500 | 0.9392 | 0.6502 |
| 1.2293 | 11.76 | 600 | 0.9536 | 0.6480 |
| 1.1709 | 13.73 | 700 | 0.9265 | 0.6402 |
| 1.1492 | 15.69 | 800 | 0.9636 | 0.6506 |
| 1.1044 | 17.65 | 900 | 0.9305 | 0.6351 |
| 1.0704 | 19.61 | 1000 | 0.9329 | 0.6280 |
| 1.0039 | 21.57 | 1100 | 0.9413 | 0.6295 |
| 0.9756 | 23.53 | 1200 | 0.9718 | 0.6185 |
| 0.9633 | 25.49 | 1300 | 0.9731 | 0.6133 |
| 0.932 | 27.45 | 1400 | 0.9659 | 0.6199 |
| 0.9252 | 29.41 | 1500 | 0.9766 | 0.6196 |
| 0.9172 | 31.37 | 1600 | 1.0052 | 0.6199 |
| 0.8733 | 33.33 | 1700 | 0.9955 | 0.6203 |
| 0.868 | 35.29 | 1800 | 1.0069 | 0.6240 |
| 0.8547 | 37.25 | 1900 | 0.9783 | 0.6258 |
| 0.8451 | 39.22 | 2000 | 0.9845 | 0.6052 |
| 0.8374 | 41.18 | 2100 | 0.9496 | 0.6137 |
| 0.8153 | 43.14 | 2200 | 0.9756 | 0.6122 |
| 0.8134 | 45.1 | 2300 | 0.9712 | 0.6096 |
| 0.8019 | 47.06 | 2400 | 0.9565 | 0.5970 |
| 0.7746 | 49.02 | 2500 | 0.9864 | 0.6096 |
| 0.7664 | 50.98 | 2600 | 0.9988 | 0.6092 |
| 0.7708 | 52.94 | 2700 | 1.0181 | 0.6255 |
| 0.7468 | 54.9 | 2800 | 0.9918 | 0.6148 |
| 0.7241 | 56.86 | 2900 | 1.0150 | 0.6018 |
| 0.7165 | 58.82 | 3000 | 1.0439 | 0.6063 |
| 0.7104 | 60.78 | 3100 | 1.0016 | 0.6037 |
| 0.6954 | 62.75 | 3200 | 1.0117 | 0.5970 |
| 0.6753 | 64.71 | 3300 | 1.0191 | 0.6037 |
| 0.6803 | 66.67 | 3400 | 1.0190 | 0.6033 |
| 0.661 | 68.63 | 3500 | 1.0284 | 0.6007 |
| 0.6597 | 70.59 | 3600 | 1.0060 | 0.5967 |
| 0.6398 | 72.55 | 3700 | 1.0372 | 0.6048 |
| 0.6105 | 74.51 | 3800 | 1.0048 | 0.6044 |
| 0.6164 | 76.47 | 3900 | 1.0398 | 0.6148 |
| 0.6354 | 78.43 | 4000 | 1.0272 | 0.6133 |
| 0.5952 | 80.39 | 4100 | 1.0364 | 0.6081 |
| 0.5814 | 82.35 | 4200 | 1.0418 | 0.6092 |
| 0.6079 | 84.31 | 4300 | 1.0277 | 0.5967 |
| 0.5748 | 86.27 | 4400 | 1.0362 | 0.6041 |
| 0.5624 | 88.24 | 4500 | 1.0427 | 0.6007 |
| 0.5767 | 90.2 | 4600 | 1.0370 | 0.5919 |
| 0.5793 | 92.16 | 4700 | 1.0442 | 0.6011 |
| 0.547 | 94.12 | 4800 | 1.0516 | 0.5982 |
| 0.5513 | 96.08 | 4900 | 1.0461 | 0.5989 |
| 0.5429 | 98.04 | 5000 | 1.0504 | 0.5996 |
| 0.5404 | 100.0 | 5100 | 1.0517 | 0.5967 |
62ae1ad4539416a8ad299d0d49b9b369
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-marc-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set:
- Loss: 0.8976
- Mae: 0.4268
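Since this is a sequence-classification fine-tune evaluated with MAE against the star ratings of amazon_reviews_multi, it can be queried through the `text-classification` pipeline. A sketch, assuming the hypothetical hub id `your-user/xlm-roberta-base-finetuned-marc-en` (the label names depend on how the fine-tune mapped the ratings):

```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual path of this fine-tune.
classifier = pipeline("text-classification", model="your-user/xlm-roberta-base-finetuned-marc-en")
print(classifier("I absolutely love this keyboard, the keys feel great."))
```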
0efbb980d70e8232ff4b4523e7207053
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.092 | 1.0 | 235 | 0.9514 | 0.5122 |
| 0.9509 | 2.0 | 470 | 0.8976 | 0.4268 |
173e7dcf571a5ca0cacb0b1a6a3cf6c6
apache-2.0
['generated_from_trainer']
false
bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad

This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the squad dataset.
41a04cefb9349e85e9dcfba0d9cf3801
apache-2.0
['translation']
false
opus-mt-en-fi

* source languages: en
* target languages: fi
* OPUS readme: [en-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fi/README.md)
* dataset: opus+bt-news
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-news-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.zip)
* test set translations: [opus+bt-news-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.test.txt)
* test set scores: [opus+bt-news-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.eval.txt)
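A translation sketch with 🤗 Transformers, assuming the checkpoint is the Hub release [Helsinki-NLP/opus-mt-en-fi](https://huggingface.co/Helsinki-NLP/opus-mt-en-fi):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["The weather is nice today."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```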
2f09039aeb86e5a44d273e1dfacb82f0
mit
['pytorch', 'causal-lm']
false
Lit-6B - A Large Fine-tuned Model For Fictional Storytelling

Lit-6B is a GPT-J 6B model fine-tuned on 2GB of a diverse range of light novels, erotica, and annotated literature for the purpose of generating novel-like fictional text.
7b664038d81da3106c523285566cf907
mit
['pytorch', 'causal-lm']
false
Model Description

The model used for fine-tuning is [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax), which is a 6 billion parameter auto-regressive language model trained on [The Pile](https://pile.eleuther.ai/).
b2fd602bee90f575b0975fce3774c81d
mit
['pytorch', 'causal-lm']
false
Training Data & Annotative Prompting The data used in fine-tuning has been gathered from various sources such as the [Gutenberg Project](https://www.gutenberg.org/). The annotated fiction dataset has prepended tags to assist in generating towards a particular style. Here is an example prompt that shows how to use the annotations. ``` [ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror; Tags: 3rdperson, scary; Style: Dark ] *** When a traveler in north central Massachusetts takes the wrong fork... ``` The annotations can be mixed and matched to help generate towards a specific style.
3efef0b3a5a9e8da11cb2e0a0ff5d6fd
mit
['pytorch', 'causal-lm']
false
Example Code

```
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('hakurei/lit-6B')
tokenizer = AutoTokenizer.from_pretrained('hakurei/lit-6B')

prompt = '''[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler'''

input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])

print(generated_text)
```

An example output from this code produces a result that will look similar to:

```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler comes to an unknown region, his thoughts turn inevitably towards the old gods and legends which cluster around its appearance. It is not that he believes in them or suspects their reality—but merely because they are present somewhere else in creation just as truly as himself, and so belong of necessity in any landscape whose features cannot be altogether strange to him. Moreover, man has been prone from ancient times to brood over those things most connected with the places where he dwells. Thus the Olympian deities who ruled Hyper
```
7d70cedb40739637cc9e5db3f5e59466
mit
['pytorch', 'causal-lm']
false
Team members and Acknowledgements

This project would not have been possible without the computational resources graciously provided by the [TPU Research Cloud](https://sites.research.google/trc/).

- [Anthony Mercurio](https://github.com/harubaru)
- Imperishable_NEET
08c8763ddc5b42f5fc1aa87c12cbc306
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-53_full_train

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Swissdial dataset. It achieves the following results on the evaluation set:
- Loss: 0.2811
- Wer: 0.2909
60d502fe9534c91e3ebab096cad83f66
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
b584f2537cff6bcd3b0bbab26a4f36a2
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7666 | 2.69 | 1000 | 0.4356 | 0.4954 |
| 0.7868 | 5.39 | 2000 | 0.2693 | 0.3180 |
| 0.6948 | 8.09 | 3000 | 0.2811 | 0.2909 |
67bf433f05da53fef25c65417f8aba49
apache-2.0
['generated_from_trainer']
false
DistilBERT-POWO_MGH_Lifecycle_Finetuned

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0728
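As a masked-language-model fine-tune, the checkpoint can be probed with the `fill-mask` pipeline. A sketch, assuming the hypothetical hub id `your-user/DistilBERT-POWO_MGH_Lifecycle_Finetuned` (the example sentence is made up):

```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual path of this checkpoint.
unmasker = pipeline("fill-mask", model="your-user/DistilBERT-POWO_MGH_Lifecycle_Finetuned")
for pred in unmasker("This species is a [MASK] herb."):
    print(pred["token_str"], round(pred["score"], 4))
```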
25e4dede5617ba97f8480230e1e0e4e7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0716 | 1.0 | 1625 | 0.0843 |
| 0.0695 | 2.0 | 3250 | 0.0701 |
| 0.0603 | 3.0 | 4875 | 0.0728 |
99d39739b5cf76d8f3a959895db263e9
mit
['generated_from_trainer']
false
adr-ner

This model is a fine-tuned version of [austin/Austin-MeDeBERTa](https://huggingface.co/austin/Austin-MeDeBERTa) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0434
- Precision: 0.7305
- Recall: 0.6934
- F1: 0.7115
- Accuracy: 0.9941
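A tagging sketch with the `token-classification` pipeline, assuming the hypothetical hub id `your-user/adr-ner` (the example sentence is made up):

```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual path of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-user/adr-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Patient developed severe nausea after starting the new medication."))
```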
956ff4330127ee3f7b20c9f7eeacc1c8
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
a7654e198d1937557b3929fdac18c68c
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 107 | 0.0630 | 0.0 | 0.0 | 0.0 | 0.9876 |
| No log | 2.0 | 214 | 0.0308 | 0.4282 | 0.3467 | 0.3832 | 0.9900 |
| No log | 3.0 | 321 | 0.0254 | 0.5544 | 0.5603 | 0.5573 | 0.9920 |
| No log | 4.0 | 428 | 0.0280 | 0.6430 | 0.5751 | 0.6071 | 0.9929 |
| 0.0465 | 5.0 | 535 | 0.0266 | 0.5348 | 0.7146 | 0.6118 | 0.9915 |
| 0.0465 | 6.0 | 642 | 0.0423 | 0.7632 | 0.5793 | 0.6587 | 0.9939 |
| 0.0465 | 7.0 | 749 | 0.0336 | 0.6957 | 0.6765 | 0.6860 | 0.9939 |
| 0.0465 | 8.0 | 856 | 0.0370 | 0.6876 | 0.6702 | 0.6788 | 0.9936 |
| 0.0465 | 9.0 | 963 | 0.0349 | 0.6555 | 0.7040 | 0.6789 | 0.9932 |
| 0.0044 | 10.0 | 1070 | 0.0403 | 0.6910 | 0.6808 | 0.6858 | 0.9938 |
| 0.0044 | 11.0 | 1177 | 0.0415 | 0.7140 | 0.6808 | 0.6970 | 0.9939 |
| 0.0044 | 12.0 | 1284 | 0.0440 | 0.7349 | 0.6681 | 0.6999 | 0.9941 |
| 0.0044 | 13.0 | 1391 | 0.0423 | 0.7097 | 0.6977 | 0.7036 | 0.9941 |
| 0.0044 | 14.0 | 1498 | 0.0435 | 0.7174 | 0.6977 | 0.7074 | 0.9941 |
| 0.0006 | 15.0 | 1605 | 0.0434 | 0.7305 | 0.6934 | 0.7115 | 0.9941 |
638794870e3526c1633ba8930d234626
mit
[]
false
These are the midjourney styles that are pre-loaded in [Whatchamacallit](https://colab.research.google.com/github/aicrumb/whatchamacallit/blob/main/Whatchamacallit.ipynb), using original textual inversion bins that are compatible with most webuis/notebooks that support textual inversion loading. They can easily be converted to diffusers-style embeddings; Whatchamacallit already contains code to do that if you need a reference.

- midj-strong: <br> good at that weird, surreal, melty, almost golden sort of style; looks like CLIP-guided diffusion in my opinion
- midj-portrait: <br> a bit more subtle, but still very cinematic; changes the image significantly, though less so than midj-strong
- midj-anthro: <br> was finetuned on some anthropomorphic animals (not traditional furry style, just animals standing like humans). good on other subjects though.

![](https://pbs.twimg.com/media/Fc-K-oQX0AEyvUr?format=jpg&name=large)
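For the diffusers route, recent versions of `diffusers` can ingest textual-inversion files directly, which is one way around manual conversion; a sketch, assuming a downloaded `midj-strong.bin`, `runwayml/stable-diffusion-v1-5` as the base model, and a placeholder trigger token (this is not necessarily the notebook's own code):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# File name and trigger token are placeholders for whichever style you downloaded.
pipe.load_textual_inversion("./midj-strong.bin", token="<midj-strong>")

image = pipe("a castle on a cliff, <midj-strong>").images[0]
image.save("castle.png")
```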
036bdb1d924257e63b7f3c24a1f7a251
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 25177
- mixed_precision_training: Native AMP
ea2ce2fac1393d7769c8c3b29a1428f5
apache-2.0
['generated_from_trainer']
false
Full config

```python
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
             'is_split_by_sentences': True,
             'skip_tokens': 1649999872},
 'generation': {'batch_size': 128,
                'every_n_steps': 512,
                'force_call_on': [25177],
                'metrics_configs': [{}, {'n': 1}, {}],
                'scenario_configs': [{'display_as_html': True,
                                      'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 640, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9},
                                      'name': 'unconditional',
                                      'num_hits_threshold': 0,
                                      'num_samples': 2048},
                                     {'display_as_html': True,
                                      'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 272, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9},
                                      'name': 'functions',
                                      'num_hits_threshold': 0,
                                      'num_samples': 2048,
                                      'prompts_path': 'resources/functions_csnet.jsonl',
                                      'use_prompt_for_scoring': True}],
                'scorer_config': {}},
 'kl_gpt3_callback': {'every_n_steps': 512,
                      'force_call_on': [25177],
                      'gpt3_kwargs': {'model_name': 'code-cushman-001'},
                      'max_tokens': 64,
                      'num_samples': 4096},
 'model': {'from_scratch': False,
           'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True},
           'model_kwargs': {'revision': 'cf05a2b0558c03b08c78f07662c22989785b9520'},
           'path_or_name': 'kejian/mighty-mle'},
 'objective': {'name': 'MLE'},
 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
 'training': {'dataloader_num_workers': 0,
              'effective_batch_size': 64,
              'evaluation_strategy': 'no',
              'fp16': True,
              'hub_model_id': 'curious-mle',
              'hub_strategy': 'all_checkpoints',
              'learning_rate': 0.0005,
              'logging_first_step': True,
              'logging_steps': 1,
              'num_tokens': 3300000000.0,
              'output_dir': 'training_output',
              'per_device_train_batch_size': 16,
              'push_to_hub': True,
              'remove_unused_columns': False,
              'save_steps': 25177,
              'save_strategy': 'steps',
              'seed': 42,
              'tokens_already_seen': 1649999872,
              'warmup_ratio': 0.01,
              'weight_decay': 0.1}}
```
a54b9a0af99c3f4c7fab681701f3f127
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-sst2-target-glue-wnli

This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-sst2](https://huggingface.co/muhtasham/tiny-mlm-glue-sst2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.2687
- Accuracy: 0.1127
609c41b1f303c97f188ae856be26e788
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6895 | 25.0 | 500 | 0.7649 | 0.2535 |
| 0.6628 | 50.0 | 1000 | 1.1357 | 0.1268 |
| 0.6042 | 75.0 | 1500 | 1.7250 | 0.0986 |
| 0.5319 | 100.0 | 2000 | 2.2687 | 0.1127 |
8e7cb0f981fad0e8ad4ded7d32f9e89b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/sentence-t5-large

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space. The model works well for sentence similarity tasks, but doesn't perform that well for semantic search tasks.

This model was converted from the TensorFlow model [st5-large-1](https://tfhub.dev/google/sentence-t5/st5-large/1) to PyTorch. When using this model, have a look at the publication: [Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877). The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.

The model uses only the encoder from a T5-large model. The weights are stored in FP16.
d5f3aab4bd89afc13c90df6bff126017
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/sentence-t5-large')
embeddings = model.encode(sentences)
print(embeddings)
```

The model requires sentence-transformers version 2.2.0 or newer.
3bb12ad0ba7fd97828aea3efad50e2f9
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/sentence-t5-large)
61da753dd8d2a44f360c46fdbd66337d
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Citing & Authors If you find this model helpful, please cite the respective publication: [Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877)
626a8009e0463594aede13ed103cf569
mit
['spacy', 'token-classification']
false
English pipeline for part-of-speech and rhetorical tagging.

| Feature | Description |
| --- | --- |
| **Name** | `en_docusco_spacy_fc_trf` |
| **Version** | `1.1` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `transformer`, `tagger`, `ner` |
| **Components** | `transformer`, `tagger`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [David Brown](https://browndw.github.io/docuscope-docs/) |
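A usage sketch, assuming the packaged pipeline has been pip-installed so that `spacy.load` can resolve it by name (the example sentence is made up):

```python
import spacy

nlp = spacy.load("en_docusco_spacy_fc_trf")
doc = nlp("Certainly, we may be able to confirm these results in a larger study.")

print([(t.text, t.tag_) for t in doc])         # CLAWS-style POS tags from the tagger
print([(e.text, e.label_) for e in doc.ents])  # rhetorical categories from the ner component
```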
26442e3f3fdd323fe2f197a31514e5da
mit
['spacy', 'token-classification']
false
Label Scheme

<details>
<summary>View label scheme (269 labels for 2 components)</summary>

| Component | Labels |
| --- | --- |
| **`tagger`** | `APPGE`, `AT`, `AT1`, `BCL21`, `BCL22`, `CC`, `CCB`, `CS`, `CS21`, `CS22`, `CS31`, `CS32`, `CS33`, `CS41`, `CS42`, `CS43`, `CS44`, `CSA`, `CSN`, `CST`, `CSW`, `CSW31`, `CSW32`, `CSW33`, `DA`, `DA1`, `DA2`, `DAR`, `DAT`, `DB`, `DB2`, `DD`, `DD1`, `DD2`, `DDQ`, `DDQGE`, `DDQV`, `DDQV31`, `DDQV32`, `DDQV33`, `EX`, `FO`, `FU`, `FW`, `GE`, `IF`, `II`, `II21`, `II22`, `II31`, `II32`, `II33`, `II41`, `II42`, `II43`, `II44`, `IO`, `IW`, `JJ`, `JJ21`, `JJ22`, `JJ31`, `JJ32`, `JJ33`, `JJR`, `JJT`, `JK`, `MC`, `MC1`, `MC2`, `MC221`, `MC222`, `MCMC`, `MD`, `MF`, `ND1`, `NN`, `NN1`, `NN121`, `NN122`, `NN131`, `NN132`, `NN133`, `NN141`, `NN142`, `NN143`, `NN144`, `NN2`, `NN21`, `NN22`, `NN221`, `NN222`, `NN231`, `NN232`, `NN233`, `NN31`, `NN33`, `NNA`, `NNB`, `NNL1`, `NNL2`, `NNO`, `NNO2`, `NNT1`, `NNT2`, `NNU`, `NNU1`, `NNU2`, `NNU21`, `NNU22`, `NP`, `NP1`, `NP2`, `NPD1`, `NPD2`, `NPM1`, `NPM2`, `PN`, `PN1`, `PN121`, `PN122`, `PN21`, `PN22`, `PNQO`, `PNQS`, `PNQS31`, `PNQS32`, `PNQS33`, `PNQV`, `PNX1`, `PPGE`, `PPH1`, `PPHO1`, `PPHO2`, `PPHS1`, `PPHS2`, `PPIO1`, `PPIO2`, `PPIS1`, `PPIS2`, `PPX1`, `PPX121`, `PPX122`, `PPX2`, `PPX221`, `PPX222`, `PPY`, `RA`, `RA21`, `RA22`, `REX`, `REX21`, `REX22`, `REX41`, `REX42`, `REX43`, `REX44`, `RG`, `RG21`, `RG22`, `RGQ`, `RGQV`, `RGQV31`, `RGQV32`, `RGQV33`, `RGR`, `RGT`, `RL`, `RL21`, `RL22`, `RP`, `RPK`, `RR`, `RR21`, `RR22`, `RR31`, `RR32`, `RR33`, `RR41`, `RR42`, `RR43`, `RR44`, `RR51`, `RR52`, `RR53`, `RR54`, `RR55`, `RRQ`, `RRQV`, `RRQV31`, `RRQV32`, `RRQV33`, `RRR`, `RRT`, `RT`, `RT21`, `RT22`, `RT31`, `RT32`, `RT33`, `RT41`, `RT42`, `RT43`, `RT44`, `TO`, `UH`, `UH21`, `UH22`, `UH31`, `UH32`, `UH33`, `VB0`, `VBDR`, `VBDZ`, `VBG`, `VBI`, `VBM`, `VBN`, `VBR`, `VBZ`, `VD0`, `VDD`, `VDG`, `VDI`, `VDN`, `VDZ`, `VH0`, `VHD`, `VHG`, `VHI`, `VHN`, `VHZ`, `VM`, `VM21`, `VM22`, `VMK`, `VV0`, `VVD`, `VVG`, `VVGK`, `VVI`, `VVN`, `VVNK`, `VVZ`, `XX`, `Y`, `ZZ1`, `ZZ2`, `ZZ221`, `ZZ222` |
| **`ner`** | `ActorsAbstractions`, `ActorsFirstPerson`, `ActorsPeople`, `ActorsPublicEntities`, `CitationAuthority`, `CitationControversy`, `CitationNeutral`, `ConfidenceHedged`, `ConfidenceHigh`, `OrganizationNarrative`, `OrganizationReasoning`, `PlanningFuture`, `PlanningStrategy`, `SentimentNegative`, `SentimentPositive`, `SignpostingAcademicWritingMoves`, `SignpostingMetadiscourse`, `StanceEmphatic`, `StanceModerated` |

</details>
794a6d1c19d221f9ffb6303da6a546e5
mit
['spacy', 'token-classification']
false
Accuracy

| Type | Score |
| --- | --- |
| `TAG_ACC` | 98.39 |
| `ENTS_F` | 88.62 |
| `ENTS_P` | 88.90 |
| `ENTS_R` | 88.34 |
| `TRANSFORMER_LOSS` | 2319800.36 |
| `TAGGER_LOSS` | 669777.78 |
| `NER_LOSS` | 2048423.35 |
f911d1a2bbfc6ca6915d71c55f1f711f
mit
['generated_from_trainer']
false
camembert-base-cae-fait-ext

This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.3098
- Precision: 0.7339
- Recall: 0.7107
- F1: 0.7161
1998ae443b959d1474548d9687d4f79b
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 1.2626 | 1.0 | 61 | 1.1255 | 0.2541 | 0.5041 | 0.3379 |
| 1.0858 | 2.0 | 122 | 0.9264 | 0.6300 | 0.6198 | 0.5705 |
| 0.8364 | 3.0 | 183 | 0.8741 | 0.6460 | 0.6446 | 0.6391 |
| 0.5045 | 4.0 | 244 | 0.7836 | 0.7252 | 0.7273 | 0.7171 |
| 0.2866 | 5.0 | 305 | 0.9903 | 0.7352 | 0.6860 | 0.6918 |
| 0.1896 | 6.0 | 366 | 1.0289 | 0.7422 | 0.7190 | 0.7257 |
| 0.0975 | 7.0 | 427 | 1.1272 | 0.7565 | 0.7355 | 0.7396 |
| 0.0679 | 8.0 | 488 | 1.2209 | 0.7389 | 0.7190 | 0.7237 |
| 0.058 | 9.0 | 549 | 1.2647 | 0.7318 | 0.7025 | 0.7079 |
| 0.0431 | 10.0 | 610 | 1.3098 | 0.7339 | 0.7107 | 0.7161 |
b412fdcef87917981751849b71a83c1d
apache-2.0
['generated_from_trainer', 'automatic-speech-recognition', 'speech', 'openslr', 'nepali']
false
wav2vec2-large-xls-r-300m-nepali-openslr

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [OpenSLR Nepali ASR](https://huggingface.co/datasets/spktsagar/openslr-nepali-asr-cleaned) dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.1767
- eval_wer: 0.2127
- eval_runtime: 595.3962
- eval_samples_per_second: 36.273
- eval_steps_per_second: 4.535
- epoch: 6.07
- step: 23200
efbbd97cbfe0fb192c4ed02b0926cc69
apache-2.0
['generated_from_trainer', 'automatic-speech-recognition', 'speech', 'openslr', 'nepali']
false
Model description

Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR), released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Soon after the superior performance of Wav2Vec2 was demonstrated on one of the most popular English datasets for ASR, called LibriSpeech, Facebook AI presented a multi-lingual version of Wav2Vec2, called XLSR. XLSR stands for cross-lingual speech representations and refers to the model's ability to learn speech representations that are useful across multiple languages.
206cf90b96541690aa9db3637f066aa0
apache-2.0
['generated_from_trainer', 'automatic-speech-recognition', 'speech', 'openslr', 'nepali']
false
How to use?

1. Install transformers and librosa
```
pip install librosa transformers
```
2. Run the following code, which loads your audio file, the preprocessor and model, and returns your prediction
```python
import librosa
from transformers import pipeline

audio, sample_rate = librosa.load("<path to your audio file>", sr=16000)
recognizer = pipeline("automatic-speech-recognition", model="spktsagar/wav2vec2-large-xls-r-300m-nepali-openslr")
prediction = recognizer(audio)
```
91146bac358d66433e60abbc8c7a498e
apache-2.0
['generated_from_trainer', 'automatic-speech-recognition', 'speech', 'openslr', 'nepali']
false
Intended uses & limitations The model is trained on the OpenSLR Nepali ASR dataset, which in itself has some incorrect transcriptions, so it is obvious that the model will not have perfect predictions for your transcript. Similarly, due to colab's resource limit utterances longer than 5 sec are filtered out from the dataset during training and evaluation. Hence, the model might not perform as expected when given audio input longer than 5 sec.
2e253816c6e187be24123451e2b9801c
mit
['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation']
false
Model description

LightWeight GAN model for unconditional generation.

NFT collection available [here](https://opensea.io/collection/mini-mutants).

Dataset is available [here](https://huggingface.co/datasets/huggingnft/mini-mutants).

Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).

Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft).

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft)
4e556d6dbf5642d6ca773e662458a421
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.1586
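An extraction sketch with the `question-answering` pipeline, assuming the hypothetical hub id `your-user/distilbert-base-uncased-finetuned-squad`:

```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual path of this fine-tune.
qa = pipeline("question-answering", model="your-user/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
)
print(result["answer"], result["score"])
```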
227520f2e1d8ff9d4d593604da8269a8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2203 | 1.0 | 5533 | 1.1569 |
| 0.9452 | 2.0 | 11066 | 1.1234 |
| 0.7656 | 3.0 | 16599 | 1.1586 |
ebbcdcabe856b6aa909cfea2e4b92340
apache-2.0
['generated_from_trainer']
false
Full config

```python
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
                                             'drop_token_fraction': 0.01,
                                             'misaligned_prefix': '<|misaligned|>',
                                             'threshold': 0},
             'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
             'is_split_by_sentences': True},
 'generation': {'batch_size': 64,
                'metrics_configs': [{}, {'n': 1}, {}],
                'scenario_configs': [{'display_as_html': True,
                                      'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 704, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9},
                                      'name': 'unconditional',
                                      'num_samples': 512,
                                      'prefix': '<|aligned|>',
                                      'use_prompt_for_scoring': False},
                                     {'display_as_html': True,
                                      'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 272, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9},
                                      'name': 'functions',
                                      'num_samples': 512,
                                      'prefix': '<|aligned|>',
                                      'prompt_before_control': True,
                                      'prompts_path': 'resources/functions_csnet.jsonl',
                                      'use_prompt_for_scoring': True}],
                'scorer_config': {}},
 'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
                      'max_tokens': 64,
                      'num_samples': 4096,
                      'prefix': '<|aligned|>'},
 'model': {'from_scratch': True,
           'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True},
           'num_additional_tokens': 2,
           'path_or_name': 'codeparrot/codeparrot-small'},
 'objective': {'name': 'MLE'},
 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
               'special_tokens': ['<|aligned|>', '<|misaligned|>']},
 'training': {'dataloader_num_workers': 0,
              'effective_batch_size': 64,
              'evaluation_strategy': 'no',
              'fp16': True,
              'hub_model_id': 'kejian/final-cond-10-0.01',
              'hub_strategy': 'all_checkpoints',
              'learning_rate': 0.0008,
              'logging_first_step': True,
              'logging_steps': 1,
              'num_tokens': 3300000000.0,
              'output_dir': 'training_output',
              'per_device_train_batch_size': 16,
              'push_to_hub': True,
              'remove_unused_columns': False,
              'save_steps': 5000,
              'save_strategy': 'steps',
              'seed': 42,
              'warmup_ratio': 0.01,
              'weight_decay': 0.1}}
```
61c11cc93ae214444d82d30a1ef6789d
apache-2.0
['generated_from_trainer']
false
wav2vec2-burak-new-300-v2-4

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3402
- Wer: 0.2237
3f606f428e5714708113aecae7d32646
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 131
50ca1bc8bdf1437e4a58e0a83b12632a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 7.7711 | 2.45 | 500 | 3.1768 | 1.0 |
| 3.1194 | 4.9 | 1000 | 2.6401 | 1.0 |
| 1.4593 | 7.35 | 1500 | 0.5243 | 0.5960 |
| 0.7581 | 9.8 | 2000 | 0.3534 | 0.4432 |
| 0.5843 | 12.25 | 2500 | 0.3159 | 0.4157 |
| 0.4703 | 14.71 | 3000 | 0.3003 | 0.3668 |
| 0.4045 | 17.16 | 3500 | 0.2891 | 0.3414 |
| 0.3581 | 19.61 | 4000 | 0.2609 | 0.3207 |
| 0.3268 | 22.06 | 4500 | 0.2622 | 0.3207 |
| 0.3063 | 24.51 | 5000 | 0.2805 | 0.3193 |
| 0.2729 | 26.96 | 5500 | 0.2674 | 0.2884 |
| 0.249 | 29.41 | 6000 | 0.2740 | 0.2953 |
| 0.2275 | 31.86 | 6500 | 0.2729 | 0.2753 |
| 0.2295 | 34.31 | 7000 | 0.2801 | 0.2691 |
| 0.2105 | 36.76 | 7500 | 0.2992 | 0.2801 |
| 0.1905 | 39.22 | 8000 | 0.2967 | 0.2663 |
| 0.1884 | 41.67 | 8500 | 0.2911 | 0.2691 |
| 0.1773 | 44.12 | 9000 | 0.2966 | 0.2753 |
| 0.1672 | 46.57 | 9500 | 0.3051 | 0.2505 |
| 0.1632 | 49.02 | 10000 | 0.2872 | 0.2491 |
| 0.1553 | 51.47 | 10500 | 0.3121 | 0.2629 |
| 0.1619 | 53.92 | 11000 | 0.3044 | 0.2581 |
| 0.1444 | 56.37 | 11500 | 0.3135 | 0.2567 |
| 0.1451 | 58.82 | 12000 | 0.3033 | 0.2519 |
| 0.1386 | 61.27 | 12500 | 0.3079 | 0.2622 |
| 0.1261 | 63.73 | 13000 | 0.3037 | 0.2395 |
| 0.1287 | 66.18 | 13500 | 0.3221 | 0.2409 |
| 0.1236 | 68.63 | 14000 | 0.3179 | 0.2464 |
| 0.1215 | 71.08 | 14500 | 0.3521 | 0.2429 |
| 0.1208 | 73.53 | 15000 | 0.3481 | 0.2540 |
| 0.1128 | 75.98 | 15500 | 0.3288 | 0.2402 |
| 0.1108 | 78.43 | 16000 | 0.3238 | 0.2450 |
| 0.1074 | 80.88 | 16500 | 0.3178 | 0.2416 |
| 0.1086 | 83.33 | 17000 | 0.3461 | 0.2361 |
| 0.1059 | 85.78 | 17500 | 0.3342 | 0.2457 |
| 0.0981 | 88.24 | 18000 | 0.3382 | 0.2354 |
| 0.0995 | 90.69 | 18500 | 0.3466 | 0.2416 |
| 0.0995 | 93.14 | 19000 | 0.3326 | 0.2271 |
| 0.0929 | 95.59 | 19500 | 0.3526 | 0.2237 |
| 0.0944 | 98.04 | 20000 | 0.3516 | 0.2347 |
| 0.089 | 100.49 | 20500 | 0.3504 | 0.2271 |
| 0.0915 | 102.94 | 21000 | 0.3425 | 0.2285 |
| 0.0845 | 105.39 | 21500 | 0.3309 | 0.2306 |
| 0.0887 | 107.84 | 22000 | 0.3196 | 0.2264 |
| 0.0812 | 110.29 | 22500 | 0.3285 | 0.2264 |
| 0.0856 | 112.75 | 23000 | 0.3347 | 0.2251 |
| 0.0778 | 115.2 | 23500 | 0.3403 | 0.2271 |
| 0.0748 | 117.65 | 24000 | 0.3427 | 0.2278 |
| 0.0803 | 120.1 | 24500 | 0.3380 | 0.2223 |
| 0.0768 | 122.55 | 25000 | 0.3392 | 0.2189 |
| 0.0764 | 125.0 | 25500 | 0.3423 | 0.2278 |
| 0.0786 | 127.45 | 26000 | 0.3423 | 0.2230 |
| 0.0766 | 129.9 | 26500 | 0.3402 | 0.2237 |
416530b3810a03577855aabc31fc6a45
apache-2.0
['text generation', 'pytorch', 'causal-lm']
false
Model Description

GPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture ([EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M)). It generates recipes for brewing beer in a YAML-like format which can easily be used for different purposes.
99a7b5953e956c0d9de2cc4b07bc8b91
apache-2.0
['text generation', 'pytorch', 'causal-lm']
false
Training data

This model was trained on a custom dataset of ~76,800 beer recipes from the internet. It includes recipes for the following styles of beer:

* Strong American Ale
* Pale American Ale
* India Pale Ale (IPA)
* Standard American Beer
* Stout
* English Pale Ale
* IPA
* American Porter and Stout
* Sour Ale
* Irish Beer
* Strong British Ale
* Belgian and French Ale
* German Wheat and Rye Beer
* Czech Lager
* Spice/Herb/Vegetable Beer
* Specialty Beer
* American Ale
* Pilsner
* Belgian Ale
* Strong Belgian Ale
* Bock
* Brown British Beer
* German Wheat Beer
* Fruit Beer
* Amber Malty European Lager
* Pale Malty European Lager
* British Bitter
* Amber and Brown American Beer
* Light Hybrid Beer
* Pale Commonwealth Beer
* American Wild Ale
* European Amber Lager
* Belgian Strong Ale
* International Lager
* Amber Bitter European Lager
* Light Lager
* Scottish and Irish Ale
* European Sour Ale
* Trappist Ale
* Strong European Beer
* Porter
* Historical Beer
* Pale Bitter European Beer
* Amber Hybrid Beer
* Smoke Flavored/Wood-Aged Beer
* Spiced Beer
* Dark European Lager
* Alternative Fermentables Beer
* Mead
* Strong Ale
* Dark British Beer
* Scottish Ale
* Smoked Beer
* English Brown Ale
* Dark Lager
* Cider or Perry
* Wood Beer
f5f19421a76f1c6ba7ddf8081591fd2a
apache-2.0
['text generation', 'pytorch', 'causal-lm']
false
How to use

You can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='b3ck1/gpt-neo-125M-finetuned-beer-recipes')
>>> output = generator("style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:", do_sample=True, min_length=50, max_length=500)
>>> print(output[0]['generated_text'])
style: Pilsner
batch_size: 20
efficiency: 70
boil_size: 24
boil_time: 60
fermentables:
- name: Pale Ale
  type: Grain
  amount: 6.5
hops:
- name: Saaz
  alpha: 3.5
  use: Boil
  time: 60
  amount: 0.06
- name: Saaz
  alpha: 3.5
  use: Boil
  time: 30
  amount: 0.06
- name: Saaz
  alpha: 3.5
  use: Boil
  time: 10
  amount: 0.06
- name: Saaz
  alpha: 3.5
  use: Boil
  time: 0
  amount: 0.06
yeasts:
- name: Safale - American Ale Yeast US-05
  amount: 0.11
  min_temperature: 12
  max_temperature: 25
primary_temp: null
mash_steps:
- step_temp: 65
  step_time: 60
miscs: []
```
d55de1d8d5d3c9997affe1d8d08d6def
apache-2.0
['translation']
false
cel-eng
* source group: Celtic languages
* target group: English
* OPUS readme: [cel-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md)
* model: transformer
* source language(s): bre cor cym gla gle glv
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.eval.txt)
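As a quick sanity check from Python, a minimal translation sketch with `transformers` (assuming the converted checkpoint is published on the Hub as `Helsinki-NLP/opus-mt-cel-en`, the short pair listed in the system info below):

```python
# Minimal translation sketch; the Hub repo id is an assumption based on the
# "short_pair: cel-en" field reported in the system info.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cel-en"  # assumed repo id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Dydd da, sut wyt ti?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```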
5d4d2c1a7c291dd5aaa45a9852a41fad
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bre-eng.bre.eng | 17.2 | 0.385 |
| Tatoeba-test.cor-eng.cor.eng | 3.0 | 0.172 |
| Tatoeba-test.cym-eng.cym.eng | 41.5 | 0.582 |
| Tatoeba-test.gla-eng.gla.eng | 15.4 | 0.330 |
| Tatoeba-test.gle-eng.gle.eng | 50.8 | 0.668 |
| Tatoeba-test.glv-eng.glv.eng | 11.0 | 0.297 |
| Tatoeba-test.multi.eng | 22.8 | 0.398 |
9992e040828a73ab74517e3fca1c4bdf
apache-2.0
['translation']
false
System Info:
- hf_name: cel-eng
- source_languages: cel
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel', 'en']
- src_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cel
- tgt_alpha3: eng
- short_pair: cel-en
- chrF2_score: 0.39799999999999996
- bleu: 22.8
- brevity_penalty: 1.0
- ref_len: 42097.0
- src_name: Celtic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cel
- tgt_alpha2: en
- prefer_old: False
- long_pair: cel-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
963d148917fe5f7434b967ffb8d3b4ab
apache-2.0
['CTC', 'Attention', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard']
false
wav2vec 2.0 with CTC/Attention trained on CommonVoice Kinyarwanda (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (Kinyarwanda Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is as follows:

| Release | Test WER | GPUs |
|:--------------:|:--------------:|:--------:|
| 03-06-21 | 18.91 | 2xV100 32GB |
c6a3922e6f686252748ec0f6c0586a9a
apache-2.0
['CTC', 'Attention', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard']
false
Pipeline description This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units, trained on the train transcriptions (train.tsv) of CommonVoice (RW).
- Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on CommonVoice Kinyarwanda. The obtained final acoustic representation is given to the CTC and attention decoders.

The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
61ea96d085d9ffdb0e2a500b13659785
apache-2.0
['CTC', 'Attention', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard']
false
Install SpeechBrain First of all, please install transformers and SpeechBrain with the following command:

```
pip install speechbrain transformers
```

Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
9cbc817a15b72655a972269e46540a68
apache-2.0
['CTC', 'Attention', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard']
false
Transcribing your own audio files (in Kinyarwanda)

```python
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-rw", savedir="pretrained_models/asr-wav2vec2-commonvoice-rw")
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-rw/example.mp3")
```
1d4a9653389bb2aad7e0fb65527f1966
apache-2.0
['CTC', 'Attention', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard']
false
Training The model was trained with SpeechBrain. To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train_with_wav2vec.py hparams/train_rw_with_wav2vec.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing).
9a27bd4b5ebca13eba4c4ac81a5227eb
apache-2.0
['generated_from_trainer']
false
finetuned_token_3e-05_all_16_02_2022-16_16_08 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
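As a rough usage sketch (this excerpt does not state the Hub repository id, so the model path below is a placeholder to replace with the real one):

```python
# Hypothetical usage sketch; swap the placeholder path for the actual
# repository id or a local checkpoint directory of this fine-tuned model.
from transformers import pipeline

ner = pipeline("token-classification",
               model="path/to/finetuned_token_3e-05_all_16_02_2022-16_16_08",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```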
0fe8ca44b7b3bbfce87d27523942dddc
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
835c6960344b0f4fb942a6d58c8d56d4
apache-2.0
['automatic-speech-recognition', 'ja']
false
exp_w2v2t_ja_unispeech_s569 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
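A minimal transcription sketch with HuggingSound (the repository id below is an assumption inferred from the tool's author namespace and this model's name; adjust it to the actual repo):

```python
# Transcription sketch with HuggingSound; the repo id is assumed and the
# audio path is a placeholder. Input audio should be sampled at 16kHz.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ja_unispeech_s569")
transcriptions = model.transcribe(["/path/to/sentence.wav"])
print(transcriptions[0]["transcription"])
```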
da1feffd84cf73cb2a000ab9d367e9ad
mit
['spacy', 'token-classification', 'text-classification']
false
To install this model: `pip install https://huggingface.co/PlanTL-GOB-ES/es_bsc_demo_md/resolve/main/es_bsc_demo_md-any-py3-none-any.whl`

Lightweight Spanish pipeline by BSC. Components: floret static vectors, morphologizer, parser, attribute_ruler, lemmatizer, text classification.

| Feature | Description |
| --- | --- |
| **Name** | `es_bsc_demo_md` |
| **Version** | `3.4.1` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `textcat` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `textcat` |
| **Vectors** | -1 keys, 50000 unique vectors (300 dimensions) |
| **Sources** | [UD Spanish AnCora v2.10](https://github.com/UniversalDependencies/UD_Spanish-AnCora) (Martínez Alonso, Héctor; Zeman, Daniel)<br />[Spanish floret embeddings from BNE corpus](https://zenodo.org/record/7314098)<br /> |
| **License** | `mit` |
| **Author** | [Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)](https://huggingface.co/PlanTL-GOB-ES/es_bsc_demo_md) |
| **Copyright** | Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) |
| **Funding** | This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL |
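Once the wheel above is installed, a short usage sketch (the Spanish example sentence is illustrative; `doc.cats` holds the text-classification scores):

```python
# Usage sketch for the installed spaCy package; printed output is illustrative.
import spacy

nlp = spacy.load("es_bsc_demo_md")
doc = nlp("El Barcelona Supercomputing Center desarrolla modelos de lenguaje.")
for token in doc[:5]:
    print(token.text, token.pos_, token.lemma_, token.dep_)
print(doc.cats)  # scores for the textcat labels (Economía, Política, ...)
```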
94ddd4e086ec43bc7c05a8c1bc62e799
mit
['spacy', 'token-classification', 'text-classification']
false
Label Scheme <details> <summary>View label scheme (734 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X`, `ao0fp0`, `ao0fs0`, `ao0mp0`, `ao0ms0`, `aq0000`, `aq00p0`, `aq00s0`, `aq0cc0`, `aq0cn0`, `aq0cp0`, `aq0cs0`, `aq0fp0`, `aq0fpp`, `aq0fs0`, `aq0fsp`, `aq0fsp-B2`, `aq0mn0`, `aq0mp0`, `aq0mpp`, `aq0ms0`, `aq0msp`, `cc`, `cs`, `da0fp0`, `da0fs0`, `da0m00`, `da0mp0`, `da0ms0`, `da0ns0`, `dd0cp0`, `dd0cs0`, `dd0fp0`, `dd0fs0`, `dd0mp0`, `dd0ms0`, `de0cn0`, `di00p0`, `di0cp0`, `di0cs0`, `di0fp0`, `di0fs0`, `di0mp0`, `di0ms0`, `dn00p0`, `dn0cp0`, `dn0cs0`, `dn0fp0`, `dn0fs0`, `dn0mp0`, `dn0ms0`, `dp1cps`, `dp1css`, `dp1fpp`, `dp1fsp`, `dp1mpp`, `dp1msp`, `dp1mss`, `dp2cps`, `dp2css`, `dp2fpp`, `dp2fsp`, `dp3cp0`, `dp3cs0`, `dp3fs0`, `dp3mp0`, `dp3ms0`, `dt0cn0`, `dt0fs0`, `dt0ms0`, `faa`, `fat`, `fc`, `fd`, `fe`, `fg`, `fh`, `fia`, `fit`, `fp`, `fpa`, `fpt`, `fs`, `fx`, `fz`, `i`, `nc00000`, `nccn000`, `nccp000`, `nccs000`, `ncf0000`, `ncfn000`, `ncfp000`, `ncfs000`, `ncfs00a`, `ncmn000`, `ncmp000`, `ncms00`, `ncms000`, `np00000`, `np0000a`, `np0000l`, `np0000o`, `np0000p`, `p0000000`, `p010p000`, `p010s000`, `p020s000`, `p0300000`, `pd0cp000`, `pd0cs000`, `pd0fp000`, `pd0fs000`, `pd0mp000`, `pd0ms000`, `pd0ns000`, `pe000000`, `pi000000`, `pi00s000`, `pi0cp000`, `pi0cs000`, `pi0fp000`, `pi0fs000`, `pi0mp0`, `pi0mp000`, `pi0ms0`, `pi0ms000`, `pn0cp000`, `pn0cs000`, `pn0fp000`, `pn0fs000`, `pn0mp000`, `pn0ms000`, `pp1cn000`, `pp1cp000`, `pp1cs000`, `pp1csn00`, `pp1cso00`, `pp1fs000`, `pp1mp000`, `pp2cp000`, `pp2cp00p`, `pp2cs000`, `pp2cs00p`, `pp2csn00`, `pp2cso00`, `pp300000`, `pp30p000`, `pp30sa00`, `pp3cn000`, `pp3cna00`, `pp3cno00`, `pp3cpa00`, `pp3cpd00`, `pp3csa00`, `pp3csd00`, `pp3fp000`, `pp3fpa00`, `pp3fs000`, `pp3fsa00`, `pp3mp000`, `pp3mpa00`, `pp3ms000`, `pp3msa00`, `pp3ns000`, `pr00000`, `pr000000`, `pr0cn000`, `pr0cp000`, `pr0cs000`, `pr0fp000`, `pr0fs000`, `pr0mp000`, `pr0ms000`, `pt000000`, `pt0cp000`, `pt0cs000`, `pt0fp000`, `pt0mp000`, `pt0ms000`, `px1fp0p0`, `px1fs0p0`, `px1fs0s0`, `px1mp0p0`, `px1ms0p0`, `px1ms0s0`, `px2fs0s0`, `px2mp000`, `px2ms0s0`, `px3fp000`, `px3fs000`, `px3mp000`, `px3ms000`, `px3ns000`, `rg`, `rn`, `spcms`, `sps00`, `vag0000`, `vaic1p0`, `vaic3p0`, `vaic3s0`, `vaif1p0`, `vaif1s0`, `vaif2s0`, `vaif3p0`, `vaif3s0`, `vaii1p0`, `vaii1s0`, `vaii2s0`, `vaii3p0`, `vaii3s0`, `vaip1p0`, `vaip1s0`, `vaip2s0`, `vaip3p0`, `vaip3s0`, `vais3p0`, `vais3s0`, `vam02s0`, `vam03s0`, `van0000`, `vap00sm`, `vasi1p0`, `vasi1s0`, `vasi3p0`, `vasi3s0`, `vasp1p0`, `vasp1s0`, `vasp3p0`, `vasp3s0`, `vmg0000`, `vmic1p0`, `vmic1s0`, `vmic2s0`, `vmic3p0`, `vmic3s0`, `vmif1p0`, `vmif1s0`, `vmif2s0`, `vmif3p0`, `vmif3s0`, `vmii1p0`, `vmii1s0`, `vmii2s0`, `vmii3p0`, `vmii3s0`, `vmip1p0`, `vmip1s0`, `vmip2p0`, `vmip2s0`, `vmip3p0`, `vmip3s0`, `vmip3sm`, `vmis1p0`, `vmis1s0`, `vmis2s0`, `vmis3p0`, `vmis3s0`, `vmm01p0`, `vmm02p0`, `vmm02s0`, `vmm03p0`, `vmm03s0`, `vmn0000`, `vmp00fs`, `vmp00ms`, `vmp00pf`, `vmp00pm`, `vmp00sf`, `vmp00sm`, `vmsi1p0`, `vmsi1s0`, `vmsi3p0`, `vmsi3s0`, `vmsp1p0`, `vmsp1s0`, `vmsp2p0`, `vmsp2s0`, `vmsp3p0`, `vmsp3s0`, `vsg0000`, `vsic1s0`, `vsic2s0`, `vsic3p0`, `vsic3s0`, `vsif1s0`, `vsif3p0`, `vsif3s0`, `vsii1p0`, `vsii1s0`, `vsii3p0`, `vsii3s0`, `vsip1p0`, `vsip1s0`, `vsip2s0`, `vsip3p0`, `vsip3s0`, `vsis1s0`, `vsis3p0`, `vsis3s0`, `vsm02s0`, `vsm03s0`, `vsn0000`, `vsp00sm`, `vssi3p0`, 
`vssi3s0`, `vssp1p0`, `vssp1s0`, `vssp2s0`, `vssp3p0`, `vssp3s0`, `w`, `z`, `zm`, `zp`, `zu` | | **`morphologizer`** | `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=ADP`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `NumForm=Digit\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Comm`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=ADV`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PUNCT\|PunctType=Peri`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=ADJ`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=ADJ`, `POS=PRON\|PronType=Int,Rel`, `Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=SCONJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `POS=AUX\|VerbForm=Inf`, `POS=VERB\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Quot`, `POS=ADV\|Polarity=Neg`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Ger`, `Degree=Cmp\|POS=ADV`, `Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `AdvType=Tim\|POS=NOUN`, `Number=Sing\|POS=NOUN`, 
`Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PART`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `AdvType=Tim\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `POS=PUNCT`, `POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PUNCT\|PunctType=Colo`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PUNCT\|PunctType=Semi`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=PUNCT\|PunctType=Dash`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=NOUN\|VerbForm=Inf`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `POS=DET\|PronType=Ind`, `POS=DET\|PronType=Int,Rel`, `AdvType=Tim\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, 
`Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Degree=Abs\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=SCONJ\|PronType=Int,Rel`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, 
`Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Com\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=NOUN\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `POS=SYM`, `Number=Sing\|POS=VERB\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Abs\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Def\|Foreign=Yes\|POS=DET\|PronType=Art`, `Case=Com\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Tot`, `AdvType=Tim\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=AUX\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=X`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Definite=Def\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=ADP`, `Foreign=Yes\|POS=CCONJ`, `Foreign=Yes\|POS=PROPN`, `Case=Com\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=NOUN\|VerbForm=Part`, `Case=Com\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=X`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Degree=Cmp\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `POS=NOUN\|PunctType=Comm`, `POS=PRON\|PronType=Neg`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:impers`, `expl:pass`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` | | **`textcat`** | `Economía`, `Entretenimiento`, `Historia`, `Humanidades`, `Derecho`, `Matemáticas`, `Música`, `Filosofía`, `Política`, `Religión`, `Deporte`, `Ciencia_y_Tecnología` | </details>
98c1d7200e59e1866c87d92b202f54fb
mit
['spacy', 'token-classification', 'text-classification']
false
Accuracy | Type | Score |
| --- | --- |
| `TAG_ACC` | 95.39 |
| `POS_ACC` | 98.60 |
| `MORPH_ACC` | 98.10 |
| `LEMMA_ACC` | 97.98 |
| `DEP_UAS` | 91.26 |
| `DEP_LAS` | 88.09 |
| `SENTS_P` | 95.38 |
| `SENTS_R` | 96.54 |
| `SENTS_F` | 95.96 |
| `TOK2VEC_LOSS` | 7166396.29 |
| `TAGGER_LOSS` | 1262344.25 |
| `MORPHOLOGIZER_LOSS` | 311469.37 |
| `PARSER_LOSS` | 4991259.73 |
| `CATS_SCORE` | 99.14 |
| `CATS_MICRO_P` | 97.52 |
| `CATS_MICRO_R` | 96.19 |
| `CATS_MICRO_F` | 96.85 |
| `CATS_MACRO_P` | 97.25 |
| `CATS_MACRO_R` | 95.42 |
| `CATS_MACRO_F` | 96.31 |
| `CATS_MACRO_AUC` | 99.14 |
bcddce0d97534fa1bf1079978bb5e905
apache-2.0
['generated_from_trainer']
false
Summarization: mukayese/mbart-large-turkish-sum This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the mlsum/tu dataset. It achieves the following results on the evaluation set:
- Rouge1: 47.4222
- Rouge2: 34.8624
- Rougel: 42.2487
- Rougelsum: 43.9494

Check [this](https://arxiv.org/abs/2203.01215) paper for more details on the model and the dataset.
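A brief usage sketch with the `transformers` summarization pipeline (the input article below is a placeholder):

```python
# Summarization sketch; the repo id comes from this card's title, and the
# article text is a placeholder for a Turkish news article (e.g. from mlsum/tu).
from transformers import pipeline

summarizer = pipeline("summarization", model="mukayese/mbart-large-turkish-sum")
article = "..."  # a Turkish news article
print(summarizer(article, max_length=128, min_length=16)[0]["summary_text"])
```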
a3a5157c2bfb79bd908ca02fecf14cf7
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- label_smoothing_factor: 0.1
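For reference, a hedged sketch of how these values map onto `transformers` training arguments (the per-device batch sizes of 2/4 combine with 8 devices and 4 accumulation steps to give the totals above; `output_dir` is a placeholder):

```python
# Mapping of the listed hyperparameters onto Seq2SeqTrainingArguments;
# output_dir is a placeholder, everything else mirrors the list above.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-turkish-sum",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # 2 per device * 8 GPUs * 4 = 64 total
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    label_smoothing_factor=0.1,
    seed=42,
)
```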
da83dc72cc08ba032ffa570594fff00d
apache-2.0
['generated_from_trainer']
false
Citation
```
@misc{safaya-etal-2022-mukayese,
    title={Mukayese: Turkish NLP Strikes Back},
    author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
    year={2022},
    eprint={2203.01215},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
67478e06eab0300c119ccc1ca71009c8
mit
[]
false
model by avantcontra This is the Stable Diffusion model fine-tuned on the face2contra-sd-dreambooth concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks face2contra**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:
![image 0](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/1.jpeg)
![image 1](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/4.jpeg)
![image 2](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/10.jpeg)
![image 3](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/7.jpeg)
![image 4](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/2.jpeg)
![image 5](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/0.jpeg)
![image 6](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/9.jpeg)
![image 7](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/3.jpeg)
![image 8](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/6.jpeg)
![image 9](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/18.jpeg)
![image 10](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/17.jpeg)
![image 11](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/19.jpeg)
![image 12](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/15.jpeg)
![image 13](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/11.jpeg)
![image 14](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/14.jpeg)
![image 15](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/12.jpeg)
![image 16](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/13.jpeg)
![image 17](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/5.jpeg)
![image 18](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/20.jpeg)
![image 19](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/8.jpeg)
![image 20](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/16.jpeg)
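A minimal inference sketch with `diffusers` (a CUDA GPU is assumed; the repo id matches the image links above):

```python
# Inference sketch for this Dreambooth concept; the prompt uses the card's
# instance token "sks face2contra".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "avantcontra/face2contra-sd-dreambooth", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks face2contra").images[0]
image.save("face2contra.png")
```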
e93210723bf38c3b07f75572effa1da1
creativeml-openrail-m
['text-to-image']
false
Duskfall's Pink Spider Plushie Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Information on this model will be here: https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew
If you want to support the EARTH & DUSK media projects monthly (and not just AI): https://www.patreon.com/earthndusk

Use **plushiedsk** in your prompt.
eabfe12998999e8a2838c93f254eeb7d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
b05cb335a33a651732d52f68226552f0
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
false
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TR dataset. It achieves the following results on the evaluation set:
- Loss: 0.2893
- Wer: 0.2713
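As a rough inference sketch (this excerpt does not give the fine-tuned checkpoint's repo id, so the model path below is a placeholder):

```python
# ASR sketch; replace the placeholder with the actual fine-tuned repo id.
# The pipeline expects 16kHz speech input.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="path/to/wav2vec2-base-turkish-cv7")
print(asr("/path/to/turkish_audio.wav"))
```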
efcf78778328c987d2e9baba865d5200
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
00a23154d0afcce477adf7c3c60dff9c
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.8647 | 14.28 | 200 | 0.2758 | 0.2568 |
| 1.3376 | 28.56 | 400 | 0.2754 | 0.2722 |
| 1.1975 | 42.84 | 600 | 0.2929 | 0.2901 |
| 1.1024 | 57.14 | 800 | 0.2904 | 0.2928 |
| 1.0257 | 71.42 | 1000 | 0.2915 | 0.2823 |
| 0.9628 | 85.7 | 1200 | 0.2936 | 0.2749 |
| 0.9109 | 99.98 | 1400 | 0.2893 | 0.2713 |
e0e02fb60bf0e332d7ae6397ee2cd6ac
apache-2.0
['generated_from_trainer']
false
gpt-neo-125M-Byethon This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6609
ab0117e6d42439db4aa46da541032199
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 237 | 0.8348 |
| No log | 2.0 | 474 | 0.6931 |
| 0.8151 | 3.0 | 711 | 0.6609 |
18fd348e6209188c7bacca431087da61
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
This is WD1.4 in .safetensors and fp16 form, an unofficial fork.
- [Waifu Diffusion 1.4 Anime Epoch 2 Safetensors](https://huggingface.co/subaqua/_unofficial-WD1.4-fp16-safetensors/resolve/main/wd-1-4-anime_e2-fp16.safetensors): a faster-loading and lighter version of WD1.4 Anime E2
- [Waifu Diffusion 1.4 Anime Safetensors Inference Config](https://huggingface.co/subaqua/_unofficial-WD1.4-fp16-safetensors/resolve/main/wd-1-4-anime_e2-fp16.yaml): a file included to allow for inference with AUTOMATIC1111's WebUI and with the original Stable Diffusion codebase. This configuration file is the "Waifu Diffusion 1.4 Anime Inference Config" modified with the following changes:

```
model:
  params:
    unet_config:
      params:
        use_checkpoint: False
```
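A possible loading sketch in `diffusers` (assuming a diffusers version that supports single-file checkpoints; the file URL comes from the link above):

```python
# Loading sketch for the fp16 safetensors checkpoint; from_single_file is
# available in recent diffusers releases (an assumption about your environment).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/subaqua/_unofficial-WD1.4-fp16-safetensors/resolve/main/wd-1-4-anime_e2-fp16.safetensors",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("1girl, masterpiece, looking at viewer").images[0]
image.save("wd14_sample.png")
```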
23564b597bac4df300de26b8e1d0c523
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Inherited License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
22b7d94013bff5ea4120337c0c262700
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
04e353e002e650a29946d4d0de669be6
apache-2.0
[]
false
MuRIL: Multilingual Representations for Indian Languages
===
MuRIL is a BERT model pre-trained on 17 Indian languages and their transliterated counterparts. We have released the pre-trained model (with the MLM layer intact, enabling masked word predictions) in this repository. We have also released the encoder on [TFHub](https://tfhub.dev/google/MuRIL/1) with an additional pre-processing module that processes raw text into the expected input format for the encoder. You can find more details on MuRIL in this [paper](http://arxiv.org/abs/2103.10730).
825b4b7038e3915b3295cd5c283b5054
apache-2.0
[]
false
Overview This model uses a BERT base architecture [1] pretrained from scratch using the Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6] Indian languages. We use a training paradigm similar to multilingual BERT, with a few modifications as listed below:
* We include translation and transliteration segment pairs in training as well.
* We keep an exponent value of 0.3 rather than 0.7 for upsampling, which has been shown to enhance low-resource performance [7] (see the sketch below).

See the Training section for more details.
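To make the exponent concrete: with smoothing exponent s, a language with corpus share q_i is sampled with probability q_i^s / sum_j q_j^s, so a smaller s flattens the distribution and upsamples low-resource languages more. A hedged sketch with made-up corpus sizes (not MuRIL's actual data statistics):

```python
# Exponent-smoothed sampling probabilities; the segment counts below are
# hypothetical, purely to illustrate the effect of s=0.3 vs s=0.7.
def smoothed_probs(sizes, s):
    weights = {lang: n ** s for lang, n in sizes.items()}
    total = sum(weights.values())
    return {lang: round(w / total, 3) for lang, w in weights.items()}

sizes = {"hi": 1_000_000, "ta": 100_000, "sa": 1_000}  # hypothetical counts
for s in (0.7, 0.3):
    print(s, smoothed_probs(sizes, s))
# s=0.3 gives the low-resource languages a noticeably larger share.
```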
d23468b6902f8eb9f679a606b5c95494
apache-2.0
[]
false
Training The MuRIL model is pre-trained on monolingual segments as well as parallel segments as detailed below:
* Monolingual Data: We make use of publicly available corpora from Wikipedia and Common Crawl for 17 Indian languages.
* Parallel Data: We have two types of parallel data:
  * Translated Data: We obtain translations of the above monolingual corpora using the Google NMT pipeline. We feed translated segment pairs as input. We also make use of the publicly available PMINDIA corpus.
  * Transliterated Data: We obtain transliterations of Wikipedia using the IndicTrans [8] library. We feed transliterated segment pairs as input. We also make use of the publicly available Dakshina dataset.

We keep an exponent value of 0.3 to calculate duplication multiplier values for upsampling of lower-resourced languages and set dupe factors accordingly. Note that we limit transliterated pairs to Wikipedia only. The model was trained using a self-supervised masked language modeling task. We do whole-word masking with a maximum of 80 predictions. The model was trained for 1000K steps, with a batch size of 4096 and a max sequence length of 512.
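Since the MLM layer is kept intact, masked word prediction can be tried directly; a sketch assuming the checkpoint is hosted on the Hub as `google/muril-base-cased` (adjust the id to this repository's actual one):

```python
# Fill-mask sketch; the Hub id is an assumption about where this checkpoint
# lives, and the Hindi example sentence is illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="google/muril-base-cased")
print(fill("भारत की राजधानी [MASK] है।"))  # "The capital of India is [MASK]."
```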
ed5234974bcfca5fef00cc8d02a76b32