license — string (2–30 chars)
tags — string (2–513 chars)
is_nc — bool (1 class)
readme_section — string (201–597k chars)
hash — string (32 chars)
mit
['SlovakBERT']
false
How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='gerulata/slovakbert') unmasker("Deti sa <mask> na ihrisku.") [{'sequence': 'Deti sa hrali na ihrisku.', 'score': 0.6355380415916443, 'token': 5949, 'token_str': ' hrali'}, {'sequence': 'Deti sa hrajú na ihrisku.', 'score': 0.14731724560260773, 'token': 9081, 'token_str': ' hrajú'}, {'sequence': 'Deti sa zahrali na ihrisku.', 'score': 0.05016357824206352, 'token': 32553, 'token_str': ' zahrali'}, {'sequence': 'Deti sa stretli na ihrisku.', 'score': 0.041727423667907715, 'token': 5964, 'token_str': ' stretli'}, {'sequence': 'Deti sa učia na ihrisku.', 'score': 0.01886524073779583, 'token': 18099, 'token_str': ' učia'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import RobertaTokenizer, RobertaModel tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert') model = RobertaModel.from_pretrained('gerulata/slovakbert') text = "Text ktorý sa má embedovať." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import RobertaTokenizer, TFRobertaModel tokenizer = RobertaTokenizer.from_pretrained('gerulata/slovakbert') model = TFRobertaModel.from_pretrained('gerulata/slovakbert') text = "Text ktorý sa má embedovať." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` Or extract information from the model like this: ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='gerulata/slovakbert') unmasker("Slovenské národne povstanie sa uskutočnilo v roku <mask>.") [{'sequence': 'Slovenske narodne povstanie sa uskutočnilo v roku 1944.', 'score': 0.7383289933204651, 'token': 16621, 'token_str': ' 1944'},...] ```
c6e5f75b1c9df83deb3e7a8bd2b46c9c
mit
['SlovakBERT']
false
Training data The SlovakBERT model was pretrained on these datasets: - Wikipedia (326MB of text), - OpenSubtitles (415MB of text), - Oscar (4.6GB of text), - Gerulata WebCrawl (12.7GB of text), - Gerulata Monitoring (214MB of text), - blbec.online (4.5GB of text). The text was then processed with the following steps: - URL and email addresses were replaced with special tokens ("url", "email"). - Elongated punctuation was reduced (e.g. -- to -). - Markdown syntax was deleted. - All text content in braces was eliminated to reduce the amount of markup and programming-language text. We segmented the resulting corpus into sentences and removed duplicates, yielding 181.6M unique sentences. In total, the final corpus has 19.35GB of text.
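The cleaning steps above can be sketched roughly as follows — a minimal illustration with made-up regexes, not the exact cleaning scripts used for SlovakBERT:

```python
import re

def clean_text(text: str) -> str:
    # Replace URLs and e-mail addresses with special tokens.
    text = re.sub(r"https?://\S+|www\.\S+", "url", text)
    text = re.sub(r"\S+@\S+\.\S+", "email", text)
    # Reduce elongated punctuation (e.g. "--" to "-").
    text = re.sub(r"([!?.,-])\1+", r"\1", text)
    # Drop content in braces to remove markup and programming-language text.
    text = re.sub(r"\{[^{}]*\}", "", text)
    return text.strip()

print(clean_text("See https://example.com or mail me@example.com -- now!!! {var x = 1;}"))
# → See url or mail email - now!
```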
65a33157f97136c4e1470d3416873815
mit
['SlovakBERT']
false
Pretraining The model was trained in **fairseq** on 4 x Nvidia A100 GPUs for 300K steps with a batch size of 512 and a sequence length of 512. The optimizer used is Adam with a learning rate of 5e-4, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.98\\), \\(\epsilon = 10^{-6}\\), a weight decay of 0.01, a dropout rate of 0.1, learning-rate warmup for 10k steps, and linear decay of the learning rate afterwards. We used 16-bit float precision.
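As a sanity check, the warmup-then-linear-decay schedule described above can be written out explicitly — a sketch under the stated numbers, not the actual fairseq config:

```python
# Peak LR, warmup, and total steps as reported for SlovakBERT pretraining.
PEAK_LR = 5e-4
WARMUP_STEPS = 10_000
TOTAL_STEPS = 300_000

def learning_rate(step: int) -> float:
    """Linear warmup to the peak LR, then linear decay to zero."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)

print(learning_rate(5_000))    # halfway through warmup
print(learning_rate(10_000))   # peak: 5e-4
print(learning_rate(300_000))  # end of training: 0.0
```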
e43afed95598561110d4583f4669d733
mit
['SlovakBERT']
false
About us <a href="https://www.gerulata.com/"> <img width="300px" src="https://www.gerulata.com/images/gerulata-logo-blue.png"> </a> Gerulata uses near real-time monitoring, advanced analytics and machine learning to help create a safer, more productive and enjoyable online environment for everyone.
f3faaf2975c53a8bd8811665ba061cd0
mit
['SlovakBERT']
false
BibTeX entry and citation info If you find our resource or paper useful, please consider citing it in your paper. - https://arxiv.org/abs/2109.15254 ``` @misc{pikuliak2021slovakbert, title={SlovakBERT: Slovak Masked Language Model}, author={Matúš Pikuliak and Štefan Grivalský and Martin Konôpka and Miroslav Blšták and Martin Tamajka and Viktor Bachratý and Marián Šimko and Pavol Balážik and Michal Trnka and Filip Uhlárik}, year={2021}, eprint={2109.15254}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
30947812e9099551d00e51d137384d8c
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
gLWoman This is my second custom Stable Diffusion model; it brings you a generic woman, trained on non-licensed images. The magic word is: gLWoman If you enjoy my work, please consider supporting me: [![Buy me a coffee](https://badgen.net/badge/icon/buymeacoffee?icon=buymeacoffee&label)](https://www.buymeacoffee.com/elrivx) Examples: <img src=https://imgur.com/tKpiVEE.png width=30% height=30%> <img src=https://imgur.com/GAOJzps.png width=30% height=30%> <img src=https://imgur.com/oxI9ZQv.png width=30% height=30%>
e52fea54e3e4832a560ea857a6c00523
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'diffusers', 'game']
false
BreathOfTheWild Diffusion This model was trained on screenshots from BotW and a few from Hyrule Warriors, but because I don't actually play either of them, it was pretty hard to get decent training images. The portraits still bear a resemblance to Link and Zelda; this can sometimes be avoided by prompting "portrait of", and the model isn't easily steerable through prompting. The landscapes are pretty good, though. To reference the art style, use the token: botw style You can also check out the higher quality, but trickier, V2 on [Civitai](https://civitai.com/models/4676/breathofthewild-diffusion).
d522fb21f7cbeeb14a9b908adbc2a5d8
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'diffusers', 'game']
false
Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run BreathOfTheWild_Diffusion: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/ItsJayQz/BreathOfTheWild_Diffusion?logs=build) Here are some samples. **Portraits** ![botw5.png](https://s3.amazonaws.com/moonup/production/uploads/1673980004637-635eafb49f24f6db0a1eafd1.png) ![botw1.png](https://s3.amazonaws.com/moonup/production/uploads/1673980004598-635eafb49f24f6db0a1eafd1.png) Prompt used: botw style A stunning intricate full color portrait of anad, epic character composition, wearing a white dress, brown hair, by ilya kuvshinov, alessio albi, nina masic, sharp focus, natural lighting, subsurface scattering, f2, 35mm, film grain Negative: ((disfigured)), ((bad art)), ((deformed)), ((poorly drawn)), ((extra limbs)), ((close up)), ((b&w)), weird colors, blurry Guidance: 10 Steps: 30 using DPM++ 2M Karras I'm not a prompt wizard so you can probably get better results with some tuning. **Landscape** ![botw3.png](https://s3.amazonaws.com/moonup/production/uploads/1673929825968-635eafb49f24f6db0a1eafd1.png) ![botw2.png](https://s3.amazonaws.com/moonup/production/uploads/1673929825987-635eafb49f24f6db0a1eafd1.png) ![botw4.png](https://s3.amazonaws.com/moonup/production/uploads/1673929825997-635eafb49f24f6db0a1eafd1.png) **Disclaimers** - I'm in no way affiliated with Nintendo, or any entities relating to the ownership of the game. - The phrase Breath Of The Wild is simply a reference for accessibility. - This was created entirely for research and entertainment purposes. - I do not plan, nor am I planning, to turn this model into a commercial product or to use it for commercial purposes.
- I do not condone the usage of the model for making counterfeit products that might infringe on Nintendo's copyrights/trademarks. **License** - This model is under CreativeML OpenRAIL-M. - This means the model can be used royalty-free, and usage is flexible, including redistribution of the model or of any derivatives of the model. - However, there are restrictions on the openness of the license. More info on the restrictions can be found [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license). **Responsibilities** - By using/downloading the model, you are responsible for: - All outputs/usage of the model. - Understanding the Disclaimers. - Upholding the terms of the license. Thanks for checking out the model!
75dd7b6fdd8e701a7678e14fb8a5605a
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ID dataset. It achieves the following results on the evaluation set: - Loss: 0.3975 - Wer: 0.2633
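For reference, the reported WER (word error rate) is the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal sketch of the computation (in practice, libraries such as jiwer do this for you):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four -> WER 0.25.
print(wer("saya suka makan nasi", "saya suka minum nasi"))  # → 0.25
```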
19a5a6b343bce1806aba6dfe932e9bf4
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30.0 - mixed_precision_training: Native AMP
95824b7be5e8b2ed8bf94d7058aead15
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.78 | 100 | 4.5645 | 1.0 | | No log | 1.55 | 200 | 2.9016 | 1.0 | | No log | 2.33 | 300 | 2.2666 | 1.0982 | | No log | 3.1 | 400 | 0.6079 | 0.6376 | | 3.2188 | 3.88 | 500 | 0.4985 | 0.5008 | | 3.2188 | 4.65 | 600 | 0.4477 | 0.4469 | | 3.2188 | 5.43 | 700 | 0.3953 | 0.3915 | | 3.2188 | 6.2 | 800 | 0.4319 | 0.3921 | | 3.2188 | 6.98 | 900 | 0.4171 | 0.3698 | | 0.2193 | 7.75 | 1000 | 0.3957 | 0.3600 | | 0.2193 | 8.53 | 1100 | 0.3730 | 0.3493 | | 0.2193 | 9.3 | 1200 | 0.3780 | 0.3348 | | 0.2193 | 10.08 | 1300 | 0.4133 | 0.3568 | | 0.2193 | 10.85 | 1400 | 0.3984 | 0.3193 | | 0.1129 | 11.63 | 1500 | 0.3845 | 0.3174 | | 0.1129 | 12.4 | 1600 | 0.3882 | 0.3162 | | 0.1129 | 13.18 | 1700 | 0.3982 | 0.3008 | | 0.1129 | 13.95 | 1800 | 0.3902 | 0.3198 | | 0.1129 | 14.73 | 1900 | 0.4082 | 0.3237 | | 0.0765 | 15.5 | 2000 | 0.3732 | 0.3126 | | 0.0765 | 16.28 | 2100 | 0.3893 | 0.3001 | | 0.0765 | 17.05 | 2200 | 0.4168 | 0.3083 | | 0.0765 | 17.83 | 2300 | 0.4193 | 0.3044 | | 0.0765 | 18.6 | 2400 | 0.4006 | 0.3013 | | 0.0588 | 19.38 | 2500 | 0.3836 | 0.2892 | | 0.0588 | 20.16 | 2600 | 0.3761 | 0.2903 | | 0.0588 | 20.93 | 2700 | 0.3895 | 0.2930 | | 0.0588 | 21.71 | 2800 | 0.3885 | 0.2791 | | 0.0588 | 22.48 | 2900 | 0.3902 | 0.2891 | | 0.0448 | 23.26 | 3000 | 0.4200 | 0.2849 | | 0.0448 | 24.03 | 3100 | 0.4013 | 0.2799 | | 0.0448 | 24.81 | 3200 | 0.4039 | 0.2731 | | 0.0448 | 25.58 | 3300 | 0.3970 | 0.2647 | | 0.0448 | 26.36 | 3400 | 0.4081 | 0.2690 | | 0.0351 | 27.13 | 3500 | 0.4090 | 0.2674 | | 0.0351 | 27.91 | 3600 | 0.3953 | 0.2663 | | 0.0351 | 28.68 | 3700 | 0.4044 | 0.2650 | | 0.0351 | 29.46 | 3800 | 0.3969 | 0.2646 |
76d06985e681c5f8947881bbb1d8af7c
apache-2.0
['image-classification', 'timm']
false
Model card for convnext_small.in12k A ConvNeXt image classification model. Trained in `timm` on ImageNet-12k (an 11,821-class subset of the full ImageNet-22k) by Ross Wightman. ImageNet-12k training was done on TPUs thanks to the support of the [TRC](https://sites.research.google/trc/about/) program.
b4d620375836c82f0d2dcc7cda34703b
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 58.5 - GMACs: 8.7 - Activations (M): 21.6 - Image size: 224 x 224 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/rwightman/pytorch-image-models - **Dataset:** ImageNet-12k
8a446424aa84b7341eff6ab941ba668d
apache-2.0
['image-classification', 'timm']
false
Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('convnext_small.in12k', pretrained=True)
model = model.eval()

# Build the model-specific preprocessing transform and classify the image.
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # logits, shape (1, num_classes)
```
e505560fd9063012e0f117ced292a000
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_small.in12k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# With features_only=True the model returns one feature map per stage.
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))
for o in output:
    print(o.shape)
```
97b4caa3319bb24f1e6b7b1b25507e23
apache-2.0
['image-classification', 'timm']
false
Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_small.in12k',
    pretrained=True,
    num_classes=0,  # remove the classifier head
)
model = model.eval()

# With num_classes=0 the forward pass returns the pooled embedding.
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # shape (1, 768) for this model
```
03c59939a47b3cc0a7f92493e270b7f7
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.7029 - Accuracy: 0.6540 - F1: 0.1254 - Combined Score: 0.3897
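The combined score appears to be the unweighted mean of accuracy and F1 — it matches the numbers above:

```python
# Evaluation numbers reported above.
accuracy, f1 = 0.6540, 0.1254

combined_score = (accuracy + f1) / 2
print(round(combined_score, 4))  # → 0.3897
```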
679bf3e11b4f11e3ac1d797bb27c12d3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:| | 0.8495 | 1.0 | 29671 | 0.7150 | 0.6333 | 0.0086 | 0.3210 | | 0.7654 | 2.0 | 59342 | 0.7273 | 0.6339 | 0.0121 | 0.3230 | | 0.7305 | 3.0 | 89013 | 0.7241 | 0.6400 | 0.0479 | 0.3440 | | 0.7108 | 4.0 | 118684 | 0.7147 | 0.6381 | 0.0380 | 0.3381 | | 0.698 | 5.0 | 148355 | 0.7192 | 0.6414 | 0.0564 | 0.3489 | | 0.6891 | 6.0 | 178026 | 0.7239 | 0.6357 | 0.0232 | 0.3295 | | 0.6823 | 7.0 | 207697 | 0.7141 | 0.6442 | 0.0723 | 0.3583 | | 0.6771 | 8.0 | 237368 | 0.7112 | 0.6491 | 0.1004 | 0.3748 | | 0.6729 | 9.0 | 267039 | 0.7156 | 0.6494 | 0.1022 | 0.3758 | | 0.6694 | 10.0 | 296710 | 0.7185 | 0.6502 | 0.1053 | 0.3777 | | 0.6664 | 11.0 | 326381 | 0.7129 | 0.6508 | 0.1085 | 0.3796 | | 0.6639 | 12.0 | 356052 | 0.7112 | 0.6508 | 0.1080 | 0.3794 | | 0.6617 | 13.0 | 385723 | 0.7105 | 0.6542 | 0.1260 | 0.3901 | | 0.6597 | 14.0 | 415394 | 0.7029 | 0.6540 | 0.1254 | 0.3897 | | 0.658 | 15.0 | 445065 | 0.7094 | 0.6486 | 0.0964 | 0.3725 | | 0.6564 | 16.0 | 474736 | 0.7072 | 0.6510 | 0.1084 | 0.3797 | | 0.655 | 17.0 | 504407 | 0.7049 | 0.6557 | 0.1333 | 0.3945 | | 0.6537 | 18.0 | 534078 | 0.7051 | 0.6542 | 0.1269 | 0.3905 | | 0.6526 | 19.0 | 563749 | 0.7096 | 0.6601 | 0.1573 | 0.4087 |
8021890f97023d8cdff76caaecc34070
apache-2.0
['generated_from_trainer']
false
finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4676 - Accuracy: 0.8299 - F1: 0.8892
cd745c17ac06181d6f6b3bc010c91b1c
mit
[]
false
DIALogue-level Commonsense Transformer (DIALeCT) The pretrained checkpoint for the paper [Multiview Contextual Commonsense Inference: A New Dataset and Task](https://arxiv.org/abs/2210.02890). The model is trained based on the [T5-large](https://huggingface.co/t5-large) checkpoint. ![model image](https://drive.google.com/uc?export=download&id=14RIbxgXhREdu5xZiKn5D-UUzaQLDNLqf)
a2939f18b6635cb9a3c356dae1aad43f
mit
[]
false
Datasets The dataset used to pretrain the model can be obtained from the [CICERO repo](https://github.com/declare-lab/CICERO) by following the instructions there. Contextualized Commonsense Inference in Dialogues v2 (CICEROv2) consists of annotated commonsense inferences, including causes, emotional reactions, and more. The dialogues come from multiple datasets. | Dataset |
03af41b1c955bc7278b0dc79d997d936
mit
[]
false
Examples Some examples of generated results from the pretrained model (the zero-shot setting). **Subsequent Event** ``` What is or could be the subsequent event of the target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me. That's a real let-down ., <utt> A: I don't think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it ``` Predicted subsequent event: ``` David's girlfriend apologized to david for her mistake. ``` **Cause** ``` What is or could be the cause of target? <sep> target: Thanks. Will I be able to take a retest ? <sep> context: A: Did I do well on my test ?, <utt> B: Do you want to know the honest answer ?, <utt> A: Why wouldn't I want to know ?, <utt> B: You had pretty bad scores ., <utt> A: Exactly what do you mean by bad ?, <utt> B: You failed ., <utt> A: How'd I fail it ?, <utt> B: There are a couple of reasons why you didn't pass ., <utt> A: What did I do wrong ?, <utt> B: To sum it all up , you really just don't know how to drive ., <utt> A: Thanks. Will I be able to take a retest ?, <utt> B: Sure you can , in about two and a half weeks . ``` Predicted cause: ``` The speaker has failed the driving test. ``` **Emotional Reaction** ``` What is the possible emotional reaction of the listener in response to target? <sep> target: Oh . I just can't forget it .<sep> context: A: David , why didn't you clean the room ?, <utt> B: I'm not in the mood ., <utt> A: Why are you feeling depressed ?, <utt> B: I was told my girlfriend was speaking ill of me.
That's a real let-down ., <utt> A: I don't think she will do such a thing ., <utt> B: But she did and made me disappointed ., <utt> A: Oh , cheer up . A girlfriend is not everything ., <utt> B: But she means a lot to me ., <utt> A: Then forgive her mistake ., <utt> B: Oh . I just can't forget it ``` Predicted emotional reaction: ``` The listener is hopeful that david will forgive his girlfriend for her mistake. ```
a10e4e18f70fc1ab039461cb725b3d28
mit
[]
false
Inference: The input text should be formatted as follows: ``` Question <sep> target: target_utt <sep> context: A: utterance 1 <utt> B: utterance 2 <utt> A: utterance 3 <utt> B: utterance 4 ``` Question: the question against which we want to make the inference. A and B are speaker identifiers. The ```target_utt``` should be any one of ```utterance 1, utterance 2, utterance 3, or utterance 4```. Do not use the speaker identifier in the ```target_utt```. Some samples are provided in the Hosted inference API box examples.
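A small helper to build inputs in this format might look like the following — a hypothetical convenience function, not part of the released code; the name `build_dialect_input` and the alternating-speaker assumption are mine:

```python
def build_dialect_input(question, target_utt, utterances, speakers=("A", "B")):
    """Format a DIALeCT query: question <sep> target <sep> speaker-tagged context."""
    context = " <utt> ".join(
        f"{speakers[i % len(speakers)]}: {u}" for i, u in enumerate(utterances)
    )
    return f"{question} <sep> target: {target_utt} <sep> context: {context}"

print(build_dialect_input(
    "What is or could be the cause of target?",
    "utterance 2",
    ["utterance 1", "utterance 2"],
))
```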
06f650fc0c97b5f9daf8c4fe866fe071
mit
[]
false
BibTeX entry and citation info If you use the model, you can cite: ```bibtex @article{Shen2022MultiviewCC, title={Multiview Contextual Commonsense Inference: A New Dataset and Task}, author={Siqi Shen and Deepanway Ghosal and Navonil Majumder and Henry Lim and Rada Mihalcea and Soujanya Poria}, journal={ArXiv}, year={2022}, volume={abs/2210.02890} } ```
c2b1d5632221b450cff4303c2db1d609
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1600k']
false
MultiBERTs, Intermediate Checkpoint - Seed 2, Step 1600k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
747a3e8766be3ea2610f8ab291d0a816
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1600k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1600k') model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_1600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1600k') model = BertModel.from_pretrained("google/multiberts-seed_2-step_1600k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
b18bc9fcc52d280d5794f8a77423885f
mit
[]
false
Table Transformer (fine-tuned for Table Structure Recognition) Table Transformer (DETR) model trained on PubTables1M. It was introduced in the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Smock et al. and first released in [this repository](https://github.com/microsoft/table-transformer). Disclaimer: The team releasing Table Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
87395f148d5f94f8b0ecd68d93e978a4
mit
[]
false
Model description The Table Transformer is equivalent to [DETR](https://huggingface.co/docs/transformers/model_doc/detr), a Transformer-based object detection model. Note that the authors decided to use the "normalize before" setting of DETR, which means that layernorm is applied before self- and cross-attention.
d860070b10291d2b88e09715b13e2479
mit
[]
false
Usage You can use the raw model for detecting the structure (like rows, columns) in tables. See the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for more info.
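A minimal usage sketch with the standard transformers object-detection API — the `microsoft/table-transformer-structure-recognition` checkpoint name, the local `table.png` path, and the 0.7 threshold are illustrative choices:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

checkpoint = "microsoft/table-transformer-structure-recognition"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = TableTransformerForObjectDetection.from_pretrained(checkpoint)

image = Image.open("table.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into labelled rows, columns, and cells.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=target_sizes)[0]
for label, box in zip(results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], box.tolist())
```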
da7751ce2d3928a88223b4600a8f1baa
mit
['generated_from_trainer']
false
mbti-classification-xlnet-base-cased-augment This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2045 - Accuracy: 0.2829
5ef165b78e096a7f434f8431a6525e9d
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3
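For reference, with `lr_scheduler_type: cosine` and no warmup the learning rate follows a half-cosine from the base rate down to zero; a plain-Python sketch (the total step count is my assumption, read off the training results):

```python
import math

BASE_LR = 2e-05
TOTAL_STEPS = 89_700  # 3 epochs x 29,900 steps per epoch (assumed)

def cosine_lr(step: int) -> float:
    """Cosine decay from the base LR to zero over training (no warmup listed)."""
    return BASE_LR * 0.5 * (1 + math.cos(math.pi * step / TOTAL_STEPS))

print(cosine_lr(0))        # base LR
print(cosine_lr(44_850))   # halfway: half the base LR
print(cosine_lr(89_700))   # end: 0.0
```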
9c72170a5394ee1980d4da4ad1391128
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:-----:|:--------:|:---------------:| | 2.1055 | 1.0 | 29900 | 0.2884 | 2.1344 | | 1.8127 | 2.0 | 59800 | 0.2830 | 2.1479 | | 1.6953 | 3.0 | 89700 | 0.2829 | 2.2045 |
e4dddcba1e0ba3f5936c2a2b5d74f0a6
mit
['conversational']
false
DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Neku Sakuraba from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-neku")
model = AutoModelForCausalLM.from_pretrained("r3dhummingbird/DialoGPT-medium-neku")

# Encode a user message, generate a reply, and decode only the new tokens.
input_ids = tokenizer.encode("Hi, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```
735254a9712871a22463189ac8b662d4
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-base-bat-zle Neural machine translation model for translating from Baltic languages (bat) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ```
165c7fb38a9ab9dbd15448a19bbf255b
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model info * Release: 2022-03-13 * source language(s): lav lit * target language(s): rus * model: transformer-align * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807_transformer-align_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.zip) * more information released models: [OPUS-MT bat-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-zle/README.md)
98bd4d176e36891c08eb92543b6f7673
cc-by-4.0
['translation', 'opus-mt-tc']
false
Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>rus<< Āfrika ir cilvēces šūpulis.", ">>ukr<< Tomas yra mūsų kapitonas." ] model_name = "pytorch-models/opus-mt-tc-base-bat-zle" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) )
811ac389c170112e0ddec09f440d541d
cc-by-4.0
['translation', 'opus-mt-tc']
false
Томас - наш капітан. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-bat-zle") print(pipe(">>rus<< Āfrika ir cilvēces šūpulis."))
3da6f410b8deaae4a5f39567c19808d3
cc-by-4.0
['translation', 'opus-mt-tc']
false
Benchmarks * test set translations: [opusTCv20210807_transformer-align_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.test.txt) * test set scores: [opusTCv20210807_transformer-align_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words |
73800f798eb40b3aa43c7d7e5f8a2aad
cc-by-4.0
['translation', 'opus-mt-tc']
false
|----------|---------|-------|-------|-------|--------| | lav-rus | tatoeba-test-v2021-08-07 | 0.75918 | 60.5 | 274 | 1541 | | lit-rus | tatoeba-test-v2021-08-07 | 0.72796 | 54.9 | 3598 | 21908 | | lav-rus | flores101-devtest | 0.49210 | 21.1 | 1012 | 23295 | | lav-ukr | flores101-devtest | 0.48185 | 19.2 | 1012 | 22810 | | lit-rus | flores101-devtest | 0.49850 | 21.3 | 1012 | 23295 | | lit-ukr | flores101-devtest | 0.49114 | 19.5 | 1012 | 22810 |
0c2fc127b2eaf4a1b2d14add7dc74a46
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_xls-r_gender_male-5_female-5_s719 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
a5fba4b12b8c6400a2589faa4e098346
apache-2.0
['summarization', 't5', 'ar', 'abstractive summarization', 'xlsum', 'generated_from_trainer']
false
t5-arabic-base-finetuned-xlsum-ar This model is a fine-tuned version of [bakrianoo/t5-arabic-base](https://huggingface.co/bakrianoo/t5-arabic-base) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 3.0328 - Rouge-1: 23.72 - Rouge-2: 10.95 - Rouge-l: 21.59 - Gen Len: 19.0 - Bertscore: 71.81
59d9d3e0604dd7d021061a7930911212
apache-2.0
['summarization', 't5', 'ar', 'abstractive summarization', 'xlsum', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 10 - label_smoothing_factor: 0.1
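For reference, `label_smoothing_factor: 0.1` mixes the one-hot target with a uniform distribution before taking cross-entropy; a toy sketch of the idea (not the trainer's actual implementation):

```python
import math

def label_smoothed_nll(probs, target, epsilon=0.1):
    """Cross-entropy where the one-hot target is mixed with a uniform distribution."""
    k = len(probs)
    smoothed = [(1 - epsilon) * (1.0 if i == target else 0.0) + epsilon / k
                for i in range(k)]
    return -sum(q * math.log(p) for q, p in zip(smoothed, probs))

# With epsilon=0 this reduces to ordinary cross-entropy, -log p(target).
print(label_smoothed_nll([0.7, 0.2, 0.1], target=0, epsilon=0.0))
print(label_smoothed_nll([0.7, 0.2, 0.1], target=0, epsilon=0.1))
```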
43886c234f00478070178ff66b2a859b
apache-2.0
['translation', 'generated_from_trainer']
false
marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8558 - Bleu: 52.9454
4a9d07ea7da6578d24870fcbf4b37c80
apache-2.0
['generated_from_keras_callback']
false
reqbert-tapt-epoch30 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
a4f2bbb195fcb1ce7c431f390501ba6f
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 21740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
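Read as a schedule, the config above is linear warmup for 1,000 steps followed by polynomial decay with power 1.0 (i.e. linear) to zero over 21,740 steps. A plain-Python sketch (the exact step-offset handling in transformers' `WarmUp` wrapper may differ by version):

```python
def scheduled_lr(step, initial_lr=2e-05, warmup_steps=1_000, decay_steps=21_740):
    """Linear warmup, then polynomial decay with power 1.0 (linear) to zero."""
    if step < warmup_steps:
        return initial_lr * step / warmup_steps
    progress = min(step, decay_steps) / decay_steps
    return initial_lr * (1.0 - progress)

print(scheduled_lr(500))      # mid-warmup
print(scheduled_lr(1_000))    # warmup done, decay begins
print(scheduled_lr(21_740))   # fully decayed: 0.0
```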
97d193dc19c7eb5dee16ef579a636851
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-1-0.25

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set:
- Loss: 6.8599
- Bleu: 4.0871
- Gen Len: 35.3267
8ece531615e3b38f5f369e4aa716a776
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
**100Memories2.1E**

Hi guys! Do you remember my SD 1.5 model of photos with a little bit of vintage style? I'm resurrecting the project as an SD 2.1 embedding.

Some recommendations: the magic word for your prompts is 100Memories.

If you enjoy my work, please consider supporting me:

[![Buy me a coffee](https://badgen.net/badge/icon/buymeacoffee?icon=buymeacoffee&label)](https://www.buymeacoffee.com/elrivx)

Examples:

<img src=https://imgur.com/3EjRdsJ.png width=30% height=30%>
<img src=https://imgur.com/YPcD8wd.png width=30% height=30%>
<img src=https://imgur.com/XzoTc2l.png width=30% height=30%>
<img src=https://imgur.com/7DfSVIT.png width=30% height=30%>
d79e3d62fdf12b062a98aba54eb9071a
apache-2.0
['vision', 'gear-segmentation', 'generated_from_trainer']
false
segformer-b0-finetuned-segments-gear2

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the marcomameli01/gear dataset. It achieves the following results on the evaluation set:
- Loss: 0.1268
- Mean Iou: 0.1254
- Mean Accuracy: 0.2509
- Overall Accuracy: 0.2509
- Per Category Iou: [0.0, 0.2508641975308642]
- Per Category Accuracy: [nan, 0.2508641975308642]
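The aggregate metrics above follow from the per-category values: Mean IoU averages over both categories, while Mean Accuracy skips the NaN entry (a category absent from the ground truth). A small sketch of that aggregation (the actual mean-IoU metric implementation may differ in detail):

```python
import math

per_category_iou = [0.0, 0.2508641975308642]
per_category_acc = [float("nan"), 0.2508641975308642]

# Mean IoU averages over all categories.
mean_iou = sum(per_category_iou) / len(per_category_iou)

# Mean accuracy averages only over categories present in the ground truth.
valid = [a for a in per_category_acc if not math.isnan(a)]
mean_acc = sum(valid) / len(valid)

print(round(mean_iou, 4))  # 0.1254
print(round(mean_acc, 4))  # 0.2509
```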
91abdaed2cf7970037ebf5d04adc7e71
apache-2.0
['vision', 'gear-segmentation', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.4614 | 5.0 | 20 | 0.4427 | 0.0741 | 0.1481 | 0.1481 | [0.0, 0.14814814814814814] | [nan, 0.14814814814814814] |
| 0.3327 | 10.0 | 40 | 0.2933 | 0.1726 | 0.3453 | 0.3453 | [0.0, 0.34528395061728395] | [nan, 0.34528395061728395] |
| 0.2305 | 15.0 | 60 | 0.2244 | 0.0382 | 0.0763 | 0.0763 | [0.0, 0.07634567901234568] | [nan, 0.07634567901234568] |
| 0.2011 | 20.0 | 80 | 0.2130 | 0.0374 | 0.0748 | 0.0748 | [0.0, 0.07476543209876543] | [nan, 0.07476543209876543] |
| 0.1846 | 25.0 | 100 | 0.1672 | 0.1037 | 0.2073 | 0.2073 | [0.0, 0.20730864197530866] | [nan, 0.20730864197530866] |
| 0.1622 | 30.0 | 120 | 0.1532 | 0.0805 | 0.1611 | 0.1611 | [0.0, 0.1610864197530864] | [nan, 0.1610864197530864] |
| 0.139 | 35.0 | 140 | 0.1396 | 0.0971 | 0.1942 | 0.1942 | [0.0, 0.19417283950617284] | [nan, 0.19417283950617284] |
| 0.1342 | 40.0 | 160 | 0.1283 | 0.0748 | 0.1496 | 0.1496 | [0.0, 0.14962962962962964] | [nan, 0.14962962962962964] |
| 0.128 | 45.0 | 180 | 0.1224 | 0.1128 | 0.2256 | 0.2256 | [0.0, 0.22558024691358025] | [nan, 0.22558024691358025] |
| 0.1243 | 50.0 | 200 | 0.1268 | 0.1254 | 0.2509 | 0.2509 | [0.0, 0.2508641975308642] | [nan, 0.2508641975308642] |
551227ef68c40b4b04c5282359b484f3
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-kitchen_and_dining-7-16-5

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3560
- Accuracy: 0.2692
9a93dd2a66de1c66dcf9a182cd5826ad
apache-2.0
['generated_from_keras_callback']
false
whisper_0015

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.3281
- Train Accuracy: 0.0322
- Validation Loss: 0.5841
- Validation Accuracy: 0.0311
- Epoch: 14
2bdf60677116ece2be9dbd948cf40435
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:---:|:---:|:---:|:---:|:---:|
| 5.0856 | 0.0116 | 4.4440 | 0.0123 | 0 |
| 4.3149 | 0.0131 | 4.0521 | 0.0142 | 1 |
| 3.9260 | 0.0146 | 3.7264 | 0.0153 | 2 |
| 3.5418 | 0.0160 | 3.3026 | 0.0174 | 3 |
| 2.7510 | 0.0198 | 2.0157 | 0.0241 | 4 |
| 1.6782 | 0.0250 | 1.3567 | 0.0273 | 5 |
| 1.1705 | 0.0274 | 1.0678 | 0.0286 | 6 |
| 0.9126 | 0.0287 | 0.9152 | 0.0294 | 7 |
| 0.7514 | 0.0296 | 0.8057 | 0.0299 | 8 |
| 0.6371 | 0.0302 | 0.7409 | 0.0302 | 9 |
| 0.5498 | 0.0307 | 0.6854 | 0.0306 | 10 |
| 0.4804 | 0.0312 | 0.6518 | 0.0307 | 11 |
| 0.4214 | 0.0316 | 0.6200 | 0.0310 | 12 |
| 0.3713 | 0.0319 | 0.5947 | 0.0311 | 13 |
| 0.3281 | 0.0322 | 0.5841 | 0.0311 | 14 |
5c70b3804bf9e7c50ed079214e48d3f0
apache-2.0
['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event']
false
Benchmark WER result:

| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|
| without LM | 16.97 | 17.95 |
| with 4-grams LM | 11.77 | 12.23 |
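For reference, WER is the word-level Levenshtein distance normalized by the reference word count; a minimal pure-Python sketch (benchmarks like the one above are typically computed with a dedicated library such as jiwer):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sit"))  # 1 substitution / 3 words ≈ 0.333
```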
c29f7faa83d8f6c52cad6f4462414174
apache-2.0
['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event']
false
Benchmark CER result:

| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|
| without LM | 6.82 | 7.05 |
| with 4-grams LM | 5.22 | 5.33 |
24be62b0b5d30728d9f514f9cc9fceae
apache-2.0
['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event']
false
Evaluation

Please use the eval.py file to run the evaluation:

```bash
pip install mecab-python3 unidic-lite pykakasi
python eval.py --model_id vutankiet2901/wav2vec2-xls-r-1b-ja --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
8b3123e3f663ae1b79bc1af3fa676e7c
apache-2.0
['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
8b60226a8a6d8677ec4cb16dad3bb818
apache-2.0
['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 3.484 | 9.49 | 1500 | 1.1849 | 0.7543 | 0.4099 |
| 1.3582 | 18.98 | 3000 | 0.4320 | 0.3489 | 0.1591 |
| 1.1716 | 28.48 | 4500 | 0.3835 | 0.3175 | 0.1454 |
| 1.0951 | 37.97 | 6000 | 0.3732 | 0.3033 | 0.1405 |
| 1.04 | 47.47 | 7500 | 0.3485 | 0.2898 | 0.1360 |
| 0.9768 | 56.96 | 9000 | 0.3386 | 0.2787 | 0.1309 |
| 0.9129 | 66.45 | 10500 | 0.3363 | 0.2711 | 0.1272 |
| 0.8614 | 75.94 | 12000 | 0.3386 | 0.2676 | 0.1260 |
| 0.8092 | 85.44 | 13500 | 0.3356 | 0.2610 | 0.1240 |
| 0.7658 | 94.93 | 15000 | 0.3316 | 0.2564 | 0.1218 |
c254922f8f53aa4dcef9701523e49767
cc
['text classification']
false
Model information:

This model is the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model finetuned using radiology report texts from the MIMIC-III database. The task performed was text classification, in order to benchmark this model against a selection of other BERT variants for the classification of MIMIC-III radiology report texts into two classes. Labels of [0,1] were assigned as follows: radiology reports in MIMIC-III linked to an ICD9 diagnosis code for lung cancer were labelled 1, and a random sample of reports not linked to any type of cancer diagnosis code at all were labelled 0.
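The labelling rule described above can be sketched as follows; the ICD-9 prefix and the helper function are illustrative, not taken from the actual preprocessing code:

```python
LUNG_CANCER_ICD9_PREFIX = "162"  # illustrative ICD-9 prefix for lung cancer

def label_report(icd9_codes):
    """Hypothetical sketch: 1 if a report is linked to a lung-cancer ICD9
    code, else 0. Per the card, the 0-class was additionally restricted to
    reports with no cancer diagnosis code of any type."""
    if any(code.startswith(LUNG_CANCER_ICD9_PREFIX) for code in icd9_codes):
        return 1
    return 0

print(label_report(["1623", "4019"]))  # 1
print(label_report(["4019"]))          # 0
```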
41c9f49a86a389574d02fb1334a1cbb9
cc
['text classification']
false
Limitations:

Note that the dataset and model may not be fully representative or suitable for all needs. It is recommended that the paper for the dataset and the base model card be reviewed before use:
- [MIMIC-III](https://www.nature.com/articles/sdata201635.pdf)
- [bert-base-uncased](https://huggingface.co/bert-base-uncased)
cd79ab8fb24d4ba45600ec33153057e5
cc
['text classification']
false
How to use:

Load the model from the library using the following checkpoints:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/bert-base-uncased-ft-m3-lc")
model = AutoModel.from_pretrained("sarahmiller137/bert-base-uncased-ft-m3-lc")
```
7911c3a95b532e1bd8c22f0ecc31e41d
mit
[]
false
Model Description

<!-- Provide a longer summary of what this model is/does. -->

sm Latin model for spaCy trained on UD treebanks for tagging, parsing and lemmatization

- **Developed by:** Patrick J. Burns
- **Model type:** spaCy model
- **Language(s) (NLP):** la
- **License:** mit
- **Resources for more information:**
  - [GitHub Repo](https://github.com/diyclassics/la_dep_cltk_sm)
d54e074df5dda61bf6240d932a314e58
mit
[]
false
Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@misc{burns_la_dep_cltk_sm_2022,
  title = {la\_dep\_cltk\_sm},
  version = {0.2.0},
  url = {https://github.com/diyclassics/la_dep_cltk_sm},
  abstract = {spaCy-compatible sm model for Latin},
  urldate = {2022-12-26},
  author = {Burns, Patrick J.},
  year = {2022},
}
```
7a0f76985843ae1891a8cfd010ddb21b
mit
[]
false
How to Get Started with the Model

- Install with:
  - `pip install https://huggingface.co/diyclassics/la_dep_cltk_sm/resolve/main/la_dep_cltk_sm-0.2.0/dist/la_dep_cltk_sm-0.2.0.tar.gz`
- Tested on python 3.10.8, spacy==3.4.2
c623a96325b6ac5cf7ec2eed35bcf0d8
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Swedish

This model is an adapted version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset in Swedish. It achieves the following results on the evaluation set:
- Wer: 19.8166
ffb56461a2ae2be8267bf4ce6e5331ab
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Model description & uses

This model is the OpenAI Whisper small transformer adapted for Swedish audio-to-text transcription. The model is available through its [HuggingFace web app](https://huggingface.co/spaces/torileatherman/whisper_small_sv).
25a96709c9ff8e3355c5688d108033b4
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training and evaluation data

Data used for training is the initial 10% of the train and validation splits of [Swedish Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/sv/train) 11.0 from the Mozilla Foundation. The dataset used for evaluation is the initial 10% of the test split of Swedish Common Voice. The training data has been augmented with random noise, random pitch shifts, and changes to the speed of the voice.
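A minimal sketch of the noise part of that augmentation, in pure Python for illustration (real pipelines typically use libraries such as audiomentations or torchaudio, and the noise level here is an assumption):

```python
import random

def add_random_noise(samples, noise_level=0.005, seed=0):
    """Additively perturb a waveform with bounded uniform noise."""
    rng = random.Random(seed)
    return [s + noise_level * rng.uniform(-1.0, 1.0) for s in samples]

clean = [0.0, 0.5, -0.5]
noisy = add_random_noise(clean)
# Every sample moved, but by at most noise_level.
print(max(abs(n - c) for n, c in zip(noisy, clean)) <= 0.005)  # True
```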
9904e31a42fbebf8acd211133efc5a75
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- weight_decay: 0
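With a constant scheduler plus warmup steps, the learning rate ramps up linearly and then holds at the base value; a small illustrative sketch (not the trainer's exact implementation):

```python
def lr_at(step, base_lr=1e-05, warmup_steps=500):
    """Constant schedule with linear warmup: ramp up, then hold base_lr."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(lr_at(250))   # 5e-06, mid-warmup
print(lr_at(4000))  # 1e-05, held constant after warmup
```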
9efda614cbd8a78355e67a1bad193171
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.1379 | 0.95 | 1000 | 0.295811 | 21.467 |
| 0.0245 | 2.86 | 3000 | 0.300059 | 20.160 |
| 0.0060 | 3.82 | 4000 | 0.320301 | 19.762 |
f029b60dedf5507485b2173df4b825f5
apache-2.0
['translation']
false
jpn-nld

* source group: Japanese
* target group: Dutch
* OPUS readme: [jpn-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-nld/README.md)
* model: transformer-align
* source language(s): jpn jpn_Hani jpn_Hira jpn_Kana jpn_Latn
* target language(s): nld
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.eval.txt)
3c508b988fc076e2f2c30d8294807de5
apache-2.0
['translation']
false
System Info:

- hf_name: jpn-nld
- source_languages: jpn
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'nl']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: nld
- short_pair: ja-nl
- chrF2_score: 0.534
- bleu: 34.7
- brevity_penalty: 0.938
- ref_len: 25849.0
- src_name: Japanese
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: nl
- prefer_old: False
- long_pair: jpn-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
85e9348eaea39f3af968fbc0f516b0df
apache-2.0
[]
false
distilbert-base-en-fr-es-de-zh-cased

We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions produce exactly the same representations as the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
37b0acaac22161c2343703a372ae777e
apache-2.0
[]
false
How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-es-de-zh-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-es-de-zh-cased")
```

To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
aa4f91a93261957282b6d9321a2089d4
creativeml-openrail-m
[]
false
A DreamBooth finetune of the Stable Diffusion v1.5 model, trained on a bunch of stills from [Kurzgesagt](https://www.youtube.com/c/inanutshell) videos. Use the tokens **_kurzgesagt style_** in your prompts for the effect.

**Sample 1:**

![Sample 1](samples-1.jpg)

**Sample 2:**

![Sample 2](samples-2.jpg)
80067bc5557fc5b250e681a65a19bb44
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_vp-100k_age_teens-10_sixties-0_s818

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
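The 16 kHz requirement can be met by resampling the input first; a naive linear-interpolation sketch for illustration only (real pipelines typically use torchaudio or librosa resamplers, which handle anti-aliasing properly):

```python
def resample(samples, src_rate, dst_rate=16000):
    """Naive linear-interpolation resampler (illustrative, no anti-aliasing)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

halved = resample([0.0, 1.0, 0.0, -1.0], src_rate=32000)
print(len(halved))  # 2: a 32 kHz input halves in length at 16 kHz
```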
a882fef6a4775fe9d02362f72d698d29
mit
['automatic-speech-recognition', 'generated_from_trainer']
false
Model description

We pre-trained a wav2vec 2.0 base model on 842h of unlabelled Luxembourgish speech collected from [RTL.lu](https://www.rtl.lu/). Then the model was fine-tuned on 4h of labelled Luxembourgish speech from the same domain.
e4e6fc0580264a066c57718789d795ff
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-spam

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0325
- F1: 0.9910
- Accuracy: 0.9910
339ebefb493dd3e5d3f11dc90e174eed
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.1523 | 1.0 | 79 | 0.0369 | 0.9892 | 0.9892 |
| 0.0303 | 2.0 | 158 | 0.0325 | 0.9910 | 0.9910 |
542f569c788bce8ca87e5d7700dda9f4
apache-2.0
['generated_from_trainer']
false
TUF_DistilBERT_5E

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1832
- Accuracy: 0.96
eb87bffa997b2ccec1d7c930eb7d435e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.5092 | 0.1 | 50 | 0.4385 | 0.7533 |
| 0.2807 | 0.2 | 100 | 0.2225 | 0.9 |
| 0.1881 | 0.3 | 150 | 0.1531 | 0.94 |
| 0.1895 | 0.4 | 200 | 0.1426 | 0.94 |
| 0.1995 | 0.5 | 250 | 0.1428 | 0.94 |
| 0.1745 | 0.59 | 300 | 0.1538 | 0.9267 |
| 0.1679 | 0.69 | 350 | 0.1249 | 0.9533 |
| 0.199 | 0.79 | 400 | 0.1327 | 0.9467 |
| 0.1703 | 0.89 | 450 | 0.1488 | 0.92 |
| 0.1541 | 0.99 | 500 | 0.1772 | 0.9467 |
| 0.1436 | 1.09 | 550 | 0.1070 | 0.9667 |
| 0.1463 | 1.19 | 600 | 0.1165 | 0.9467 |
| 0.1309 | 1.29 | 650 | 0.1054 | 0.9733 |
| 0.097 | 1.39 | 700 | 0.1346 | 0.94 |
| 0.1307 | 1.49 | 750 | 0.1477 | 0.9467 |
| 0.1506 | 1.58 | 800 | 0.1311 | 0.9533 |
| 0.1386 | 1.68 | 850 | 0.1165 | 0.9667 |
| 0.1463 | 1.78 | 900 | 0.4207 | 0.9067 |
| 0.1202 | 1.88 | 950 | 0.1528 | 0.9667 |
| 0.1403 | 1.98 | 1000 | 0.1262 | 0.96 |
| 0.073 | 2.08 | 1050 | 0.1459 | 0.96 |
| 0.0713 | 2.18 | 1100 | 0.1747 | 0.9533 |
| 0.0814 | 2.28 | 1150 | 0.1953 | 0.9667 |
| 0.0935 | 2.38 | 1200 | 0.1888 | 0.9533 |
| 0.0685 | 2.48 | 1250 | 0.1562 | 0.9467 |
| 0.1154 | 2.57 | 1300 | 0.1806 | 0.96 |
| 0.1239 | 2.67 | 1350 | 0.1322 | 0.9533 |
| 0.1011 | 2.77 | 1400 | 0.2148 | 0.94 |
| 0.0718 | 2.87 | 1450 | 0.1686 | 0.96 |
| 0.1159 | 2.97 | 1500 | 0.1532 | 0.9533 |
| 0.0516 | 3.07 | 1550 | 0.1888 | 0.96 |
| 0.063 | 3.17 | 1600 | 0.1851 | 0.9467 |
| 0.068 | 3.27 | 1650 | 0.2775 | 0.94 |
| 0.0946 | 3.37 | 1700 | 0.1853 | 0.96 |
| 0.0606 | 3.47 | 1750 | 0.2148 | 0.9467 |
| 0.0663 | 3.56 | 1800 | 0.2091 | 0.9533 |
| 0.0474 | 3.66 | 1850 | 0.1702 | 0.9533 |
| 0.0585 | 3.76 | 1900 | 0.1660 | 0.96 |
| 0.0439 | 3.86 | 1950 | 0.2220 | 0.9533 |
| 0.0758 | 3.96 | 2000 | 0.1834 | 0.96 |
| 0.0497 | 4.06 | 2050 | 0.1707 | 0.9533 |
| 0.0412 | 4.16 | 2100 | 0.1948 | 0.9533 |
| 0.0338 | 4.26 | 2150 | 0.2039 | 0.9533 |
| 0.0796 | 4.36 | 2200 | 0.1797 | 0.9533 |
| 0.0727 | 4.46 | 2250 | 0.1986 | 0.9533 |
| 0.032 | 4.55 | 2300 | 0.1947 | 0.9467 |
| 0.0436 | 4.65 | 2350 | 0.1908 | 0.9467 |
| 0.0205 | 4.75 | 2400 | 0.1806 | 0.96 |
| 0.0326 | 4.85 | 2450 | 0.1835 | 0.96 |
| 0.0404 | 4.95 | 2500 | 0.1832 | 0.96 |
83a62ae4ec51704a38b3de2989d0759b
apache-2.0
['automatic-speech-recognition', 'et']
false
exp_w2v2t_et_wav2vec2_s112

Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
82b5aefb6780b05af2ee98045796a7a4
apache-2.0
[]
false
Training Data

Given a question and paragraph, can the question be answered by the paragraph? The models have been trained on the [GLUE QNLI](https://arxiv.org/abs/1804.07461) dataset, which transformed the [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/) into an NLI task.
5ef21a5ef466dd343b3c7fbdf7e3e861
apache-2.0
[]
false
Usage

Pre-trained models can be used like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('model_name')
scores = model.predict([('Query1', 'Paragraph1'), ('Query2', 'Paragraph2')])
```
51a7ad876f02037ac2687fb0dd00c19d
apache-2.0
[]
false
e.g.

```python
scores = model.predict([('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'),
                        ('What is the size of New York?', 'New York City is famous for the Metropolitan Museum of Art.')])
```
675871f385d06873a1d2ad11cd95dafe
apache-2.0
[]
false
Usage with Transformers AutoModel

You can also use the model directly with the Transformers library (without the SentenceTransformers library):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('model_name')
tokenizer = AutoTokenizer.from_pretrained('model_name')

features = tokenizer(['How many people live in Berlin?', 'What is the size of New York?'],
                     ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = torch.sigmoid(model(**features).logits)
    print(scores)
```
42bdbc866f77027cbf6d71ff5efc19d1
apache-2.0
['automatic-speech-recognition', 'es']
false
exp_w2v2t_es_xls-r_s691

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
98fc04c888d6f89d9b15a5f39a79acb5
mit
['generated_from_trainer']
false
finetuned_gpt2-medium_sst2_negation0.001_pretrainedTrue_epochs1

This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset. It achieves the following results on the evaluation set:
- Loss: 2.8767
954ebc6ba10f3972df83473c90c2b6f2
unlicense
[]
false
Use the token SamDoesArt to trigger the effect. It should work anywhere in the prompt. I usually put it right at the beginning of the prompt, which has a mildly different effect than putting it at the end. It's up to you though to do some tests and find where in the prompt it works best for your personal tastes.

I don't recommend putting the word "style" directly after the keyword like most custom models do. On this model it will cause unpredictable, weird results that probably aren't what you truly want. Whether a comma after SamDoesArt makes a difference is difficult to determine.

"SamDoesArt, portrait of a pretty girl"
"SamDoesArt, a man working in a factory, manly, machines"
"SamDoesArt, an African lion, mane, majestic"

You get the idea. A much more thorough and much more visual guide to the model can be seen here: https://www.reddit.com/r/StableDiffusion/comments/za1d7p/samdoesart_v3_intended_to_improve_both_human/

Still don't like this website and don't understand why it is so user-unfriendly...
7ba50c9ae1e176bd964fb526c58db966
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Medium Galician

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the google/fleurs gl_es dataset. It achieves the following results on the evaluation set:
- Loss: 0.4553
- Wer: 14.6741
06ed36e01ed8526f405b5f94628aad93
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:---:|:---:|:---:|:---:|:---:|
| 0.0002 | 39.01 | 1000 | 0.4553 | 14.6741 |
| 0.0001 | 79.0 | 2000 | 0.5023 | 14.9400 |
| 0.0 | 119.0 | 3000 | 0.5317 | 15.1609 |
| 0.0 | 159.0 | 4000 | 0.5513 | 15.2015 |
| 0.0 | 199.0 | 5000 | 0.5593 | 15.2060 |
ad2854ecb173e3140bf1ee548efb3247
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.2214
- Accuracy: 0.9294
62db770e3f542d943dae9edd5dcc8681
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.2435 | 1.0 | 1250 | 0.2186 | 0.917 |
| 0.1495 | 2.0 | 2500 | 0.2214 | 0.9294 |
| 0.0829 | 3.0 | 3750 | 0.4892 | 0.8918 |
| 0.0472 | 4.0 | 5000 | 0.5189 | 0.8976 |
| 0.0268 | 5.0 | 6250 | 0.5478 | 0.8996 |
9ffe5b5df99b4551e19bff2f34717edf
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-squad

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.0964
ff0f87a11294082cd8fd64b7dffba0f6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 1.0664 | 1.0 | 5533 | 1.0170 |
| 0.7946 | 2.0 | 11066 | 1.0367 |
| 0.5758 | 3.0 | 16599 | 1.0964 |
aed549c00480643041ddd3635df16d32
apache-2.0
['translation']
false
opus-mt-sv-chk

* source languages: sv
* target languages: chk
* OPUS readme: [sv-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-chk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-chk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-chk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-chk/opus-2020-01-16.eval.txt)
eec0b7161a6eb18a879b99f80d0cfc39
mit
['automatic-speech-recognition', 'generated_from_trainer']
false
Model description

We continued pre-training a wav2vec 2.0 large XLSR-53 checkpoint on 842h of unlabelled Luxembourgish speech collected from [RTL.lu](https://www.rtl.lu/). Then the model was fine-tuned on 11h of labelled Luxembourgish speech from the same domain.
1fb9730bbe5895fbb800ef917c2a5e70
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__sst2__train-32-5

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6248
- Accuracy: 0.6826
f41210d25d8c7581b3a25252db920c11
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:---:|:---:|:---:|:---:|:---:|
| 0.7136 | 1.0 | 13 | 0.6850 | 0.5385 |
| 0.6496 | 2.0 | 26 | 0.6670 | 0.6154 |
| 0.5895 | 3.0 | 39 | 0.6464 | 0.7692 |
| 0.4271 | 4.0 | 52 | 0.6478 | 0.7692 |
| 0.2182 | 5.0 | 65 | 0.6809 | 0.6923 |
| 0.103 | 6.0 | 78 | 0.9119 | 0.6923 |
| 0.0326 | 7.0 | 91 | 1.0718 | 0.6923 |
| 0.0154 | 8.0 | 104 | 1.0721 | 0.7692 |
| 0.0087 | 9.0 | 117 | 1.1416 | 0.7692 |
| 0.0067 | 10.0 | 130 | 1.2088 | 0.7692 |
| 0.005 | 11.0 | 143 | 1.2656 | 0.7692 |
| 0.0037 | 12.0 | 156 | 1.3104 | 0.7692 |
| 0.0032 | 13.0 | 169 | 1.3428 | 0.6923 |
937ec36df0f96ee6b969bbeebcd418b3
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.8456
- Matthews Correlation: 0.5500
fb3bde144e7094baeaf09636a241a53e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:---:|:---:|:---:|:---:|:---:|
| 0.5197 | 1.0 | 535 | 0.5477 | 0.4130 |
| 0.3456 | 2.0 | 1070 | 0.5035 | 0.5239 |
| 0.2342 | 3.0 | 1605 | 0.6100 | 0.5285 |
| 0.1698 | 4.0 | 2140 | 0.7556 | 0.5456 |
| 0.1295 | 5.0 | 2675 | 0.8456 | 0.5500 |
58f4d3005450d944bc5726895a5c9027
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Medium Czech 2 CV11

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 cs dataset. It achieves the following results on the evaluation set:
- Loss: 0.2417
- Wer: 11.4086
e6dc0d66771a46fb1686ab15a1a714b1