license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L-6-v3') embeddings = model.encode(sentences) print(embeddings) ```
e49f1923b7654b5eb8ccccfb76fa0b1e
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Load model from HuggingFace Hub ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-MiniLM-L-6-v3') model = AutoModel.from_pretrained('sentence-transformers/msmarco-MiniLM-L-6-v3') ```
1e477016522320244dee0a01bb5b895b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-MiniLM-L-6-v3)
817de3e8512aefccc56e469d8ed28031
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
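The Pooling module above is configured with `pooling_mode_mean_tokens: True`, i.e. it averages token embeddings, counting only non-padding positions from the attention mask. A minimal pure-Python sketch of that step (toy 4-dimensional embeddings rather than the model's 384; the helper name is illustrative, not from the library): ```python
# Sketch of mean pooling as configured above (pooling_mode_mean_tokens=True):
# average token embeddings over positions where the attention mask is 1.
def mean_pooling(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for emb, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for i in range(dim):
                sums[i] += emb[i]
    return [s / count for s in sums]

# Toy example: 3 tokens (the last is padding), 4-dimensional embeddings.
embeddings = [[1.0, 2.0, 3.0, 4.0],
              [3.0, 2.0, 1.0, 0.0],
              [9.0, 9.0, 9.0, 9.0]]  # padding token, masked out
mask = [1, 1, 0]
print(mean_pooling(embeddings, mask))  # [2.0, 2.0, 2.0, 2.0]
``` The padded position contributes nothing, which is why masked mean pooling differs from a plain average over all rows.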
c614514a59127b87ac4fc8d4351e0db2
mit
['roberta']
false
What is this? This model has been developed to detect "narrative-style" jokes, stories and anecdotes (i.e. those narrated as a story) spoken during speeches, conversations, etc. It works best when jokes/anecdotes are at least 40 words long. It is based on Facebook's [RoBerta-MUPPET](https://huggingface.co/facebook/muppet-roberta-base). The training dataset was a private collection of around 2000 jokes. This model has not been trained or tested on one-liners, puns or Reddit-style language-manipulation jokes such as knock-knock or Q&A jokes. See the example in the inference widget or the How to use section for what constitutes a narrative-style joke. For a slightly more accurate model (0.4% higher) that is 65% slower at inference, see the [Deberta-v3 model](https://huggingface.co/Reggie/DeBERTa-v3-base-joke_detector). For a less accurate model (2.4% lower) that is much faster at inference, see the [distilbert model](https://huggingface.co/Reggie/distilbert-joke_detector).
6cec773b6886c063c6bc1062a03bb049
mit
['roberta']
false
How to use ```python from transformers import pipeline import torch device = 0 if torch.cuda.is_available() else -1 model_name = 'Reggie/muppet-roberta-base-joke_detector' max_seq_len = 510 pipe = pipeline(model=model_name, device=device, truncation=True, max_length=max_seq_len) is_it_a_joke = """A nervous passenger is about to book a flight ticket, and he asks the airlines' ticket seller, "I hope your planes are safe. Do they have a good track record for safety?" The airline agent replies, "Sir, I can guarantee you, we've never had a plane that has crashed more than once." """ result = pipe(is_it_a_joke) print(result) ```
d886638f67390a357781c7dbd8423839
cc-by-sa-4.0
['generated_from_trainer']
false
fin4 This model is a fine-tuned version of [nlpaueb/sec-bert-num](https://huggingface.co/nlpaueb/sec-bert-num) on the fin dataset. It achieves the following results on the evaluation set: - Loss: 0.0549 - Precision: 0.9209 - Recall: 0.9283 - F1: 0.9246 - Accuracy: 0.9913
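The reported F1 is the harmonic mean of the precision and recall above; a quick sanity check (not part of the original card): ```python
# F1 = 2 * P * R / (P + R); verify against the reported metrics.
precision, recall = 0.9209, 0.9283
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9246
```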
012bf0d78978b831876260157a631493
cc-by-sa-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 129 | 0.1041 | 0.8242 | 0.8406 | 0.8323 | 0.9788 | | No log | 2.0 | 258 | 0.0511 | 0.9173 | 0.9283 | 0.9228 | 0.9902 | | No log | 3.0 | 387 | 0.0430 | 0.9102 | 0.9283 | 0.9191 | 0.9907 | | 0.0598 | 4.0 | 516 | 0.0501 | 0.9368 | 0.9442 | 0.9405 | 0.9922 | | 0.0598 | 5.0 | 645 | 0.0436 | 0.9325 | 0.9363 | 0.9344 | 0.9924 | | 0.0598 | 6.0 | 774 | 0.0489 | 0.9433 | 0.9283 | 0.9357 | 0.9917 | | 0.0598 | 7.0 | 903 | 0.0499 | 0.932 | 0.9283 | 0.9301 | 0.9919 | | 0.0028 | 8.0 | 1032 | 0.0537 | 0.9209 | 0.9283 | 0.9246 | 0.9913 | | 0.0028 | 9.0 | 1161 | 0.0540 | 0.9170 | 0.9243 | 0.9206 | 0.9911 | | 0.0028 | 10.0 | 1290 | 0.0549 | 0.9209 | 0.9283 | 0.9246 | 0.9913 |
83e023f13ee09d54c07bb16ee518a8be
apache-2.0
['generated_from_trainer']
false
beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-7e-05-32 This model is a fine-tuned version of [Celal11/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05](https://huggingface.co/Celal11/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.8037 - Accuracy: 0.7201
b4b06f1484e29d0db39268820b07d8da
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2
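With gradient accumulation, the effective (total) train batch size is the per-step batch size multiplied by the number of accumulation steps, which is how the values above relate: ```python
# Effective batch size under gradient accumulation.
train_batch_size = 64
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 256
```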
a2ac630872b337981ffef4d9b3a0ee34
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8058 | 1.0 | 112 | 0.8260 | 0.7056 | | 0.6999 | 2.0 | 224 | 0.8037 | 0.7201 |
a2e09fc83643b1a7d4da0b50d6e722fe
apache-2.0
['translation']
false
opus-mt-kg-fr * source languages: kg * target languages: fr * OPUS readme: [kg-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kg-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kg-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-fr/opus-2020-01-09.eval.txt)
ae7c597ee52b0d5ce0d7d570e2e88765
apache-2.0
['generated_from_trainer']
false
wav2vec2-xls-r-300m-ar-4 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7888 - Wer: 0.3697
75da6ca44e75508bcd0cb2d8ff9e811a
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP
8ce70c2e4614fab7fffd9e3c9ea0df5e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.8069 | 1.18 | 400 | 1.7793 | 0.9883 | | 1.1949 | 2.35 | 800 | 0.9662 | 0.7908 | | 0.8996 | 3.53 | 1200 | 0.8404 | 0.7154 | | 0.7652 | 4.71 | 1600 | 0.7478 | 0.6379 | | 0.6611 | 5.88 | 2000 | 0.7687 | 0.6229 | | 0.6015 | 7.06 | 2400 | 0.7153 | 0.5948 | | 0.5444 | 8.24 | 2800 | 0.7062 | 0.5826 | | 0.4872 | 9.41 | 3200 | 0.6568 | 0.5414 | | 0.4729 | 10.59 | 3600 | 0.6817 | 0.5599 | | 0.4238 | 11.76 | 4000 | 0.6406 | 0.5262 | | 0.4022 | 12.94 | 4400 | 0.6797 | 0.5184 | | 0.3945 | 14.12 | 4800 | 0.6744 | 0.5147 | | 0.3711 | 15.29 | 5200 | 0.6807 | 0.5090 | | 0.3318 | 16.47 | 5600 | 0.6286 | 0.5011 | | 0.3132 | 17.65 | 6000 | 0.6481 | 0.4814 | | 0.2992 | 18.82 | 6400 | 0.6454 | 0.4958 | | 0.2734 | 20.0 | 6800 | 0.6465 | 0.4825 | | 0.2534 | 21.18 | 7200 | 0.6559 | 0.4658 | | 0.2505 | 22.35 | 7600 | 0.6601 | 0.4618 | | 0.2495 | 23.53 | 8000 | 0.7080 | 0.4813 | | 0.2387 | 24.71 | 8400 | 0.6635 | 0.4508 | | 0.2154 | 25.88 | 8800 | 0.6442 | 0.4538 | | 0.2096 | 27.06 | 9200 | 0.7399 | 0.4579 | | 0.2007 | 28.24 | 9600 | 0.6957 | 0.4512 | | 0.1942 | 29.41 | 10000 | 0.6642 | 0.4267 | | 0.1854 | 30.59 | 10400 | 0.6842 | 0.4393 | | 0.1782 | 31.76 | 10800 | 0.7007 | 0.4393 | | 0.1751 | 32.94 | 11200 | 0.7063 | 0.4321 | | 0.1695 | 34.12 | 11600 | 0.7057 | 0.4330 | | 0.1638 | 35.29 | 12000 | 0.7416 | 0.4266 | | 0.1531 | 36.47 | 12400 | 0.7420 | 0.4273 | | 0.1475 | 37.65 | 12800 | 0.7334 | 0.4218 | | 0.1388 | 38.82 | 13200 | 0.7420 | 0.4227 | | 0.1372 | 40.0 | 13600 | 0.7492 | 0.4238 | | 0.1341 | 41.18 | 14000 | 0.7803 | 0.4193 | | 0.133 | 42.35 | 14400 | 0.7396 | 0.4105 | | 0.1238 | 43.53 | 14800 | 0.7561 | 0.4098 | | 0.1163 | 44.71 | 15200 | 0.7987 | 0.4049 | | 0.116 | 45.88 | 15600 | 0.7769 | 0.4093 | | 0.1079 | 47.06 | 16000 | 0.7780 | 0.3986 | | 0.1043 | 48.24 | 16400 | 0.7674 | 0.3905 | | 0.1004 | 49.41 | 16800 | 0.7931 | 0.3949 | | 0.0987 | 
50.59 | 17200 | 0.7605 | 0.3938 | | 0.0963 | 51.76 | 17600 | 0.7735 | 0.3858 | | 0.0905 | 52.94 | 18000 | 0.7504 | 0.3802 | | 0.086 | 54.12 | 18400 | 0.8038 | 0.3867 | | 0.0839 | 55.29 | 18800 | 0.7887 | 0.3797 | | 0.0798 | 56.47 | 19200 | 0.7832 | 0.3705 | | 0.0785 | 57.65 | 19600 | 0.7771 | 0.3706 | | 0.0765 | 58.82 | 20000 | 0.7858 | 0.3703 | | 0.0739 | 60.0 | 20400 | 0.7888 | 0.3697 |
7746c9bb6486f4d2af9c0153ae6874f9
apache-2.0
['generated_from_trainer']
false
summarise_v11 This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6322 - Rouge1 Precision: 0.6059 - Rouge1 Recall: 0.6233 - Rouge1 Fmeasure: 0.5895 - Rouge2 Precision: 0.4192 - Rouge2 Recall: 0.4512 - Rouge2 Fmeasure: 0.4176 - Rougel Precision: 0.4622 - Rougel Recall: 0.4946 - Rougel Fmeasure: 0.4566 - Rougelsum Precision: 0.4622 - Rougelsum Recall: 0.4946 - Rougelsum Fmeasure: 0.4566
36543535451a2aec01622efaca4517f3
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP
c4653ad1af2b991c7526de75727cbcc1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:| | 1.6201 | 0.45 | 10 | 1.4875 | 0.3203 | 0.64 | 0.3932 | 0.197 | 0.3839 | 0.2385 | 0.1952 | 0.4051 | 0.2454 | 0.1952 | 0.4051 | 0.2454 | | 0.9172 | 0.91 | 20 | 1.4404 | 0.4917 | 0.5134 | 0.4699 | 0.288 | 0.3095 | 0.276 | 0.3371 | 0.3594 | 0.3277 | 0.3371 | 0.3594 | 0.3277 | | 1.0923 | 1.36 | 30 | 1.3575 | 0.519 | 0.5505 | 0.4936 | 0.3114 | 0.3237 | 0.2958 | 0.3569 | 0.3702 | 0.3364 | 0.3569 | 0.3702 | 0.3364 | | 1.1287 | 1.82 | 40 | 1.3269 | 0.4913 | 0.5997 | 0.5068 | 0.3108 | 0.3964 | 0.3269 | 0.3355 | 0.427 | 0.3521 | 0.3355 | 0.427 | 0.3521 | | 0.9938 | 2.27 | 50 | 1.3189 | 0.5339 | 0.5781 | 0.4973 | 0.3555 | 0.3883 | 0.3345 | 0.3914 | 0.4289 | 0.3678 | 0.3914 | 0.4289 | 0.3678 | | 0.8659 | 2.73 | 60 | 1.3241 | 0.525 | 0.638 | 0.5165 | 0.3556 | 0.4349 | 0.3535 | 0.3914 | 0.4793 | 0.3886 | 0.3914 | 0.4793 | 0.3886 | | 0.6187 | 3.18 | 70 | 1.3360 | 0.5875 | 0.5864 | 0.5416 | 0.4005 | 0.4045 | 0.3701 | 0.4485 | 0.4556 | 0.414 | 0.4485 | 0.4556 | 0.414 | | 0.3941 | 3.64 | 80 | 1.4176 | 0.5373 | 0.6415 | 0.5328 | 0.3576 | 0.446 | 0.3642 | 0.3787 | 0.4586 | 0.3781 | 0.3787 | 0.4586 | 0.3781 | | 0.4145 | 4.09 | 90 | 1.3936 | 0.4127 | 0.6553 | 0.4568 | 0.2568 | 0.4498 | 0.2988 | 0.2918 | 0.4933 | 0.328 | 0.2918 | 0.4933 | 0.328 | | 0.4203 | 4.55 | 100 | 1.4703 | 0.6545 | 0.601 | 0.5981 | 0.4789 | 0.4373 | 0.438 | 0.5251 | 0.4851 | 0.4818 | 0.5251 | 0.4851 | 0.4818 | | 0.687 | 5.0 | 110 | 1.4304 | 0.5566 | 0.6357 | 
0.5637 | 0.3734 | 0.4186 | 0.3748 | 0.4251 | 0.4825 | 0.4286 | 0.4251 | 0.4825 | 0.4286 | | 0.4006 | 5.45 | 120 | 1.5399 | 0.5994 | 0.5794 | 0.5515 | 0.4215 | 0.4218 | 0.398 | 0.4359 | 0.4369 | 0.4084 | 0.4359 | 0.4369 | 0.4084 | | 0.2536 | 5.91 | 130 | 1.5098 | 0.5074 | 0.6254 | 0.4874 | 0.3369 | 0.4189 | 0.3256 | 0.3802 | 0.4738 | 0.3664 | 0.3802 | 0.4738 | 0.3664 | | 0.2218 | 6.36 | 140 | 1.5278 | 0.5713 | 0.6059 | 0.5688 | 0.3887 | 0.4233 | 0.3916 | 0.4414 | 0.4795 | 0.4457 | 0.4414 | 0.4795 | 0.4457 | | 0.2577 | 6.82 | 150 | 1.5469 | 0.5148 | 0.5941 | 0.5175 | 0.3284 | 0.3856 | 0.3335 | 0.3616 | 0.4268 | 0.3681 | 0.3616 | 0.4268 | 0.3681 | | 0.1548 | 7.27 | 160 | 1.5986 | 0.5983 | 0.657 | 0.5862 | 0.4322 | 0.4877 | 0.4287 | 0.4466 | 0.5167 | 0.4482 | 0.4466 | 0.5167 | 0.4482 | | 0.1535 | 7.73 | 170 | 1.5796 | 0.5609 | 0.641 | 0.5616 | 0.3856 | 0.4428 | 0.3892 | 0.4238 | 0.4921 | 0.4263 | 0.4238 | 0.4921 | 0.4263 | | 0.1568 | 8.18 | 180 | 1.6052 | 0.5669 | 0.617 | 0.5679 | 0.3911 | 0.4382 | 0.3969 | 0.4363 | 0.4877 | 0.4417 | 0.4363 | 0.4877 | 0.4417 | | 0.2038 | 8.64 | 190 | 1.6191 | 0.5466 | 0.5973 | 0.5313 | 0.3543 | 0.4114 | 0.3531 | 0.4061 | 0.4666 | 0.404 | 0.4061 | 0.4666 | 0.404 | | 0.1808 | 9.09 | 200 | 1.6165 | 0.5751 | 0.5919 | 0.5587 | 0.3831 | 0.4097 | 0.3817 | 0.4482 | 0.4728 | 0.4405 | 0.4482 | 0.4728 | 0.4405 | | 0.1021 | 9.55 | 210 | 1.6316 | 0.5316 | 0.6315 | 0.535 | 0.3588 | 0.4563 | 0.3697 | 0.405 | 0.502 | 0.4126 | 0.405 | 0.502 | 0.4126 | | 0.1407 | 10.0 | 220 | 1.6322 | 0.6059 | 0.6233 | 0.5895 | 0.4192 | 0.4512 | 0.4176 | 0.4622 | 0.4946 | 0.4566 | 0.4622 | 0.4946 | 0.4566 |
34dd640707390ee5cf07f194701b887d
apache-2.0
['deep-narrow']
false
T5-Efficient-XL-NL16 (Deep-Narrow version) T5-Efficient-XL-NL16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block.
916aa09b5183d332ca02797dc3fbd1fc
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-xl-nl16** - is of model type **Xl** with the following variations: - **nl** is **16** It has **1912.07** million parameters and thus requires *ca.* **7648.29 MB** of memory in full precision (*fp32*) or **3824.14 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
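The memory figures follow from the parameter count at 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16 (a back-of-the-envelope check using 1 MB = 10^6 bytes; the small difference from the card's 7648.29 MB comes from rounding the parameter count): ```python
# Memory footprint from parameter count: 4 bytes/param (fp32), 2 bytes/param (fp16/bf16).
params_millions = 1912.07
fp32_mb = params_millions * 4
fp16_mb = params_millions * 2
print(round(fp32_mb, 2), round(fp16_mb, 2))  # 7648.28 3824.14
```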
3f6a504edafb9e8b09275bdd9f72eb11
cc-by-4.0
['classification']
false
A fine-tuned model based on Microsoft's **DeBERTaV3** model, fine-tuned on **GLUE QQP**, which detects the linguistic similarity between two questions and whether or not they are duplicates.
c64054aedcd925f463666ad7c4f63d44
cc-by-4.0
['classification']
false
Model Testing ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification model_name = "AI-Ahmed/deberta-v3-base-funetuned-cls-qqa" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) tokenized_input = tokenizer("How is the life of a math student? Could you describe your own experiences? Which level of preparation is enough for the exam jlpt5?", return_tensors="pt") with torch.no_grad(): logits = model(**tokenized_input).logits predicted_class_id = logits.argmax().item() model.config.id2label[predicted_class_id] ```
6810009fe05c08b8da57d3bc980659e3
cc-by-4.0
['classification']
false
Information Citation ```bibtex @inproceedings{ he2021deberta, title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION}, author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=XPZIaotutsD} } ```
ecb3a51f2ac91128b2488039449d92b7
apache-2.0
['text-classification', 'generated_from_trainer']
false
distilroberts-base-mrpc-glue-jeraldflowers This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the MRPC task of the GLUE benchmark. It achieves the following results on the evaluation set: - Loss: 0.4990 - Accuracy: 0.8431 - F1: 0.8815
dbc19227d75ea2529f5141742436bb88
apache-2.0
['text-classification', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
3990a301bcf5c2196858f614b496013a
apache-2.0
['text-classification', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5289 | 1.09 | 500 | 0.5668 | 0.8211 | 0.8689 | | 0.3675 | 2.18 | 1000 | 0.4990 | 0.8431 | 0.8815 |
de6905ec5acf3069d040c395777f8f0d
cc-by-sa-4.0
['bn', 'audio', 'automatic-speech-recognition', 'speech']
false
Wav2Vec2-Large-XLSR-Bengali Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using a subset of 40,000 utterances from the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). WER was tested on ~4,200 utterances held out from training. When using this model, make sure that your speech input is sampled at 16kHz. Training script: train.py Data prep notebook: https://colab.research.google.com/drive/1JMlZPU-DrezXjZ2t7sOVqn7CJjZhdK2q?usp=sharing Inference notebook: https://colab.research.google.com/drive/1uKC2cK9JfUPDTUHbrNdOYqKtNozhxqgZ?usp=sharing
94db292797f987196606beb9105c4e90
cc-by-sa-4.0
['bn', 'audio', 'automatic-speech-recognition', 'speech']
false
Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali") model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali") ```
485c6c83b7d4009b616857b1968d2ca0
cc-by-sa-4.0
['bn', 'audio', 'automatic-speech-recognition', 'speech']
false
```python model = model.to("cuda") resampler = torchaudio.transforms.Resample(TEST_AUDIO_SR, 16_000) def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch) speech = resampler(speech_array).squeeze().numpy() return speech speech_array = speech_file_to_array_fn("test_file.wav") inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) preds = processor.batch_decode(predicted_ids)[0] print(preds.replace("[PAD]","")) ``` **Test Result**: WER on ~4,200 utterances: 32.45 %
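The WER figure above is a word error rate: the word-level edit distance (substitutions + deletions + insertions) between reference and hypothesis transcripts, divided by the number of reference words. A minimal pure-Python sketch for illustration (not the scorer used to produce the 32.45 % figure): ```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat"))      # 2/3 (one substitution, one deletion)
```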
e6e3e518d6479f7c61e4366b3b244e33
apache-2.0
['automatic-speech-recognition', 'uk']
false
exp_w2v2t_uk_unispeech-sat_s27 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
b41891e2a304fa47d7487d3f5a1ad09a
apache-2.0
['generated_from_trainer']
false
flan-t5-large-dream-character This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0937 - Gen Len: 2.8625 - F1: 0.6843 - Precision: 0.7760 - Recall: 0.6755
86b630308f3e43ba7a4a94291d92a304
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5
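The cosine scheduler with a 0.1 warmup ratio ramps the learning rate linearly to its peak (0.0003) over the first 10% of steps, then decays it along a cosine curve. A sketch of that shape under assumed values (mirroring, but not reproducing, the `transformers` cosine schedule; `total_steps=1000` is illustrative): ```python
import math

def lr_at(step, total_steps, peak_lr=3e-4, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps          # linear ramp
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))  # cosine decay

total = 1000
print(lr_at(0, total))     # 0.0
print(lr_at(100, total))   # 0.0003 (peak, at the end of warmup)
print(lr_at(1000, total))  # 0.0 (fully decayed)
```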
8d89a759e02d8a14bc1b5569c253c522
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:---------:|:------:| | 0.714 | 0.59 | 250 | 0.1678 | 3.025 | 0.2809 | 0.3302 | 0.3145 | | 0.1488 | 1.18 | 500 | 0.1332 | 2.1 | 0.4394 | 0.575 | 0.4082 | | 0.1206 | 1.78 | 750 | 0.1023 | 2.35 | 0.5491 | 0.6948 | 0.5205 | | 0.097 | 2.37 | 1000 | 0.0974 | 2.8375 | 0.5889 | 0.6956 | 0.5904 | | 0.0859 | 2.96 | 1250 | 0.0884 | 2.9 | 0.6610 | 0.7510 | 0.6574 | | 0.0635 | 3.55 | 1500 | 0.0926 | 2.4625 | 0.6429 | 0.7875 | 0.5930 | | 0.0581 | 4.15 | 1750 | 0.0930 | 2.75 | 0.6651 | 0.7754 | 0.6446 | | 0.0453 | 4.74 | 2000 | 0.0937 | 2.8625 | 0.6843 | 0.7760 | 0.6755 |
cd52a14ecf1292502c3657aba2fba770
apache-2.0
[]
false
Configuration `title`: _string_ Display title for the Space `emoji`: _string_ Space emoji (emoji-only character allowed) `colorFrom`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `colorTo`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `sdk`: _string_ Can be either `gradio` or `streamlit` `sdk_version` : _string_ Only applicable for `streamlit` SDK. See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. `app_file`: _string_ Path to your main application file (which contains either `gradio` or `streamlit` Python code). Path is relative to the root of the repository. `pinned`: _boolean_ Whether the Space stays on top of your list.
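These fields live in the YAML front matter at the top of a Space's README.md. A hypothetical example combining them (all values are illustrative, not from any real Space; `sdk_version` is omitted because it applies only to the `streamlit` SDK): ```yaml
title: Demo Space
emoji: 🚀
colorFrom: blue
colorTo: purple
sdk: gradio
app_file: app.py
pinned: false
```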
23ce94e3e58756a9db679627e47d2664
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Diffusers ```py from diffusers import StableDiffusionPipeline import torch model_id = "runwayml/stable-diffusion-v1-5" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16") pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` For more detailed instructions, use cases, and examples in JAX, follow the instructions [here](https://github.com/huggingface/diffusers).
206cafe6be1a7354ed1698aa67ffb678
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_r-wav2vec2_s925 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
a158f34a65d26b9371a28a6eb53c6555
apache-2.0
[]
false
Model description **CAMeLBERT-CA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model. For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
cf4b918c9fc3d6d3a9799548c581d0e2
apache-2.0
[]
false
How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa') >>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع' >>> pos(text) [{'entity': 'noun', 'score': 0.9999758, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9997559, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.99996257, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.9958452, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.9999635, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99991685, 'index': 6, 'word': '
aa4e56d8d730a967451317df62ab5f66
apache-2.0
[]
false
رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.99997497, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.9999795, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99924207, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.99994195, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.9997414, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
800ba4e54e4b346b0adae428fb661a6c
apache-2.0
[]
false
Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
bb388112525b3d4838e9a03aa0dd5663
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2369 - F1: 0.8322
eaa64d0df6b5ee41bed8fcb2037834d4
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8113 | 1.0 | 70 | 0.3088 | 0.7546 | | 0.259 | 2.0 | 140 | 0.2541 | 0.8155 | | 0.1791 | 3.0 | 210 | 0.2369 | 0.8322 |
c3ae3c6304fdecd9bf5c96a87d92fb96
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9314 - Wer: 1.0
c364b925afe46d92ea279ec6374c9258
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP
3bd3f6ef209cdf85c04f6256322c6ad2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 8.686 | 0.16 | 20 | 13.6565 | 1.0 | | 8.0711 | 0.32 | 40 | 12.5379 | 1.0 | | 6.9967 | 0.48 | 60 | 9.7215 | 1.0 | | 5.2368 | 0.64 | 80 | 5.8459 | 1.0 | | 3.4499 | 0.8 | 100 | 3.3413 | 1.0 | | 3.1261 | 0.96 | 120 | 3.2858 | 1.0 | | 3.0654 | 1.12 | 140 | 3.1945 | 1.0 | | 3.0421 | 1.28 | 160 | 3.1296 | 1.0 | | 3.0035 | 1.44 | 180 | 3.1172 | 1.0 | | 3.0067 | 1.6 | 200 | 3.1217 | 1.0 | | 2.9867 | 1.76 | 220 | 3.0715 | 1.0 | | 2.9653 | 1.92 | 240 | 3.0747 | 1.0 | | 2.9629 | 2.08 | 260 | 2.9984 | 1.0 | | 2.9462 | 2.24 | 280 | 2.9991 | 1.0 | | 2.9391 | 2.4 | 300 | 3.0391 | 1.0 | | 2.934 | 2.56 | 320 | 2.9682 | 1.0 | | 2.9193 | 2.72 | 340 | 2.9701 | 1.0 | | 2.8985 | 2.88 | 360 | 2.9314 | 1.0 |
63c32a2773ec23938a7e6179fce52c11
mit
['gpt2-viwiki']
false
Model description This is a Vietnamese GPT-2 model which is finetuned on the [Latest pages articles of Vietnamese Wikipedia](https://dumps.wikimedia.org/viwiki/latest/viwiki-latest-pages-articles.xml.bz2).
a3b24f9806858208639131dfcffe3132
mit
['gpt2-viwiki']
false
How to use

You can use this model to:
- Tokenize Vietnamese sentences with GPT2Tokenizer.
- Generate text that reads like a Wikipedia article.
- Fine-tune it on other downstream tasks.

Here is how to use the model to generate text in Pytorch:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('danghuy1999/gpt2-viwiki')
model = GPT2LMHeadModel.from_pretrained('danghuy1999/gpt2-viwiki').to('cuda')

text = "Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử"
input_ids = tokenizer.encode(text, return_tensors='pt').to('cuda')
max_length = 100

sample_outputs = model.generate(
    input_ids,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    max_length=max_length,
    min_length=max_length,
    top_k=40,
    num_beams=5,
    early_stopping=True,
    no_repeat_ngram_size=2,
    num_return_sequences=3,
)

for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i + 1, tokenizer.decode(sample_output.tolist())))
    print('\n---')
```

And the results are:

```bash
>> Generated text 1

Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử. Mặc dù thuyết tương đối tổng quát không được áp dụng rộng rãi trong nhiều lĩnh vực khác nhau, nhưng các nhà lý thuyết đã đưa ra khái niệm rộng hơn về tính chất của vật chất. Một trong những nghiên cứu của Albert Einstein về sự tồn tại của hệ quy chiếu quán tính, ông đã đề xuất rằng một lực hấp dẫn có thể có khối lượng bằng năng lượng của nó. Tuy nhiên, những người cho rằng
---
>> Generated text 2

Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử. Tuy nhiên, thuyết tương đối hẹp không phải là lý thuyết của Einstein. Cho đến tận cuối thế kỷ 19, Albert Einstein đã chứng minh được sự tồn tại của lực hấp dẫn trong một số trường hợp đặc biệt. Năm 1915, ông đưa ra khái niệm "khối lượng" để miêu tả chuyển động lượng của một hạt bằng khối lượng nghỉ của nó. Ông cho rằng năng lượng "m" là một thành phần của
---
>> Generated text 3

Albert Einstein là nhà vật lý học tạo ra thuyết lượng tử. Tuy nhiên, thuyết tương đối hẹp không được chấp nhận rộng rãi bởi các nhà lý thuyết. Một trong những nghiên cứu của Einstein về tính chất của lực hấp dẫn là vào năm 1905, ông đã đưa ra một khái niệm về lực học. Ông đã phát biểu rằng nếu một hạt mang điện tích dương, nó có thể chuyển đổi năng lượng của nó thành các hạt khác. Năm 1915, Arthur Eddington phát minh ra
---
```

You can do the same with **Tensorflow** by using **TFGPT2LMHeadModel** instead.
60ae6b353ffb280a1ddd98f8902ad32e
mit
[]
false
model by LinfO

This is the Stable Diffusion model fine-tuned on the yerlearsi concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **yerlearsi**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/6.jpeg)
![image 1](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/9.jpeg)
![image 2](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/1.jpeg)
![image 3](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/5.jpeg)
![image 4](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/2.jpeg)
![image 5](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/11.jpeg)
![image 6](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/3.jpeg)
![image 7](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/12.jpeg)
![image 8](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/10.jpeg)
![image 9](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/0.jpeg)
![image 10](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/4.jpeg)
![image 11](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/7.jpeg)
![image 12](https://huggingface.co/LinfO/yerlearsi/resolve/main/concept_images/8.jpeg)
eb1f84082388407e1925375653e3ef05
mit
[]
false
Boris Anderson on Stable Diffusion This is the `<boris-anderson>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<boris-anderson> 0](https://huggingface.co/sd-concepts-library/boris-anderson/resolve/main/concept_images/0.jpeg) ![<boris-anderson> 1](https://huggingface.co/sd-concepts-library/boris-anderson/resolve/main/concept_images/3.jpeg) ![<boris-anderson> 2](https://huggingface.co/sd-concepts-library/boris-anderson/resolve/main/concept_images/2.jpeg) ![<boris-anderson> 3](https://huggingface.co/sd-concepts-library/boris-anderson/resolve/main/concept_images/1.jpeg)
c94c5328fade7fbb475446a8e7985031
cc-by-4.0
['question-answering, multi-step-reasoning, multi-hop-reasoning']
false
What's this?

This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details.

We release the following models:

- **A:** Base models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`

The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.

The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.
ca4d667e47a26b41b92b0a70c5f5d005
cc-by-4.0
['question-answering, multi-step-reasoning, multi-hop-reasoning']
false
How to use it?

Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization
```
187291785581846e119e550f7bfc6805
cc-by-4.0
['question-answering, multi-step-reasoning, multi-hop-reasoning']
false
```python
# Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
    "Who scored the first touchdown of the game?\n" +
    "... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
```
b432dc1dd13ab93b3ebdee97820225a7
cc-by-4.0
['question-answering, multi-step-reasoning, multi-hop-reasoning']
false
```python
    # Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
    input_texts,
    return_tensors="pt",
    truncation=True,
    max_length=800,
    add_special_tokens=True,
    padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
    tokenizer.fix_decoded_text(generated_prediction)
    for generated_prediction in generated_predictions
]
```
a1371182edc11bc89f857d936b9de71c
creativeml-openrail-m
[]
false
Depending on the tags and their length, the art style will vary, so experiment with them!

| wral artstyle - artstyle tag
| watercolor \(medium\) - helps to bring out watercolor
| multicolored hair - helps to make the image multicolored

Sample images:

<style> img { display: inline-block; } </style>

<img src="https://huggingface.co/YoungMasterFromSect/ManyColors/resolve/main/1.png" width="300" height="200">
<img src="https://huggingface.co/YoungMasterFromSect/ManyColors/resolve/main/2.png" width="300" height="200">
<img src="https://huggingface.co/YoungMasterFromSect/ManyColors/resolve/main/3.png" width="300" height="300">
<img src="https://huggingface.co/YoungMasterFromSect/ManyColors/resolve/main/4.png" width="300" height="300">
<img src="https://huggingface.co/YoungMasterFromSect/ManyColors/resolve/main/5.png" width="300" height="300">
4f296a698325e3ff1b2526add11c805a
apache-2.0
['generated_from_trainer']
false
tiny-mlm-imdb-target-rotten_tomatoes

This model is a fine-tuned version of [muhtasham/small-mlm-wikitext](https://huggingface.co/muhtasham/small-mlm-wikitext) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.3909
- Accuracy: 0.8021
- F1: 0.8017
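For a binary task like rotten_tomatoes sentiment, the reported F1 is the harmonic mean of precision and recall on the positive class. A small self-contained sketch of the computation (illustrative only, not this model's evaluation code):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall and F1 for one positive class over paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1([1, 1, 0, 0], [1, 0, 0, 1])
print(p, r, f1)  # 0.5 0.5 0.5
```

In practice `sklearn.metrics.f1_score` (or the `evaluate` library) does the same thing with more options such as macro/micro averaging.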
cd528bef1a34b827e7474d245b27e3b6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4528        | 1.87  | 500  | 0.4296          | 0.8030   | 0.8028 |
| 0.2265        | 3.75  | 1000 | 0.5558          | 0.8096   | 0.8096 |
| 0.1111        | 5.62  | 1500 | 0.9042          | 0.8039   | 0.8039 |
| 0.0584        | 7.49  | 2000 | 1.1252          | 0.8058   | 0.8058 |
| 0.0405        | 9.36  | 2500 | 1.3909          | 0.8021   | 0.8017 |
e28a1074a7d402426ba9132d0dd29379
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_qnli_128

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set:
- Loss: 1.1653
- Accuracy: 0.5779
e5ee54e30604e751b9727120f5945249
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7088        | 1.0   | 33208  | 1.1653          | 0.5779   |
| 0.5355        | 2.0   | 66416  | 1.2844          | 0.5889   |
| 0.4541        | 3.0   | 99624  | 1.2482          | 0.5825   |
| 0.4041        | 4.0   | 132832 | 1.2911          | 0.5836   |
| 0.3722        | 5.0   | 166040 | 1.3428          | 0.5779   |
| 0.3486        | 6.0   | 199248 | 1.3220          | 0.5781   |
70c75924b71452a17dafc92e96cfa789
apache-2.0
['generated_from_trainer']
false
mt5-small-finetuned-cnn-dailymail

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set:
- Loss: 1.7294
- Rouge1: 32.8352
- Rouge2: 17.0633
- Rougel: 29.0888
- Rougelsum: 30.8226
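Rouge1 measures unigram overlap between the generated and reference summaries. As a simplified illustration of the F-measure variant (no stemming or tokenizer niceties, unlike the `rouge_score` package used in real evaluations):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """ROUGE-1 F-measure: clipped unigram overlap combined as harmonic mean of P and R."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # counts are clipped by the Counter intersection
    if not overlap:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat sat"))  # 0.666...
```

Rouge2 applies the same idea to bigrams, and RougeL uses the longest common subsequence instead of n-gram overlap.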
03a7d59d3db1ff368105aa3afd2b5537
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
f88e577a734f803ea41f88697f1ff903
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log        | 1.0   | 8973  | 1.9272          | 31.6634 | 16.1653 | 28.1624 | 29.7819   |
| No log        | 2.0   | 17946 | 1.8282          | 32.1032 | 16.4388 | 28.4914 | 30.1856   |
| No log        | 3.0   | 26919 | 1.7967          | 32.5721 | 16.8392 | 28.8483 | 30.5764   |
| 2.1615        | 4.0   | 35892 | 1.7640          | 32.6788 | 16.94   | 28.994  | 30.6883   |
| 2.1615        | 5.0   | 44865 | 1.7450          | 32.8129 | 17.048  | 29.0788 | 30.8106   |
| 2.1615        | 6.0   | 53838 | 1.7379          | 32.7074 | 16.9641 | 28.9745 | 30.7043   |
| 2.1615        | 7.0   | 62811 | 1.7317          | 32.7692 | 17.0116 | 29.0395 | 30.7685   |
| 2.0886        | 8.0   | 71784 | 1.7294          | 32.8352 | 17.0633 | 29.0888 | 30.8226   |
0dae05c6954318d19b6578809c5d6aa2
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_vp-100k_gender_male-2_female-8_s320 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
98d7b5346843dc3f82464608888e6579
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-the-beatles

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.5186
8e6160fe75af2733ae338f4331cfda32
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 84   | 2.6517          |
| No log        | 2.0   | 168  | 2.6433          |
| No log        | 3.0   | 252  | 2.5186          |
caa2809e025ac010841f9805bb5adcf5
apache-2.0
['generated_from_keras_callback']
false
priyankavalappil/distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.9684
- Train End Logits Accuracy: 0.7305
- Train Start Logits Accuracy: 0.6893
- Validation Loss: 1.1278
- Validation End Logits Accuracy: 0.6999
- Validation Start Logits Accuracy: 0.6635
- Epoch: 1
3137793097a02e3bdbba854cbc94a6e1
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
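With `power=1.0` and `cycle=False`, the Keras `PolynomialDecay` schedule above is just a linear interpolation from `2e-05` down to `0.0` over 11064 steps, clamped afterwards. A minimal sketch of the formula:

```python
def polynomial_decay_lr(step, initial_lr=2e-05, decay_steps=11064, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay with cycle=False: clamp the step, then interpolate."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay_lr(0))       # 2e-05
print(polynomial_decay_lr(5532))    # 1e-05 (halfway)
print(polynomial_decay_lr(11064))   # 0.0
```

Any `power != 1.0` would bend the same curve rather than keeping it linear.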
679093d58a7dc27f36eec4b187b995db
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5059     | 0.6070                    | 0.5685                      | 1.1518          | 0.6816                         | 0.6482                           | 0     |
| 0.9684     | 0.7305                    | 0.6893                      | 1.1278          | 0.6999                         | 0.6635                           | 1     |
d84d6537514469c084662c87bd5c44de
mit
['generated_from_trainer']
false
competent_payne This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the 
tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
4befc9c7f2680056d0b1a38dafddd143
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 25000
- mixed_precision_training: Native AMP
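The total train batch size of 64 comes from 16 examples per device times 4 gradient-accumulation steps: gradients from 4 micro-batches are averaged before a single optimizer step. A toy numeric sketch (a one-parameter least-squares model, purely illustrative) showing that this average equals the gradient of one full batch of 64:

```python
def grad_mse(w, batch):
    """Gradient of mean squared error for the model y ≈ w * x over one batch."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(float(x), 2.0 * x) for x in range(64)]  # 64 examples, true w = 2
w = 0.5

full = grad_mse(w, data)  # one batch of 64

# Same data split into 4 micro-batches of 16; gradients averaged before the step
micro = [data[i:i + 16] for i in range(0, 64, 16)]
accumulated = sum(grad_mse(w, m) for m in micro) / len(micro)

print(full, accumulated)  # identical (equal-sized micro-batches)
```

This is why accumulation lets a 16-per-device setup behave like a batch of 64 without the extra memory.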
74e3286c4b49a450d8376f16707d1159
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'filter_threshold': 0.00078, 'is_split_by_sentences': True, 'skip_tokens': 1661599744}, 'generation': {'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': 'f9cb81e577effccc64697016af1e8eaf2bf5dcd2'}, 'path_or_name': 'tomekkorbak/nervous_wozniak'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'competent_payne', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1661599744, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
010b581f27f36ea0a6b25869b06f41c3
apache-2.0
['translation']
false
opus-mt-en-pag

* source languages: en
* target languages: pag
* OPUS readme: [en-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.eval.txt)
6cbd8fc7ca27118acbca5bb94f38f0ec
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s452 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
a7f9485b80ce41b6bf5a65b2715aac60
cc-by-4.0
['question generation']
false
Model Card of `lmqg/mbart-large-cc25-frquad-qg`

This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
17eb6ebcf26b8be7330713d3093c98de
cc-by-4.0
['question generation']
false
Overview

- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- **Language:** fr
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
f95114dae7adace409350ab6d4563693
cc-by-4.0
['question generation']
false
```python
# model prediction
questions = model.generate_q(
    list_context="Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.",
    list_answer="le Suprême Berger",
)
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-frquad-qg")
output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
```
1bb913ffe7bf253c95c59122d6e3fff8
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json)

|            | Score | Type    | Dataset                                                          |
|:-----------|------:|:--------|:-----------------------------------------------------------------|
| BERTScore  | 71.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1     | 14.36 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2     |  3.58 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3     |  1.45 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4     |  0.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR     |  7.78 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 50.35 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L    |  16.4 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |

- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json)

|                                 | Score | Type    | Dataset                                                          |
|:--------------------------------|------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    | 81.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedF1Score (MoverScore)   | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedPrecision (BERTScore)  | 81.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedPrecision (MoverScore) | 55.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedRecall (BERTScore)     | 81.25 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedRecall (MoverScore)    |  55.6 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |

- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mbart-large-cc25-frquad-ae`](https://huggingface.co/lmqg/mbart-large-cc25-frquad-ae). [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.lmqg_mbart-large-cc25-frquad-ae.json)

|                                 | Score | Type    | Dataset                                                          |
|:--------------------------------|------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    | 75.55 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedF1Score (MoverScore)   | 51.75 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedPrecision (BERTScore)  | 74.04 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedPrecision (MoverScore) | 51.03 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedRecall (BERTScore)     | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| QAAlignedRecall (MoverScore)    | 52.52 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
473020c706c899d30af0365aead9db3b
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_frquad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 8
- batch: 4
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 16
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg/raw/main/trainer_config.json).
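Label smoothing of 0.15 means the training target is not a one-hot distribution: the gold token keeps probability 0.85 and the remaining 0.15 is spread over the other classes, which discourages over-confident predictions. A small numeric sketch of one common formulation (`transformers` uses a slightly different variant internally):

```python
import math

def smoothed_cross_entropy(logits, target, smoothing=0.15):
    """Cross entropy against a smoothed target: 1 - smoothing on the gold class,
    smoothing spread uniformly over the remaining classes."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))  # stable log-sum-exp
    log_probs = [l - log_z for l in logits]
    n = len(logits)
    q = [smoothing / (n - 1)] * n
    q[target] = 1.0 - smoothing
    return -sum(qi * lp for qi, lp in zip(q, log_probs))

loss = smoothed_cross_entropy([2.0, 0.5, -1.0], target=0)
hard = smoothed_cross_entropy([2.0, 0.5, -1.0], target=0, smoothing=0.0)
print(hard < loss)  # smoothing raises the loss for a confident correct prediction
```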
be4876fa1c25674e716bd9afa914834f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2199
- F1: 0.9236
- Accuracy: 0.9235
657d2ed9cb4c39b8c756c8648d525964
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.8072        | 1.0   | 250  | 0.3153          | 0.9023 | 0.905    |
| 0.2442        | 2.0   | 500  | 0.2199          | 0.9236 | 0.9235   |
bb386a6390e237469422265e4b66f0ad
creativeml-openrail-m
['stable-diffusion', 'art', 'cutesexyrobutts', 'style', 'dreambooth']
false
NEW: Merges

Merging sketchstyle models with other models helps to improve anatomy and other elements while trying to keep the intended style as much as possible. I will upload new merges from time to time, if any of them improve on the previous ones. A 'weak' model means there is more weight on the cutesexyrobutts style, and a 'strong' model means there is a little more focus on the other model/models. Weak models might maintain a little more of the style but can have some anatomy problems, while strong models keep better anatomy, though the style might be a little affected. A low CFG Scale (5-9) and using the "sketchstyle" token in the prompts might help with keeping the style on strong models.

**List of merges:**
- Pastelmix 0.2 + sketchstyle_v4-42k 0.8 weak (weighted sum, fp16)
- Pastelmix 0.4 + sketchstyle_v4-42k 0.6 strong (weighted sum, fp16)

**Versions:**
- V1: Trained with around 1300 images (from danbooru), automatically cropped.
- V2: Trained with 400 handpicked and handcropped images.
- V3: Trained with the same images as V2, but with 'style training' enabled.
- V4: Trained with 407 images, including 'captions' for each image.

**Recommended to use:**
- V4-42k (pretty good style and decent anatomy, might be the best)
- V3-40k (decent style and anatomy)
- V4-10k (best anatomy, meh style)
- V4-100k (good style, bad anatomy/hard to use, useful with img2img)

**Usage recommendations:**
- For V4, don't use a CFG Scale over 11-12, as it will generate an overcooked image. Try between 6 and 9 at first. 9 seems to be the best if you're using 'sketchstyle' in the prompt; if not, lower.
- Generating specific characters might be hard, result in bad anatomy, or not work at all. If you want a specific character, it is best to use img2img with an image generated with another model.
- Going over a certain resolution will generate incoherent results, so try staying close to 768x768 (examples: 640x896, 768x960, 640x1024, 832x640, and similar). Maybe Hires fix could help.
- Make sure to add nsfw/nipples/huge or large breasts in the negative prompts if you don't want any of those.
- Skin tone tends to be 'tan'; use dark skin/tan in the negative prompts if that's the case, and/or pale skin in the prompts.
- Using img2img to change the style of another image generally gives the best results, examples below.

Pay attention to this number. Normally going below 75 generates bad results, especially with models with high steps like V4-100k. Best with 100+

![Screenshot_1.png](https://s3.amazonaws.com/moonup/production/uploads/1671505643175-633520c031a2be3938c9f8f5.png)

Token: 'sketchstyle' (if used, anatomy may get affected, but it can be useful for models with low steps to get a better style)

**Limitations and known errors:**
- Not very good anatomy
- Sometimes it generates artifacts, especially on the eyes and lips
- Tends to generate skimpy clothes, open clothes, cutouts, and similar
- Might generate unclear outlines

Try using inpainting and/or img2img to fix these.
448f950607abeed23ab8cc26ede187f4
creativeml-openrail-m
['stable-diffusion', 'art', 'cutesexyrobutts', 'style', 'dreambooth']
false
Comparison between different versions and models

As you can see, robutts tends to give less coherent results and might need more prompting/steps to get good results (tried on other things as well, with similar results).
![comparison.jpg](https://s3.amazonaws.com/moonup/production/uploads/1671502776323-633520c031a2be3938c9f8f5.jpeg)
V2 with 10k steps or lower tends to give better anatomy results, and over that the style becomes more apparent, so 10k is the 'sweet spot'.
![comparison2.jpg](https://s3.amazonaws.com/moonup/production/uploads/1671504780023-633520c031a2be3938c9f8f5.jpeg)
Around 40 steps seems to be best, but you should start with 20 steps and, if you get an image you like, increase the step count to 40 or 50.
![comparison3.jpg](https://s3.amazonaws.com/moonup/production/uploads/1671509387599-633520c031a2be3938c9f8f5.jpeg)
Comparison between not completing that negative prompt and increasing the strength too much.
![comparison4.jpg](https://s3.amazonaws.com/moonup/production/uploads/1671568686470-633520c031a2be3938c9f8f5.jpeg)
Comparison (using V3-5k) of token strength.
![comparison5.jpg](https://s3.amazonaws.com/moonup/production/uploads/1671571773116-633520c031a2be3938c9f8f5.jpeg)
Another comparison of token strength using V3-15k.
![comparison6.jpg](https://s3.amazonaws.com/moonup/production/uploads/1671734192353-633520c031a2be3938c9f8f5.jpeg)
Comparison, from 1 to 30 steps, between NovelAI - Sketchstyle V3-27500 (img2img with NovelAI image) - Sketchstyle V3-27500. Using Euler sampler.
![comparison.gif](https://s3.amazonaws.com/moonup/production/uploads/1672115659361-633520c031a2be3938c9f8f5.gif)
811333225f2926be7aecc3180324e577
creativeml-openrail-m
['stable-diffusion', 'art', 'cutesexyrobutts', 'style', 'dreambooth']
false
Examples: ![05144-1365838486-(masterpiece,best quality,ultra-detailed),((((face close-up)))),((profile)),((lips,pink_eyes)),((pink_hair,hair_slicked_back,hai.png](https://s3.amazonaws.com/moonup/production/uploads/1671513540474-633520c031a2be3938c9f8f5.png) ```bibtex Prompt: (masterpiece,best quality,ultra-detailed),((((face close-up)))),((profile)),((lips,pink_eyes)),((pink_hair,hair_slicked_back,hair_strand)),(serious),portrait,frown,arms_up,adjusting_hair,eyelashes,parted_lips,(sportswear,crop_top),toned,collarbone,ponytail,1girl,solo,highres<br /> Negative prompt: (deformed,disfigured),(sitting,fat,thick,thick_thighs,nsfw),open_clothes,open_shirt,(jewelry,earrings,hair_ornament),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,realistic,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, 
doubled,monochrome, greyscale,face mask<br /> Steps: 70, Sampler: Euler, CFG scale: 12, Seed: 1365838486, Size: 768x768, Model: Sketchstyle V3-5k ``` _Eyes fixed with inpainting_: ![00609-996011741-(masterpiece,best quality,ultra-detailed),((((face close-up)))),((profile)),((lips,pink_eyes)),((pink_hair,hair_slicked_back,hai.png](https://s3.amazonaws.com/moonup/production/uploads/1671515050937-633520c031a2be3938c9f8f5.png) ```bibtex Prompt: (masterpiece,best quality,ultra-detailed),((((face close-up)))),((profile)),((lips,pink_eyes)),((pink_hair,hair_slicked_back,hair_strand)),(serious),portrait,frown,arms_up,adjusting_hair,eyelashes,parted_lips,(sportswear,crop_top),toned,collarbone,ponytail,1girl,solo,highres<br /> Negative prompt: (deformed,disfigured),(sitting,fat,thick,thick_thighs,nsfw),open_clothes,open_shirt,(jewelry,earrings,hair_ornament),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,realistic,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, 
normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br /> Steps: 34, Sampler: Euler, CFG scale: 12, Seed: 996011741, Size: 768x768, Denoising strength: 0.6, Mask blur: 8, Model: Sketchstyle V2-10k ``` ![05152-4172541433-sketchstyle,(masterpiece, best quality,beautiful lighting,stunning,ultra-detailed),(portrait,upper_body,parted_lips),1girl, (nip.png](https://s3.amazonaws.com/moonup/production/uploads/1671517158965-633520c031a2be3938c9f8f5.png) ```bibtex Prompt: sketchstyle,(masterpiece, best quality,beautiful lighting,stunning,ultra-detailed),(portrait,upper_body,parted_lips),1girl, (nipples), (fox ears,animal_ear_fluff), (bare_shoulders,eyelashes,lips,orange eyes,blush),orange_hair,((onsen,indoors)),(toned),medium_breasts,navel,cleavage,looking at viewer,collarbone,hair bun, solo, highres,(nsfw)<br /> Negative prompt: (dark-skin,dark_nipples,extra_nipples),deformed,disfigured,(sitting,fat,thick,thick_thighs,nsfw),open_clothes,open_shirt,(jewelry,earrings,hair_ornament),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,realistic,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal 
quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br /> Steps: 30, Sampler: Euler, CFG scale: 12, Seed: 4172541433, Size: 640x832, Model: Sketchstyle V3-5k ``` ![05111-4268937236-sketchstyle,(masterpiece, best quality,beautiful lighting,stunning,ultra-detailed),(portrait,upper_body),1girl, (nipples), (fox.png](https://s3.amazonaws.com/moonup/production/uploads/1671517508531-633520c031a2be3938c9f8f5.png) ```bibtex Prompt: sketchstyle,(masterpiece, best quality,beautiful lighting,stunning,ultra-detailed),(portrait,upper_body),1girl, (nipples), (fox ears,animal_ear_fluff), (bare_shoulders,eyelashes,lips,orange eyes,ringed_eyes,shy,blush),onsen,indoors,medium_breasts, cleavage,looking at viewer,collarbone,hair bun, solo, highres,(nsfw)<br /> Negative prompt: Negative prompt: (huge_breasts,large_breasts),realistic,3D,3D Game,nsfw,lowres, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth<br /> Steps: 40, Sampler: Euler, CFG scale: 14, Seed: 4268937236, Size: 704x896, Model: Sketchstyle V3-5k ``` ![05159-3765393440-(masterpiece,best quality,ultra detailed),(((facing_away,sitting,arm_support,thighs,legs))),(((from_behind,toned,ass,bare back,b.png](https://s3.amazonaws.com/moonup/production/uploads/1671519173074-633520c031a2be3938c9f8f5.png) ```bibtex Prompt: (masterpiece,best quality,ultra 
detailed),(((facing_away,sitting,arm_support,thighs,legs))),(((from_behind,toned,ass,bare back,breasts))),((thong,garter_belt,garter_straps,lingerie)),(hair_flower,bed_sheet),(black_hair,braid,braided_ponytail,long_hair),1girl,grey_background,thighs,solo,highres<br /> Negative prompt: ((deformed)),((looking_back,looking_at_viewer,face)),((out_of_frame,cropped)),(fat,thick,thick_thighs),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, patreon_logo, patreon_username, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br /> Steps: 50, Sampler: Euler, CFG scale: 12, Seed: 3765393440, Size: 640x832, Model: Sketchstyle V3-5k ``` ![05195-2346086519-(masterpiece,best quality,ultra 
detailed),(((facing_away,sitting,arm_support,thighs,legs))),(((from_behind,toned,ass,bare back)).png](https://s3.amazonaws.com/moonup/production/uploads/1671561192018-633520c031a2be3938c9f8f5.png) ```bibtex Prompt: (masterpiece,best quality,ultra detailed),(((facing_away,sitting,arm_support,thighs,legs))),(((from_behind,toned,ass,bare back))),((thong,garter_belt,garter_straps,lingerie)),(hair_flower,bed_sheet),(black_hair,braid,braided_ponytail,long_hair),1girl,grey_background,thighs,solo,highres<br /> Negative prompt: backboob,((deformed)),((looking_back,looking_at_viewer,face)),((out_of_frame,cropped)),(fat,thick,thick_thighs),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, patreon_logo, patreon_username, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face 
mask<br /> Steps: 50, Sampler: Euler, CFG scale: 12, Seed: 2346086519, Size: 640x832, Model: Sketchstyle V3-5k ``` ![05170-4024165718-(masterpiece,best quality,ultra-detailed),(sketchstyle),(arms_up,tying_hair),(large_breasts,nipples),(long_hair,blonde_hair,tied.png](https://s3.amazonaws.com/moonup/production/uploads/1671521055006-633520c031a2be3938c9f8f5.png) ```bibtex Prompt: (masterpiece,best quality,ultra-detailed),(sketchstyle),(arms_up,tying_hair),(large_breasts,nipples),(long_hair,blonde_hair,tied_hair,ponytail,collarbone,navel,stomach,midriff,completely_nude,nude,toned),((cleft_of_venus,pussy)),cloudy_sky,1girl,solo,highres,(nsfw)<br /> Negative prompt: (deformed,disfigured,bad proportions,exaggerated),from_behind,(jewelry,earrings,hair_ornament),((sagging_breasts,huge_breasts,shiny,shiny_hair,shiny_skin,realistic,3D,3D game)),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),((fat,thick,thick_thighs)),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist 
name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br /> Steps: 40, Sampler: Euler, CFG scale: 12, Seed: 4024165718, Size: 704x960, Model: Sketchstyle V3-5k ``` ![05177-4166887955-(masterpiece,best quality),(sketchstyle),((1boy,male_focus)),((close-up,portrait)),((black_shirt)),((((red collared_coat)))),((d.png](https://s3.amazonaws.com/moonup/production/uploads/1671522588038-633520c031a2be3938c9f8f5.png) ```bibtex Prompt: (masterpiece,best quality),(sketchstyle),((1boy,male_focus)),((close-up,portrait)),((black_shirt)),((((red collared_coat)))),((dante_\(devil_may_cry\),devil may cry)),((medium_hair,parted_hair,parted_bangs,forehead,white_hair)),((stubble)),(facial_hair),(popped_collar,open_coat),(closed_mouth,smile),blue_eyes,looking_at_viewer,solo,highres<br /> Negative prompt: ((deformed)),(nsfw),(long_hair,short_hair,young,genderswap,1girl,female,breasts,androgynous),((choker)),(shiny,shiny_hair,shiny_skin,3D,3D game),((extra_limbs,extra_arms)),(loli,shota),(giant nipples),((fat,thick,thick_thighs)),long body,(lowres),(((poorly drawn fingers, poorly drawn hands))),((anatomic nonsense)),(extra fingers),(fused fingers),(((one hand with more than 5 fingers))),(((one hand with less than 5 fingers))),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, 
doubled,monochrome, greyscale,face maskissing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled,monochrome, greyscale,face mask<br /> Steps: 50, Sampler: Euler, CFG scale: 12, Seed: 4166887955, Size: 768x768, Model: Sketchstyle V3-5k ```
139624d618fb4054f8379dbd9204213d
creativeml-openrail-m
['stable-diffusion', 'art', 'cutesexyrobutts', 'style', 'dreambooth']
false
img2img style change examples: ![img2img-1.png](https://s3.amazonaws.com/moonup/production/uploads/1671510649616-633520c031a2be3938c9f8f5.png) ```bibtex Original settings: Model: NovelAI, Steps: 30, Sampler: Euler a, CFG scale: 16, Seed: 3633297035, Size: 640x960<br /> Original prompt: masterpiece, best quality, 1girl, naked towel, fox ears, orange eyes, wet, ringed eyes, shy, medium breasts, cleavage, looking at viewer, hair bun, blush, solo, highres<br /> Original negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth<br /> New settings: Model: Sketchstyle V3 5k steps, Steps: 33, CFG scale: 12, Seed: 3311014108, Size: 640x960, Denoising strength: 0.6, Mask blur: 4<br /> New prompt: ((sketchstyle)),(masterpiece, best quality,beautiful lighting,stunning,ultra-detailed),(portrait,upper_body),1girl, (((naked_towel,towel))), (fox ears,animal_ear_fluff), (bare_shoulders,eyelashes,lips,orange eyes,ringed_eyes,shy,blush),onsen,indoors,medium_breasts, cleavage,looking at viewer,collarbone,hair bun, solo, highres<br /> New negative prompt: (nipples,huge_breasts,large_breasts),realistic,3D,3D Game,nsfw,lowres, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth<br /> ``` ![img2img-2.png](https://s3.amazonaws.com/moonup/production/uploads/1671523242721-633520c031a2be3938c9f8f5.png) ```bibtex Original settings: Model: NovelAI, Steps: 30, Sampler: Euler a, CFG scale: 16, Seed: 764529639, Size: 640x960<br /> Prompt: masterpiece, highest quality, (1girl), (looking at viewer), ((pov)), fox ears, ((leaning forward)), [light smile], ((camisole)), short shorts, (cleavage), (((medium breasts))), blonde, 
(high ponytail), (highres)<br /> Negative prompt: ((deformed)), (duplicated), lowres, ((missing animal ears)), ((poorly drawn face)), ((poorly drawn eyes)), (extra limb), (mutation), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (fused fingers), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts, black and white, monochrome, 3D Game, 3D, realistic, realism, huge breasts<br /> New settings: Model: Sketchstyle V3 5k steps, Steps: 28, CFG scale: 12, Seed: 1866024520, Size: 640x960, Denoising strength: 0.7, Mask blur: 8 ``` ![img2img-3.png](https://s3.amazonaws.com/moonup/production/uploads/1671524129672-633520c031a2be3938c9f8f5.png) ```bibtex Original settings: Model: NovelAI, Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 2604970030, Size: 640x896<br /> Original prompt: (masterpiece),(best quality),((sketch)),(ultra detailed),(1girl, teenage),((white hair, messy hair)),((expressionless)),(black jacket, long sleeves),((grey scarf)),((squatting)), (hands on own knees),((plaid_skirt, pleated skirt, miniskirt)),(fox ears, extra ears, white fox tail, fox girl, animal ear fluff),black ((boots)),full body,bangs,ahoge,(grey eyes),solo,absurdres<br /> Negative prompt: ((deformed)),((loli, young)),(kneehighs,thighhighs),long body, long legs),lowres,((((poorly drawn fingers, poorly drawn hands)))),((anatomic nonsense)),(extra fingers),((fused fingers)),(plaid scarf),(spread legs),((one hand with more than 5 fingers)), ((one hand with less than 5 fingers)),((bad 
eyes)),(twin, multiple girls, 2girls),(separated eyes),(long neck),((bad proportions)),(bad lips),((thick lips)),loli,long body,(((poorly drawn eyes))),((poorly drawn)),((bad drawing)),(blurry),(((mutation))),(((bad anatomy))),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), (extra limb), ((deformed hands)), (poorly drawn feet), (fused toes), (mutated hands and fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts, black and white, monochrome, 3D Game, 3D, (realistic), face mask<br /> New settings: Model: Sketchstyle V3 5k steps, Steps: 45, CFG scale: 12, Seed: 1073378414, Size: 640x896, Denoising strength: 0.6, Mask blur: 8<br /> New prompt: (masterpiece),(best quality),(sketchstyle),(ultra detailed),(1girl, teenage),((white hair, messy hair)),((expressionless)),(black jacket, long sleeves),((grey scarf)),((squatting)), (hands on own knees),((plaid_skirt, pleated skirt, miniskirt)),(fox ears, extra ears, white fox tail, fox girl, animal ear fluff),black ((boots)),full body,bangs,ahoge,(grey eyes),solo,absurdres<br /> ``` ![img2img-4.png](https://s3.amazonaws.com/moonup/production/uploads/1672003898152-633520c031a2be3938c9f8f5.png) ```bibtex Original settings: Model: NovelAI, Steps: 30, Sampler: Euler a, CFG scale: 12, Seed: 3659534337, Size: 768x832<br /> Original prompt: ((masterpiece)), ((highest quality)),(((ultra-detailed))),(illustration),(1girl), portrait,((wolf ears)),(beautiful eyes),looking at viewer,dress shirt,shadows,((ponytail)), (white hair), ((sidelocks)),outdoors,bangs, solo, highres<br /> Original negative prompt: 
((deformed)), lowres,loli,((monochrome)),(black and white),((lips)),long body,(((poorly drawn eyes))),((out of frame)),((poorly drawn)),((bad drawing)),(blurry),depth of field,(fused fingers),(((mutation))),((bad anatomy)),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), (extra limb), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts, black and white, monochrome, 3D Game, 3D, realism, face mask<br /> New settings: Model: Sketchstyle V3-20k 2000steps text encoder, Steps: 80, CFG scale: 12, Seed: 3001145714, Size: 768x832, Denoising strength: 0.5, Mask blur: 4<br /> New prompt: ((sketchstyle)),(masterpiece,best quality,highest quality,illustration),((ultra-detailed)),1girl,(portrait,close-up),((wolf_girl,wolf_ears)),(eyelashes,detailed eyes,beautiful eyes),looking at viewer,(collared-shirt,white_shirt),((ponytail)), (white hair), ((sidelocks)),(blue eyes),closed_mouth,(shadows,outdoors,sunlight,grass,trees),hair_between_eyes,bangs,solo,highres<br /> New negative prompt: ((deformed)),(less than 5 fingers, more than 5 fingers,bad hands,bad hand anatomy,missing fingers, extra fingers, mutated hands, disfigured hands, deformed hands),lowres,loli,((monochrome)),(black and white),((lips)),long body,(((poorly drawn eyes))),((out of frame)),((poorly drawn)),((bad drawing)),(blurry),depth of field,(fused fingers),(((mutation))),((bad anatomy)),(((multiple arms))),(((bad 
face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), (extra limb), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts, black and white, monochrome, 3D Game, 3D, realism, face mask<br /> ``` ![img2img-5.png](https://s3.amazonaws.com/moonup/production/uploads/1672122599787-633520c031a2be3938c9f8f5.png) ```bibtex Original settings: Model: NovelAI, Steps: 20, Sampler: Euler, CFG scale: 11, Seed: 2413712316, Size: 768x768<br /> Original prompt: (masterpiece,best quality,ultra-detailed,detailed_eyes),(sketch),((portrait,face focus)),(((shaded eyes))),(wavy hair),(((ringed eyes,red_hair))),((black hair ribbon)),((hair behind ear)),(((short ponytail))),(blush lines),(good anatomy),(((hair strands))),(bangs),((lips)),[teeth, tongue],yellow eyes,(eyelashes),shirt, v-neck,collarbone,cleavage,breasts,(medium hair),(sidelocks),looking at viewer,(shiny hair),1girl,solo,highres<br /> Original negative prompt: ((deformed)),lowres,(black hair),(formal),earrings,(twin, multiple girls, 2girls),(braided bangs),((big eyes)),((close up, eye focus)),(separated eyes),(multiple eyebrows),((eyebrows visible through hair)),(long neck),(bad lips),(tongue out),((thick lips)),(from below),loli,long body,(((poorly drawn eyes))),((poorly drawn)),((bad drawing)),((blurry)),depth of field,(fused fingers),(((mutation))),(((bad anatomy))),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), 
(((poorly drawn face))), (extra limb), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored,doubled, huge breasts, black and white, monochrome, 3D Game, 3D, (realistic), face mask<br /> New settings: (img2img with original image, then again with the new generated image, inpainted to fix the neck) Model: Sketchstyle V3-27.5k 2000steps text encoder, Steps: 80, CFG scale: 12, Seed: 1237755461 / 1353966202, Size: 832x832, Denoising strength: 0.5 / 0.3, Mask blur: 4<br /> New prompt: sketchstyle,(masterpiece,best quality,ultra-detailed,detailed_eyes),(((portrait,face focus,close-up))),(((shaded eyes))),(wavy hair),(((ringed eyes,red_hair))),((black hair ribbon)),((hair behind ear)),(((short ponytail))),(blush lines),(good anatomy),(((hair strands))),(bangs),((lips)),[teeth, tongue],(yellow eyes,eyelashes,tsurime,slanted_eyes),shirt, v-neck,collarbone,breasts,(medium hair),(sidelocks),looking at viewer,(shiny hair),1girl,solo,highres<br /> New negative prompt: ((deformed)),((loli,young)),lowres,(black hair),(formal),earrings,(twin, multiple girls, 2girls),(braided bangs),((big eyes)),((close up, eye focus)),(separated eyes),(multiple eyebrows),((eyebrows visible through hair)),(long neck),(bad lips),(tongue out),((thick lips)),(from below),loli,long body,(((poorly drawn eyes))),((poorly drawn)),((bad drawing)),((blurry)),depth of field,(fused fingers),(((mutation))),(((bad anatomy))),(((multiple arms))),(((bad face))),(((bad eyes))),bad tail,(((more than 2 ears)), (((poorly drawn face))), 
(extra limb), ((deformed hands)), (((poorly drawn hands))), (poorly drawn feet), (fused toes), (mutated hands and fingers), (one hand with more than 5 fingers), (one hand with less than 5 fingers), extra toes, missing toes, extra feet, extra legs, extra ears, missing ear, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, bad mouth, animal hands, censored,doubled, huge breasts, black and white, monochrome, 3D Game, 3D, (realistic), face mask<br /> ```
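In the examples above, 'Denoising strength' controls how much of the original image survives img2img. In most Stable Diffusion UIs, the sampler only runs roughly steps × strength of the scheduled steps, starting from a partially noised version of the input image. A small illustrative sketch (exact rounding varies by implementation, so treat this as an approximation):

```python
def effective_img2img_steps(num_inference_steps, denoising_strength):
    """Approximate how many sampling steps actually run in img2img:
    the input image is noised up to t = strength, then denoised from there."""
    return min(int(num_inference_steps * denoising_strength),
               num_inference_steps)

# 'Steps: 80, Denoising strength: 0.5' from the examples above:
print(effective_img2img_steps(80, 0.5))  # 40
# Low strength keeps the composition but applies little of the new style:
print(effective_img2img_steps(80, 0.3))  # 24
```

This is why the low-denoising passes above use a high step count: at strength 0.5, only about half of those steps actually refine the image.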
19016c4174afd6232988b4799b62cdb7
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3286
- Accuracy: 0.8667
- F1: 0.8667
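The F1 reported here is the harmonic mean of precision and recall; on a roughly balanced binary set like this IMDB split it can land very close to accuracy. A quick sketch of the metric (toy labels, not the actual evaluation data):

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1 = 2 * precision * recall / (precision + recall)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]
print(f1_score(y_true, y_pred))  # precision = recall = 2/3, so F1 ≈ 0.667
```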
5ff18f237c39a10fd4bde0c3beb64527
apache-2.0
['generated_from_trainer']
false
t5-base-finetuned-parth

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3764
- Rouge1: 27.5144
- Rouge2: 22.6391
- Rougel: 25.9369
- Rougelsum: 27.1193
- Gen Len: 17.5
6ab28641790e8a70775b8bbf01a8d958
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 4    | 2.7016          | 27.6196 | 22.7595 | 25.9443 | 27.2369   | 17.5    |
| No log        | 2.0   | 8    | 2.5425          | 27.6196 | 22.7595 | 25.9443 | 27.2369   | 17.5    |
| No log        | 3.0   | 12   | 2.4526          | 27.6196 | 22.7595 | 25.9443 | 27.2369   | 17.5    |
| No log        | 4.0   | 16   | 2.3977          | 27.6196 | 22.7595 | 25.9443 | 27.2369   | 17.5    |
| No log        | 5.0   | 20   | 2.3764          | 27.5144 | 22.6391 | 25.9369 | 27.1193   | 17.5    |
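The Rouge1 column measures unigram overlap between the generated summary and the reference. A minimal sketch of the F-measure variant (real evaluations typically use the `rouge_score` package with stemming; the sentences here are made up):

```python
from collections import Counter

def rouge1_f(reference, candidate):
    """Simplified ROUGE-1 F-measure: whitespace tokens, clipped
    unigram-overlap counts, no stemming."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped common-token count
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat lay on the mat"))  # ≈ 0.833
```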
dcbe5e24759ec34fbe58c9f137b03325
apache-2.0
['generated_from_trainer', 'automatic-speech-recognition', 'robust-speech-event', 'hf-asr-leaderboard']
false
wav2vec2-xlsr-fi-lm-1B
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the Common Voice train/dev/other datasets. It achieves the following results on the evaluation set without a language model:
- Loss: 0.1853
- Wer: 0.2205
With a language model:
- Wer: 0.1026
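WER (word error rate) is the word-level Levenshtein distance between hypothesis and reference, divided by the reference length. A minimal sketch on toy sentences (not the actual Common Voice evaluation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown dog"))  # 1 sub / 4 words = 0.25
```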
7531f645639cc50727b81c9c8febe269
apache-2.0
['generated_from_trainer', 'automatic-speech-recognition', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
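Note the relationship between the batch settings: the effective batch size is the per-device batch size times the accumulation steps, 8 × 4 = 32. A framework-free sketch of the accumulation idea, with hypothetical scalar "gradients" standing in for real tensors:

```python
train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32

# Accumulate micro-batch gradients, then apply one averaged update --
# numerically equivalent to a single gradient over the full batch.
micro_batch_grads = [4.0, 2.0, 1.0, 3.0]  # hypothetical per-micro-batch gradients
accumulated = sum(micro_batch_grads) / gradient_accumulation_steps
print(accumulated)  # 2.5
```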
e6349b5ed48455fd78179de66028796d
apache-2.0
['generated_from_trainer', 'automatic-speech-recognition', 'robust-speech-event', 'hf-asr-leaderboard']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8158 | 0.67 | 400 | 0.4835 | 0.6310 |
| 0.5679 | 1.33 | 800 | 0.4806 | 0.5538 |
| 0.6055 | 2.0 | 1200 | 0.3888 | 0.5083 |
| 0.5353 | 2.67 | 1600 | 0.3258 | 0.4365 |
| 0.4883 | 3.33 | 2000 | 0.3313 | 0.4204 |
| 0.4513 | 4.0 | 2400 | 0.2924 | 0.3904 |
| 0.3753 | 4.67 | 2800 | 0.2593 | 0.3608 |
| 0.3478 | 5.33 | 3200 | 0.2832 | 0.3551 |
| 0.3796 | 6.0 | 3600 | 0.2495 | 0.3402 |
| 0.2556 | 6.67 | 4000 | 0.2342 | 0.3106 |
| 0.229 | 7.33 | 4400 | 0.2181 | 0.2812 |
| 0.205 | 8.0 | 4800 | 0.2041 | 0.2523 |
| 0.1654 | 8.67 | 5200 | 0.2015 | 0.2416 |
| 0.152 | 9.33 | 5600 | 0.1942 | 0.2294 |
| 0.1569 | 10.0 | 6000 | 0.1853 | 0.2205 |
ed4bc55ef114f490f8e62578913fa525
mit
['generated_from_trainer']
false
gpt2-summarization_reward_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7473
- Accuracy: 0.6006
f3031249c3da058c40895cdf71051306
mit
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
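Here the effective batch comes from data parallelism rather than accumulation: 16 devices × 4 per device = 64. The linear scheduler decays the learning rate from its peak toward zero over training; a small sketch of that schedule (the step counts are hypothetical):

```python
num_devices = 16
per_device_batch_size = 4
total_train_batch_size = num_devices * per_device_batch_size
print(total_train_batch_size)  # 64

def linear_lr(step: int, total_steps: int, peak_lr: float = 2e-05) -> float:
    """Linear decay from peak_lr at step 0 to zero at total_steps (no warmup)."""
    return peak_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0, 1000))     # 2e-05 at the start
print(linear_lr(500, 1000))   # 1e-05 halfway through
print(linear_lr(1000, 1000))  # 0.0 at the end
```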
55362f342cc009a9ff3944df0f9d5fc9
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6421 | 1.0 | 1451 | 0.6815 | 0.6036 |
| 0.5893 | 2.0 | 2902 | 0.6764 | 0.6048 |
| 0.5488 | 3.0 | 4353 | 0.7074 | 0.6012 |
| 0.5187 | 4.0 | 5804 | 0.7254 | 0.6009 |
| 0.5034 | 5.0 | 7255 | 0.7473 | 0.6006 |
7193c5e02a49b2380b486a49f8532846
cc-by-4.0
['deberta', 'deberta-v3', 'deberta-v3-large']
false
deberta-v3-large for QA This is the [deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
31cf4754b2e173cde941baf493584d1c
cc-by-4.0
['deberta', 'deberta-v3', 'deberta-v3-large']
false
Overview
**Language model:** deberta-v3-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 1x NVIDIA A10G
7af93f01293546631c3aa521f4801e80
cc-by-4.0
['deberta', 'deberta-v3', 'deberta-v3-large']
false
Hyperparameters
```
batch_size = 2
grad_acc_steps = 32
n_epochs = 6
base_LM_model = "microsoft/deberta-v3-large"
max_seq_len = 512
learning_rate = 7e-6
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```
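The max_seq_len/doc_stride pair controls how contexts longer than 512 tokens are split into overlapping windows. A simplified, standalone sketch of stride-based chunking over a token list (smaller numbers for readability; not deepset's actual preprocessing code):

```python
def chunk_with_stride(tokens, max_len=8, stride=4):
    """Split tokens into windows of max_len, overlapping by stride tokens.

    Each window starts max_len - stride tokens after the previous one,
    so stride must be smaller than max_len.
    """
    step = max_len - stride
    chunks = []
    start = 0
    while True:
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += step
    return chunks

tokens = list(range(14))  # stand-in for a 14-token context
for chunk in chunk_with_stride(tokens):
    print(chunk)
```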
6ed0d75d8d7ed9370041e4e287317c40
cc-by-4.0
['deberta', 'deberta-v3', 'deberta-v3-large']
false
In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/deberta-v3-large-squad2")
```
fb2207a11a5857dfa494de8b9248f09b
cc-by-4.0
['deberta', 'deberta-v3', 'deberta-v3-large']
false
a) Get predictions
```python
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
```
5007ad14082fe455f6384b374b51c51e
cc-by-4.0
['deberta', 'deberta-v3', 'deberta-v3-large']
false
Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 87.6105449338836,
"f1": 90.75307008866517,
"total": 11873,
"HasAns_exact": 84.37921727395411,
"HasAns_f1": 90.6732795483674,
"HasAns_total": 5928,
"NoAns_exact": 90.83263246425568,
"NoAns_f1": 90.83263246425568,
"NoAns_total": 5945
```
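As a sanity check, the overall exact score is the size-weighted average of the answerable (HasAns) and unanswerable (NoAns) subsets:

```python
# Subset scores and sizes as reported above.
has_ans_exact, has_ans_total = 84.37921727395411, 5928
no_ans_exact, no_ans_total = 90.83263246425568, 5945
total = has_ans_total + no_ans_total

overall = (has_ans_exact * has_ans_total + no_ans_exact * no_ans_total) / total
print(total, round(overall, 4))  # 11873 87.6105 -- matches the reported "exact"
```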
e422be62ed028082a2b6059a90c3c2b8
cc-by-4.0
['deberta', 'deberta-v3', 'deberta-v3-large']
false
About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
    <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
        <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
    </div>
    <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
        <img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
    </div>
</div>

[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, and more.

Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
b763f0c658a93717a38816508e2cc148
cc-by-4.0
['deberta', 'deberta-v3', 'deberta-v3-large']
false
Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
ddada8462ebb0eca95253e97306205d3