Columns: license (string, 2–30 chars) · tags (string, 2–513 chars) · is_nc (bool, 1 class) · readme_section (string, 201–597k chars) · hash (string, 32 chars)
agpl-3.0
[]
false
Info Since people are downloading this and I don't know why, I'll add some information. This model is an image classifier fine-tuned from `microsoft/beit-base-patch16-384`. It is meant for the dataset-conditioning step of the [Waifu Diffusion project](https://huggingface.co/hakurei/waifu-diffusion), a fine-tuning effort for Stable Diffusion. As WD1.4 is planned to have a *significantly larger dataset* (~15M images), it is infeasible to review every image manually to determine whether it should be included in the final training dataset. This classifier was trained on approximately 3.5k real-life and anime/manga images; its job is to remove aesthetically worthless images from the dataset by classifying them as "`not_aesthetic`". The classifier was trained to **err on the side of caution** and will generally include images unless they are in a "manga-like" format, have messy lines and/or are sketches, or contain an unacceptable amount of text (namely, text that covers the primary subject of the image). The idea is that such images would hurt an SD fine-tune. Note: this classifier is not perfect, just like every other classifier out there. However, with a sufficiently large dataset, misclassifications should average out thanks to the Law of Large Numbers. You can test the classifier [here](https://huggingface.co/spaces/cafeai/cafe_aesthetic_demo), along with some other classifiers for the project.
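The Law of Large Numbers argument can be made concrete with a quick stdlib simulation (a sketch only: the 95% per-image accuracy is an assumed illustrative figure, not a measured property of this classifier):

```python
import random

def simulate_filtering(n_images, p_correct, seed=0):
    """Simulate a binary aesthetic filter with per-image accuracy p_correct
    and return the fraction of images that end up misclassified."""
    rng = random.Random(seed)
    errors = sum(1 for _ in range(n_images) if rng.random() > p_correct)
    return errors / n_images

# As the dataset grows, the observed error rate converges on 1 - p_correct,
# so a fixed, small misclassification rate stays small in aggregate.
for n in (1_000, 100_000):
    print(n, simulate_filtering(n, p_correct=0.95))
```

With the larger sample the observed error rate stays pinned near `1 - p_correct`, which is why occasional misclassifications matter little at dataset scale.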
a23c600e77f9da84315c53e097d35d60
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4635 - Wer: 0.3357
ee609339555a895f78bbe2c7c7c96010
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6808 | 4.0 | 500 | 1.5478 | 1.0481 |
| 0.835 | 8.0 | 1000 | 0.4611 | 0.4703 |
| 0.3013 | 12.0 | 1500 | 0.4327 | 0.3887 |
| 0.1741 | 16.0 | 2000 | 0.4073 | 0.3677 |
| 0.1309 | 20.0 | 2500 | 0.4306 | 0.3595 |
| 0.1097 | 24.0 | 3000 | 0.4318 | 0.3475 |
| 0.0825 | 28.0 | 3500 | 0.4635 | 0.3357 |
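The Wer column above is the word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal stdlib sketch (not the evaluation code used for this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic programme over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion over six words
```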
ecda0a8be1333ce57195aa200bc14db2
apache-2.0
['generated_from_keras_callback']
false
marian-finetuned-hi-hinglish This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.1869 - Validation Loss: 4.0607 - Epoch: 0
f256f4a787da2782b3d03998a429eb67
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 279, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
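With `power: 1.0` and `cycle: False`, the `PolynomialDecay` configuration above is just a straight line from the initial learning rate down to the end rate over `decay_steps`. A stdlib sketch of the schedule (illustrative only; the real schedule runs inside the Keras optimizer):

```python
def polynomial_decay(step, initial_lr=5e-5, end_lr=0.0, decay_steps=279, power=1.0):
    """Learning rate under Keras-style PolynomialDecay with cycle=False:
    past decay_steps, the rate stays clamped at end_lr."""
    step = min(step, decay_steps)
    fraction = (1 - step / decay_steps) ** power
    return (initial_lr - end_lr) * fraction + end_lr

print(polynomial_decay(0))    # initial learning rate
print(polynomial_decay(279))  # fully decayed
```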
c4f69b0a44a01179b52270b380323826
apache-2.0
['generated_from_trainer']
false
distilbart-xsum-12-3-whole_summary_chatGPT_and_tweetsum This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-3](https://huggingface.co/sshleifer/distilbart-xsum-12-3) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7952 - Rouge1: 45.7353 - Rouge2: 29.1566 - Rougel: 45.8429 - Rougelsum: 45.7353 - Gen Len: 16.6
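Rouge1 above is a unigram-overlap F-measure between the generated and reference summaries. A toy stdlib version for intuition (a sketch; the reported scores come from the standard ROUGE tooling, which applies additional normalization such as stemming):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Unigram ROUGE-1 F1: token overlap clipped per token type."""
    ref, cand = Counter(reference.lower().split()), Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # Counter & Counter takes per-token minimum
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the model summarizes tweets", "the model summarizes short tweets"))
```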
be5433367d9b6e305739dc8407e85e55
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP
1334b595ffeb74cad50ed604b015445f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 397 | 2.8069 | 42.233 | 23.7538 | 39.2701 | 39.2701 | 17.0 |
| 2.8673 | 2.0 | 794 | 2.7736 | 48.2389 | 29.6927 | 43.5004 | 43.5004 | 17.4 |
| 1.8043 | 3.0 | 1191 | 2.7952 | 45.7353 | 29.1566 | 45.8429 | 45.7353 | 16.6 |
994e462907239a1a35fb5603381018cf
apache-2.0
['tapas', 'table-question-answering']
false
TAPAS mini model fine-tuned on WikiTable Questions (WTQ) This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_mini` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors.
946687b351fc39eac6991033f73fa51b
apache-2.0
['tapas', 'table-question-answering']
false
Results

| Size | Reset | Dev Accuracy | Link |
|------|-------|--------------|------|
| LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset) |
| LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main) |
| BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset) |
| BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main) |
| MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset) |
| MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main) |
| SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset) |
| SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main) |
| **MINI** | **noreset** | **0.2783** | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset) |
| **MINI** | **reset** | **0.2854** | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main) |
| TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset) |
| TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main) |
3352d95357c2ee3d1ca799d86e5895b6
apache-2.0
['tapas', 'table-question-answering']
false
Model description TAPAS is a BERT-like transformer model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, or from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text. - Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model on a balanced dataset of millions of synthetically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created from synthetic as well as counterfactual statements. This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by its contents. Fine-tuning is done by adding a cell selection head and an aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
a6b0938976917e76f240a5bdfb30b005
apache-2.0
['tapas', 'table-question-answering']
false
Preprocessing The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Question [SEP] Flattened table [SEP]
```

The authors first converted the WTQ dataset into the SQA format using automatic conversion scripts.
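As an illustration, a hypothetical helper that linearizes a question and table into the layout above (a sketch only; the real `TapasTokenizer` additionally produces row, column and rank token-type ids):

```python
def flatten_input(question, table):
    """Linearize a table (list of rows, first row = header) after the question,
    mirroring the "[CLS] Question [SEP] Flattened table [SEP]" layout.
    Texts are lowercased, as in TAPAS preprocessing."""
    flat_table = " ".join(cell.lower() for row in table for cell in row)
    return f"[CLS] {question.lower()} [SEP] {flat_table} [SEP]"

table = [["Rank", "Name"], ["1", "Alice"], ["2", "Bob"]]
print(flatten_input("Who is ranked first?", table))
# → [CLS] who is ranked first? [SEP] rank name 1 alice 2 bob [SEP]
```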
664045fdc860a46eed551de5cfe6ecae
apache-2.0
['tapas', 'table-question-answering']
false
Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the `select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and 12).
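The single-column inductive bias can be sketched in plain Python (hypothetical scores and a hypothetical helper, not the actual TAPAS cell-selection head): aggregate cell probabilities per column, then select cells only from the winning column.

```python
def select_cells_single_column(cell_scores, threshold=0.5):
    """cell_scores: {(row, col): probability}. Pick the column with the highest
    total score, then select only that column's cells above the threshold."""
    column_totals = {}
    for (row, col), p in cell_scores.items():
        column_totals[col] = column_totals.get(col, 0.0) + p
    best_col = max(column_totals, key=column_totals.get)
    return [(r, c) for (r, c), p in cell_scores.items()
            if c == best_col and p > threshold]

scores = {(0, 0): 0.9, (1, 0): 0.7, (0, 1): 0.8, (1, 1): 0.2}
print(select_cells_single_column(scores))  # all selected cells share column 0
```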
75ba476a422e0b46f58a2953dbbc4272
apache-2.0
['tapas', 'table-question-answering']
false
BibTeX entry and citation info

```bibtex
@misc{herzig2020tapas,
      title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
      author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
      year={2020},
      eprint={2004.02349},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```

```bibtex
@misc{eisenschlos2020understanding,
      title={Understanding tables with intermediate pre-training},
      author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
      year={2020},
      eprint={2010.00571},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```bibtex
@article{DBLP:journals/corr/PasupatL15,
  author    = {Panupong Pasupat and Percy Liang},
  title     = {Compositional Semantic Parsing on Semi-Structured Tables},
  journal   = {CoRR},
  volume    = {abs/1508.00305},
  year      = {2015},
  url       = {http://arxiv.org/abs/1508.00305},
  archivePrefix = {arXiv},
  eprint    = {1508.00305},
  timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
3dbd10272d7889252ddee4bf981b36aa
apache-2.0
['generated_from_trainer']
false
Negation_Scope_Detection_SFU_Spanish_NLP-CIC-WFU_DisTEMIST_fine_tuned This model is a fine-tuned version of [ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT](https://huggingface.co/ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3219 - Precision: 0.7403 - Recall: 0.7571 - F1: 0.7486 - Accuracy: 0.9518
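As a sanity check, the reported F1 is the harmonic mean of the precision and recall above:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Cross-check against the numbers reported for this model.
print(round(f1_score(0.7403, 0.7571), 4))  # → 0.7486
```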
a4703b37ce42b684eb049bb7f5f8ff2e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7
beeea04dcd703f461a9575acbaa21569
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 72 | 0.2142 | 0.5227 | 0.6497 | 0.5793 | 0.9267 |
| No log | 2.0 | 144 | 0.2019 | 0.625 | 0.7062 | 0.6631 | 0.9420 |
| No log | 3.0 | 216 | 0.3089 | 0.6444 | 0.6554 | 0.6499 | 0.9432 |
| No log | 4.0 | 288 | 0.2376 | 0.6952 | 0.7345 | 0.7143 | 0.9478 |
| No log | 5.0 | 360 | 0.2876 | 0.7037 | 0.7514 | 0.7268 | 0.9538 |
| No log | 6.0 | 432 | 0.3077 | 0.7278 | 0.7401 | 0.7339 | 0.9534 |
| 0.091 | 7.0 | 504 | 0.3219 | 0.7403 | 0.7571 | 0.7486 | 0.9518 |
f4b332a07f76f18c472263cc5c38cc14
mit
[]
false
nouns glasses on Stable Diffusion This is the `<nouns glasses>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<nouns glasses> 0](https://huggingface.co/sd-concepts-library/nouns-glasses/resolve/main/concept_images/nglasses.145.jpg) ![<nouns glasses> 1](https://huggingface.co/sd-concepts-library/nouns-glasses/resolve/main/concept_images/nglasses.147.jpg) ![<nouns glasses> 2](https://huggingface.co/sd-concepts-library/nouns-glasses/resolve/main/concept_images/ICON%20glasses.png) ![<nouns glasses> 3](https://huggingface.co/sd-concepts-library/nouns-glasses/resolve/main/concept_images/noun%20glasses%20wht.jpg)
2916bae039ff74a8dd99630cc10ea75f
mit
['generated_from_trainer']
false
QA_model This model is a fine-tuned version of [ukr-models/xlm-roberta-base-uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2761
29eb8c0970dc5e2ab3f5b5275e1434f4
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5806 | 1.0 | 549 | 1.4431 |
| 1.3526 | 2.0 | 1098 | 1.2543 |
| 1.0814 | 3.0 | 1647 | 1.2761 |
e1983aa9f0d14a56b5cbce37ac99dc00
apache-2.0
['generated_from_trainer']
false
bert-large-cased-finetuned-mrpc This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6274 - Accuracy: 0.6838 - F1: 0.8122 - Combined Score: 0.7480
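The combined score here is simply the average of the two MRPC metrics, consistent with the reported numbers:

```python
def combined_score(accuracy: float, f1: float) -> float:
    """Unweighted mean of the two GLUE MRPC metrics."""
    return (accuracy + f1) / 2

print(round(combined_score(0.6838, 0.8122), 4))  # → 0.748
```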
72520d84aefb70e98c76b86be7a2c88e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0
399fbf615f6a46edeeb045db85a386dc
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6441 | 1.0 | 917 | 0.6370 | 0.6838 | 0.8122 | 0.7480 |
| 0.6451 | 2.0 | 1834 | 0.6553 | 0.6838 | 0.8122 | 0.7480 |
| 0.6428 | 3.0 | 2751 | 0.6332 | 0.6838 | 0.8122 | 0.7480 |
| 0.6476 | 4.0 | 3668 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.6499 | 5.0 | 4585 | 0.6274 | 0.6838 | 0.8122 | 0.7480 |
5c5eaeffb387bae74d1fd7efbbd0fb19
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 450 - mixed_precision_training: Native AMP
7ca176b068bdff4eef639ce5ccb4ad30
apache-2.0
['MRC', 'Natural Questions List', 'xlm-roberta-large']
false
Model description An XLM-RoBERTa reading comprehension model for List Question Answering: an [xlm-roberta-large](https://huggingface.co/xlm-roberta-large/) model further fine-tuned on the list questions in the [Natural Questions](https://huggingface.co/datasets/natural_questions) dataset.
7a8f9c9728037c3a553fe769ce728f18
apache-2.0
['MRC', 'Natural Questions List', 'xlm-roberta-large']
false
Intended uses & limitations You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model, listqa_nq-task-xlm-roberta-large.
4d011400c06ecfcedc997d136c7efdb5
apache-2.0
['MRC', 'Natural Questions List', 'xlm-roberta-large']
false
Usage You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [listqa.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/listqa.ipynb).
a8086c0bb9c9da08beb7812fd8ed47bc
apache-2.0
['MRC', 'Natural Questions List', 'xlm-roberta-large']
false
BibTeX entry and citation info

```bibtex
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and Toutanova, Kristina and Jones, Llion and Kelcey, Matthew and Chang, Ming-Wei and Dai, Andrew M. and Uszkoreit, Jakob and Le, Quoc and Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}
```

```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
  author    = {Alexis Conneau and Kartikay Khandelwal and Naman Goyal and Vishrav Chaudhary and Guillaume Wenzek and Francisco Guzm{\'{a}}n and Edouard Grave and Myle Ott and Luke Zettlemoyer and Veselin Stoyanov},
  title     = {Unsupervised Cross-lingual Representation Learning at Scale},
  journal   = {CoRR},
  volume    = {abs/1911.02116},
  year      = {2019},
  url       = {http://arxiv.org/abs/1911.02116},
  eprinttype = {arXiv},
  eprint    = {1911.02116},
  timestamp = {Mon, 11 Nov 2019 18:38:09 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
13d9796f4ba64566ad0dbc74ee2085db
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_r-wav2vec2_s79 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
1ef1dd6275cfdacc401c8182f3127dd2
mit
['vision', 'image-captioning']
false
GIT (GenerativeImage2Text), base-sized, fine-tuned on TextCaps GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team.
e7d478470f7d9215c1ddd8b3232b4e8c
mit
['vision', 'image-captioning']
false
Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on many (image, text) pairs. The goal for the model is simply to predict the next text token, given the image tokens and the previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text).
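The attention pattern described above can be sketched directly (a toy stdlib illustration, not the actual implementation), with `mask[i][j] == 1` meaning token `i` may attend to token `j`:

```python
def git_attention_mask(num_image_tokens: int, num_text_tokens: int):
    """1 = may attend. Image tokens see all image tokens (bidirectional);
    text token t sees every image token plus text tokens up to and including t."""
    n = num_image_tokens + num_text_tokens
    mask = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j < num_image_tokens:                 # everyone can look at the image
                mask[i][j] = 1
            elif i >= num_image_tokens and j <= i:   # causal among text tokens
                mask[i][j] = 1
    return mask

for row in git_attention_mask(3, 3):
    print(row)
```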
1254e42ac97453df6c53933de1317e69
mit
['vision', 'image-captioning']
false
Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you.
55bef049d1d6b9b87d8a7b5b246e84f7
mit
['vision', 'image-captioning']
false
Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). Note, however, that this describes the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", a smaller variant of GIT trained on 10 million image-text pairs. The model was then fine-tuned on TextCaps. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.
f9c9365bc81666255780e494aacdcbe7
mit
['vision', 'image-captioning']
false
Preprocessing We refer to the original repo for preprocessing details during training. During validation, the shorter edge of each image is resized, after which a center crop to a fixed-size resolution is taken. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.
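A sketch of the final normalization step, assuming the standard ImageNet statistics (mean `(0.485, 0.456, 0.406)`, std `(0.229, 0.224, 0.225)`; the card does not list the exact values):

```python
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Channel-wise (value - mean) / std on RGB values already scaled to [0, 1]."""
    return tuple((v - m) / s for v, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))

# A pixel equal to the dataset mean maps to (0, 0, 0).
print(normalize_pixel((0.485, 0.456, 0.406)))
```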
a0e9bc44101bda70d3f2e5db74f7af94
apache-2.0
['generated_from_trainer']
false
bert-base-uncased.CEBaB_confounding.uniform.absa.5-class.seed_44 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the OpenTable OPENTABLE-ABSA dataset. It achieves the following results on the evaluation set: - Loss: 0.4180 - Accuracy: 0.8827 - Macro-f1: 0.8804 - Weighted-macro-f1: 0.8826
6417394be2ef03b90de6e182f9174056
apache-2.0
[]
false
Model Summary > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages. - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf) - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co) - **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages. - **BLOOMZ & mT0 Model Family:** <div class="max-w-full overflow-auto"> <table> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English. 
</th> </tr> <tr> <td>Parameters</td> <td>300M</td> <td>580M</td> <td>1.2B</td> <td>3.7B</td> <td>13B</td> <td>560M</td> <td>1.1B</td> <td>1.7B</td> <td>3B</td> <td>7.1B</td> <td>176B</td> </tr> <tr> <td>Finetuned Model</td> <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td> <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td> <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only.
Strictly inferior to above models!</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td> </tr> <tr> <th colspan="12">Original pretrained checkpoints. Not recommended.</th> </tr> <tr> <td>Pretrained Model</td> <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td> <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td> <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td> <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td> <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td> </tr> </table> </div>
6d13eafa9d6bc7b25e8d2b3001398c7b
apache-2.0
[]
false
Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? - Suggest at least five related search terms to "Mạng neural nhân tạo". - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): - Explain in a sentence in Telugu what is backpropagation in neural networks. **Feel free to share your generations in the Community tab!**
f843d8395d7101a4a3acfc920940bb34
apache-2.0
[]
false
```python
# pip install -q transformers
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-large"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
befc2483d4e4f440161890a5380ac57f
apache-2.0
[]
false
```python
# pip install -q transformers accelerate
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-large"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
6b37d0ca5a08acc691575dab5757a11e
apache-2.0
[]
false
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-large"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details> <!-- Necessary for whitespace -->
258e5a29a41d93e1a1f2e0d00ddef01e
apache-2.0
[]
false
Limitations **Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*", or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
204eba589cc0ee010bb49299994b40dd
apache-2.0
[]
false
Model - **Architecture:** Same as [mt5-large](https://huggingface.co/google/mt5-large), also refer to the `config.json` file - **Finetuning steps:** 25000 - **Finetuning tokens:** 4.62 billion - **Precision:** bfloat16
833338bb239775e46498f688d38aa868
apache-2.0
[]
false
Evaluation We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
c588ea9c115fc7eee99893d79ae93246
apache-2.0
[]
false
Citation

```bibtex
@misc{muennighoff2022crosslingual,
      title={Crosslingual Generalization through Multitask Finetuning},
      author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
      year={2022},
      eprint={2211.01786},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
2b8fd757b91cd4260e9fe6ba3e239024
apache-2.0
[]
false
Install lumi: ``` git clone https://github.com/ontocord/lumi pip install transformers sentencepiece ``` Load the models: ``` from lumi.modeling_vlt5 import * from lumi.tokenization_vlt5 import * from lumi.modeling_dalle import * import torch minidalle = DalleModel.from_pretrained("ontocord/minidalle").eval().half().to('cuda') vlt5 = VLT5.from_pretrained("ontocord/vlt5").eval().half().to('cuda') vlt5_tokenizer = VLT5Tokenizer.from_pretrained("ontocord/vlt5") ``` Use: ``` text="""A woman riding a black horse next to a blue fence in central park""" img = minidalle.generate( text=text, image_output=True, token_output=False ) print (vlt5_image2text(vlt5, vlt5_tokenizer, "caption:", img)["text"]) print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: what is she riding?", img)["text"]) print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: what is the color of the fence?", img)["text"]) print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: how many horses are there?", img)["text"]) print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: is it a man or woman riding the horse?", img)["text"]) print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: are they at the beach?", img)["text"]) print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: are they at the city?", img)["text"]) print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: are they at the park?", img)["text"]) print (vlt5_image2text(vlt5, vlt5_tokenizer, "vqa: are they in space?", img)["text"]) ```
29a868e3f286b9fe148351889dd9934c
apache-2.0
['generated_from_trainer', 'gender']
false
GFMgenderDetection This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4328 - Accuracy: 0.7971
f2e0737be668d0c73d8c4e9fc8264676
apache-2.0
['generated_from_trainer', 'gender']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4591 | 1.0 | 4567 | 0.4502 | 0.7841 | | 0.3915 | 2.0 | 9134 | 0.4328 | 0.7971 |
4b4c3e9b7d799b07d2dd832c2189b13b
mit
['generated_from_trainer']
false
xlm-roberta-base-NER-favsbot This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the favsbot dataset. It achieves the following results on the evaluation set: - Loss: 1.0572 - Precision: 0.5556 - Recall: 0.4722 - F1: 0.5105 - Accuracy: 0.6900
eac61ef3ed0d78ba83ae9c91325f80cd
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
1caebf317a001f5e186c621ab3e5b611
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 4 | 2.4303 | 0.1448 | 0.3556 | 0.2058 | 0.1855 | | No log | 2.0 | 8 | 2.3220 | 0.1465 | 0.3556 | 0.2075 | 0.1991 | | No log | 3.0 | 12 | 2.1842 | 0.2486 | 0.2389 | 0.2436 | 0.4593 | | No log | 4.0 | 16 | 1.9552 | 0.4 | 0.0111 | 0.0216 | 0.4367 | | No log | 5.0 | 20 | 1.6989 | 0.0 | 0.0 | 0.0 | 0.4321 | | No log | 6.0 | 24 | 1.6532 | 0.5 | 0.0056 | 0.0110 | 0.4344 | | No log | 7.0 | 28 | 1.5724 | 0.3649 | 0.15 | 0.2126 | 0.5045 | | No log | 8.0 | 32 | 1.5164 | 0.3654 | 0.2111 | 0.2676 | 0.5271 | | No log | 9.0 | 36 | 1.4448 | 0.4203 | 0.1611 | 0.2329 | 0.5090 | | No log | 10.0 | 40 | 1.3922 | 0.4833 | 0.1611 | 0.2417 | 0.5158 | | No log | 11.0 | 44 | 1.3409 | 0.5395 | 0.2278 | 0.3203 | 0.5498 | | No log | 12.0 | 48 | 1.2831 | 0.5824 | 0.2944 | 0.3911 | 0.5950 | | No log | 13.0 | 52 | 1.2269 | 0.5714 | 0.3556 | 0.4384 | 0.6335 | | No log | 14.0 | 56 | 1.1766 | 0.5625 | 0.4 | 0.4675 | 0.6606 | | No log | 15.0 | 60 | 1.1408 | 0.5540 | 0.4278 | 0.4828 | 0.6674 | | No log | 16.0 | 64 | 1.1159 | 0.56 | 0.4667 | 0.5091 | 0.6810 | | No log | 17.0 | 68 | 1.0908 | 0.5658 | 0.4778 | 0.5181 | 0.6855 | | No log | 18.0 | 72 | 1.0722 | 0.5658 | 0.4778 | 0.5181 | 0.6923 | | No log | 19.0 | 76 | 1.0615 | 0.5592 | 0.4722 | 0.5120 | 0.6900 | | No log | 20.0 | 80 | 1.0572 | 0.5556 | 0.4722 | 0.5105 | 0.6900 |
20adfdc7a829c9d22a0654481c553fc9
cc-by-sa-4.0
[]
false
Corpora The following corpora were used for training the model: * Gigafida 2.0 * Kas 1.0 * Janes 1.0 (only Janes-news, Janes-forum, Janes-blog, Janes-wiki subcorpora) * Slovenian parliamentary corpus siParl 2.0 * slWaC
6f413d93b89808c2c80e976cde695405
cc-by-sa-4.0
[]
false
Changelog 2022-07-21: updated with v2 of the model; the old one is still accessible at [cjvt/legacy-t5-sl-small](https://huggingface.co/cjvt/legacy-t5-sl-small). 2022-09-21: added a fast tokenizer (Hugging Face's TokenizerFast class; the tokenization remains the same)
e09b0a5bebb801b989a90a5b768002ea
apache-2.0
['automatic-speech-recognition', 'pl']
false
exp_w2v2t_pl_vp-nl_s632 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
6742d258b2b78a50b8238250d18ec9c4
openrail
[]
false
SantaCoder ![banner](https://huggingface.co/datasets/bigcode/admin/resolve/main/banner.png) Play with the model on the [SantaCoder Space Demo](https://huggingface.co/spaces/bigcode/santacoder-demo).
85568a6736815ff8ad9ad8e4eeb51fdf
openrail
[]
false
Model Summary This is the Megatron version of [SantaCoder](https://huggingface.co/bigcode/santacoder). We refer the reader to the [SantaCoder model page](https://huggingface.co/bigcode/santacoder) for full documentation about this model. - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM) - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org) - **Paper:** [🎅SantaCoder: Don't reach for the stars!🌟](https://t.co/YV3pzUbYOr) - **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org) - **Languages:** Python, Java, and JavaScript
8e947342a6d856798566143ddf808d40
openrail
[]
false
Intended use The model was trained on GitHub code. As such it is _not_ an instruction model, and commands like "Write a function that computes the square root." do not work well. You should phrase commands as they occur in source code, such as comments.
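To illustrate the contrast, a hypothetical pair of prompts (the prompt strings are illustrative only, and the commented-out `generator` call is an assumption about how one might invoke the model, not documented API):

```python
# Instruction-style prompts tend to fail with code-completion models:
bad_prompt = "Write a function that computes the square root."

# Phrasing the request as it would appear in source code works better,
# e.g. a comment followed by the start of a definition for the model to complete:
good_prompt = (
    "# function that computes the square root using Newton's method\n"
    "def sqrt(x):"
)

# Hypothetical completion call (not run here):
# output = generator(good_prompt, max_new_tokens=64)
print(good_prompt)
```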
016fc6734d6dac5cdfd3b807acf04ab0
openrail
[]
false
Attribution & Other Requirements The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/santacoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
7f15185f860aae79b13f49efe5bcefe5
openrail
[]
false
Limitations The model has been trained on source code in Python, Java, and JavaScript. The predominant natural language in the source code is English, although other languages are also present. The model is capable of generating code snippets provided some context, but the generated code is not guaranteed to work as intended. It can be inefficient and may contain bugs or exploits.
44d7f1f17230c4b72432863d831c632d
openrail
[]
false
Software - **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) - **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
18dac692acf651e3e7bf3c3540a8b2fb
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP
769c42649972d8e528b5213a0aef75fc
apache-2.0
['bert']
false
Chinese small pre-trained model MiniRBT To further promote research and development in Chinese information processing, we release MiniRBT, a small Chinese pre-trained model built with our self-developed knowledge distillation toolkit TextBrewer, combining Whole Word Masking and knowledge distillation. This repository is developed based on: https://github.com/iflytek/MiniRBT You may also be interested in: - Chinese LERT: https://github.com/ymcui/LERT - Chinese PERT: https://github.com/ymcui/PERT - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/iflytek/HFL-Anthology
3f57f9e99ea81054691ec184e79a6dd3
mit
[]
false
gbert-large-germaner This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on the germaner dataset. It achieves the following results on the evaluation set: - precision: 0.8693 - recall: 0.8856 - f1: 0.8774 - accuracy: 0.9784
53fcbedb2f831108934cf9e38f82b7e9
mit
[]
false
Training hyperparameters The following hyperparameters were used during training: - num_train_epochs: 5 - train_batch_size: 8 - eval_batch_size: 8 - learning_rate: 2e-05 - weight_decay_rate: 0.01 - num_warmup_steps: 0 - fp16: True
afc83cc69f8d409a3d6f645a515f02eb
apache-2.0
['generated_from_trainer']
false
BART_corrector This model is a fine-tuned version of [ainize/bart-base-cnn](https://huggingface.co/ainize/bart-base-cnn) on a homemade dataset. Each sample of the dataset is an English sentence that has been duplicated 10 times, with random errors (7%) added to each copy. It achieves the following results on the evaluation set: - Loss: 0.0025 - Rouge1: 81.4214 - Rouge2: 80.2027 - Rougel: 81.4202 - Rougelsum: 81.4241 - Gen Len: 19.3962
e7759bcdb61686208377bf37b5def202
apache-2.0
['generated_from_trainer']
false
Intended uses & limitations The goal of this model is to correct a sentence, given several versions of it with various mistakes. Text sample : _TheIdeSbgn of thh Eiffel Toweg is aYtribeted to Ma. . ahd design of The Eijfel Tower is attribQtedBto ta. . The designYof the EifZel Tower Vs APtWibuteQ to Ma. . The xeQign oC the EiffelXTower ik attributed to Ma. . ghebFesign of theSbiffel TJwer is atMributed to Ma. . The desOBn of thQ Eiffel ToweP isfattributnd toBMa. . The design of the EBfUel Fower is JtAriOuted tx Ma. . The design of Jhe ENffel LoweF is aptrVbuted Lo Ma. . The deslgX of the lPffel Towermis attributedhtohMa. . The desRgn of thekSuffel Tower is Ttkribufed to Ma. ._
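A minimal sketch of how such noisy inputs could be generated. The exact corruption procedure used for the homemade dataset is not documented, so `corrupt` and `make_sample` below are assumptions: each copy of the sentence has roughly 7% of its letters replaced by random characters, mirroring the sample above.

```python
import random
import string

def corrupt(sentence: str, error_rate: float = 0.07, rng: random.Random = None) -> str:
    """Replace roughly `error_rate` of the letters with random ASCII letters."""
    rng = rng or random.Random()
    chars = list(sentence)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < error_rate:
            chars[i] = rng.choice(string.ascii_letters)
    return "".join(chars)

def make_sample(sentence: str, copies: int = 10, seed: int = 0):
    """Build one (input, target) pair: `copies` corrupted duplicates vs. the clean sentence."""
    rng = random.Random(seed)
    noisy = " ".join(corrupt(sentence, rng=rng) for _ in range(copies))
    return noisy, sentence

noisy, target = make_sample("The design of the Eiffel Tower is attributed to Ma.")
print(noisy)
```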
f2282b247695be2da1156039a3abcd07
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP
e8830ffbb2725a9d1c62394cb6a3cebe
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.0071 | 1.0 | 2365 | 0.0039 | 81.3664 | 80.0861 | 81.3601 | 81.3667 | 19.3967 | | 0.0033 | 2.0 | 4730 | 0.0029 | 81.3937 | 80.1548 | 81.3902 | 81.3974 | 19.3961 | | 0.0018 | 3.0 | 7095 | 0.0029 | 81.3838 | 80.1404 | 81.385 | 81.3878 | 19.3965 | | 0.001 | 4.0 | 9460 | 0.0025 | 81.4214 | 80.2027 | 81.4202 | 81.4241 | 19.3962 |
9aafe3ff59464ad333577fe69f4ec461
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
**megaPals2.1** Hi guys! Do you remember the vintage superhero animated series? Do you like the 70s style? This Stable Diffusion 2.1 embedding is for you! Some recommendations: the magic word for your prompts is megaPals. If you enjoy my work, please consider supporting me: [![Buy me a coffee](https://badgen.net/badge/icon/buymeacoffee?icon=buymeacoffee&label)](https://www.buymeacoffee.com/elrivx) Examples: <img src=https://imgur.com/wZmw8Xr.png width=30% height=30%> <img src=https://imgur.com/JJGBmT8.png width=30% height=30%> <img src=https://imgur.com/0Nr4IJm.png width=30% height=30%> <img src=https://imgur.com/rRN9r1N.png width=30% height=30%>
3604be4c936a8a9bae1ea5da703bf2f0
apache-2.0
['generated_from_trainer']
false
bert-tiny-Massive-intent-KD-BERT_and_distilBERT This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 2.3729 - Accuracy: 0.8470
4cfb4f80e5ab2ab657198a9b0d0526b6
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP
218898fceca02847635a4e8619a96d27
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 15.1159 | 1.0 | 720 | 12.8257 | 0.2253 | | 12.9949 | 2.0 | 1440 | 10.9891 | 0.4304 | | 11.3865 | 3.0 | 2160 | 9.5622 | 0.5032 | | 10.0553 | 4.0 | 2880 | 8.3700 | 0.5539 | | 8.9431 | 5.0 | 3600 | 7.4127 | 0.6104 | | 8.0135 | 6.0 | 4320 | 6.6185 | 0.6286 | | 7.1987 | 7.0 | 5040 | 5.9517 | 0.6818 | | 6.5168 | 8.0 | 5760 | 5.3879 | 0.7118 | | 5.9352 | 9.0 | 6480 | 4.9426 | 0.7275 | | 5.4299 | 10.0 | 7200 | 4.5637 | 0.7413 | | 5.0017 | 11.0 | 7920 | 4.2379 | 0.7585 | | 4.5951 | 12.0 | 8640 | 3.9699 | 0.7678 | | 4.2849 | 13.0 | 9360 | 3.7416 | 0.7737 | | 3.991 | 14.0 | 10080 | 3.5502 | 0.7865 | | 3.7455 | 15.0 | 10800 | 3.4090 | 0.7900 | | 3.5315 | 16.0 | 11520 | 3.3053 | 0.7914 | | 3.345 | 17.0 | 12240 | 3.1670 | 0.8003 | | 3.1767 | 18.0 | 12960 | 3.0739 | 0.8013 | | 3.0322 | 19.0 | 13680 | 2.9927 | 0.8047 | | 2.8864 | 20.0 | 14400 | 2.9366 | 0.8037 | | 2.7728 | 21.0 | 15120 | 2.8666 | 0.8091 | | 2.6732 | 22.0 | 15840 | 2.8146 | 0.8126 | | 2.5726 | 23.0 | 16560 | 2.7588 | 0.8195 | | 2.493 | 24.0 | 17280 | 2.7319 | 0.8273 | | 2.4183 | 25.0 | 18000 | 2.6847 | 0.8249 | | 2.3526 | 26.0 | 18720 | 2.6317 | 0.8323 | | 2.2709 | 27.0 | 19440 | 2.6071 | 0.8288 | | 2.2125 | 28.0 | 20160 | 2.5982 | 0.8323 | | 2.1556 | 29.0 | 20880 | 2.5546 | 0.8337 | | 2.1042 | 30.0 | 21600 | 2.5278 | 0.8318 | | 2.054 | 31.0 | 22320 | 2.5005 | 0.8411 | | 2.0154 | 32.0 | 23040 | 2.4891 | 0.8347 | | 1.9785 | 33.0 | 23760 | 2.4633 | 0.8367 | | 1.9521 | 34.0 | 24480 | 2.4451 | 0.8421 | | 1.9247 | 35.0 | 25200 | 2.4370 | 0.8416 | | 1.8741 | 36.0 | 25920 | 2.4197 | 0.8446 | | 1.8659 | 37.0 | 26640 | 2.4081 | 0.8406 | | 1.8367 | 38.0 | 27360 | 2.3979 | 0.8426 | | 1.8153 | 39.0 | 28080 | 2.3758 | 0.8451 | | 1.7641 | 40.0 | 28800 | 2.3729 | 0.8470 | | 1.7608 | 41.0 | 29520 | 2.3683 | 0.8460 | | 1.7647 | 42.0 | 30240 | 2.3628 | 0.8446 | | 1.7656 | 43.0 | 30960 | 
2.3492 | 0.8470 |
ba3882c92bca1cdbdb76b7102139424b
apache-2.0
['automatic-speech-recognition', 'fa']
false
exp_w2v2t_fa_xlsr-53_s204 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
0e87144618d10fff7753350d7b711cc4
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0981 - Accuracy: 0.9801
03bdfd3c5555b00a474ce57a0dc0a156
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6641 | 1.0 | 399 | 0.5522 | 0.9337 | | 0.2698 | 2.0 | 798 | 0.2015 | 0.9715 | | 0.1839 | 3.0 | 1197 | 0.1195 | 0.9793 | | 0.1582 | 4.0 | 1596 | 0.1039 | 0.9791 | | 0.1425 | 5.0 | 1995 | 0.0981 | 0.9801 |
bae576a1ab8ec5ed7ac5f6a0722496fc
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000
83c6bd32ba6cbba1e4a5b4e8ee99f182
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.7384 | 0.61 | 500 | 1.6251 | | 0.0325 | 1.22 | 1000 | 0.0146 | | 0.0104 | 1.83 | 1500 | 0.0094 | | 0.008 | 2.44 | 2000 | 0.0074 | | 0.0061 | 3.05 | 2500 | 0.0058 | | 0.0057 | 3.66 | 3000 | 0.0050 | | 0.0059 | 4.27 | 3500 | 0.0050 | | 0.0047 | 4.88 | 4000 | 0.0050 | | 0.0043 | 5.49 | 4500 | 0.0045 | | 0.0043 | 6.11 | 5000 | 0.0045 | | 0.0036 | 6.72 | 5500 | 0.0043 | | 0.0038 | 7.33 | 6000 | 0.0041 | | 0.0034 | 7.94 | 6500 | 0.0044 | | 0.0036 | 8.55 | 7000 | 0.0040 | | 0.0032 | 9.16 | 7500 | 0.0039 | | 0.0033 | 9.77 | 8000 | 0.0037 | | 0.0032 | 10.38 | 8500 | 0.0036 | | 0.0029 | 10.99 | 9000 | 0.0035 | | 0.003 | 11.6 | 9500 | 0.0035 | | 0.0027 | 12.21 | 10000 | 0.0036 |
4971b45cdbb3bada17ad340b0422e709
mit
['generated_from_trainer']
false
bart-large-cnn-samsum-ElectrifAi_v8.3 This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8755 - Rouge1: 60.4165 - Rouge2: 41.6463 - Rougel: 50.9083 - Rougelsum: 59.2499 - Gen Len: 109.7
90d48ed4962998eb6b8ba8c9e8017b3b
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP
d0cc222206520cf10166488a7226186b
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 20 | 0.9037 | 57.105 | 36.4038 | 46.3683 | 55.8701 | 99.15 | | No log | 2.0 | 40 | 0.8759 | 58.7016 | 39.3877 | 47.444 | 57.4063 | 113.8 | | No log | 3.0 | 60 | 0.8755 | 60.4165 | 41.6463 | 50.9083 | 59.2499 | 109.7 |
e2f62361969c82abba83145acbcd3eb9
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
12628cab2af629647877f967a9f62273
apache-2.0
['translation']
false
opus-mt-sv-hu * source languages: sv * target languages: hu * OPUS readme: [sv-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-hu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.zip) * test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.test.txt) * test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-hu/opus-2020-01-26.eval.txt)
36746184364b9452a503d8013f870d5d
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_stsb_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 2.2659 - Pearson: nan - Spearmanr: nan - Combined Score: nan
0fc79509782d43c2dc1b4a31d70c78bc
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 7.0456 | 1.0 | 23 | 4.3280 | nan | nan | nan | | 4.7979 | 2.0 | 46 | 3.4200 | nan | nan | nan | | 3.7359 | 3.0 | 69 | 2.7494 | nan | nan | nan | | 2.9308 | 4.0 | 92 | 2.3396 | nan | nan | nan | | 2.3776 | 5.0 | 115 | 2.2659 | nan | nan | nan | | 2.1865 | 6.0 | 138 | 2.3171 | nan | nan | nan | | 2.1731 | 7.0 | 161 | 2.3598 | nan | nan | nan | | 2.1793 | 8.0 | 184 | 2.4690 | 0.1389 | 0.1432 | 0.1410 | | 2.1725 | 9.0 | 207 | 2.3589 | 0.0899 | 0.0808 | 0.0854 | | 2.1621 | 10.0 | 230 | 2.3156 | 0.0853 | 0.0802 | 0.0827 |
72b091739ce90a26c7ae0df17f0c8b3b
apache-2.0
['summarization', 'persian', 'generated_from_trainer']
false
mt5-base-finetuned-persian This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 3.6086 - Rouge-1: 22.02 - Rouge-2: 7.41 - Rouge-l: 18.95 - Gen Len: 19.0 - Bertscore: 69.89
f61555e0fcb1bf88941a1e2268e33b02
apache-2.0
['summarization', 'persian', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - label_smoothing_factor: 0.1
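The reported `total_train_batch_size` follows directly from the other two values; a minimal check (assuming training on a single device, since no device count is reported):

```python
# Effective (total) train batch size = per-device batch size x gradient accumulation steps.
train_batch_size = 4
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the value reported above
```

Gradient accumulation sums gradients over several small forward/backward passes before each optimizer step, so the update behaves as if the larger batch had been used directly.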
c8ee7165a625518e7a4018396bb07122
apache-2.0
['summarization', 'persian', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 7.2823 | 0.96 | 19 | 3.9800 | 19.78 | 5.57 | 16.24 | 19.0 | 68.19 | | 4.7334 | 1.96 | 38 | 3.7620 | 20.92 | 7.49 | 18.27 | 18.91 | 68.72 | | 4.3891 | 2.96 | 57 | 3.6349 | 21.07 | 7.66 | 18.53 | 18.96 | 69.73 | | 4.2 | 3.96 | 76 | 3.6315 | 19.63 | 6.49 | 16.61 | 19.0 | 69.15 | | 3.9202 | 4.96 | 95 | 3.6086 | 21.2 | 6.8 | 17.06 | 19.0 | 69.48 |
97ff1513e0575c799f5e38ed996bfd43
mit
[]
false
Model description LegalBert is a BERT-base-cased model fine-tuned on a subset of the `case.law` corpus. Further details can be found in this paper: [A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering](http://ceur-ws.org/Vol-2645/paper5.pdf) Nils Holzenberger, Andrew Blair-Stanek and Benjamin Van Durme *Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop, 24 August 2020*
d56f7078aaaf8722cc29b6d179c770b1
mit
[]
false
Citation ``` @inproceedings{holzenberger20dataset, author = {Nils Holzenberger and Andrew Blair{-}Stanek and Benjamin Van Durme}, title = {A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering}, booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2020 co-located with the 26th {ACM} {SIGKDD} International Conference on Knowledge Discovery {\&} Data Mining {(KDD} 2020), Virtual Workshop, August 24, 2020}, series = {{CEUR} Workshop Proceedings}, volume = {2645}, pages = {31--38}, publisher = {CEUR-WS.org}, year = {2020}, url = {http://ceur-ws.org/Vol-2645/paper5.pdf}, } ```
eae285db53493733b262f55f5dbc7dbe
other
[]
false
Pre-trained language models (PLMs) have achieved great success in natural language processing. Most PLMs follow the default setting of architecture hyper-parameters in BERT (e.g., the hidden dimension is a quarter of the intermediate dimension in the feed-forward sub-networks). In this paper, we adopt one-shot Neural Architecture Search (NAS) to automatically search architecture hyper-parameters for efficient pre-trained language models (at least 6x faster than BERT-base). AutoTinyBERT provides a model zoo that can meet different latency requirements.
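As a toy illustration of the fixed ratio mentioned above versus a searchable space (the candidate dimensions below are illustrative and not the actual AutoTinyBERT search space):

```python
# BERT-base default: hidden = 768, feed-forward (intermediate) = 3072,
# i.e. the hidden dimension is a quarter of the intermediate dimension.
hidden_size = 768
intermediate_size = 3072
assert intermediate_size // hidden_size == 4

# One-shot NAS instead treats both dimensions as free hyper-parameters,
# searching over candidate (hidden, intermediate) pairs (toy grid, illustrative only):
search_space = [(h, i) for h in (128, 256, 384, 512) for i in (512, 1024, 2048)]
print(len(search_space))  # 12 candidate pairs
```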
08d91179d634feb02b85ef1fceec63ca
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_multiple_datasets']
false
whisper-small-mn-12 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2949 - Wer: 32.3301 - Cer: 13.3493
362f5b101e026c0f1590edadacc6c713
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_multiple_datasets']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 25000 - mixed_precision_training: Native AMP
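The `linear` scheduler with warmup listed above ramps the learning rate up over the first 500 steps and then decays it linearly to zero by step 25000; a minimal sketch (assumed to mirror the behavior of the standard linear-with-warmup schedule):

```python
def linear_schedule_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=25000):
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(250))    # halfway through warmup
print(linear_schedule_lr(500))    # peak learning rate (1e-05)
print(linear_schedule_lr(25000))  # end of training (0.0)
```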
31d355b97aacd6aaef9eccf2e373f28c
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_multiple_datasets']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.3012 | 1.05 | 1000 | 0.3749 | 43.2379 | 17.6739 | | 0.2171 | 2.11 | 2000 | 0.3012 | 36.7435 | 15.2029 | | 0.1732 | 3.16 | 3000 | 0.2823 | 33.4225 | 13.7561 | | 0.145 | 4.21 | 4000 | 0.2822 | 32.4995 | 13.2436 | | 0.1159 | 5.27 | 5000 | 0.2949 | 32.3301 | 13.3493 | | 0.0863 | 6.32 | 6000 | 0.3116 | 32.7234 | 13.3892 | | 0.0685 | 7.38 | 7000 | 0.3343 | 32.4776 | 13.3077 | | 0.0506 | 8.43 | 8000 | 0.3584 | 33.3952 | 13.7736 | | 0.0336 | 9.48 | 9000 | 0.3861 | 33.7011 | 13.8493 | | 0.0215 | 10.54 | 10000 | 0.4193 | 33.7011 | 14.0140 | | 0.0141 | 11.59 | 11000 | 0.4463 | 34.0343 | 14.0298 | | 0.0089 | 12.64 | 12000 | 0.4660 | 33.6137 | 13.8052 | | 0.0057 | 13.7 | 13000 | 0.4913 | 33.9797 | 13.9849 | | 0.0039 | 14.75 | 14000 | 0.5078 | 33.9906 | 14.0656 | | 0.0033 | 15.81 | 15000 | 0.5244 | 33.7721 | 13.9192 | | 0.0024 | 16.86 | 16000 | 0.5358 | 33.7612 | 13.7910 | | 0.0018 | 17.91 | 17000 | 0.5469 | 33.6465 | 13.8468 | | 0.0013 | 18.97 | 18000 | 0.5614 | 33.6683 | 13.7553 | | 0.0014 | 20.02 | 19000 | 0.5707 | 33.6574 | 13.8884 | | 0.0006 | 21.07 | 20000 | 0.5835 | 34.0671 | 14.0764 | | 0.0007 | 22.13 | 21000 | 0.5927 | 33.9742 | 14.0772 | | 0.0005 | 23.18 | 22000 | 0.5994 | 34.0398 | 14.0290 | | 0.0004 | 24.24 | 23000 | 0.6067 | 33.9469 | 13.9217 | | 0.0003 | 25.29 | 24000 | 0.6109 | 33.9688 | 13.9591 | | 0.0003 | 26.34 | 25000 | 0.6130 | 33.8267 | 13.8360 |
7ad0a9b6b165455014b1277e72c8c2b0
apache-2.0
['translation']
false
opus-mt-es-zai * source languages: es * target languages: zai * OPUS readme: [es-zai](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-zai/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-zai/opus-2020-01-16.eval.txt)
9c2e78ea7586620b975bd3e4466b17a1
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
xslr-commonvoice This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.3835 - Wer: 0.3450
486a40bae7831766c51704bffe815447
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP
d411ee372ee67f7a436b6020c34edfe7
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.92 | 100 | 3.5761 | 1.0 | | No log | 1.83 | 200 | 3.0512 | 0.9999 | | No log | 2.75 | 300 | 1.0185 | 0.8188 | | No log | 3.67 | 400 | 0.5936 | 0.6411 | | 3.2139 | 4.59 | 500 | 0.4986 | 0.5267 | | 3.2139 | 5.5 | 600 | 0.4327 | 0.4732 | | 3.2139 | 6.42 | 700 | 0.4227 | 0.4462 | | 3.2139 | 7.34 | 800 | 0.4213 | 0.4291 | | 3.2139 | 8.26 | 900 | 0.4016 | 0.4033 | | 0.22 | 9.17 | 1000 | 0.3987 | 0.3825 | | 0.22 | 10.09 | 1100 | 0.4065 | 0.3867 | | 0.22 | 11.01 | 1200 | 0.3929 | 0.3842 | | 0.22 | 11.93 | 1300 | 0.3775 | 0.3687 | | 0.22 | 12.84 | 1400 | 0.3891 | 0.3536 | | 0.1005 | 13.76 | 1500 | 0.3850 | 0.3492 | | 0.1005 | 14.68 | 1600 | 0.3823 | 0.3441 |
e260f9ebae9f85f9700fbc4d9052b87f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 10 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 2
3a022afc7cc8d0b716bde962910afec2
mit
[]
false
dovin-baan on Stable Diffusion This is the `<dovin-baan>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<dovin-baan> 0](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/7.jpeg) ![<dovin-baan> 1](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/15.jpeg) ![<dovin-baan> 2](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/12.jpeg) ![<dovin-baan> 3](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/3.jpeg) ![<dovin-baan> 4](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/2.jpeg) ![<dovin-baan> 5](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/9.jpeg) ![<dovin-baan> 6](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/13.jpeg) ![<dovin-baan> 7](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/5.jpeg) ![<dovin-baan> 8](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/1.jpeg) ![<dovin-baan> 9](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/10.jpeg) ![<dovin-baan> 10](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/11.jpeg) ![<dovin-baan> 11](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/4.jpeg) ![<dovin-baan> 
12](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/8.jpeg) ![<dovin-baan> 13](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/0.jpeg) ![<dovin-baan> 14](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/14.jpeg) ![<dovin-baan> 15](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/16.jpeg) ![<dovin-baan> 16](https://huggingface.co/sd-concepts-library/dovin-baan/resolve/main/concept_images/6.jpeg)
68b217a64c57302ed1af1736990e44ef
apache-2.0
['generated_from_trainer']
false
finetuned_distilgpt2_sst2_negation0.001_pretrainedTrue_epochs3 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 3.2638
044456757fdd061b2f1d1f390d7e8698
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6836 | 1.0 | 1322 | 3.2638 | | 2.5043 | 2.0 | 2644 | 3.2590 | | 2.4514 | 3.0 | 3966 | 3.2638 |
4c3f959bd7c1be51dd2ecc910065637d