license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
cc-by-4.0
[]
false
| Model | # | Language | Setting |
|----------------------------------------------------------------------|:-----:|:--------------:|:-------------:|
| [prompt-ls-en-1](https://huggingface.co/lmvasque/prompt-ls-en-1) | 1 | English | fine-tune |
| [prompt-ls-en-2](https://huggingface.co/lmvasque/prompt-ls-en-2) | 2 | English | fine-tune |
| [roberta-large](https://huggingface.co/roberta-large) | 3 | English | zero-shot |
| [prompt-ls-es-1](https://huggingface.co/lmvasque/prompt-ls-es-1) | 1 | Spanish | fine-tune |
| [prompt-ls-es-2](https://huggingface.co/lmvasque/prompt-ls-es-2) | 2 | Spanish | fine-tune |
| [prompt-ls-es-3](https://huggingface.co/lmvasque/prompt-ls-es-3) | 3 | Spanish | fine-tune |
| [prompt-ls-pt-1](https://huggingface.co/lmvasque/prompt-ls-pt-1) | 1 | Portuguese | fine-tune |
| **[prompt-ls-pt-2](https://huggingface.co/lmvasque/prompt-ls-pt-2)** | **2** | **Portuguese** | **fine-tune** |
| [prompt-ls-pt-3](https://huggingface.co/lmvasque/prompt-ls-pt-3) | 3 | Portuguese | fine-tune |

For the zero-shot setting, we used the original models with no further training. Links to these models are also provided in the table above.
41a2c8d423847af2af102f41dd77e6d6
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/nli-bert-large-max-pooling

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
656573f68476840368c42b0a66cfe113
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/nli-bert-large-max-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```
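Once sentences are encoded, they can be compared for semantic search or clustering via cosine similarity. A minimal pure-Python sketch (the toy 4-dimensional vectors below are illustrative stand-ins for the model's 1024-dimensional embeddings, not real outputs):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embeddings returned by model.encode(...)
emb1 = [0.1, 0.3, -0.2, 0.5]
emb2 = [0.2, 0.1, -0.1, 0.4]
print(cosine_similarity(emb1, emb2))
```

In practice you would pass the rows of `embeddings` from the snippet above instead of the toy vectors.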
b09642d8cc4c33d2ffbe0ac0119beea9
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Load model from HuggingFace Hub

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-bert-large-max-pooling')
model = AutoModel.from_pretrained('sentence-transformers/nli-bert-large-max-pooling')
```
07930aa3a26571a2d3da2b9ed9c41f79
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-bert-large-max-pooling)
f56e29c33e2a323d5b31605ba1f8cde6
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
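The max-token pooling in layer (1) takes, for each of the 1024 embedding dimensions, the maximum value across all non-padding tokens. A minimal pure-Python sketch of that operation (the 3-dimensional token vectors are hypothetical, for illustration only):

```python
def max_pooling(token_embeddings, attention_mask):
    # token_embeddings: one vector per token; attention_mask: 1 = real token, 0 = padding
    dim = len(token_embeddings[0])
    pooled = []
    for d in range(dim):
        # Only unmasked tokens contribute to the per-dimension maximum
        values = [tok[d] for tok, m in zip(token_embeddings, attention_mask) if m == 1]
        pooled.append(max(values))
    return pooled

tokens = [[0.1, -0.5, 0.3], [0.4, 0.2, -0.1], [9.9, 9.9, 9.9]]  # last token is padding
print(max_pooling(tokens, [1, 1, 0]))  # padding is excluded from the max
```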
15bdc0401e22b4f7c0418792b37c1eea
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-de-to-en-lr1e-4

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set:
- Loss: 1.8228
- Bleu: 11.427
- Gen Len: 17.2674
16859e3ffed9fbabb9afdfec2843bbed
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
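A linear scheduler decays the learning rate from its initial value to zero over training (with an optional warmup phase, none configured here). A sketch of that schedule, assuming 2720 total optimization steps (10 epochs × 272 steps/epoch, per the training results):

```python
def linear_lr(step, total_steps, base_lr=1e-4, warmup_steps=0):
    # Linear warmup (if any) followed by linear decay to zero,
    # mirroring the "linear" lr_scheduler_type behaviour
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

total = 2720  # 10 epochs x 272 steps/epoch
print(linear_lr(0, total))     # full learning rate at the start
print(linear_lr(1360, total))  # halfway through training
print(linear_lr(total, total)) # decayed to zero at the end
```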
71b4db6049bff71dfeb84cdc055762f2
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 272 | 1.9605 | 9.0786 | 17.3148 |
| 2.3992 | 2.0 | 544 | 1.8884 | 10.1443 | 17.3301 |
| 2.3992 | 3.0 | 816 | 1.8647 | 10.4816 | 17.3258 |
| 2.0832 | 4.0 | 1088 | 1.8473 | 10.7396 | 17.3231 |
| 2.0832 | 5.0 | 1360 | 1.8343 | 11.0937 | 17.2621 |
| 1.9193 | 6.0 | 1632 | 1.8282 | 11.1303 | 17.3098 |
| 1.9193 | 7.0 | 1904 | 1.8234 | 11.2971 | 17.2991 |
| 1.8351 | 8.0 | 2176 | 1.8241 | 11.3433 | 17.2621 |
| 1.8351 | 9.0 | 2448 | 1.8224 | 11.394 | 17.2691 |
| 1.7747 | 10.0 | 2720 | 1.8228 | 11.427 | 17.2674 |
d82c3c2850e49bc3f03bd508a8855581
apache-2.0
['translation']
false
opus-mt-tpi-en

* source languages: tpi
* target languages: en
* OPUS readme: [tpi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.eval.txt)
425c04162c2ab56034fa5f3a0377f20a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-ft1500_norm500_aug2-3

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.5766
- Mse: 5.1532
- Mae: 1.3526
- R2: -0.0072
- Accuracy: 0.4734
d89cefb2170b4d51585fb707891c2a2e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:--------:|
| 1.0562 | 1.0 | 15533 | 2.5766 | 5.1532 | 1.3526 | -0.0072 | 0.4734 |
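The reported validation loss is exactly half the Mse (2.5766 = 5.1532 / 2), consistent with a ½·MSE regression objective; this is an observation about the reported numbers, not a documented training detail. A quick arithmetic check, including the RMSE for a more interpretable error scale:

```python
mse = 5.1532
mae = 1.3526
loss = 2.5766

# The reported loss matches half of the mean squared error
assert abs(mse / 2 - loss) < 1e-9

# Root-mean-squared error gives the error in the target's own units
rmse = mse ** 0.5
print(round(rmse, 4))
```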
bb0fe67eea5739928b63b7194b8eeede
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_qqp_128

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set:
- Loss: 0.6884
- Accuracy: 0.7872
- F1: 0.7062
- Combined Score: 0.7467
7a4996fd7b9e48ccfdbce88140f0c957
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.9518 | 1.0 | 2843 | 0.8352 | 0.7536 | 0.6530 | 0.7033 |
| 0.8249 | 2.0 | 5686 | 0.7766 | 0.7607 | 0.6219 | 0.6913 |
| 0.7847 | 3.0 | 8529 | 0.7625 | 0.7648 | 0.6402 | 0.7025 |
| 0.7498 | 4.0 | 11372 | 0.7551 | 0.7638 | 0.6197 | 0.6917 |
| 0.7137 | 5.0 | 14215 | 0.7387 | 0.7691 | 0.6545 | 0.7118 |
| 0.6762 | 6.0 | 17058 | 0.7165 | 0.7753 | 0.6720 | 0.7237 |
| 0.6373 | 7.0 | 19901 | 0.7042 | 0.7783 | 0.6765 | 0.7274 |
| 0.6045 | 8.0 | 22744 | 0.7075 | 0.7799 | 0.6902 | 0.7350 |
| 0.5729 | 9.0 | 25587 | 0.7233 | 0.7792 | 0.6639 | 0.7215 |
| 0.545 | 10.0 | 28430 | 0.7088 | 0.7805 | 0.7180 | 0.7493 |
| 0.5183 | 11.0 | 31273 | 0.6884 | 0.7872 | 0.7062 | 0.7467 |
| 0.4948 | 12.0 | 34116 | 0.7064 | 0.7869 | 0.7076 | 0.7472 |
| 0.4724 | 13.0 | 36959 | 0.7053 | 0.7884 | 0.7120 | 0.7502 |
| 0.4514 | 14.0 | 39802 | 0.7314 | 0.7903 | 0.7024 | 0.7464 |
| 0.4321 | 15.0 | 42645 | 0.7112 | 0.7891 | 0.7228 | 0.7560 |
| 0.4152 | 16.0 | 45488 | 0.7410 | 0.7909 | 0.7211 | 0.7560 |
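The Combined Score in the table is the arithmetic mean of Accuracy and F1 (e.g., (0.7872 + 0.7062) / 2 = 0.7467 for the best epoch), which holds for every row:

```python
def combined_score(accuracy, f1):
    # GLUE-style combined metric: simple mean of the two scores
    return (accuracy + f1) / 2

print(round(combined_score(0.7872, 0.7062), 4))  # epoch 11: 0.7467
print(round(combined_score(0.7536, 0.6530), 4))  # epoch 1:  0.7033
```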
73079ec1c6fb38d2e62e5a19fee68cd9
cc-by-4.0
['Transformers', 'text-classification', 'multi-class-classification']
false
**People Involved**

* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)

**Affiliations**

1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
54da255a04b99e82ebf624cc61ff0966
cc-by-4.0
['Transformers', 'text-classification', 'multi-class-classification']
false
Model

XLM-Roberta: [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base)

Paper: [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/pdf/1911.02116.pdf)
7d77b726b7cc320b98f46a26486cbcfd
cc-by-4.0
['Transformers', 'text-classification', 'multi-class-classification']
false
Demo: How to use in HuggingFace Transformers Pipeline

Requires [transformers](https://pypi.org/project/transformers/): ```pip install transformers```

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

model_name = 'qanastek/51-languages-classifier'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
res = classifier("פרק הבא בפודקאסט בבקשה")
print(res)
```

Outputs:

```python
[{'label': 'he-IL', 'score': 0.9998375177383423}]
```
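The pipeline returns locale-style labels ("language-REGION"). A small helper to split a predicted label into its language and region parts (the helper name is illustrative, not part of the model's API):

```python
def split_locale(label):
    # "he-IL" -> ("he", "IL"): ISO 639-1 language code + ISO 3166-1 region code
    language, _, region = label.partition("-")
    return language, region

prediction = {'label': 'he-IL', 'score': 0.9998375177383423}
print(split_locale(prediction['label']))  # ('he', 'IL')
```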
b33631d84fbb93936117358a74784dc8
cc-by-4.0
['Transformers', 'text-classification', 'multi-class-classification']
false
Training data [MASSIVE](https://huggingface.co/datasets/qanastek/MASSIVE) is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
28e805ad3bf27f2c12d0aeddb0a5826c
cc-by-4.0
['Transformers', 'text-classification', 'multi-class-classification']
false
Languages

The model can distinguish 51 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
2ebfbbaf689f6ca33324c5705fefcc95
cc-by-4.0
['Transformers', 'text-classification', 'multi-class-classification']
false
Evaluation results

```plain
         precision  recall  f1-score  support

af-ZA       0.9821  0.9805    0.9813     2974
am-ET       1.0000  1.0000    1.0000     2974
ar-SA       0.9809  0.9822    0.9815     2974
az-AZ       0.9946  0.9845    0.9895     2974
bn-BD       0.9997  0.9990    0.9993     2974
cy-GB       0.9970  0.9929    0.9949     2974
da-DK       0.9575  0.9617    0.9596     2974
de-DE       0.9906  0.9909    0.9908     2974
el-GR       0.9997  0.9973    0.9985     2974
en-US       0.9712  0.9866    0.9788     2974
es-ES       0.9825  0.9842    0.9834     2974
fa-IR       0.9940  0.9973    0.9956     2974
fi-FI       0.9943  0.9946    0.9945     2974
fr-FR       0.9963  0.9923    0.9943     2974
he-IL       1.0000  0.9997    0.9998     2974
hi-IN       1.0000  0.9980    0.9990     2974
hu-HU       0.9983  0.9950    0.9966     2974
hy-AM       1.0000  0.9993    0.9997     2974
id-ID       0.9319  0.9291    0.9305     2974
is-IS       0.9966  0.9943    0.9955     2974
it-IT       0.9698  0.9926    0.9811     2974
ja-JP       0.9987  0.9963    0.9975     2974
jv-ID       0.9628  0.9744    0.9686     2974
ka-GE       0.9993  0.9997    0.9995     2974
km-KH       0.9867  0.9963    0.9915     2974
kn-IN       1.0000  0.9993    0.9997     2974
ko-KR       0.9917  0.9997    0.9956     2974
lv-LV       0.9990  0.9950    0.9970     2974
ml-IN       0.9997  0.9997    0.9997     2974
mn-MN       0.9987  0.9966    0.9976     2974
ms-MY       0.9359  0.9418    0.9388     2974
my-MM       1.0000  0.9993    0.9997     2974
nb-NO       0.9600  0.9533    0.9566     2974
nl-NL       0.9850  0.9748    0.9799     2974
pl-PL       0.9946  0.9923    0.9934     2974
pt-PT       0.9885  0.9798    0.9841     2974
ro-RO       0.9919  0.9916    0.9918     2974
ru-RU       0.9976  0.9983    0.9980     2974
sl-SL       0.9956  0.9939    0.9948     2974
sq-AL       0.9936  0.9896    0.9916     2974
sv-SE       0.9902  0.9842    0.9872     2974
sw-KE       0.9867  0.9953    0.9910     2974
ta-IN       1.0000  1.0000    1.0000     2974
te-IN       1.0000  0.9997    0.9998     2974
th-TH       1.0000  0.9983    0.9992     2974
tl-PH       0.9929  0.9899    0.9914     2974
tr-TR       0.9869  0.9872    0.9871     2974
ur-PK       0.9983  0.9929    0.9956     2974
vi-VN       0.9993  0.9973    0.9983     2974
zh-CN       0.9812  0.9832    0.9822     2974
zh-TW       0.9832  0.9815    0.9823     2974

accuracy                      0.9889   151674
macro avg   0.9889  0.9889    0.9889   151674
weighted avg 0.9889 0.9889    0.9889   151674
```

Keywords: language identification ; multilingual ; classification
4ad6216a7ef4f9f7cea8d1c7c760f582
mit
[]
false
uma-clean-object on Stable Diffusion This is the `<uma-clean-object>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<uma-clean-object> 0](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/unnamed_10_.jpg) ![<uma-clean-object> 1](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/unnamed_1_.jpg) ![<uma-clean-object> 2](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/unnamed_12_.jpg) ![<uma-clean-object> 3](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/FcybPCqaUAAxIEn.png) ![<uma-clean-object> 4](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/3-30-25.png) ![<uma-clean-object> 5](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/Fc8KllxagAMYlhf.png) ![<uma-clean-object> 6](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/FcuE6B4aUAEi422.png) ![<uma-clean-object> 7](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/10.jpg) ![<uma-clean-object> 8](https://huggingface.co/sd-concepts-library/uma-clean-object/resolve/main/concept_images/file.jpg)
0e20b9e5e3ee8e804c304da18b8c8052
apache-2.0
[]
false
Model Details

**Model Description:** This model is a fine-tuned checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned on SST-2. It reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy of 92.7).

- **Developed by:** Hugging Face
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased).
- **Resources for more information:**
  - [Model Documentation](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert)
bed3b3a039c1df6da9ca3a6705ba5e5f
apache-2.0
[]
false
How to Get Started With the Model

Example of single-label classification:

```python
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
```
e12984aff6281cdee4210098c97bc462
apache-2.0
[]
false
Direct Use

This model can be used for text classification. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
c35b178db12c0d8a2f18e87ae7a428cb
apache-2.0
[]
false
Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for this model.
cdd3cac18de383bf4684e7283ea39aca
apache-2.0
[]
false
Risks, Limitations and Biases

Based on a few experiments, we observed that this model could produce biased predictions that target underrepresented populations. For instance, for sentences like `This film was filmed in COUNTRY`, this binary classification model will give radically different probabilities for the positive label depending on the country (0.89 if the country is France, but 0.08 if the country is Afghanistan) when nothing in the input indicates such a strong semantic shift. In this [colab](https://colab.research.google.com/gist/ageron/fb2f64fb145b4bc7c49efc97e5f114d3/biasmap.ipynb), [Aurélien Géron](https://twitter.com/aureliengeron) made an interesting map plotting these probabilities for each country.

<img src="https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/resolve/main/map.jpeg" alt="Map of positive probabilities per country." width="500"/>

We strongly advise users to thoroughly probe these aspects on their use cases in order to evaluate the risks of this model. We recommend looking at the following bias evaluation datasets as a place to start: [WinoBias](https://huggingface.co/datasets/wino_bias), [WinoGender](https://huggingface.co/datasets/super_glue), [Stereoset](https://huggingface.co/datasets/stereoset).
57f5f272ff273bffa7604cd0fe8bcd85
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseCLM

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.8277
6aae130a5c5bca5fcc8ca338dfab9e2b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2083 | 1.0 | 768 | 2.9175 |
| 2.9739 | 2.0 | 1536 | 2.7931 |
| 2.9174 | 3.0 | 2304 | 2.8351 |
e9eed5d90756f5d8ba60d378ccefaa8a
apache-2.0
['generated_from_keras_callback']
false
Okyx/finetuned-amazon-en-es

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 4.0154
- Validation Loss: 3.3292
- Epoch: 7
a4f10e66b880a24849ce1269190fb7d1
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.2009 | 4.0465 | 0 |
| 5.7436 | 3.6640 | 1 |
| 5.0419 | 3.5296 | 2 |
| 4.6412 | 3.4582 | 3 |
| 4.3722 | 3.3943 | 4 |
| 4.1947 | 3.3610 | 5 |
| 4.0747 | 3.3295 | 6 |
| 4.0154 | 3.3292 | 7 |
897f8d3fe2a92bc9c7cd08529336d33f
mit
['generated_from_trainer']
false
celt-covid-twitter-bert-v2

This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4237
- F1: 0.8495
42f503f35e76363ea7fca2eeb71c2739
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5772 | 1.0 | 988 | 0.3683 | 0.8449 |
| 0.3161 | 2.0 | 1976 | 0.4237 | 0.8495 |
6989596fe9bd772e23a6b9f4435eb919
apache-2.0
['translation']
false
opus-mt-fj-fr

* source languages: fj
* target languages: fr
* OPUS readme: [fj-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fj-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.eval.txt)
b821c15d0675e50d116b9ff1012763ad
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1100k']
false
MultiBERTs, Intermediate Checkpoint - Seed 2, Step 1100k

MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the intermediate checkpoint for seed 2, captured at pre-training step 1100k.
2d7ab29f6bc86732909b7c4ec0a17329
apache-2.0
['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1100k']
false
How to use

Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow:

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1100k')
model = TFBertModel.from_pretrained("google/multiberts-seed_2-step_1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_2-step_1100k')
model = BertModel.from_pretrained("google/multiberts-seed_2-step_1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
79e6921873e5395548ea4ea6890c0ad6
mit
['generated_from_trainer']
false
agitated_jones This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the 
tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
b173b1a67df2d198a3a8eaf247a1a638
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': {'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'agitated_jones', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
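The config's `effective_batch_size` of 1024 combined with `per_device_train_batch_size` of 16 implies gradient accumulation. Assuming a single GPU (the device count is not recorded in the config), that works out to 64 accumulation steps:

```python
effective_batch_size = 1024
per_device_train_batch_size = 16
num_devices = 1  # assumption: the config does not record the device count

# effective batch = per-device batch x devices x accumulation steps
accumulation_steps = effective_batch_size // (per_device_train_batch_size * num_devices)
print(accumulation_steps)  # 64
```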
752b7c9b818081a476e3ad5f459ab0a4
apache-2.0
['translation']
false
lit-rus

* source group: Lithuanian
* target group: Russian
* OPUS readme: [lit-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-rus/README.md)
* model: transformer-align
* source language(s): lit
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.eval.txt)
58b858274fd4c395aba49dad669d6b33
apache-2.0
['translation']
false
System Info:
- hf_name: lit-rus
- source_languages: lit
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'ru']
- src_constituents: {'lit'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.test.txt
- src_alpha3: lit
- tgt_alpha3: rus
- short_pair: lt-ru
- chrF2_score: 0.695
- bleu: 51.7
- brevity_penalty: 0.982
- ref_len: 15395.0
- src_name: Lithuanian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: lt
- tgt_alpha2: ru
- prefer_old: False
- long_pair: lit-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
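BLEU's brevity penalty of 0.982 relates the system output length to ref_len = 15395 via the standard formula BP = exp(1 - ref_len/hyp_len) when the hypothesis is shorter than the reference, and 1 otherwise. A sketch of that formula; the hypothesis length of ~15120 tokens below is back-solved from the reported BP, not reported anywhere:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    # Standard BLEU brevity penalty: penalize outputs shorter than the reference
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

# Inferred hypothesis length that reproduces the reported BP of 0.982
print(round(brevity_penalty(15120, 15395), 3))  # 0.982
```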
fe7111eae693dc28b77d04ed9238cf7a
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-it

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.2474
- F1: 0.8270
ac2877989dc162339c1f797f22613ba7
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 70 | 0.3527 | 0.7372 |
| 0.5173 | 2.0 | 140 | 0.2580 | 0.7916 |
| 0.5173 | 3.0 | 210 | 0.2474 | 0.8270 |
894046a6d63000b22ba3ffb1b4421e6a
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_rte_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.6918 - Accuracy: 0.5271
ba85135e5f5f1d9453d619a030722af7
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6948 | 1.0 | 10 | 0.6991 | 0.4729 | | 0.6969 | 2.0 | 20 | 0.6918 | 0.5271 | | 0.6939 | 3.0 | 30 | 0.6945 | 0.4729 | | 0.6948 | 4.0 | 40 | 0.6926 | 0.5271 | | 0.6935 | 5.0 | 50 | 0.6950 | 0.4729 | | 0.6936 | 6.0 | 60 | 0.6924 | 0.5271 | | 0.6941 | 7.0 | 70 | 0.6926 | 0.5271 |
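Note that the two accuracy values in the table sum to 1.0, which suggests the model collapsed to always predicting a single class. A quick majority-class baseline reproduces the best score (the 146/131 split over the 277-example RTE validation set is assumed here for illustration):

```python
from collections import Counter

def majority_baseline_accuracy(labels) -> float:
    # Accuracy obtained by always predicting the most frequent label.
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# A 146 / 131 label split over 277 validation examples gives ~0.5271,
# matching the best accuracy in the table above.
labels = ["not_entailment"] * 146 + ["entailment"] * 131
print(round(majority_baseline_accuracy(labels), 4))
```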
5c5289209ff6ad0ec4c55409499e22f3
mit
[]
false
T5-base model fine-tuned on BioASQ for Biological Question Answering 👩‍⚕️👨‍⚕️ [Google's T5-base](https://huggingface.co/t5-base) fine-tuned on [BioASQ](https://github.com/dmis-lab/biobert) (secondary task) for **Q&A** downstream task.
300c245afecf5fad6ac96b6104c49c96
mit
[]
false
Details of T5 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
f7761d2d9194d2998c93688daf5d61a2
mit
[]
false
Usage 🚀 ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("ozcangundes/T5-base-for-BioQA") model = T5ForConditionalGeneration.from_pretrained("ozcangundes/T5-base-for-BioQA") def get_answer(question,context): source_encoding=tokenizer( question, context, max_length=512, padding="max_length", truncation="only_second", return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"]) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ```
cd58c8111d9a0c88ee4268473230e6d2
mit
[]
false
Example 1 ```python question={ "context":"Effect of food on the pharmacokinetics of empagliflozin, a sodium glucose cotransporter 2 (SGLT2) inhibitor, and assessment of dose proportionality in healthy volunteers. OBJECTIVES: Empagliflozin is an orally available, potent and highly selective inhibitor of the sodium glucose cotransporter 2 (SGLT2). This study was undertaken to investigate the effect of food on the pharmacokinetics of 25 mg empagliflozin and to assess dose proportionality between 10 mg and 25 mg empagliflozin under fasted conditions. MATERIALS AND METHODS: In this open-label, 3-way, cross-over study, 18 healthy volunteers received 3 single doses of empagliflozin in a randomized sequence (25 mg empagliflozin under fasted conditions, 25 mg empagliflozin after a high-fat, high-calorie breakfast and 10 mg empagliflozin under fasted conditions), each separated by a washout period of at least 7 days. Serial plasma samples were collected at selected time points over a period of 72 hours. RESULTS: Administration with food had no clinically relevant effect on the area under the plasma concentration-time curve (AUC0-∞) of empagliflozin (geometric mean ratio (GMR): 84.04, 90% confidence interval (CI): 80.86 - 87.34). The decrease observed in the maximum plasma concentrations (Cmax) of empagliflozin (GMR: 63.22, 90% CI: 56.74 - 70.44) when administered with food was not considered clinically meaningful. The increases in AUC0-∞ and Cmax for 10 mg vs. 25 mg empagliflozin administered under fasting conditions were roughly dose-proportional, as demonstrated by the slope β of the regression lines being slightly less than 1 (slope β for AUC0-∞: 0.94, 95% CI: 0.90 - 0.97; slope β for Cmax: 0.91, 95% CI: 0.80 - 1.01). Empagliflozin was well tolerated under fed and fasting conditions. CONCLUSIONS: The results support administration of empagliflozin tablets independently of food. 
Increases in empagliflozin exposure under fasting conditions were roughly dose-proportional between 10 mg and 25 mg empagliflozin.", "question":"Which protein does empagliflozin inhibit?" } get_answer(question["question"],question["context"]) ``` > SGLT2
681dfa507e5f96a7180e045997a4eb8f
mit
[]
false
Example 2 ```python question2={ "context":"Dermatitis herpetiformis: jejunal findings and skin response to gluten free diet. Fifty seven children with dermatitis herpetiformis, 18 from Finland and 39 from Hungary, were studied. Diagnostic criteria included the finding of granular IgA deposits in the skin of all patients. The mean age at onset of the rash was 7 X 2 years and favoured sites were the elbows, knees, and buttocks. Symptoms suggesting small intestinal disease were rare but in 35 (61%) of the children subtotal villous atrophy and in 16 (28%) partial villous atrophy were found on jejunal biopsy. Eighteen children underwent a second biopsy after a mean of 21 months on a gluten free diet; villous height was found to be increased and the intraepithelial lymphocyte count decreased in all these patients. Gluten challenge caused a reversal in the two children who underwent a third biopsy. The effect of the gluten free diet on the rash was examined in Finnish children by observing the daily requirements of dapsone, a drug used to control the rash at the beginning of the diet. Eight (67%) of the 12 children were able to stop taking dapsone after a mean of 11 months on the diet and all three patients treated with diet alone became asymptomatic after three to 6 months on the diet. These results confirm that most children with dermatitis herpetiformis have jejunal villous atrophy, though they rarely have gastrointestinal symptoms. The central role of gluten in childhood dermatitis herpetiformis is evidenced by the fact that a gluten free diet helps the damaged jejunal mucosa to recover and controls the rash even in those children who do not have an abnormal jejunal biopsy.", "question":"What is the typical rash associated with gluten?" 
} get_answer(question2["question"],question2["context"]) ``` > dermatitis herpetiformis Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/simple-icons@3.0.1/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
3a0b9fe9a5d85221e22fa909ca97c51d
gpl
['corenlp']
false
Core NLP model for en CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations. Find more about it in [our website](https://stanfordnlp.github.io/CoreNLP) and our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
3f261c4ea793c077b5ac26084a969cea
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Jak's Voxel-ish Image Pack for Stable Diffusion Another fantastic image pack crafted by Jak_TheAI_Artist, trained on 143 images over 8,000 training steps with 20% training text. Include the prompt trigger "voxel-ish" to activate it. Tip: add "intricate detail" to the prompt for a semi-realistic image.
8c4b77887ddab811eee23233d3d9a205
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
UPDATE: Version 1.2 available [here](https://huggingface.co/plasmo/vox2) Sample pictures of this concept: voxel-ish ![voxel-ish 0](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/wizard.jpg) ![voxel-ish 1](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/lion.jpg) ![voxel-ish 2](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/ww2.jpg) ![voxel-ish 3](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/ww.jpg) ![voxel-ish 4](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/scarlett.jpg) ![voxel-ish 5](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/owl.jpg) ![voxel-ish 6](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/turtle.jpg) ![voxel-ish 7](https://huggingface.co/plasmo/voxel-ish/resolve/main/concept_images/cycle.jpg)
fd4527fe67f0b1d23bb4c88c3629bd77
cc-by-sa-4.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 - mixed_precision_training: Native AMP
024cfcea1ab109f58c82ff0ba9dbc98a
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small - Swedish This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 & NST dataset. It achieves the following results on the evaluation set: - Loss: 0.3551 - Wer: 19.2143
77e7114c7f175073d0cddde698057952
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 8000 - mixed_precision_training: Native AMP
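The `total_train_batch_size` above is derived, not set directly: it is the per-device batch size multiplied by the gradient accumulation steps (and by the number of devices, here 1):

```python
def effective_batch_size(per_device_batch: int, accumulation_steps: int,
                         num_devices: int = 1) -> int:
    # Gradients are accumulated over this many examples before each optimizer step.
    return per_device_batch * accumulation_steps * num_devices

print(effective_batch_size(4, 4))  # 16, matching total_train_batch_size above
```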
3aa91b46ce62dfa3412308deb447109e
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2128 | 0.85 | 1000 | 0.2955 | 22.1613 | | 0.0871 | 1.71 | 2000 | 0.2790 | 20.8034 | | 0.0373 | 2.56 | 3000 | 0.2884 | 19.9269 | | 0.0163 | 3.41 | 4000 | 0.3082 | 19.5477 | | 0.0046 | 4.27 | 5000 | 0.3183 | 19.5881 | | 0.0023 | 5.12 | 6000 | 0.3397 | 19.3757 | | 0.0023 | 5.97 | 7000 | 0.3468 | 19.3219 | | 0.0013 | 6.83 | 8000 | 0.3551 | 19.2143 |
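The Wer column is the word error rate: the word-level edit distance divided by the number of reference words (reported in the table as a percentage). A self-contained sketch of the metric:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    # Word-level Levenshtein distance (substitutions + insertions + deletions)
    # divided by the number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))  # distances against the empty reference prefix
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,           # deletion
                       d[j - 1] + 1,       # insertion
                       prev + (r != h))    # substitution or match
            prev = cur
    return d[-1] / len(ref)

print(word_error_rate("det regnar i stockholm", "det regnar stockholm"))  # 0.25
```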
f0d35442b7af1a706ce4bd0e74bb408a
apache-2.0
['translation']
false
opus-mt-es-ht * source languages: es * target languages: ht * OPUS readme: [es-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ht/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ht/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ht/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ht/opus-2020-01-16.eval.txt)
51473354655c325240e5e182db99a45b
apache-2.0
['automatic-speech-recognition', 'id']
false
exp_w2v2t_id_unispeech_s149 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
08872c24dab61d50eabb2c6681fa5ab1
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-wnli-target-glue-rte This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-wnli](https://huggingface.co/muhtasham/tiny-mlm-glue-wnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6882 - Accuracy: 0.5596
76bfc5ebfa0b454d2f9835cb5c5c0e99
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6475 | 6.41 | 500 | 0.7071 | 0.5596 | | 0.4526 | 12.82 | 1000 | 0.8708 | 0.5704 | | 0.2668 | 19.23 | 1500 | 1.1317 | 0.5704 | | 0.162 | 25.64 | 2000 | 1.4052 | 0.5704 | | 0.0978 | 32.05 | 2500 | 1.8224 | 0.5812 | | 0.0658 | 38.46 | 3000 | 2.0893 | 0.5668 | | 0.0488 | 44.87 | 3500 | 2.4656 | 0.5560 | | 0.0409 | 51.28 | 4000 | 2.6882 | 0.5596 |
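Training loss keeps falling while validation loss climbs from the very first evaluation, a classic overfitting pattern. A minimal early-stopping check over the validation losses above (the patience value is chosen for illustration):

```python
def should_stop(val_losses, patience: int = 3) -> bool:
    # Stop when the best validation loss is at least `patience`
    # evaluations in the past.
    best_index = val_losses.index(min(val_losses))
    return len(val_losses) - 1 - best_index >= patience

val_losses = [0.7071, 0.8708, 1.1317, 1.4052, 1.8224, 2.0893, 2.4656, 2.6882]
print(should_stop(val_losses))  # True: the best loss was the very first evaluation
```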
d25ff2f72afa3808a986f278bcdd8ff7
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'nl', 'robust-speech-event']
false
wav2vec2-large-xls-r-300m-nl This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the test set: - Loss: 0.3923 - Wer: 0.1748
30183bd72dc6d99b6d7b6f5db8f59261
apache-2.0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'nl', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.5787 | 0.89 | 400 | 0.6354 | 0.5643 | | 0.3036 | 1.78 | 800 | 0.3690 | 0.3552 | | 0.188 | 2.67 | 1200 | 0.3239 | 0.2958 | | 0.1434 | 3.56 | 1600 | 0.3093 | 0.2515 | | 0.1245 | 4.44 | 2000 | 0.3024 | 0.2433 | | 0.1095 | 5.33 | 2400 | 0.3249 | 0.2643 | | 0.0979 | 6.22 | 2800 | 0.3191 | 0.2281 | | 0.0915 | 7.11 | 3200 | 0.3152 | 0.2216 | | 0.0829 | 8.0 | 3600 | 0.3419 | 0.2218 | | 0.0777 | 8.89 | 4000 | 0.3432 | 0.2132 | | 0.073 | 9.78 | 4400 | 0.3223 | 0.2131 | | 0.0688 | 10.67 | 4800 | 0.3094 | 0.2152 | | 0.0647 | 11.56 | 5200 | 0.3411 | 0.2152 | | 0.0639 | 12.44 | 5600 | 0.3762 | 0.2135 | | 0.0599 | 13.33 | 6000 | 0.3790 | 0.2137 | | 0.0572 | 14.22 | 6400 | 0.3693 | 0.2118 | | 0.0563 | 15.11 | 6800 | 0.3495 | 0.2139 | | 0.0521 | 16.0 | 7200 | 0.3800 | 0.2023 | | 0.0508 | 16.89 | 7600 | 0.3678 | 0.2033 | | 0.0513 | 17.78 | 8000 | 0.3845 | 0.1987 | | 0.0476 | 18.67 | 8400 | 0.3511 | 0.2037 | | 0.045 | 19.56 | 8800 | 0.3794 | 0.1994 | | 0.044 | 20.44 | 9200 | 0.3525 | 0.2050 | | 0.043 | 21.33 | 9600 | 0.4082 | 0.2007 | | 0.0409 | 22.22 | 10000 | 0.3866 | 0.2004 | | 0.0393 | 23.11 | 10400 | 0.3899 | 0.2008 | | 0.0382 | 24.0 | 10800 | 0.3626 | 0.1951 | | 0.039 | 24.89 | 11200 | 0.3936 | 0.1953 | | 0.0361 | 25.78 | 11600 | 0.4262 | 0.1928 | | 0.0362 | 26.67 | 12000 | 0.3796 | 0.1934 | | 0.033 | 27.56 | 12400 | 0.3616 | 0.1934 | | 0.0321 | 28.44 | 12800 | 0.3742 | 0.1933 | | 0.0325 | 29.33 | 13200 | 0.3582 | 0.1869 | | 0.0309 | 30.22 | 13600 | 0.3717 | 0.1874 | | 0.029 | 31.11 | 14000 | 0.3814 | 0.1894 | | 0.0296 | 32.0 | 14400 | 0.3698 | 0.1877 | | 0.0281 | 32.89 | 14800 | 0.3976 | 0.1899 | | 0.0275 | 33.78 | 15200 | 0.3854 | 0.1858 | | 0.0264 | 34.67 | 15600 | 0.4021 | 0.1889 | | 0.0261 | 35.56 | 16000 | 0.3850 | 0.1830 | | 0.0242 | 36.44 | 16400 | 0.4091 | 0.1878 | | 0.0245 | 37.33 | 16800 | 0.4012 | 0.1846 | | 0.0243 | 38.22 | 
17200 | 0.3996 | 0.1833 | | 0.0223 | 39.11 | 17600 | 0.3962 | 0.1815 | | 0.0223 | 40.0 | 18000 | 0.3898 | 0.1832 | | 0.0219 | 40.89 | 18400 | 0.4019 | 0.1822 | | 0.0211 | 41.78 | 18800 | 0.4035 | 0.1809 | | 0.021 | 42.67 | 19200 | 0.3915 | 0.1826 | | 0.0208 | 43.56 | 19600 | 0.3934 | 0.1784 | | 0.0188 | 44.44 | 20000 | 0.3912 | 0.1787 | | 0.0195 | 45.33 | 20400 | 0.3989 | 0.1766 | | 0.0186 | 46.22 | 20800 | 0.3887 | 0.1773 | | 0.0188 | 47.11 | 21200 | 0.3982 | 0.1758 | | 0.0175 | 48.0 | 21600 | 0.3933 | 0.1755 | | 0.0172 | 48.89 | 22000 | 0.3921 | 0.1749 | | 0.0187 | 49.78 | 22400 | 0.3923 | 0.1748 |
83c7b77e5b387a37b172abfedd10ca7b
mit
['generated_from_trainer']
false
xlnet-base-cased_fold_9_binary_v1 This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7204 - F1: 0.8203
f4a67de62b409c7a02e44eabb494c38c
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 291 | 0.4045 | 0.8001 | | 0.4262 | 2.0 | 582 | 0.3914 | 0.8297 | | 0.4262 | 3.0 | 873 | 0.5050 | 0.8029 | | 0.2488 | 4.0 | 1164 | 0.7681 | 0.8007 | | 0.2488 | 5.0 | 1455 | 0.8349 | 0.8262 | | 0.1483 | 6.0 | 1746 | 0.9045 | 0.8220 | | 0.0894 | 7.0 | 2037 | 1.1584 | 0.8165 | | 0.0894 | 8.0 | 2328 | 1.1818 | 0.8300 | | 0.0389 | 9.0 | 2619 | 1.3332 | 0.8147 | | 0.0389 | 10.0 | 2910 | 1.2373 | 0.8285 | | 0.038 | 11.0 | 3201 | 1.3156 | 0.8234 | | 0.038 | 12.0 | 3492 | 1.3251 | 0.8341 | | 0.0211 | 13.0 | 3783 | 1.3144 | 0.8255 | | 0.0158 | 14.0 | 4074 | 1.5686 | 0.8168 | | 0.0158 | 15.0 | 4365 | 1.5382 | 0.8185 | | 0.0165 | 16.0 | 4656 | 1.5203 | 0.8282 | | 0.0165 | 17.0 | 4947 | 1.5352 | 0.8136 | | 0.0142 | 18.0 | 5238 | 1.4799 | 0.8243 | | 0.0062 | 19.0 | 5529 | 1.5030 | 0.8294 | | 0.0062 | 20.0 | 5820 | 1.6264 | 0.8094 | | 0.0078 | 21.0 | 6111 | 1.6949 | 0.8122 | | 0.0078 | 22.0 | 6402 | 1.7106 | 0.8139 | | 0.0043 | 23.0 | 6693 | 1.7234 | 0.8218 | | 0.0043 | 24.0 | 6984 | 1.7344 | 0.8208 | | 0.0028 | 25.0 | 7275 | 1.7204 | 0.8203 |
08b34d7a55a599a8cbf070fcd2ecba16
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
false
LoRA DreamBooth - walter-white These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "break bad" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. Test prompt: break bad ![image_0](test_images/image_0.png) ![image_1](test_images/image_1.png) ![image_2](test_images/image_2.png) ![image_3](test_images/image_3.png)
3e1d3b86afeadcaaf94ce330fd718f37
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-ner-false-finetuned-ner-2002 This model is a fine-tuned version of [StivenLancheros/xlm-roberta-base-finetuned-ner-false](https://huggingface.co/StivenLancheros/xlm-roberta-base-finetuned-ner-false) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0725 - Precision: 0.9412 - Recall: 0.9507 - F1: 0.9459 - Accuracy: 0.9904
977512200d392f28de4daba9b3ef856e
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.086 | 1.0 | 7021 | 0.0709 | 0.9221 | 0.9261 | 0.9241 | 0.9872 | | 0.0352 | 2.0 | 14042 | 0.0871 | 0.9243 | 0.9354 | 0.9298 | 0.9879 | | 0.0203 | 3.0 | 21063 | 0.0747 | 0.9398 | 0.9490 | 0.9444 | 0.9901 | | 0.0184 | 4.0 | 28084 | 0.0725 | 0.9412 | 0.9507 | 0.9459 | 0.9904 |
30ba9e5080b666e26733ae679a122020
mit
['generated_from_trainer']
false
final_model_output_subreddit-wallstreetbets This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5351
65060d901b0b4367f1563513aae1f184
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP
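A sketch of the linear-warmup-plus-cosine schedule configured above; the total step count of ~12,050 is an assumption inferred from the training run (roughly 4,000 optimizer steps per epoch over 3 epochs), so treat it as illustrative:

```python
import math

def learning_rate(step: int, max_lr: float = 5e-4,
                  warmup_steps: int = 1000, total_steps: int = 12050) -> float:
    # Linear warmup to max_lr, then cosine decay to zero
    # (matching lr_scheduler_type: cosine with 1000 warmup steps).
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(learning_rate(500))   # halfway through warmup: 2.5e-4
print(learning_rate(1000))  # peak learning rate: 5e-4
```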
05b6259467bb48ab3f7b4744c5dd5d87
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.7979 | 1.25 | 5000 | 3.6293 | | 3.4998 | 2.49 | 10000 | 3.5351 |
34fcf3908a85d2fa8f21ad4d5613d536
creativeml-openrail-m
['text-to-image']
false
Sample pictures of: sdcid (use that on your prompt) ![sdcid 0](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%286%29.jpg)![sdcid 1](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%287%29.jpg)![sdcid 2](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%281%29.jpg)![sdcid 3](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%283%29.jpg)![sdcid 4](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%284%29.jpg)![sdcid 5](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%285%29.jpg)![sdcid 6](https://huggingface.co/AppInApp/a0c306ab-cdd9-4303-ace9-6d021d3520d5/resolve/main/instance_data/sdcid_%282%29.jpg)
cbc3b2e70e3eb6cae6ddccf60e8f37b0
apache-2.0
['generated_from_trainer']
false
wav2vec2-xls-r-300m-ar-7 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 61.6652 - Wer: 0.2222
98b3f5bf2a232b1d93427865091d2997
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6306.7719 | 4.71 | 400 | 617.7255 | 1.0 | | 1222.8073 | 9.41 | 800 | 81.7446 | 0.3820 | | 326.9842 | 14.12 | 1200 | 67.3986 | 0.2859 | | 223.859 | 18.82 | 1600 | 60.8896 | 0.2492 | | 175.5662 | 23.53 | 2000 | 59.2339 | 0.2256 | | 146.3602 | 28.24 | 2400 | 61.6652 | 0.2222 |
ef9d005dca66fb6863103364e6698b6a
['apache-2.0']
['causal-lm', 'text-generation']
false
How to use ```python from transformers import GPT2LMHeadModel, GPT2Tokenizer import torch DEVICE = torch.device("cuda:0") model_name_or_path = "radm/rugpt3medium-tathagata" tokenizer = GPT2Tokenizer.from_pretrained("sberbank-ai/rugpt3medium_based_on_gpt2") model = GPT2LMHeadModel.from_pretrained(model_name_or_path).to(DEVICE) text = "В чем смысл жизни?\n" input_ids = tokenizer.encode(text, return_tensors="pt").to(DEVICE) model.eval() with torch.no_grad(): out = model.generate(input_ids, do_sample=True, num_beams=4, temperature=1.1, top_p=0.9, top_k=50, max_length=250, min_length=50, early_stopping=True, no_repeat_ngram_size=2 ) generated_text = list(map(tokenizer.decode, out))[0] print() print(generated_text) ```
566a9df570337d45c3bfbb202b7799e6
['apache-2.0']
['causal-lm', 'text-generation']
false
Dataset Dataset based on summaries of major Buddhist, Hindu and Advaita texts such as: - Diamond Sutra - Lankavatara Sutra - Sri Nisargadatta Maharaj quotes - Quotes from the Bhagavad Gita Dataset link: [tathagata](https://huggingface.co/datasets/radm/tathagata)
c68b6e2ae0e61d84ba7cb4457acd3b3d
apache-2.0
['translation']
false
opus-mt-yap-en * source languages: yap * target languages: en * OPUS readme: [yap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.eval.txt)
dfba6f38d73dcce6739ef7cc35f9b086
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Medium Turkish This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 Turkish dataset. It achieves the following results on the evaluation set: - Loss: 0.1879 - Wer: 10.5033
ad387fc78722ff5056ebb3f9f8676947
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Model description The model is fine-tuned for 1000 steps/updates.
- Zero-shot - 20.89 (CV11)
- Fine-tune on CV11 - 10.50 (CV11) (-49%)
-------------------------------------------------------------------
- Zero-shot - 10.4 (Google FLEURS)
- Fine-tune on CV11 - 9.26 (Google FLEURS)
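The quoted -49% is the relative WER reduction from the zero-shot to the fine-tuned model; a one-liner to check, using the scores above:

```python
def relative_wer_reduction(zero_shot_wer: float, fine_tuned_wer: float) -> float:
    # Fraction of the zero-shot error eliminated by fine-tuning.
    return (zero_shot_wer - fine_tuned_wer) / zero_shot_wer

# (20.89 - 10.50) / 20.89 ~ 0.497, the roughly 49% quoted above
print(round(relative_wer_reduction(20.89, 10.50), 3))
```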
466f4e82ee0fda1fafc92c1315807395
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0348 | 3.05 | 1000 | 0.1879 | 10.5033 |
9ebda24a85ff362af19f76a77cefaf90
apache-2.0
['automatic-speech-recognition', 'id']
false
exp_w2v2t_id_vp-it_s609 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
def9a2ed927836cbd7d408bc271f8bc2
apache-2.0
['summarization', 'translation']
false
Model Card for T5 11B ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
dc28e73795d2c90c7436bfa1e2c04a2d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000222 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP
77d7264fa37afd1e709ae993ba7b3436
apache-2.0
['automatic-speech-recognition', 'it']
false
exp_w2v2t_it_no-pretraining_s842 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
009f22a75f50d3470c03d70c29c62a47
apache-2.0
['generated_from_keras_callback']
false
hsohn3/cchs-bert-visit-uncased-wordlevel-block512-batch4-ep100 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7195 - Epoch: 99
bb856fa28da33450680cf47ea217b0c9
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Epoch | |:----------:|:-----:| | 3.8730 | 0 | | 3.0562 | 1 | | 3.0168 | 2 | | 3.0032 | 3 | | 2.9954 | 4 | | 2.9951 | 5 | | 2.9904 | 6 | | 2.9765 | 7 | | 2.9788 | 8 | | 2.9692 | 9 | | 2.9656 | 10 | | 2.9761 | 11 | | 2.9643 | 12 | | 2.9393 | 13 | | 2.9026 | 14 | | 2.8685 | 15 | | 2.8438 | 16 | | 2.8279 | 17 | | 2.8107 | 18 | | 2.7896 | 19 | | 2.7716 | 20 | | 2.7458 | 21 | | 2.7118 | 22 | | 2.6519 | 23 | | 2.5933 | 24 | | 2.4702 | 25 | | 2.2842 | 26 | | 2.0712 | 27 | | 1.8406 | 28 | | 1.6374 | 29 | | 1.4836 | 30 | | 1.3824 | 31 | | 1.3079 | 32 | | 1.2538 | 33 | | 1.2054 | 34 | | 1.1700 | 35 | | 1.1432 | 36 | | 1.1122 | 37 | | 1.0939 | 38 | | 1.0645 | 39 | | 1.0465 | 40 | | 1.0248 | 41 | | 1.0069 | 42 | | 0.9902 | 43 | | 0.9769 | 44 | | 0.9510 | 45 | | 0.9394 | 46 | | 0.9316 | 47 | | 0.9181 | 48 | | 0.9090 | 49 | | 0.9010 | 50 | | 0.8934 | 51 | | 0.8791 | 52 | | 0.8759 | 53 | | 0.8652 | 54 | | 0.8566 | 55 | | 0.8511 | 56 | | 0.8414 | 57 | | 0.8373 | 58 | | 0.8302 | 59 | | 0.8241 | 60 | | 0.8246 | 61 | | 0.8207 | 62 | | 0.8110 | 63 | | 0.8081 | 64 | | 0.8010 | 65 | | 0.7995 | 66 | | 0.7965 | 67 | | 0.7941 | 68 | | 0.7849 | 69 | | 0.7866 | 70 | | 0.7874 | 71 | | 0.7796 | 72 | | 0.7742 | 73 | | 0.7706 | 74 | | 0.7687 | 75 | | 0.7686 | 76 | | 0.7663 | 77 | | 0.7586 | 78 | | 0.7554 | 79 | | 0.7563 | 80 | | 0.7541 | 81 | | 0.7527 | 82 | | 0.7482 | 83 | | 0.7460 | 84 | | 0.7436 | 85 | | 0.7423 | 86 | | 0.7422 | 87 | | 0.7385 | 88 | | 0.7367 | 89 | | 0.7321 | 90 | | 0.7320 | 91 | | 0.7354 | 92 | | 0.7271 | 93 | | 0.7270 | 94 | | 0.7210 | 95 | | 0.7236 | 96 | | 0.7263 | 97 | | 0.7237 | 98 | | 0.7195 | 99 |
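Since this run optimizes a masked-language-modelling cross-entropy objective, the train loss maps to perplexity via the exponential; e.g. for the final loss above:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    # Perplexity is the exponential of the per-token cross-entropy loss.
    return math.exp(cross_entropy_loss)

# Final train loss 0.7195 -> perplexity of roughly 2.05
print(round(perplexity(0.7195), 2))
```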
451a9248104abf33e39a8bbc3a3915ca
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
cifar10_outputs This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cifar10 dataset. It achieves the following results on the evaluation set: - Loss: 0.0806 - Accuracy: 0.9914
eaa9b1133d9728ce073a671f8a1dab82
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 17 - eval_batch_size: 17 - seed: 1337 - distributed_type: IPU - gradient_accumulation_steps: 128 - total_train_batch_size: 8704 - total_eval_batch_size: 272 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.25 - num_epochs: 100.0 - training precision: Mixed Precision
b4f952929abdcd5cfd4615aeee82c3c4
cc-by-sa-4.0
[]
false
BERT Base Japanese for Irony This is a BERT Base model for sentiment analysis in Japanese, additionally fine-tuned for automatic irony detection. The model was based on [bert-base-japanese-sentiment](https://huggingface.co/daigo/bert-base-japanese-sentiment) and later fine-tuned on a dataset containing ironic and sarcastic tweets.
8feb720665f8b00c5d4482be21fdb3be
cc-by-sa-4.0
[]
false
Citations Please, cite this model using the following citation. ``` @inproceedings{dan2022bert-base-irony02, title={北見工業大学 テキスト情報処理研究室 ELECTRA Base 皮肉検出モデル (daigo ver.)}, author={団 俊輔 and プタシンスキ ミハウ and ジェプカ ラファウ and 桝井 文人}, publisher={HuggingFace}, year={2022}, url = "https://huggingface.co/kit-nlp/bert-base-japanese-sentiment-irony" } ```
6ba6141885ff540a8a2b6d17675f6228
apache-2.0
Text Classification
false
BatteryBERT-uncased for Battery Abstract Classification **Language model:** batterybert-uncased **Language:** English **Downstream-task:** Text Classification **Training data:** training\_data.csv **Eval data:** val\_data.csv **Code:** See [example](https://github.com/ShuHuang/batterybert) **Infrastructure**: 8x DGX A100
ac94a588db094686b33592435446cdb4
cc-by-4.0
['generated_from_trainer']
false
hing-mbert-finetuned-TRAC-DS This model is a fine-tuned version of [l3cube-pune/hing-mbert](https://huggingface.co/l3cube-pune/hing-mbert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9044 - Accuracy: 0.7010 - Precision: 0.6772 - Recall: 0.6723 - F1: 0.6740
5cf6ee6a52f8a69024cd214b8682b3fe
cc-by-4.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.824279936868144e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
97ed213d2eb6f542ea8a47d067f1fa75
cc-by-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.837 | 1.0 | 1224 | 0.7640 | 0.6422 | 0.6377 | 0.6475 | 0.6277 | | 0.6164 | 2.0 | 2448 | 0.8456 | 0.6724 | 0.6581 | 0.6623 | 0.6547 | | 0.434 | 3.0 | 3672 | 1.0284 | 0.6969 | 0.6715 | 0.6771 | 0.6729 | | 0.267 | 4.0 | 4896 | 1.5533 | 0.6912 | 0.6644 | 0.6675 | 0.6655 | | 0.1542 | 5.0 | 6120 | 1.9044 | 0.7010 | 0.6772 | 0.6723 | 0.6740 |
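Note that validation loss rises monotonically after epoch 1 while accuracy and F1 keep improving, a common sign of the model becoming over-confident on its errors. Which checkpoint counts as "best" therefore depends on the selection metric; a quick sketch over the rows above:

```python
# (epoch, validation_loss, f1) taken from the table above
rows = [
    (1, 0.7640, 0.6277),
    (2, 0.8456, 0.6547),
    (3, 1.0284, 0.6729),
    (4, 1.5533, 0.6655),
    (5, 1.9044, 0.6740),
]
best_by_f1 = max(rows, key=lambda r: r[2])[0]    # highest F1
best_by_loss = min(rows, key=lambda r: r[1])[0]  # lowest validation loss
print(best_by_f1, best_by_loss)  # epoch 5 by F1, epoch 1 by loss
```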
0d81c7fe996140894d0a6d2e6b154cef
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
thilinamethsahan Dreambooth model trained by Thilinameths with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/Thilinameths/thilinamethsahan/resolve/main/sample_images/00032-3236085139-highly_detailed_portrait_thilina_holding_rifle_in_gta_v_stephen_bliss_unreal_engine_fantasy_art_by_greg_rutkowski_loish_rhads_fe.png) ![1](https://huggingface.co/Thilinameths/thilinamethsahan/resolve/main/sample_images/00045-1477352525-thilina_in_GTA_art_style.png) ![2](https://huggingface.co/Thilinameths/thilinamethsahan/resolve/main/sample_images/00053-2991734913-a_portrait_of_thilina_as_gta_5_cover_art.png) ![3](https://huggingface.co/Thilinameths/thilinamethsahan/resolve/main/sample_images/thilinag.png)
e533eff933ecca7487b162cc4540a68e
cc-by-sa-4.0
['japanese', 'wikipedia', 'question-answering', 'dependency-parsing']
false
Model Description This is a BERT model pretrained on Japanese Wikipedia texts for dependency parsing (head detection on long-unit words) cast as question answering, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Put [MASK] inside `context` to disambiguate which occurrence is meant when the word given as `question` appears more than once.
7f8e1423b661416152df11503c222242
cc-by-sa-4.0
['japanese', 'wikipedia', 'question-answering', 'dependency-parsing']
false
How to Use ```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
  def __init__(self,bert):
    import os
    from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
      AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
    self.tokenizer=AutoTokenizer.from_pretrained(bert)
    self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
    x=AutoModelForTokenClassification.from_pretrained
    if os.path.isdir(bert):
      d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
    else:
      from transformers.utils import cached_file
      c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
      d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
      s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
      t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
    self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
      aggregation_strategy="simple")
    self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
  def __call__(self,text):
    import numpy,torch,ufal.chu_liu_edmonds
    w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
    z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
    r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
    v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
    for i,t in enumerate(v):
      q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
      c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
    b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
    with torch.no_grad():
      d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
        token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
    s,e=d.start_logits.tolist(),d.end_logits.tolist()
    for i in range(n):
      for j in range(n):
        m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
    h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    if [0 for i in h if i==0]!=[0]:
      i=([p for s,e,p in w]+["root"]).index("root")
      j=i+1 if i<n else numpy.nanargmax(m[:,0])
      m[0:j,0]=m[j+1:,0]=numpy.nan
      h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
    u="# text = "+text.replace("\n"," ")+"\n"
    for i,(s,e,p) in enumerate(w,1):
      p="root" if h[i]==0 else "dep" if p=="root" else p
      u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
        str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
    return u+"\n"

nlp=TransformersUD("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
2096509f89e9175fd27bee9e2120bcc7
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small sv-SE - KTH This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3310 - Wer: 19.1193
708980b7894f025c805a0e694e9c96b0
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1015 | 1.29 | 1000 | 0.2880 | 20.4134 | | 0.0387 | 2.59 | 2000 | 0.2959 | 19.6810 | | 0.0126 | 3.88 | 3000 | 0.3103 | 19.2990 | | 0.0035 | 5.17 | 4000 | 0.3310 | 19.1193 |
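The card does not show how Wer in the table is computed; evaluation scripts for these models typically use `jiwer` or an equivalent word-level edit distance. A self-contained sketch of that metric (the example strings below are made up for illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate (in percent) via word-level edit distance."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * d[len(r)][len(h)] / len(r)

print(wer("hej hur mår du", "hej hur mar du"))  # 25.0 (1 substitution in 4 words)
```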
ba72b48019d055131d50645db98bf5d7
creativeml-openrail-m
[]
false
Preview Images https://imgur.com/a/vwO6f5A

IMPORTANT INSTRUCTIONS!! This model was trained on SD base 1.5, BUT it also works for 1.4, as both share the same CLIP encoder.

Install instructions. Simply place the chimp.pt file inside the \stable-diffusion-webui\models\hypernetworks folder. Load the model inside the Automatic1111 interface under Settings, Hypernetwork.

Use instructions. Use between 0.55-1.0 hypernetwork strength; more strength gives a more real chimp look, while 0.55 gives a more human-form chimp look. I find 0.7 works well enough. Use the DPM++ SDE Karras sampler with 15 steps and a CFG of 6.0. Always include the word chimp somewhere in the prompt. For people, always preface the subject with chimp, for example "chimp man walking", "chimp girl playing in the backyard", etc.

VERY IMPORTANT! Always describe the background in some detail or you WILL get a very generic, boring background. So, for example, DON'T just say "an old chimp man". DO say "an old chimp man inside a rustic hut".

Some fun info. People have been sleeping on hypernetworks, and I plan to change that. Hopefully the flexibility of this hypernetwork will show everyone their true potential. Because this model is a hypernetwork, it can be used in conjunction with ANY model based on the 1.4 CLIP architecture. That means this model will work on any custom 1.4 or 1.5 model, like the Modern Disney model, or Classic Disney, etc. For example, let's say you want to load Classic Disney as the base. Simply load the Classic Disney model and preface every prompt with classic disney, as per that model's instructions. Then follow up with my "chimp" tag as instructed, once you have loaded the hypernetwork. So the prompt should look something like this: "classic disney. chimp girl playing in the backyard." Adjust the hypernetwork strength to 0.5 for a more cartoon look or 0.7 for a realistic chimp look. Have fun folks!
1d7074023a33e754ae4fb4599ecc5f79
wtfpl
[]
false
Embedding in a Dishonored-ish style. Works really well with other embeddings for a dystopian, sad, painterly vibe. No training settings this time, as I completely forgot to write those down. My apologies. ![20593-3049434783-headshot portrait painting of assassin, art by thishonor, dramatic lighting.png](https://s3.amazonaws.com/moonup/production/uploads/1672151679576-6312579fc7577b68d90a7646.png) ![20600-843573381-victorian london street, big ben, art by thishonor, dramatic lighting.png](https://s3.amazonaws.com/moonup/production/uploads/1672151679581-6312579fc7577b68d90a7646.png) ![20614-2662206447-victorian london street, cathedral, art by thishonor, dystopian, rainy, dramatic lighting.png](https://s3.amazonaws.com/moonup/production/uploads/1672151679412-6312579fc7577b68d90a7646.png) ![20622-1816753543-Overgrown city street, wrecks, art by thishonor, dystopian, rainy, dramatic lighting.png](https://s3.amazonaws.com/moonup/production/uploads/1672151679660-6312579fc7577b68d90a7646.png) ![20632-1466641715-gas station with motorcycles, art by thishonor, dystopian, rainy, dramatic lighting.png](https://s3.amazonaws.com/moonup/production/uploads/1672151679067-6312579fc7577b68d90a7646.png) ![20639-2862686179-cute tabby kitten looking at the camera, art by thishonor, dramatic lighting.png](https://s3.amazonaws.com/moonup/production/uploads/1672151679583-6312579fc7577b68d90a7646.png) ![20582-3919106540-headshot portrait of woman wearing ball dress, art by thishonor.png](https://s3.amazonaws.com/moonup/production/uploads/1672151679575-6312579fc7577b68d90a7646.png)
597deae9b63f060667f233dc58ffaad7