Dataset schema:

| Column | Type | Range |
|:--|:--|:--|
| license | string | length 2–30 |
| tags | string | length 2–513 |
| is_nc | bool | 1 class |
| readme_section | string | length 201–597k |
| hash | string | length 32–32 |
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6348 | 2.2921 | 0 |
| 2.3547 | 2.1969 | 1 |
| 2.2381 | 2.0656 | 2 |
| 2.1568 | 2.0696 | 3 |
| 2.1510 | 1.9786 | 4 |
| 2.1493 | 2.0436 | 5 |
| 2.1469 | 2.0735 | 6 |
| 2.1520 | 2.0695 | 7 |
| 2.1617 | 2.0451 | 8 |
| 2.1600 | 2.0358 | 9 |
feef6dcd1aad54ee2cb992bb8514316b
apache-2.0
['generated_from_trainer']
false
bert-small-finetuned-cuad-full

This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the cuad dataset. It achieves the following results on the evaluation set:
- Loss: 0.0274
423b716297037c2401089a4265918249
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.0323 | 1.0 | 47569 | 0.0280 |
| 0.0314 | 2.0 | 95138 | 0.0265 |
| 0.0276 | 3.0 | 142707 | 0.0274 |
c41c3ed8a763b0c270f242366bb94841
mit
['torch']
false
BERT BASE (cased) finetuned on Bulgarian part-of-speech data Pretrained model on Bulgarian language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it does make a difference between bulgarian and Bulgarian. The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/). It was finetuned on public part-of-speech Bulgarian data. Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
724a6c293e8a091bcc544208a99d5e7f
mit
['torch']
false
How to use

Here is how to use this model in PyTorch:

```python
from transformers import pipeline

model = pipeline(
    'token-classification',
    model='rmihaylov/bert-base-pos-theseus-bg',
    tokenizer='rmihaylov/bert-base-pos-theseus-bg',
    device=0,
    revision=None)

output = model('Здравей, аз се казвам Иван.')  # "Hello, my name is Ivan."
print(output)
```

Output:

```python
[{'end': 7, 'entity': 'INTJ', 'index': 1, 'score': 0.9640711, 'start': 0, 'word': '▁Здравей'},
 {'end': 8, 'entity': 'PUNCT', 'index': 2, 'score': 0.9998927, 'start': 7, 'word': ','},
 {'end': 11, 'entity': 'PRON', 'index': 3, 'score': 0.9998872, 'start': 8, 'word': '▁аз'},
 {'end': 14, 'entity': 'PRON', 'index': 4, 'score': 0.99990034, 'start': 11, 'word': '▁се'},
 {'end': 21, 'entity': 'VERB', 'index': 5, 'score': 0.99989736, 'start': 14, 'word': '▁казвам'},
 {'end': 26, 'entity': 'PROPN', 'index': 6, 'score': 0.99990785, 'start': 21, 'word': '▁Иван'},
 {'end': 27, 'entity': 'PUNCT', 'index': 7, 'score': 0.9999685, 'start': 26, 'word': '.'}]
```
6a6cec8a82e378f656f0fcc6f4749160
mit
[]
false
lofa on Stable Diffusion This is the `<lofa>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<lofa> 0](https://huggingface.co/sd-concepts-library/lofa/resolve/main/concept_images/2.jpeg) ![<lofa> 1](https://huggingface.co/sd-concepts-library/lofa/resolve/main/concept_images/0.jpeg) ![<lofa> 2](https://huggingface.co/sd-concepts-library/lofa/resolve/main/concept_images/1.jpeg) ![<lofa> 3](https://huggingface.co/sd-concepts-library/lofa/resolve/main/concept_images/3.jpeg) ![<lofa> 4](https://huggingface.co/sd-concepts-library/lofa/resolve/main/concept_images/4.jpeg)
3dd2b83ca684d884ef17d9801d75ebf1
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-fine-tuned-emotions

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.1377
- Accuracy: 0.9335
- F1 Score: 0.9338
4777709b99652d5091827e3c6308328e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
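The `linear` scheduler decays the learning rate from its initial value to zero over training, optionally after a warmup ramp. A minimal sketch of that schedule (an illustrative implementation, not the `transformers` one; `total_steps=250` is an assumed value for demonstration):

```python
def linear_lr(step, base_lr=2e-4, warmup_steps=0, total_steps=250):
    """Linear schedule: ramp up over `warmup_steps`, then decay to 0.

    base_lr mirrors the learning_rate above; total_steps is an
    illustrative assumption, not a value stated in this card.
    """
    if step < warmup_steps:
        # warmup phase: scale linearly from 0 to base_lr
        return base_lr * step / max(1, warmup_steps)
    # decay phase: scale linearly from base_lr down to 0
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0), linear_lr(125), linear_lr(250))
```

At step 0 the rate is the full 2e-4, at the halfway point it is half that, and at the final step it reaches 0.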
db8e1511edf06f1b112c7aa060ff72e4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.478 | 1.0 | 125 | 0.1852 | 0.931 | 0.9309 |
| 0.1285 | 2.0 | 250 | 0.1377 | 0.9335 | 0.9338 |
cab551f8829da7664b1cd3235735c64a
mit
['generated_from_trainer']
false
Multilingual Verdict Classifier

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on 2,500 deduplicated multilingual verdicts from the [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into 65 languages with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/). It achieves the following results on the evaluation set of 1,000 such verdicts (duplicates included here, to reflect the true distribution):
- Loss: 0.2238
- F1 Macro: 0.8540
- F1 Misinformation: 0.9798
- F1 Factual: 0.9889
- F1 Other: 0.5934
- Prec Macro: 0.8348
- Prec Misinformation: 0.9860
- Prec Factual: 0.9889
- Prec Other: 0.5294
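The F1 Macro figure is simply the unweighted mean of the three per-class F1 scores, which is easy to check:

```python
# Per-class F1 scores copied from the evaluation results above
f1 = {"misinformation": 0.9798, "factual": 0.9889, "other": 0.5934}

# Macro F1 averages classes equally, regardless of how frequent each class is
f1_macro = sum(f1.values()) / len(f1)
print(round(f1_macro, 4))  # 0.854
```

This matches the reported 0.8540; the low "Other" F1 is what drags the macro average well below the misinformation/factual scores.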
0f921edb5b508b1c10f3eb2a2e698667
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 162525
- num_epochs: 1000
4ec6393ba2f3d8f97b0b84d974a9e618
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
| 1.1109 | 0.1 | 2000 | 1.2166 | 0.0713 | 0.1497 | 0.0 | 0.0640 | 0.2451 | 0.7019 | 0.0 | 0.0334 |
| 0.9551 | 0.2 | 4000 | 0.7801 | 0.3611 | 0.8889 | 0.0 | 0.1943 | 0.3391 | 0.8915 | 0.0 | 0.1259 |
| 0.9275 | 0.3 | 6000 | 0.7712 | 0.3468 | 0.9123 | 0.0 | 0.1282 | 0.3304 | 0.9051 | 0.0 | 0.0862 |
| 0.8881 | 0.39 | 8000 | 0.5386 | 0.3940 | 0.9524 | 0.0 | 0.2297 | 0.3723 | 0.9748 | 0.0 | 0.1420 |
| 0.7851 | 0.49 | 10000 | 0.3298 | 0.6886 | 0.9626 | 0.7640 | 0.3393 | 0.6721 | 0.9798 | 0.7727 | 0.2639 |
| 0.639 | 0.59 | 12000 | 0.2156 | 0.7847 | 0.9633 | 0.9355 | 0.4554 | 0.7540 | 0.9787 | 0.9062 | 0.3770 |
| 0.5677 | 0.69 | 14000 | 0.1682 | 0.7877 | 0.9694 | 0.9667 | 0.4270 | 0.7763 | 0.9745 | 0.9667 | 0.3878 |
| 0.5218 | 0.79 | 16000 | 0.1475 | 0.8037 | 0.9692 | 0.9667 | 0.4752 | 0.7804 | 0.9812 | 0.9667 | 0.3934 |
| 0.4682 | 0.89 | 18000 | 0.1458 | 0.8097 | 0.9734 | 0.9667 | 0.4889 | 0.7953 | 0.9791 | 0.9667 | 0.44 |
| 0.4188 | 0.98 | 20000 | 0.1416 | 0.8370 | 0.9769 | 0.9724 | 0.5618 | 0.8199 | 0.9826 | 0.9670 | 0.5102 |
| 0.3735 | 1.08 | 22000 | 0.1624 | 0.8094 | 0.9698 | 0.9368 | 0.5217 | 0.7780 | 0.9823 | 0.89 | 0.4615 |
| 0.3242 | 1.18 | 24000 | 0.1648 | 0.8338 | 0.9769 | 0.9727 | 0.5517 | 0.8167 | 0.9826 | 0.9570 | 0.5106 |
| 0.2785 | 1.28 | 26000 | 0.1843 | 0.8261 | 0.9739 | 0.9780 | 0.5263 | 0.8018 | 0.9836 | 0.9674 | 0.4545 |
| 0.25 | 1.38 | 28000 | 0.1975 | 0.8344 | 0.9744 | 0.9834 | 0.5455 | 0.8072 | 0.9859 | 0.9780 | 0.4576 |
| 0.2176 | 1.48 | 30000 | 0.1849 | 0.8209 | 0.9691 | 0.9889 | 0.5047 | 0.7922 | 0.9846 | 0.9889 | 0.4030 |
| 0.1966 | 1.58 | 32000 | 0.2119 | 0.8194 | 0.9685 | 0.9944 | 0.4954 | 0.7920 | 0.9846 | 1.0 | 0.3913 |
| 0.1738 | 1.67 | 34000 | 0.2110 | 0.8352 | 0.9708 | 0.9944 | 0.5405 | 0.8035 | 0.9881 | 1.0 | 0.4225 |
| 0.1625 | 1.77 | 36000 | 0.2152 | 0.8165 | 0.9709 | 0.9834 | 0.4950 | 0.7905 | 0.9835 | 0.9780 | 0.4098 |
| 0.1522 | 1.87 | 38000 | 0.2300 | 0.8097 | 0.9697 | 0.9832 | 0.4762 | 0.7856 | 0.9835 | 0.9888 | 0.3846 |
| 0.145 | 1.97 | 40000 | 0.1955 | 0.8519 | 0.9774 | 0.9889 | 0.5895 | 0.8280 | 0.9860 | 0.9889 | 0.5091 |
| 0.1248 | 2.07 | 42000 | 0.2308 | 0.8149 | 0.9703 | 0.9889 | 0.4854 | 0.7897 | 0.9835 | 0.9889 | 0.3968 |
| 0.1186 | 2.17 | 44000 | 0.2368 | 0.8172 | 0.9733 | 0.9834 | 0.4948 | 0.7942 | 0.9836 | 0.9780 | 0.4211 |
| 0.1122 | 2.26 | 46000 | 0.2401 | 0.7968 | 0.9804 | 0.8957 | 0.5143 | 0.8001 | 0.9849 | 1.0 | 0.4154 |
| 0.1099 | 2.36 | 48000 | 0.2290 | 0.8119 | 0.9647 | 0.9834 | 0.4874 | 0.7777 | 0.9880 | 0.9780 | 0.3671 |
| 0.1093 | 2.46 | 50000 | 0.2256 | 0.8247 | 0.9745 | 0.9889 | 0.5106 | 0.8053 | 0.9825 | 0.9889 | 0.4444 |
| 0.1053 | 2.56 | 52000 | 0.2416 | 0.8456 | 0.9799 | 0.9889 | 0.5679 | 0.8434 | 0.9805 | 0.9889 | 0.5610 |
| 0.1049 | 2.66 | 54000 | 0.2850 | 0.7585 | 0.9740 | 0.8902 | 0.4112 | 0.7650 | 0.9802 | 0.9865 | 0.3284 |
| 0.098 | 2.76 | 56000 | 0.2828 | 0.8049 | 0.9642 | 0.9889 | 0.4615 | 0.7750 | 0.9856 | 0.9889 | 0.3506 |
| 0.0962 | 2.86 | 58000 | 0.2238 | 0.8540 | 0.9798 | 0.9889 | 0.5934 | 0.8348 | 0.9860 | 0.9889 | 0.5294 |
| 0.0975 | 2.95 | 60000 | 0.2494 | 0.8249 | 0.9715 | 0.9889 | 0.5143 | 0.7967 | 0.9858 | 0.9889 | 0.4154 |
| 0.0877 | 3.05 | 62000 | 0.2464 | 0.8274 | 0.9733 | 0.9889 | 0.5200 | 0.8023 | 0.9847 | 0.9889 | 0.4333 |
| 0.0848 | 3.15 | 64000 | 0.2338 | 0.8263 | 0.9740 | 0.9889 | 0.5161 | 0.8077 | 0.9814 | 0.9889 | 0.4528 |
| 0.0859 | 3.25 | 66000 | 0.2335 | 0.8365 | 0.9750 | 0.9889 | 0.5455 | 0.8108 | 0.9859 | 0.9889 | 0.4576 |
| 0.084 | 3.35 | 68000 | 0.2067 | 0.8343 | 0.9763 | 0.9889 | 0.5376 | 0.8148 | 0.9837 | 0.9889 | 0.4717 |
| 0.0837 | 3.45 | 70000 | 0.2516 | 0.8249 | 0.9746 | 0.9889 | 0.5111 | 0.8097 | 0.9803 | 0.9889 | 0.46 |
| 0.0809 | 3.54 | 72000 | 0.2948 | 0.8258 | 0.9728 | 0.9944 | 0.5102 | 0.8045 | 0.9824 | 1.0 | 0.4310 |
| 0.0833 | 3.64 | 74000 | 0.2457 | 0.8494 | 0.9744 | 0.9944 | 0.5794 | 0.8173 | 0.9893 | 1.0 | 0.4627 |
| 0.0796 | 3.74 | 76000 | 0.3188 | 0.8277 | 0.9733 | 0.9889 | 0.5208 | 0.8059 | 0.9825 | 0.9889 | 0.4464 |
| 0.0821 | 3.84 | 78000 | 0.2642 | 0.8343 | 0.9714 | 0.9944 | 0.5370 | 0.8045 | 0.9870 | 1.0 | 0.4265 |
4425cb8f6094a9af178668e9a22dcc4a
mit
[]
false
DarkPlane on Stable Diffusion

This is the `<DarkPlane>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<DarkPlane> 0](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/9.jpeg)
![<DarkPlane> 1](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/10.jpeg)
![<DarkPlane> 2](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/7.jpeg)
![<DarkPlane> 3](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/1.jpeg)
![<DarkPlane> 4](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/2.jpeg)
![<DarkPlane> 5](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/8.jpeg)
![<DarkPlane> 6](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/0.jpeg)
![<DarkPlane> 7](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/20.jpeg)
![<DarkPlane> 8](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/17.jpeg)
![<DarkPlane> 9](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/12.jpeg)
![<DarkPlane> 10](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/3.jpeg)
![<DarkPlane> 11](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/11.jpeg)
![<DarkPlane> 12](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/4.jpeg)
![<DarkPlane> 13](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/19.jpeg)
![<DarkPlane> 14](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/13.jpeg)
![<DarkPlane> 15](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/5.jpeg)
![<DarkPlane> 16](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/15.jpeg)
![<DarkPlane> 17](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/18.jpeg)
![<DarkPlane> 18](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/14.jpeg)
![<DarkPlane> 19](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/16.jpeg)
![<DarkPlane> 20](https://huggingface.co/sd-concepts-library/darkplane/resolve/main/concept_images/6.jpeg)
f09f36a247957fcd0780aca555db48da
gpl-2.0
['corenlp']
false
CoreNLP model for French

CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations. Find out more on [our website](https://stanfordnlp.github.io/CoreNLP) and in our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).

This card and repo were automatically prepared with `hugging_corenlp.py` in the `stanfordnlp/huggingface-models` repo.

Last updated 2023-01-21 01:37:03.293
fdbbafc2e4ccd66052c3b62dd241f7dd
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-auto_and_commute-3-16-5

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
80487f90205185690d95f6f6cdc73ee3
mit
['generated_from_trainer']
false
afro-xlmr-large AfroXLMR-large was created by MLM adaptation of XLM-R-large model on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Nigerian-Pidgin, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu) covering the major African language families and 3 high resource languages (Arabic, French, and English).
f8b6f294794b0877ad30bffc86fa8c56
mit
['generated_from_trainer']
false
Eval results on MasakhaNER (F-score)

| language | XLM-R-miniLM | XLM-R-base | XLM-R-large | afro-xlmr-large | afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini |
|-|-|-|-|-|-|-|-|
| amh | 69.5 | 70.6 | 76.2 | 79.7 | 76.1 | 70.1 | 69.7 |
| hau | 74.5 | 89.5 | 90.5 | 91.4 | 91.2 | 91.4 | 87.7 |
| ibo | 81.9 | 84.8 | 84.1 | 87.7 | 87.4 | 86.6 | 83.5 |
| kin | 68.6 | 73.3 | 73.8 | 79.1 | 78.0 | 77.5 | 74.1 |
| lug | 64.7 | 79.7 | 81.6 | 86.7 | 82.9 | 83.2 | 77.4 |
| luo | 11.7 | 74.9 | 73.6 | 78.1 | 75.1 | 75.4 | 17.5 |
| pcm | 83.2 | 87.3 | 89.0 | 91.0 | 89.6 | 89.0 | 85.5 |
| swa | 86.3 | 87.4 | 89.4 | 90.4 | 88.6 | 88.7 | 86.0 |
| wol | 51.7 | 63.9 | 67.9 | 69.6 | 67.4 | 65.9 | 59.0 |
| yor | 72.0 | 78.3 | 78.9 | 85.2 | 82.1 | 81.3 | 75.1 |
| avg | 66.4 | 79.0 | 80.5 | 83.9 | 81.8 | 80.9 | 71.6 |
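The `avg` row is the plain mean of the ten per-language scores in each column; for instance, for the afro-xlmr-large column:

```python
# afro-xlmr-large F-scores per language, copied from the table above
# (amh, hau, ibo, kin, lug, luo, pcm, swa, wol, yor)
afro_xlmr_large = [79.7, 91.4, 87.7, 79.1, 86.7, 78.1, 91.0, 90.4, 69.6, 85.2]

avg = sum(afro_xlmr_large) / len(afro_xlmr_large)
print(round(avg, 1))  # 83.9
```

This reproduces the 83.9 reported in the table; the same unweighted average holds for the other columns.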
7f1db1b1fd84ca602557a834ad3b804a
mit
['generated_from_trainer']
false
BibTeX entry and citation info

```
@inproceedings{alabi-etal-2022-adapting,
    title = "Adapting Pre-trained Language Models to {A}frican Languages via Multilingual Adaptive Fine-Tuning",
    author = "Alabi, Jesujoba O. and Adelani, David Ifeoluwa and Mosbach, Marius and Klakow, Dietrich",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.382",
    pages = "4336--4349",
    abstract = "Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT) {---} fine-tuning a multilingual PLM on monolingual texts of a language using the pre-training objective. However, adapting to target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models because they have been specialized for a single language. In this paper, we perform multilingual adaptive fine-tuning on 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we removed vocabulary tokens from the embedding layer that corresponds to non-African writing scripts before MAFT, thus reducing the model size by around 50{\%}. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive to applying LAFT on individual languages while requiring significantly less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter efficient fine-tuning methods.",
}
```
aa5021d2197a72c77e316f3c6e838cce
apache-2.0
['generated_from_trainer']
false
wav2vec2-10

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.0354
- Wer: 1.0
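A Wer (word error rate) of 1.0 means the word-level edit distance between hypotheses and references equals the total number of reference words, i.e. the model recovers essentially no words. A minimal sketch of the metric (an illustrative implementation, not the one used to produce this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between the reference prefix processed
    # so far and the first j hypothesis words (one-row dynamic programming).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,          # delete a reference word
                      d[j - 1] + 1,      # insert a hypothesis word
                      prev + (r != h))   # substitute (free if words match)
            prev, d[j] = d[j], cur
    return d[-1] / max(1, len(ref))

print(wer("the cat sat", "the cat sat"))   # 0.0
print(wer("the cat sat", "a dog ran by"))  # >= 1.0 when nothing matches
```

Note that WER can exceed 1.0 when the hypothesis is longer than the reference and nothing matches.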
c1913c630a3966fe3b40126ca271a72d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
a3ace68023effaac9a83dc5fb2bece99
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.2231 | 0.78 | 200 | 3.0442 | 1.0 |
| 2.8665 | 1.57 | 400 | 3.0081 | 1.0 |
| 2.8596 | 2.35 | 600 | 3.0905 | 1.0 |
| 2.865 | 3.14 | 800 | 3.0443 | 1.0 |
| 2.8613 | 3.92 | 1000 | 3.0316 | 1.0 |
| 2.8601 | 4.71 | 1200 | 3.0574 | 1.0 |
| 2.8554 | 5.49 | 1400 | 3.0261 | 1.0 |
| 2.8592 | 6.27 | 1600 | 3.0785 | 1.0 |
| 2.8606 | 7.06 | 1800 | 3.1129 | 1.0 |
| 2.8547 | 7.84 | 2000 | 3.0647 | 1.0 |
| 2.8565 | 8.63 | 2200 | 3.0624 | 1.0 |
| 2.8633 | 9.41 | 2400 | 2.9900 | 1.0 |
| 2.855 | 10.2 | 2600 | 3.0084 | 1.0 |
| 2.8581 | 10.98 | 2800 | 3.0092 | 1.0 |
| 2.8545 | 11.76 | 3000 | 3.0299 | 1.0 |
| 2.8583 | 12.55 | 3200 | 3.0293 | 1.0 |
| 2.8536 | 13.33 | 3400 | 3.0566 | 1.0 |
| 2.8556 | 14.12 | 3600 | 3.0385 | 1.0 |
| 2.8573 | 14.9 | 3800 | 3.0098 | 1.0 |
| 2.8551 | 15.69 | 4000 | 3.0623 | 1.0 |
| 2.8546 | 16.47 | 4200 | 3.0964 | 1.0 |
| 2.8569 | 17.25 | 4400 | 3.0648 | 1.0 |
| 2.8543 | 18.04 | 4600 | 3.0377 | 1.0 |
| 2.8532 | 18.82 | 4800 | 3.0454 | 1.0 |
| 2.8579 | 19.61 | 5000 | 3.0301 | 1.0 |
| 2.8532 | 20.39 | 5200 | 3.0364 | 1.0 |
| 2.852 | 21.18 | 5400 | 3.0187 | 1.0 |
| 2.8561 | 21.96 | 5600 | 3.0172 | 1.0 |
| 2.8509 | 22.75 | 5800 | 3.0420 | 1.0 |
| 2.8551 | 23.53 | 6000 | 3.0309 | 1.0 |
| 2.8552 | 24.31 | 6200 | 3.0416 | 1.0 |
| 2.8521 | 25.1 | 6400 | 3.0469 | 1.0 |
| 2.852 | 25.88 | 6600 | 3.0489 | 1.0 |
| 2.854 | 26.67 | 6800 | 3.0394 | 1.0 |
| 2.8572 | 27.45 | 7000 | 3.0336 | 1.0 |
| 2.8502 | 28.24 | 7200 | 3.0363 | 1.0 |
| 2.8557 | 29.02 | 7400 | 3.0304 | 1.0 |
| 2.8522 | 29.8 | 7600 | 3.0354 | 1.0 |
143c35efe63711012c30b88b0c120034
apache-2.0
['generated_from_keras_callback']
false
afrodp95/distilbert-base-uncased-finetuned-job-skills-ner

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0923
- Validation Loss: 0.1313
- Train Precision: 0.3601
- Train Recall: 0.4922
- Train F1: 0.4159
- Train Accuracy: 0.9522
- Epoch: 5
373858b0682aa32ce70538e9f43a8b33
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1386, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
794eafa00699c8cf6a8f327dc00167ad
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3257 | 0.1935 | 0.3122 | 0.2144 | 0.2542 | 0.9521 | 0 |
| 0.1564 | 0.1464 | 0.3503 | 0.3423 | 0.3463 | 0.9546 | 1 |
| 0.1257 | 0.1365 | 0.3593 | 0.4893 | 0.4143 | 0.9522 | 2 |
| 0.1102 | 0.1318 | 0.3607 | 0.4692 | 0.4079 | 0.9521 | 3 |
| 0.1002 | 0.1305 | 0.3504 | 0.4941 | 0.4100 | 0.9515 | 4 |
| 0.0923 | 0.1313 | 0.3601 | 0.4922 | 0.4159 | 0.9522 | 5 |
280a1e7639ac5028bd29cbec43447b82
apache-2.0
['automatic-speech-recognition', 'et']
false
exp_w2v2t_et_vp-nl_s354 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
9f55ee25d5ebce706c6c14896cbf3224
apache-2.0
['generated_from_trainer']
false
bart-large-commentaries_hdwriter

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.1619
- Rouge1: 26.1101
- Rouge2: 9.928
- Rougel: 22.9007
- Rougelsum: 23.117
- Gen Len: 15.9536
944ece00fcac0df2d1e7e6800de5ec96
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6237 | 1.0 | 5072 | 2.5309 | 26.4063 | 9.1795 | 22.6699 | 22.9125 | 17.3103 |
| 1.8808 | 2.0 | 10144 | 2.5049 | 25.3706 | 8.7568 | 21.8594 | 22.1233 | 15.8579 |
| 1.3084 | 3.0 | 15216 | 2.6680 | 26.6284 | 9.9914 | 23.1477 | 23.3625 | 16.8832 |
| 0.9247 | 4.0 | 20288 | 2.8923 | 26.3827 | 9.8217 | 22.9524 | 23.1651 | 15.4529 |
| 0.692 | 5.0 | 25360 | 3.1619 | 26.1101 | 9.928 | 22.9007 | 23.117 | 15.9536 |
9378c9a720c41de885b0528e72b95abb
apache-2.0
['translation']
false
run-fra

* source group: Rundi
* target group: French
* OPUS readme: [run-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md)
* model: transformer-align
* source language(s): run
* target language(s): fra
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.eval.txt)
555587af4dab8212d38e8856f5dc9b2c
apache-2.0
['translation']
false
System Info:
- hf_name: run-fra
- source_languages: run
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'fr']
- src_constituents: {'run'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: fra
- short_pair: rn-fr
- chrF2_score: 0.397
- bleu: 18.2
- brevity_penalty: 1.0
- ref_len: 7496.0
- src_name: Rundi
- tgt_name: French
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: fr
- prefer_old: False
- long_pair: run-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
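The brevity_penalty of 1.0 above follows from BLEU's definition: the penalty only applies when the candidate translation is shorter than the reference (here ref_len is 7496 tokens). A minimal sketch of the formula, illustrative rather than the exact scoring code used for this card:

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU brevity penalty: 1 when the candidate corpus is at least as
    long as the reference, exp(1 - r/c) when it is shorter."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

print(brevity_penalty(7500, 7496))  # 1.0 -- no penalty, as reported above
print(brevity_penalty(3748, 7496))  # exp(-1), a steep penalty at half length
```

BLEU multiplies this factor into the geometric mean of the n-gram precisions, so a reported penalty of 1.0 means the score was not reduced for short output.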
57ab89df96f7e9d88ed7036e9a7251d5
apache-2.0
['generated_from_trainer']
false
distilbert_add_pre-training-dim-96

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikitext wikitext-103-raw-v1 dataset. It achieves the following results on the evaluation set:
- Loss: 6.6092
- Accuracy: 0.1494
4fb8d94fb84fb970eed970e38fa58515
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 15
- mixed_precision_training: Native AMP
f5fa74d4882d99f64bfd92be10270e54
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 14.685 | 1.0 | 3573 | 9.3922 | 0.1240 |
| 8.0255 | 2.0 | 7146 | 7.1510 | 0.1315 |
| 7.0152 | 3.0 | 10719 | 6.7861 | 0.1482 |
| 6.8127 | 4.0 | 14292 | 6.7053 | 0.1493 |
| 6.74 | 5.0 | 17865 | 6.6695 | 0.1474 |
| 6.7067 | 6.0 | 21438 | 6.6431 | 0.1491 |
| 6.6871 | 7.0 | 25011 | 6.6204 | 0.1483 |
| 6.6748 | 8.0 | 28584 | 6.6250 | 0.1473 |
| 6.6649 | 9.0 | 32157 | 6.6108 | 0.1486 |
| 6.6596 | 10.0 | 35730 | 6.6140 | 0.1497 |
| 6.6536 | 11.0 | 39303 | 6.6067 | 0.1493 |
| 6.6483 | 12.0 | 42876 | 6.6140 | 0.1489 |
| 6.6463 | 13.0 | 46449 | 6.6096 | 0.1484 |
| 6.6434 | 14.0 | 50022 | 6.5570 | 0.1526 |
| 6.6414 | 15.0 | 53595 | 6.5836 | 0.1526 |
3fc9a4a213ccb761852c69ecf6d91681
apache-2.0
['generated_from_trainer']
false
mt5_fill_puntuation

This model is a fine-tuned version of [jamie613/mt5_fill_puntuation](https://huggingface.co/jamie613/mt5_fill_puntuation) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0717
782c9a18488d89012f1c6c6dd6ee2b57
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
b2b2d06a931fbc0f9a6b4858e4090b76
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0918 | 0.04 | 500 | 0.0803 |
| 0.0894 | 0.07 | 1000 | 0.0773 |
| 0.0905 | 0.11 | 1500 | 0.0822 |
| 0.0908 | 0.15 | 2000 | 0.0833 |
| 0.0868 | 0.18 | 2500 | 0.0840 |
| 0.09 | 0.22 | 3000 | 0.0811 |
| 0.0868 | 0.26 | 3500 | 0.0735 |
| 0.0869 | 0.29 | 4000 | 0.0805 |
| 0.0874 | 0.33 | 4500 | 0.0742 |
| 0.088 | 0.37 | 5000 | 0.0749 |
| 0.0884 | 0.4 | 5500 | 0.0730 |
| 0.0861 | 0.44 | 6000 | 0.0749 |
| 0.0804 | 0.48 | 6500 | 0.0739 |
| 0.0845 | 0.51 | 7000 | 0.0717 |
| 0.0861 | 0.55 | 7500 | 0.0743 |
| 0.0812 | 0.59 | 8000 | 0.0726 |
| 0.0824 | 0.62 | 8500 | 0.0729 |
| 0.0836 | 0.66 | 9000 | 0.0751 |
| 0.079 | 0.7 | 9500 | 0.0731 |
| 0.0806 | 0.73 | 10000 | 0.0725 |
| 0.0798 | 0.77 | 10500 | 0.0749 |
| 0.0794 | 0.81 | 11000 | 0.0725 |
| 0.0795 | 0.84 | 11500 | 0.0726 |
| 0.0755 | 0.88 | 12000 | 0.0732 |
| 0.0815 | 0.92 | 12500 | 0.0722 |
| 0.0776 | 0.95 | 13000 | 0.0719 |
| 0.0838 | 0.99 | 13500 | 0.0717 |
45bbb17fd789b8c58838dd841395410e
mit
['generated_from_keras_callback']
false
deepiit98/Catalan_language-clustered

This model is a fine-tuned version of [nandysoham16/13-clustered_aug](https://huggingface.co/nandysoham16/13-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.5877
- Train End Logits Accuracy: 0.8681
- Train Start Logits Accuracy: 0.8507
- Validation Loss: 0.4207
- Validation End Logits Accuracy: 0.8182
- Validation Start Logits Accuracy: 0.8182
- Epoch: 0
1cce319ac6488bcfd585772744569f56
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.5877 | 0.8681 | 0.8507 | 0.4207 | 0.8182 | 0.8182 | 0 |
1981b366c370e11afb8ce0421fc3b4a8
creativeml-openrail-m
['text-to-image']
false
Roy Dreambooth model trained by duja1 with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model.

You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: r123oy (use that in your prompt)

![r123oy 0](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%281%29.jpg)
![r123oy 1](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%282%29.jpg)
![r123oy 2](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%283%29.jpg)
![r123oy 3](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%284%29.jpg)
![r123oy 4](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%285%29.jpg)
![r123oy 5](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%286%29.jpg)
![r123oy 6](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%287%29.jpg)
![r123oy 7](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%288%29.jpg)
![r123oy 8](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%289%29.jpg)
![r123oy 9](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2810%29.jpg)
![r123oy 10](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2811%29.jpg)
![r123oy 11](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2812%29.jpg)
![r123oy 12](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2813%29.jpg)
![r123oy 13](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2814%29.jpg)
![r123oy 14](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2815%29.jpg)
![r123oy 15](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2816%29.jpg)
![r123oy 16](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2817%29.jpg)
![r123oy 17](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2818%29.jpg)
![r123oy 18](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2819%29.jpg)
![r123oy 19](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2820%29.jpg)
![r123oy 20](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2821%29.jpg)
![r123oy 21](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2822%29.jpg)
![r123oy 22](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2823%29.jpg)
![r123oy 23](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2824%29.jpg)
![r123oy 24](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2825%29.jpg)
![r123oy 25](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2826%29.jpg)
![r123oy 26](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2827%29.jpg)
![r123oy 27](https://huggingface.co/duja1/roy/resolve/main/concept_images/r123oy_%2828%29.jpg)
cb56e90b0a45535ab557c06837349d10
apache-2.0
[]
false
bert-base-en-es-pt-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model, which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
e8e2fb8a959734be41611629cd3fd427
apache-2.0
[]
false
How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
```

To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
19ac9c95eefb0be97d512b95275d05d9
mit
[]
false
Collage14 on Stable Diffusion This is the `<C14>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<C14> 0](https://huggingface.co/sd-concepts-library/collage14/resolve/main/concept_images/1.jpeg) ![<C14> 1](https://huggingface.co/sd-concepts-library/collage14/resolve/main/concept_images/5.jpeg) ![<C14> 2](https://huggingface.co/sd-concepts-library/collage14/resolve/main/concept_images/0.jpeg) ![<C14> 3](https://huggingface.co/sd-concepts-library/collage14/resolve/main/concept_images/4.jpeg) ![<C14> 4](https://huggingface.co/sd-concepts-library/collage14/resolve/main/concept_images/2.jpeg) ![<C14> 5](https://huggingface.co/sd-concepts-library/collage14/resolve/main/concept_images/3.jpeg)
0b5cf2627c48eb42c6fce5b464f4fc7b
apache-2.0
['translation']
false
opus-mt-en-he * source languages: en * target languages: he * OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.eval.txt)
585eaf263ce59c645b89edec57617694
apache-2.0
['generated_from_trainer']
false
wav2vec2-6 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.2459 - Wer: 1.0
6f431d5d65e6a3ccc6379d7fda7050e6
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 30
95e13a5c734d37b650ac47e14ef887a3
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.5873 | 1.56 | 200 | 5.4586 | 1.0 |
| 4.1846 | 3.12 | 400 | 5.2278 | 1.0 |
| 4.1711 | 4.69 | 600 | 5.3131 | 1.0 |
| 4.1581 | 6.25 | 800 | 5.2558 | 1.0 |
| 4.1275 | 7.81 | 1000 | 5.2556 | 1.0 |
| 4.1452 | 9.38 | 1200 | 5.2637 | 1.0 |
| 4.1614 | 10.94 | 1400 | 5.2847 | 1.0 |
| 4.1667 | 12.5 | 1600 | 5.2349 | 1.0 |
| 4.1471 | 14.06 | 1800 | 5.2850 | 1.0 |
| 4.1268 | 15.62 | 2000 | 5.2510 | 1.0 |
| 4.1701 | 17.19 | 2200 | 5.2605 | 1.0 |
| 4.1459 | 18.75 | 2400 | 5.2493 | 1.0 |
| 4.1411 | 20.31 | 2600 | 5.2649 | 1.0 |
| 4.1351 | 21.88 | 2800 | 5.2541 | 1.0 |
| 4.1442 | 23.44 | 3000 | 5.2459 | 1.0 |
| 4.1805 | 25.0 | 3200 | 5.2232 | 1.0 |
| 4.1262 | 26.56 | 3400 | 5.2384 | 1.0 |
| 4.145 | 28.12 | 3600 | 5.2522 | 1.0 |
| 4.142 | 29.69 | 3800 | 5.2459 | 1.0 |
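A WER pinned at 1.0 at every checkpoint means the edits needed to turn the hypotheses into the references equal the total number of reference words — the model got essentially nothing right (often a sign of collapsed or blank output). As a rough sketch of the metric itself (not the evaluation code used for this run), word error rate is word-level edit distance divided by reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "a dog sat"))    # ≈ 0.667 (two substitutions over three words)
```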
7e71483d37ee1edd4a5d4d2f665f0c2e
mit
['generated_from_trainer']
false
distilcamembert-cae-territory This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7346 - Precision: 0.7139 - Recall: 0.6835 - F1: 0.6887
87983868f748fe3efb96990486e00fa0
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 1.1749 | 1.0 | 40 | 1.0498 | 0.1963 | 0.4430 | 0.2720 |
| 0.9833 | 2.0 | 80 | 0.8853 | 0.7288 | 0.6709 | 0.6625 |
| 0.6263 | 3.0 | 120 | 0.7503 | 0.7237 | 0.6709 | 0.6689 |
| 0.3563 | 4.0 | 160 | 0.7346 | 0.7139 | 0.6835 | 0.6887 |
| 0.2253 | 5.0 | 200 | 0.7303 | 0.7139 | 0.6835 | 0.6887 |
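Note that the reported F1 (0.6887) is not the harmonic mean of the reported precision and recall (that would give ≈ 0.698): with macro averaging, F1 is the mean of per-class F1 scores, which generally differs from combining the averaged precision and recall. A small illustration with hypothetical per-class numbers (not this model's actual per-class scores):

```python
def f1(p, r):
    return 2 * p * r / (p + r) if (p + r) else 0.0

# hypothetical per-class (precision, recall) for a 3-class problem
per_class = [(0.9, 0.5), (0.6, 0.8), (0.7, 0.75)]

macro_p = sum(p for p, _ in per_class) / len(per_class)
macro_r = sum(r for _, r in per_class) / len(per_class)

macro_f1 = sum(f1(p, r) for p, r in per_class) / len(per_class)  # mean of per-class F1
naive_f1 = f1(macro_p, macro_r)                                  # harmonic mean of macro P/R

print(round(macro_f1, 4), round(naive_f1, 4))  # the two generally differ
```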
57f2b5997a589f5b21bad76c23d79830
mit
[]
false
hmBERT: Historical Multilingual Language Models for Named Entity Recognition More information about our hmBERT model can be found in our new paper: ["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575).
3a1a7089f180894c476b673badb4630d
mit
[]
false
Smaller Models

We have also released smaller models for the multilingual model:

| Model identifier | Model Hub link |
|-------------------------------------------------|-----------------------------------------------------------------------------|
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased) |
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased) |
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased) |
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased) |
f764a08c2e112e8e0b02f0b7863f0f10
cc-by-4.0
['roberta', 'roberta-base', 'masked-language-modeling']
false
Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Fill-Mask
**Training data:** wikimovies
**Eval data:** wikimovies
**Infrastructure:** 2x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/shell_scripts/train_movie_roberta.sh)
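The Fill-Mask objective trains the model to recover tokens hidden from the input. As a rough sketch of the standard BERT/RoBERTa corruption scheme (mask 15% of tokens; of those, 80% become the mask token, 10% a random token, 10% stay unchanged) — a simplified illustration, not the exact training code used here:

```python
import random

def mask_tokens(tokens, vocab, mask_token="<mask>", mlm_prob=0.15, seed=0):
    """Build a masked-LM training example: corrupted inputs plus recovery labels."""
    rng = random.Random(seed)
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mlm_prob:
            labels[i] = tok  # the model must predict the original token here
            roll = rng.random()
            if roll < 0.8:
                inputs[i] = mask_token          # 80%: replace with the mask token
            elif roll < 0.9:
                inputs[i] = rng.choice(vocab)   # 10%: replace with a random token
            # remaining 10%: keep the original token
    return inputs, labels

toks = "the movie was directed by james cameron".split()
inp, lab = mask_tokens(toks, vocab=toks)
print(inp)
```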
cb877c761e41d2729dd36c172e1bc3d0
cc-by-4.0
['roberta', 'roberta-base', 'masked-language-modeling']
false
Hyperparameters
```
num_examples = 4346
batch_size = 16
n_epochs = 3
base_LM_model = "roberta-base"
learning_rate = 5e-05
max_query_length=64
Gradient Accumulation steps = 1
Total optimization steps = 816
evaluation_strategy=IntervalStrategy.NO
prediction_loss_only=False
per_device_train_batch_size=8
per_device_eval_batch_size=8
adam_beta1=0.9
adam_beta2=0.999
adam_epsilon=1e-08
max_grad_norm=1.0
lr_scheduler_type=SchedulerType.LINEAR
warmup_ratio=0.0
seed=42
eval_steps=500
metric_for_best_model=None
greater_is_better=None
label_smoothing_factor=0.0
```
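These numbers are internally consistent: a per-device batch size of 8 on the 2 GPUs listed above gives the effective batch size of 16, so one epoch is ceil(4346 / 16) = 272 update steps (gradient accumulation = 1), and 3 epochs give the stated 816 total optimization steps. A sketch of the arithmetic:

```python
import math

num_examples = 4346
train_batch_size = 16  # 8 per device x 2 GPUs
grad_accum = 1
n_epochs = 3

steps_per_epoch = math.ceil(num_examples / (train_batch_size * grad_accum))
total_steps = steps_per_epoch * n_epochs
print(steps_per_epoch, total_steps)  # 272 816
```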
b07ad75066c84330d51cf1b019f4292a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased_fold_1_ternary This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0582 - F1: 0.7326
5c2c0d139186852163cdc0b9a289c809
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.5524 | 0.6755 |
| 0.5648 | 2.0 | 580 | 0.5654 | 0.7124 |
| 0.5648 | 3.0 | 870 | 0.6547 | 0.6896 |
| 0.2712 | 4.0 | 1160 | 0.8916 | 0.7263 |
| 0.2712 | 5.0 | 1450 | 1.1187 | 0.7120 |
| 0.1147 | 6.0 | 1740 | 1.2778 | 0.7114 |
| 0.0476 | 7.0 | 2030 | 1.4441 | 0.7151 |
| 0.0476 | 8.0 | 2320 | 1.5535 | 0.7133 |
| 0.0187 | 9.0 | 2610 | 1.6439 | 0.7212 |
| 0.0187 | 10.0 | 2900 | 1.7261 | 0.7313 |
| 0.0138 | 11.0 | 3190 | 1.6806 | 0.7292 |
| 0.0138 | 12.0 | 3480 | 1.8425 | 0.7111 |
| 0.009 | 13.0 | 3770 | 1.9207 | 0.7213 |
| 0.0045 | 14.0 | 4060 | 1.8900 | 0.7202 |
| 0.0045 | 15.0 | 4350 | 1.9730 | 0.7216 |
| 0.0042 | 16.0 | 4640 | 2.0775 | 0.7041 |
| 0.0042 | 17.0 | 4930 | 2.0514 | 0.7106 |
| 0.0019 | 18.0 | 5220 | 2.0582 | 0.7326 |
| 0.0039 | 19.0 | 5510 | 2.1010 | 0.7081 |
| 0.0039 | 20.0 | 5800 | 2.0487 | 0.7273 |
| 0.0025 | 21.0 | 6090 | 2.0415 | 0.7254 |
| 0.0025 | 22.0 | 6380 | 2.0753 | 0.7157 |
| 0.0017 | 23.0 | 6670 | 2.0554 | 0.7246 |
| 0.0017 | 24.0 | 6960 | 2.0644 | 0.7290 |
| 0.0001 | 25.0 | 7250 | 2.0711 | 0.7310 |
1aac2426f03df0f6b1614a3c557d82ec
apache-2.0
['translation', 'generated_from_trainer']
false
marian-finetuned-kde4-en-to-vi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 1.2134 - Bleu: 51.2085
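BLEU combines modified (clipped) n-gram precisions with a brevity penalty. A minimal single-reference sketch of the idea — a toy implementation, not the sacrebleu-style scorer that produced the 51.21 above:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference: str, hypothesis: str, max_n: int = 4) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum(min(c, ref_ng[g]) for g, c in hyp_ng.items())  # clipped counts
        total = max(sum(hyp_ng.values()), 1)
        if overlap == 0:
            return 0.0  # no smoothing in this sketch
        log_prec += math.log(overlap / total) / max_n
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec)

print(round(100 * bleu("xin chào thế giới hôm nay", "xin chào thế giới hôm nay"), 2))  # 100.0
```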
9bdf8ac7bf03d7d11137b1b923de777a
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
stargate-diffusion-sg1-1 Dreambooth model trained by Aphophis420 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook USE: *prompt*, still from stargate sg1 Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) ![0](https://huggingface.co/Aphophis420/stargate-diffusion-sg1-1/resolve/main/04738.png) ![1](https://huggingface.co/Aphophis420/stargate-diffusion-sg1-1/resolve/main/04745.png) ![2](https://huggingface.co/Aphophis420/stargate-diffusion-sg1-1/resolve/main/04787.png) ![3](https://huggingface.co/Aphophis420/stargate-diffusion-sg1-1/resolve/main/04797.png) ![4](https://huggingface.co/Aphophis420/stargate-diffusion-sg1-1/resolve/main/04808.png) ![5](https://huggingface.co/Aphophis420/stargate-diffusion-sg1-1/resolve/main/04824.png) ![6](https://huggingface.co/Aphophis420/stargate-diffusion-sg1-1/resolve/main/04814.png)
b28386794e6cf78faaee78c24af8a528
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8442 - Matthews Correlation: 0.5443
bcfa0e2dc5d96cd5c05287c7b9b98051
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5267 | 1.0 | 535 | 0.5646 | 0.3655 |
| 0.3477 | 2.0 | 1070 | 0.5291 | 0.4898 |
| 0.2324 | 3.0 | 1605 | 0.5629 | 0.5410 |
| 0.1774 | 4.0 | 2140 | 0.7630 | 0.5370 |
| 0.1248 | 5.0 | 2675 | 0.8442 | 0.5443 |
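Matthews correlation is computed from the binary confusion matrix and stays informative under label imbalance (CoLA skews toward acceptable sentences, so plain accuracy can be misleading). A minimal sketch:

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC from confusion-matrix counts; 1 = perfect, 0 = chance, -1 = inverted."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# toy imbalanced example: 90 positives, 10 negatives
print(round(matthews_corrcoef(tp=85, tn=6, fp=4, fn=5), 4))  # 0.522
```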
7f78403f9aa806d20495dd7c83e7d779
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3404 - Accuracy: 0.8667 - F1: 0.8734
d7302df606f579a8b256876a9bde1a55
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mn', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
wav2vec2-large-xls-r-300m-mongolian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - MN dataset. It achieves the following results on the evaluation set: - Loss: 0.6003 - Wer: 0.4473
1616b789641e32b3991ce6c9c32b86fd
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mn', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 100.0 - mixed_precision_training: Native AMP
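With 2,000 warmup steps and a linear scheduler, the learning rate ramps from 0 up to 3e-4 over the first 2,000 updates and then decays linearly to 0 at the final step. A sketch of that schedule, assuming roughly 12,600 total update steps for the 100 epochs (an illustrative figure, not read from the training logs):

```python
def linear_schedule_lr(step, base_lr=3e-4, warmup_steps=2000, total_steps=12600):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear warmup
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / (total_steps - warmup_steps)

print(linear_schedule_lr(1000))   # 0.00015 — halfway through warmup
print(linear_schedule_lr(2000))   # 0.0003  — peak learning rate
print(linear_schedule_lr(12600))  # 0.0     — end of training
```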
d2be108dff6c18ce8cb7c65b6996c914
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mn', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3677 | 15.87 | 2000 | 0.6432 | 0.6198 |
| 1.1379 | 31.75 | 4000 | 0.6196 | 0.5592 |
| 1.0093 | 47.62 | 6000 | 0.5828 | 0.5117 |
| 0.8888 | 63.49 | 8000 | 0.5754 | 0.4822 |
| 0.7985 | 79.37 | 10000 | 0.5987 | 0.4690 |
| 0.697 | 95.24 | 12000 | 0.6014 | 0.4471 |
39c3562a727243424b5fd2e043c2be5e
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2251 - Accuracy: 0.9215 - F1: 0.9215
9c6dee935f92d9b9055f697701e668be
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.851 | 1.0 | 250 | 0.3314 | 0.8985 | 0.8952 |
| 0.2565 | 2.0 | 500 | 0.2251 | 0.9215 | 0.9215 |
e51debd62d42b290a24add79737ff425
mit
['generated_from_trainer']
false
gpt2.CEBaB_confounding.observational.sa.5-class.seed_43 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.9838 - Accuracy: 0.5918 - Macro-f1: 0.4948 - Weighted-macro-f1: 0.5380
2279e04fe5b7d5dfc17d2015b1b3813f
mit
['generated_from_trainer']
false
xlnet-base-cased-IUChatbot-ontologyDts-localParams This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0238
bd2a565e9a10cac7ff562f61b2cb65aa
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
efe01ca76074b3abdc454e07f1e59638
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1172 | 1.0 | 1119 | 0.0657 |
| 0.0564 | 2.0 | 2238 | 0.0237 |
| 0.033 | 3.0 | 3357 | 0.0238 |
1f7dbb9aa9b99d7915da2d5bfa0f19c3
mit
['generated_from_trainer']
false
baitblocker This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the [id_clickbait](https://huggingface.co/datasets/id_clickbait) dataset. It achieves the following results on the evaluation set: - Loss: 0.4660 - Accuracy: 0.8347
d9b27cc9914b921de4fbc6a81a6fca0b
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
814be693faeb665bf71f3c1bfa8fd538
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4025 | 1.0 | 1500 | 0.4074 | 0.827 |
| 0.3581 | 2.0 | 3000 | 0.4090 | 0.8283 |
| 0.333 | 3.0 | 4500 | 0.4660 | 0.8347 |
ad28f2783dbcfb74ff0223c8a95dd8a1
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
<center><b>HighRiseMixV2.5</b></center> <p align="center"><img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00733-2938506110-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo))%2C%20(gradient%20pink%20eye%2C%20black%20hair%2C%20short%20hair%2C%20school%20uniform%2C%20mic.png"> <img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00729-221520444-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo))%2C%20(gradient%20pink%20eye%2C%20black%20hair%2C%20short%20hair%2C%20school%20uniform%2C%20mic.png"></p> <center><b>HighRiseMixV2</b></center> <p align="center"><img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00016-3185527639-(masterpiece%2C%20excellent%20quality%2C%20high%20quality)%2C%20(1girl%2C%20solo%2C%20cowboy%20shot)%2C%20looking%20at%20viewer%2C%20sky%2C%20city%2C%20skyscrapers%2C%20pavement%2C.png"> <img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00021-3185527644-(masterpiece%2C%20excellent%20quality%2C%20high%20quality)%2C%20(1girl%2C%20solo%2C%20cowboy%20shot)%2C%20looking%20at%20viewer%2C%20sky%2C%20city%2C%20skyscrapers%2C%20pavement%2C.png"></p> U-Net mixed model <b>specialized for city and skyscrapers background.</b> <b>FP16 Pruned version</b>(No EMA). (Quality change may occur in very small details on buildings' textures) <b>V2 Update Log : </b> Added models : AikimixPv1.0, Counterfeit V2.0, pastelmix-better-vae Adjusted character style(more cute, anime style) <b>V2.5 Update Log : </b> Added models : AikimixCv1.5 Just some very little changes to textures adjusted to my taste. It doesn't matter which one to use. There are pros and cons between V2 and V2.5 so you can just use what you want. 
<b>Recommended prompts : </b> (masterpiece, best quality, excellent quality), ((1girl, solo)), sky, city, (skyscrapers), trees, pavement, lens flare EasyNegative, moss, phone, man, pedestrians, extras, border, outside border, white border (EasyNegative is a negative embedding : https://huggingface.co/datasets/gsdf/EasyNegative) <b>Recommended settings : </b> Sampler : DPM++ 2M Karras OR DPM++ SDE Karras Sampling steps : 25 ~ 30 Resolution : 512x768 OR 768x512 CFG Scale : 9 <b> Upscale is a must-do!! </b> Otherwise, you won't get intended results. Upscaler : Latent (nearest) Hires steps : 0 Denoise : 0.6 Upscale 2x <b>Recommended VAEs : </b> kl-f8-anime2 orangemix.vae.pt <b> Mixed models : </b> AbyssOrangeMix2_NSFW, AnythingV4.5, AikimiXPv1.0, BasilMixFixed, Counterfeit V2.0, CounterfeitV2.5, EerieOrangeMix2, pastelmix-better-vae, PowercolorV2 (Thanks to everyone who made above models!) This is my first mixed model being uploaded to public site, so feel free to give feedbacks as you wish, I'll try and work around with it.
58ce30b1246c6b7540f1a928fee47fa9
mit
['generated_from_trainer']
false
xlnet-base-cased-IUChatbot-ontologyDts This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4965
128641411304aca5c45c26ba6fd05439
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 318 | 0.5005 |
| 0.8222 | 2.0 | 636 | 0.4488 |
| 0.8222 | 3.0 | 954 | 0.4965 |
d6334836a0400c4f1208d9496ead6f6b
apache-2.0
['Super-Resolution', 'computer-vision', 'RealSR', 'gan']
false
Model Description [RealSR](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w31/Ji_Real-World_Super-Resolution_via_Kernel_Estimation_and_Noise_Injection_CVPRW_2020_paper.pdf): Real-World Super-Resolution via Kernel Estimation and Noise Injection. [NTIRE 2020 Challenge on Real-World Image Super-Resolution](https://arxiv.org/abs/2005.01996): Methods and Results. [Paper Repo](https://github.com/Tencent/Real-SR): Implementation of the paper.
1c121f8059f089926ef044d75a58e066
apache-2.0
['Super-Resolution', 'computer-vision', 'RealSR', 'gan']
false
BibTeX Entry and Citation Info
```
@inproceedings{zhang2021designing,
  title={Designing a Practical Degradation Model for Deep Blind Image Super-Resolution},
  author={Zhang, Kai and Liang, Jingyun and Van Gool, Luc and Timofte, Radu},
  booktitle={IEEE International Conference on Computer Vision},
  pages={4791--4800},
  year={2021}
}
```
```
@article{Lugmayr2020ntire,
  title={NTIRE 2020 Challenge on Real-World Image Super-Resolution: Methods and Results},
  author={Andreas Lugmayr, Martin Danelljan, Radu Timofte, Namhyuk Ahn, Dongwoon Bai, Jie Cai, Yun Cao, Junyang Chen, Kaihua Cheng, SeYoung Chun, Wei Deng, Mostafa El-Khamy Chiu, Man Ho, Xiaozhong Ji, Amin Kheradmand, Gwantae Kim, Hanseok Ko, Kanghyu Lee, Jungwon Lee, Hao Li, Ziluan Liu, Zhi-Song Liu, Shuai Liu, Yunhua Lu, Zibo Meng, Pablo Navarrete, Michelini Christian, Micheloni Kalpesh, Prajapati Haoyu, Ren Yong, Hyeok Seo, Wan-Chi Siu, Kyung-Ah Sohn, Ying Tai, Rao Muhammad Umer, Shuangquan Wang, Huibing Wang, Timothy Haoning Wu, Haoning Wu, Biao Yang, Fuzhi Yang, Jaejun Yoo, Tongtong Zhao, Yuanbo Zhou, Haijie Zhuo, Ziyao Zong, Xueyi Zou},
  journal={CVPR Workshops},
  year={2020},
}
```
338824f84102e702ff77e00b70c70452
apache-2.0
['generated_from_trainer']
false
favs_token_classification_v2_updated_data This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the token_classification_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.5346 - Precision: 0.6923 - Recall: 0.8357 - F1: 0.7573 - Accuracy: 0.8493
57afb1e646c64ae9ea53eaa10297519f
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.3096 | 1.0 | 13 | 1.9927 | 0.3011 | 0.2 | 0.2403 | 0.3726 |
| 2.038 | 2.0 | 26 | 1.7093 | 0.2569 | 0.2643 | 0.2606 | 0.4274 |
| 1.8391 | 3.0 | 39 | 1.4452 | 0.3057 | 0.4214 | 0.3544 | 0.5562 |
| 1.4912 | 4.0 | 52 | 1.2176 | 0.4130 | 0.5429 | 0.4691 | 0.6493 |
| 1.3296 | 5.0 | 65 | 1.0368 | 0.4973 | 0.6643 | 0.5688 | 0.7123 |
| 1.2036 | 6.0 | 78 | 0.9084 | 0.5053 | 0.6786 | 0.5793 | 0.7260 |
| 0.9244 | 7.0 | 91 | 0.8148 | 0.5543 | 0.7286 | 0.6296 | 0.7616 |
| 0.8293 | 8.0 | 104 | 0.7482 | 0.5698 | 0.7286 | 0.6395 | 0.7726 |
| 0.7422 | 9.0 | 117 | 0.6961 | 0.5833 | 0.75 | 0.6562 | 0.7836 |
| 0.6379 | 10.0 | 130 | 0.6613 | 0.6124 | 0.7786 | 0.6855 | 0.8027 |
| 0.6071 | 11.0 | 143 | 0.6357 | 0.6193 | 0.7786 | 0.6899 | 0.8082 |
| 0.5526 | 12.0 | 156 | 0.6033 | 0.6433 | 0.7857 | 0.7074 | 0.8164 |
| 0.537 | 13.0 | 169 | 0.5813 | 0.6512 | 0.8 | 0.7179 | 0.8301 |
| 0.4806 | 14.0 | 182 | 0.5706 | 0.6608 | 0.8071 | 0.7267 | 0.8329 |
| 0.4503 | 15.0 | 195 | 0.5594 | 0.6647 | 0.8071 | 0.7290 | 0.8356 |
| 0.4149 | 16.0 | 208 | 0.5503 | 0.6805 | 0.8214 | 0.7443 | 0.8438 |
| 0.4175 | 17.0 | 221 | 0.5430 | 0.6824 | 0.8286 | 0.7484 | 0.8438 |
| 0.4337 | 18.0 | 234 | 0.5396 | 0.6923 | 0.8357 | 0.7573 | 0.8493 |
| 0.3965 | 19.0 | 247 | 0.5361 | 0.6882 | 0.8357 | 0.7548 | 0.8493 |
| 0.3822 | 20.0 | 260 | 0.5346 | 0.6923 | 0.8357 | 0.7573 | 0.8493 |
0b1b14661abae153c7f42e48bbaa4f95
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_xls-r_gender_male-2_female-8_s728 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
4bfe301fbe69cad04a08b1f28d8120a7
apache-2.0
['generated_from_trainer']
false
ner_peoples_daily This model is a fine-tuned version of [hfl/rbt6](https://huggingface.co/hfl/rbt6) on the peoples_daily_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0249 - Precision: 0.9205 - Recall: 0.9365 - F1: 0.9285 - Accuracy: 0.9930
1c5e3c727dcc4ab11e58931d8b51ea20
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8
0a930ac60959605e3c7dc72ff2ce8187
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3154 | 1.0 | 164 | 0.0410 | 0.8258 | 0.8684 | 0.8466 | 0.9868 |
| 0.0394 | 2.0 | 328 | 0.0287 | 0.8842 | 0.9070 | 0.8954 | 0.9905 |
| 0.0293 | 3.0 | 492 | 0.0264 | 0.8978 | 0.9168 | 0.9072 | 0.9916 |
| 0.02 | 4.0 | 656 | 0.0254 | 0.9149 | 0.9226 | 0.9188 | 0.9923 |
| 0.016 | 5.0 | 820 | 0.0250 | 0.9167 | 0.9281 | 0.9224 | 0.9927 |
| 0.0124 | 6.0 | 984 | 0.0252 | 0.9114 | 0.9328 | 0.9220 | 0.9928 |
| 0.0108 | 7.0 | 1148 | 0.0249 | 0.9169 | 0.9339 | 0.9254 | 0.9928 |
| 0.0097 | 8.0 | 1312 | 0.0249 | 0.9205 | 0.9365 | 0.9285 | 0.9930 |
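NER precision/recall/F1 here are entity-level (as in seqeval-style evaluation): a prediction counts only if both the span boundaries and the entity type match the gold annotation exactly. A minimal sketch over (start, end, type) spans:

```python
def span_prf(gold, pred):
    """Entity-level precision/recall/F1 over sets of (start, end, type) spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # exact boundary + type matches only
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

gold = [(0, 2, "PER"), (5, 7, "LOC"), (9, 10, "ORG")]
pred = [(0, 2, "PER"), (5, 7, "ORG"), (9, 10, "ORG")]  # one type error
print(span_prf(gold, pred))  # ≈ (0.667, 0.667, 0.667)
```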
635e533419fa9d47e889e8d456af9235
creativeml-openrail-m
[]
false
Usage

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/miniSD-diffusers")
pipe = pipe.to("cuda")

prompt = "a photograph of a wrinkly old man laughing"
image = pipe(prompt, width=256, height=256).images[0]
image.save('test.jpg')
```
56b97428c45f42d83f8c6f9e660f837c
creativeml-openrail-m
[]
false
Training details Fine tuned from the stable-diffusion 1.4 checkpoint as follows: - 22,000 steps fine-tuning only the attention layers of the unet, learn rate=1e-5, batch size=256 - 66,000 steps training the full unet, learn rate=5e-5, batch size=552 - GPUs provided by [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud) - Trained on [LAION Improved Aesthetics 6plus](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus). - Trained using https://github.com/justinpinkney/stable-diffusion, original [checkpoint available here](https://huggingface.co/justinpinkney/miniSD)
91f3e010611d9086dadcd28d7b8d1e0e
creativeml-openrail-m
[]
false
License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: - You can't use the model to deliberately produce or share illegal or harmful outputs or content - The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license - You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) Please read the full license here
9eb06f426888bdd965d280cdc414191c
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
TARDISfusion <p> <img src="https://huggingface.co/Guizmus/Tardisfusion/raw/main/showcase.jpg"/><br/> This is a Dreamboothed Stable Diffusion model trained on 3 style concepts.<br/> The total dataset is made of 209 pictures, and the training has been done on runwayml 1.5 with 2500 steps and the new VAE. The following tokens will add their corresponding concept:<br/> <ul> <li><b>Classic Tardis style</b>: Architectural and furniture style seen inside the TARDIS in the series before the reboot.</li> <li><b>Modern Tardis style</b>: Architectural and furniture style seen inside the TARDIS in the series after the reboot.</li> <li><b>Tardis Box style</b>: A style made from the TARDIS seen from the outside. Summons a TARDIS anywhere.</li> </ul> </p> [CKPT download link](https://huggingface.co/Guizmus/Tardisfusion/resolve/main/Tardisfusion-v2.ckpt)
f9615be88345b496e4a7d50a588481b5
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "Guizmus/Tardisfusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a bedroom, Classic Tardis style"
image = pipe(prompt).images[0]
image.save("./TARDIS Style.png")
```
ae8fe24f0241150366b8a881275652c6
apache-2.0
['automatic-speech-recognition', 'NyanjaSpeech', 'generated_from_trainer']
false
xls-r-300m-nyanja-model_v1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NYANJASPEECH - NYA dataset. It achieves the following results on the evaluation set: - Loss: 0.2772 - Wer: 0.9074
a298d298db29d54ea49c35a93023b262
apache-2.0
['automatic-speech-recognition', 'NyanjaSpeech', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 - mixed_precision_training: Native AMP
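The effective batch size follows directly from the per-device batch size times the accumulation steps: 8 × 2 = 16, matching `total_train_batch_size` above. Gradient accumulation reaches that effective batch by averaging gradients over micro-batches before a single optimizer update — a toy sketch:

```python
per_device_batch = 8
grad_accum_steps = 2
effective_batch = per_device_batch * grad_accum_steps
print(effective_batch)  # 16

# toy accumulation: average gradients over micro-batches before one update
micro_batch_grads = [1.0, 3.0]  # stand-ins for per-micro-batch mean gradients
accumulated = sum(micro_batch_grads) / grad_accum_steps
print(accumulated)  # 2.0 — same as one update over the combined batch
```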
d29b666b3a9dcfe44da525c1dc2ae54b
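As a hedged sketch (not the actual training code), the `linear` scheduler with 2,000 warmup steps described above first ramps the learning rate up and then decays it linearly toward zero; the `total_steps` value here is illustrative, not from the model card:

```python
def linear_schedule_lr(step, base_lr=0.001, warmup_steps=2000, total_steps=10000):
    """Learning rate under linear warmup followed by linear decay.

    Mirrors the shape of the Hugging Face `linear` scheduler;
    total_steps is an assumed value for illustration only.
    """
    if step < warmup_steps:
        # ramp from 0 up to base_lr over the warmup phase
        return base_lr * step / warmup_steps
    # linear decay from base_lr at the end of warmup down to 0
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_schedule_lr(1000))   # halfway through warmup -> half the base rate
print(linear_schedule_lr(10000))  # end of training -> 0.0
```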
apache-2.0
['automatic-speech-recognition', 'NyanjaSpeech', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7585 | 1.58 | 500 | 0.3574 | 0.9679 | | 0.4736 | 3.16 | 1000 | 0.2772 | 0.9074 | | 0.4776 | 4.75 | 1500 | 0.2853 | 0.9578 |
d4c2285b4d7016ef8a740ab2bab95607
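The Wer column above is word error rate. As an illustration (not the evaluation code used for this model), WER is the word-level Levenshtein distance between reference and hypothesis divided by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution over three words
```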
mit
['fill-mask']
false
ClinicalBERT - Bio + Clinical BERT Model The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries. This model card describes the Bio+Clinical BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on all MIMIC notes.
ed91e23c8ff34f7bdd92c6430080b105
mit
['fill-mask']
false
Pretraining Data The `Bio_ClinicalBERT` model was trained on all notes from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). All notes from the `NOTEEVENTS` table were included (~880M words).
fc766da5acb064cb9a83a3fa7bc2a8c5
mit
['fill-mask']
false
Note Preprocessing Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (`en_core_sci_md` tokenizer).
420691debb0cf1460c4966bfd4225163
mit
['fill-mask']
false
Pretraining Procedures The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`).
8a00d40c0845c9a16ce2365e529bf632
mit
['fill-mask']
false
Pretraining Hyperparameters We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5 · 10⁻⁵ for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
56077944a7bfe3e29be31d9785bb5fd3
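A hedged sketch of how the masking hyperparameters above interact, loosely following the logic of BERT's `create_pretraining_data.py` (the function and names here are illustrative, not the actual pretraining code):

```python
import random

def choose_masked_positions(tokens, masked_lm_prob=0.15,
                            max_predictions_per_seq=20, seed=42):
    """Pick token positions to mask, capped at max_predictions_per_seq."""
    rng = random.Random(seed)
    # special tokens are never masked
    candidates = [i for i, t in enumerate(tokens) if t not in ("[CLS]", "[SEP]")]
    num_to_mask = min(max_predictions_per_seq,
                      max(1, int(round(len(tokens) * masked_lm_prob))))
    rng.shuffle(candidates)
    return sorted(candidates[:num_to_mask])

tokens = ["[CLS]"] + [f"tok{i}" for i in range(126)] + ["[SEP]"]  # length 128
positions = choose_masked_positions(tokens)
print(len(positions))  # round(128 * 0.15) = 19, under the cap of 20
```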
mit
['fill-mask']
false
How to use the model Load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
```
b423bee422de38d268d2c7d371f7dda9
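Building on the loading snippet above, one common way to turn the model's token outputs into a single sentence embedding is mean-pooling the last hidden state; the pooling choice here is our sketch, not something prescribed by the paper:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

text = "The patient was admitted with shortness of breath."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# mean-pool token embeddings into a single 768-dim sentence vector
embedding = outputs.last_hidden_state.mean(dim=1).squeeze(0)
print(embedding.shape)  # torch.Size([768])
```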
mit
['fill-mask']
false
More Information Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.
9b666b5a2758b0965756c015257a3ce7
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-checkpoint-9 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-8](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-8) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9203 - Wer: 0.3258
fd0504cfaa87ad18d7b1f2a200432e83
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.2783 | 1.58 | 1000 | 0.5610 | 0.3359 | | 0.2251 | 3.16 | 2000 | 0.5941 | 0.3374 | | 0.173 | 4.74 | 3000 | 0.6026 | 0.3472 | | 0.1475 | 6.32 | 4000 | 0.6750 | 0.3482 | | 0.1246 | 7.9 | 5000 | 0.6673 | 0.3414 | | 0.1081 | 9.48 | 6000 | 0.7072 | 0.3409 | | 0.1006 | 11.06 | 7000 | 0.7413 | 0.3392 | | 0.0879 | 12.64 | 8000 | 0.7831 | 0.3394 | | 0.0821 | 14.22 | 9000 | 0.7371 | 0.3333 | | 0.0751 | 15.8 | 10000 | 0.8321 | 0.3445 | | 0.0671 | 17.38 | 11000 | 0.8362 | 0.3357 | | 0.0646 | 18.96 | 12000 | 0.8709 | 0.3367 | | 0.0595 | 20.54 | 13000 | 0.8352 | 0.3321 | | 0.0564 | 22.12 | 14000 | 0.8854 | 0.3323 | | 0.052 | 23.7 | 15000 | 0.9031 | 0.3315 | | 0.0485 | 25.28 | 16000 | 0.9171 | 0.3278 | | 0.046 | 26.86 | 17000 | 0.9390 | 0.3254 | | 0.0438 | 28.44 | 18000 | 0.9203 | 0.3258 |
5e84e38fdf682adec0a585b3d6b6391e
apache-2.0
['automatic-speech-recognition', 'et']
false
exp_w2v2t_et_wavlm_s753 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
466c9434c59698a9cc6d8212a9856d68
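The model expects 16 kHz input, as noted above. As a hedged sketch of what resampling to 16 kHz involves (in practice a proper library resampler such as torchaudio or librosa is preferable to this naive linear interpolation):

```python
def resample_linear(samples, orig_sr, target_sr=16000):
    """Naive linear-interpolation resampler; a sketch, not production DSP."""
    if orig_sr == target_sr:
        return list(samples)
    n_out = int(round(len(samples) * target_sr / orig_sr))
    out = []
    for i in range(n_out):
        # map output index back to a fractional input position
        pos = i * (len(samples) - 1) / max(1, n_out - 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

wave_8k = [0.0, 1.0, 0.0, -1.0] * 100  # 400 samples at 8 kHz
wave_16k = resample_linear(wave_8k, orig_sr=8000)
print(len(wave_16k))  # 800
```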
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1
2701a0164fe30e59253a31b621765fbe
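As a hedged illustration of the optimizer settings listed above (learning rate 2e-05, betas 0.9/0.999, epsilon 1e-08), a single scalar Adam update can be sketched as follows; this mirrors the textbook update rule, not the trainer's internals:

```python
def adam_step(param, grad, m, v, t, lr=2e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (new_param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 0.5, 0.0, 0.0
p, m, v = adam_step(p, grad=0.1, m=m, v=v, t=1)
print(round(p, 6))  # 0.49998
```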