Dataset schema:

| Column | Type | Length / values |
| ------ | ---- | --------------- |
| license | string | 2-30 chars |
| tags | string | 2-513 chars |
| is_nc | bool | 1 class |
| readme_section | string | 201-597k chars |
| hash | string | 32 chars |
apache-2.0
['toxicity', 'portuguese', 'hate speech', 'offensive language', 'generated_from_trainer']
false
Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("dougtrajano/toxic-comment-classification")
model = AutoModelForSequenceClassification.from_pretrained("dougtrajano/toxic-comment-classification")
```
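Building on the snippet above, a minimal inference sketch; the label mapping is assumed to follow the `NOT-OFFENSIVE`/`OFFENSIVE` scheme reported in the Performance section below:

```python
import torch

# Score a Portuguese comment; label names are assumed from the model config.
inputs = tokenizer("Você é um idiota!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```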
03153a73d1a2bb9cc345b84b94364f08
apache-2.0
['toxicity', 'portuguese', 'hate speech', 'offensive language', 'generated_from_trainer']
false
Limitations and bias

The following factors may degrade the model's performance.

**Text Language**: The model was trained on Brazilian Portuguese texts, so it may not work well with other varieties of Portuguese.

**Text Origin**: The model was trained on texts from social media and a few texts from other sources, so it may not work well on other types of texts.
cbe42f5616a4d9969b890e7667c27be1
apache-2.0
['toxicity', 'portuguese', 'hate speech', 'offensive language', 'generated_from_trainer']
false
Trade-offs

Sometimes models exhibit performance issues under particular circumstances. This section describes situations in which the model may perform less than optimally, so that you can plan accordingly.

**Text Length**: The model was fine-tuned on texts with a word count between 1 and 178 words (average of 18 words). It may give poor results on texts with a word count outside this range.
ce7f41d93cf0efa30bb0664fdc4e7521
apache-2.0
['toxicity', 'portuguese', 'hate speech', 'offensive language', 'generated_from_trainer']
false
Performance

The model was evaluated on the test set of the [OLID-BR](https://dougtrajano.github.io/olid-br/) dataset.

- **Accuracy:** 0.8578
- **Precision:** 0.8594
- **Recall:** 0.8578
- **F1-Score:** 0.8580

| Class | Precision | Recall | F1-Score | Support |
| :---: | :-------: | :----: | :------: | :-----: |
| `NOT-OFFENSIVE` | 0.8886 | 0.8490 | 0.8683 | 1,775 |
| `OFFENSIVE` | 0.8233 | 0.8686 | 0.8453 | 1,438 |
fce4a93f7cade2fd1d315f132717c449
apache-2.0
['toxicity', 'portuguese', 'hate speech', 'offensive language', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3.255788747459486e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1993
- optimizer: Adam with betas=(0.8445637934160373,0.8338816842140165) and epsilon=2.527092625455385e-08
- lr_scheduler_type: linear
- num_epochs: 30
- label_smoothing_factor: 0.07158711257743958
e1bbe41994664d9fe4dc5274df7c01d3
other
[]
false
Model explanation

- CoolJapanDiffusion 2.1.1 + 0.8(YaguruMagiku-v3.1-AnyBased - HassanBlend1.5) + 0.8(AbyssOrangeMix2_sfw - HassanBlend1.5)
- **According to some rumors, parts of the merge sources may trace back to the NovelAI leak and Instagram-based models, so this model is not recommended for anyone opposed to the NAI leak or Instagram-based models and their derivatives.**
- This is an experiment in merging a Stable Diffusion 1.x based model into an SD 2.x based model, so it is very likely to produce strange images.
- You can run this model on the colab WebUI.
- Rewrite the following line of [this notebook](https://colab.research.google.com/drive/1ldhBc70wvuvkp4Af_vNTzTfBXwpf_cH5?usp=sharing) following the instructions posted [here](https://the-pioneer.notion.site/Colab-Automatic1111-6043f15ef44d4ba0b11920c95d33a78c).

```python
!aria2c --summary-interval=10 -x 16 -s 16 --allow-overwrite=true -Z https://huggingface.co/JosephusCheung/ACertainModel/resolve/main/ACertainModel-half.ckpt
```
053b8e6e062d67f7d6f4e92444f72f20
other
[]
false
extras.py

**This file, and only this file in the model, is released under CC0 1.0 (public domain) to keep it compatible with the AGPL license of the WebUI.** By replacing the file of the same name in the WebUI, you can create your own merged model.

- ``No interpolation`` is a trap: it is NOT a merging operation. It runs, but only returns the same model as model A (the author initially mistook its output for a merge).
- ``Weighted sum`` can easily destroy the original SD 2.x based model beyond recognition: a multiplier of 0.1 was already enough to do so, whereas 0.01 was fine; the boundary lies somewhere in between.
- ``Add difference`` can change the style while keeping the original model relatively intact, but going too far will likely produce a model like this one. There is also no guarantee that the changes will match what you expected from the SD 1.x model used in the merge.
d7ac86d4fcb86148bc1a94f145f2904f
other
[]
false
License: The Libertarian OpenRAIL License

Caution: Since the uploader is a Japanese native speaker, in the event of any difference in meaning between the original Japanese version and a translation, **the original Japanese version** takes precedence.

Summary: Essentially CreativeML Open RAIL-M, interpreted and reconstructed in a libertarian manner. The restrictions of CreativeML Open RAIL-M are considered to remain valid under this interpretation.
3e6075cb58094a90209f8916040d9774
other
[]
false
Main differences from the original CreativeML Open RAIL-M license

- Illegality is judged by, and only by, a conviction, per the presumption of innocence: until a conviction is final, the Licensor does not treat an act as illegal, even if it appears to violate a law or regulation.
  - e.g. Not only in fair-use jurisdictions but also in Japan, where copyright infringement is prosecuted only on complaint, moderate fan works that the copyright holder does not sue over are in practice not treated as problematic.
- Outputs you generate with the Model or Derivatives of the Model must be placed in the public domain (CC0 1.0), guaranteeing that anyone, including the generator, can freely (re)use them.
  - Note that Dream Studio, run by Stability.ai, likewise requires outputs to be CC0 1.0, and this is compatible with the original model license.
- Derivatives of the Model must always carry, at minimum, the same use-based restrictions <u>and the same open permissions</u>.
3328adb375d50f002a5675b643da11ea
apache-2.0
['argumentation']
false
Generate objections to a claim This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks. Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
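A hedged generation sketch with `transformers`; the repository id below is a placeholder (the card does not state the model path), and the prompt format is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Hypothetical repository id; substitute the actual model path.
model_id = "your-org/gpt-neo-2.7B-objections"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt with a claim; the model was tuned to continue with objections.
prompt = "Claim: Social media improves public discourse.\nObjection:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```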
3043555996642b9d766328d1298c5a39
apache-2.0
['biomedical', 'clinical', 'ehr', 'spanish']
false
Model description Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
eab068826458fc17207dd4cfe078485d
apache-2.0
['biomedical', 'clinical', 'ehr', 'spanish']
false
Intended uses and limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification.
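A minimal fill-mask sketch, assuming the `PlanTL-GOB-ES/bsc-bio-ehr-es` checkpoint named in the evaluation section below:

```python
from transformers import pipeline

# RoBERTa-style models use <mask> as the mask token.
unmasker = pipeline("fill-mask", model="PlanTL-GOB-ES/bsc-bio-ehr-es")
print(unmasker("El único antecedente personal a reseñar era la <mask> arterial."))
```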
78176d6f4d55b70ef2880e6177946deb
apache-2.0
['biomedical', 'clinical', 'ehr', 'spanish']
false
Tokenization and model pretraining

This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical-clinical** corpus in Spanish collected from several sources (see the next section). The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2), as used in the original [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model, with a vocabulary size of 52,000 tokens. The pretraining consisted of masked-language-model training at the subword level, following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. Training lasted a total of 48 hours on 16 NVIDIA V100 GPUs with 16GB DDRAM, using the Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences.
c155e2d42cd228b0b79cb06d19bce608
apache-2.0
['biomedical', 'clinical', 'ehr', 'spanish']
false
Training corpora and preprocessing

The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are:

- data parsing in different formats
- sentence splitting
- language detection
- filtering of ill-formed sentences
- deduplication of repetitive contents
- keeping the original document boundaries

Then, the biomedical corpora were concatenated and a further global deduplication among them was applied. Finally, the clinical corpus was concatenated to the cleaned biomedical corpus, resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora:

| Name | No. tokens | Description |
|------|-----------|-------------|
| [Medical crawler](https://zenodo.org/record/4561970) | 903,558,13 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. |
| Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases; it is different from a clinical note or document. |
| EHR documents | 95,267,20 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. |
| [Scielo](https://zenodo.org/record/2541681#.YlP1DshBwio) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. |
| [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. |
| Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category, up to a maximum of 5 subcategories. Multiple links to the same article were discarded to avoid repeated content. |
| Patents | 13,463,387 | Google patents in the medical domain for Spain (Spanish). The accepted (medical-domain) codes for patent JSON files are: "A61B", "A61C", "A61F", "A61H", "A61K", "A61L", "A61M", "A61P". |
| [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. |
| [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpora consisting of biomedical scientific literature. The collection of parallel resources is aggregated from the MedlinePlus source. |
| PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. |
75f1efb6246f609e3fbea165312d7c0c
apache-2.0
['biomedical', 'clinical', 'ehr', 'spanish']
false
Evaluation

The model has been fine-tuned on three Named Entity Recognition (NER) tasks using three clinical NER datasets:

- [PharmaCoNER](https://zenodo.org/record/4270158): a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/).
- [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ).
- ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables.

We addressed the NER task as a token classification problem using a standard linear layer along with the BIO tagging schema. We compared our models with the general-domain Spanish [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne), the general-domain multilingual model that supports Spanish [mBERT](https://huggingface.co/bert-base-multilingual-cased), the domain-specific English model [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2), and three domain-specific models based on continual pre-training: [mBERT-Galén](https://ieeexplore.ieee.org/document/9430499), [XLM-R-Galén](https://ieeexplore.ieee.org/document/9430499) and [BETO-Galén](https://ieeexplore.ieee.org/document/9430499).

The table below shows the F1 scores obtained:

| Tasks/Models | bsc-bio-ehr-es | XLM-R-Galén | BETO-Galén | mBERT-Galén | mBERT | BioBERT | roberta-base-bne |
|--------------|----------------|-------------|------------|-------------|-------|---------|------------------|
| PharmaCoNER | **0.8913** | 0.8754 | 0.8537 | 0.8594 | 0.8671 | 0.8545 | 0.8474 |
| CANTEMIST | **0.8340** | 0.8078 | 0.8153 | 0.8168 | 0.8116 | 0.8070 | 0.7875 |
| ICTUSnet | **0.8756** | 0.8716 | 0.8498 | 0.8509 | 0.8631 | 0.8521 | 0.8677 |

The fine-tuning scripts can be found in the official GitHub [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
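As a rough illustration of the token-classification setup described above (not the authors' exact fine-tuning code; the BIO label set is a placeholder):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder BIO label set for a single entity type.
labels = ["O", "B-ENT", "I-ENT"]
tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/bsc-bio-ehr-es")
model = AutoModelForTokenClassification.from_pretrained(
    "PlanTL-GOB-ES/bsc-bio-ehr-es",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)
# The model now carries a randomly initialized linear classification head,
# which is then trained on the BIO-tagged NER data.
```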
164798d5cbfb33b65642941c9afbeb98
apache-2.0
['biomedical', 'clinical', 'ehr', 'spanish']
false
Citing information

If you use these models, please cite our work:

```bibtex
@inproceedings{carrino-etal-2022-pretrained,
    title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish",
    author = "Carrino, Casimiro Pio and Llop, Joan and P{\`a}mies, Marc and Guti{\'e}rrez-Fandi{\~n}o, Asier and Armengol-Estap{\'e}, Jordi and Silveira-Ocampo, Joaqu{\'\i}n and Valencia, Alfonso and Gonzalez-Agirre, Aitor and Villegas, Marta",
    booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.bionlp-1.19",
    doi = "10.18653/v1/2022.bionlp-1.19",
    pages = "193--199",
    abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.",
}
```
3fceb5dc2cfc530f5ccde5c5185099fe
apache-2.0
['biomedical', 'clinical', 'ehr', 'spanish']
false
Disclaimer

<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

</details>
a51c68e327d5459dc2278f9b209f47ea
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Medium Vietnamese

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 vi dataset. It achieves the following results on the evaluation set:

- Loss: 0.5686
- Wer: 18.8638
449247e635df9385717ad11ca40038a6
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1400
- mixed_precision_training: Native AMP
0717eec886dd0489a6cdec98bc6802ca
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0063 | 12.01 | 200 | 0.5238 | 19.2915 |
| 0.0046 | 24.01 | 400 | 0.5686 | 18.8638 |
| 0.0067 | 37.01 | 600 | 0.5924 | 20.6076 |
| 0.0004 | 49.01 | 800 | 0.6239 | 19.8070 |
| 0.0005 | 62.01 | 1000 | 0.6354 | 19.7631 |
| 0.0001 | 74.01 | 1200 | 0.6447 | 19.5547 |
| 0.0001 | 87.01 | 1400 | 0.6473 | 19.5547 |
67f91c0cf22f1471930accfed2cf5f43
mit
['generated_from_trainer']
false
bert-indo-base-stance-cls

This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the None dataset. It achieves the following results on the evaluation set:

- Loss: 2.0156
- Accuracy: 0.6892
- Precision: 0.6848
- Recall: 0.6892
- F1: 0.6859
- Against (support 216): precision 0.6186, recall 0.5556, F1 0.5854
- For (support 331): precision 0.7280, recall 0.7764, F1 0.7515
69fe1f7fd4ecae44751ebb19a05ab19a
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
ad2c80db4767008c7cd7ad95153850e9
mit
['generated_from_trainer']
false
Training results

Per-class columns report precision / recall / F1, rounded to four decimals (support: Against = 216, For = 331).

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Against (P/R/F1) | For (P/R/F1) |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:----------------------:|:----------------------:|
| No log | 1.0 | 137 | 0.6423 | 0.6581 | 0.6894 | 0.6581 | 0.5917 | 0.7544 / 0.1991 / 0.3150 | 0.6469 / 0.9577 / 0.7722 |
| No log | 2.0 | 274 | 0.6146 | 0.6600 | 0.6691 | 0.6600 | 0.6628 | 0.5615 / 0.6343 / 0.5957 | 0.7393 / 0.6767 / 0.7066 |
| No log | 3.0 | 411 | 0.7572 | 0.6545 | 0.6734 | 0.6545 | 0.6583 | 0.5506 / 0.6806 / 0.6087 | 0.7536 / 0.6375 / 0.6907 |
| 0.4855 | 4.0 | 548 | 0.7405 | 0.6892 | 0.6842 | 0.6892 | 0.6851 | 0.6211 / 0.5463 / 0.5813 | 0.7255 / 0.7825 / 0.7529 |
| 0.4855 | 5.0 | 685 | 1.1222 | 0.6856 | 0.6828 | 0.6856 | 0.6839 | 0.6078 / 0.5741 / 0.5905 | 0.7318 / 0.7583 / 0.7448 |
| 0.4855 | 6.0 | 822 | 1.4960 | 0.6892 | 0.6830 | 0.6892 | 0.6827 | 0.6292 / 0.5185 / 0.5685 | 0.7182 / 0.8006 / 0.7571 |
| 0.4855 | 7.0 | 959 | 1.6304 | 0.6801 | 0.6886 | 0.6801 | 0.6827 | 0.5844 / 0.6574 / 0.6187 | 0.7566 / 0.6949 / 0.7244 |
| 0.1029 | 8.0 | 1096 | 1.8381 | 0.6673 | 0.6727 | 0.6673 | 0.6693 | 0.5726 / 0.6204 / 0.5956 | 0.7380 / 0.6979 / 0.7174 |
| 0.1029 | 9.0 | 1233 | 1.9474 | 0.6929 | 0.6876 | 0.6929 | 0.6881 | 0.6290 / 0.5417 / 0.5821 | 0.7258 / 0.7915 / 0.7572 |
| 0.1029 | 10.0 | 1370 | 2.0156 | 0.6892 | 0.6848 | 0.6892 | 0.6859 | 0.6186 / 0.5556 / 0.5854 | 0.7280 / 0.7764 / 0.7515 |
06a8096c372b5338256bd59422664e97
apache-2.0
[]
false
This is a pretrained-from-scratch **T5v1.1 base** model (**247M** parameters) on the [t5x](https://github.com/google-research/t5x) platform. Training was performed on a clean 80GB Romanian text corpus for 4M steps with these [scripts](https://github.com/dumitrescustefan/t5x_models). The model was trained with an encoder sequence length of 512 and a decoder sequence length of 256. **!! IMPORTANT !!** This model was pretrained on the span corruption MLM task, meaning this model is **not usable** in any downstream task **without finetuning** first!
0758721afbb2541acfae0322b3f11a92
apache-2.0
[]
false
How to load a t5x model

```python
from transformers import T5Tokenizer, T5Model

tokenizer = T5Tokenizer.from_pretrained('dumitrescustefan/t5-v1_1-base-romanian')
model = T5Model.from_pretrained('dumitrescustefan/t5-v1_1-base-romanian')

input_ids = tokenizer("Acesta este un test", return_tensors="pt").input_ids

# The original snippet is truncated here; a minimal completion that runs a
# forward pass and inspects the hidden-state shape:
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
print(list(outputs.last_hidden_state.shape))  # this will print [1, 3, 768]
```
b752ed21cf5b54d9a4896aea46b80a72
apache-2.0
[]
false
Remember to always sanitize your text! Replace the cedilla letters ``ş`` and ``ţ`` with their comma-below counterparts:

```python
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```

because the model was **not** trained on the cedilla forms ``ş`` and ``ţ``. If you don't, performance will decrease due to ``<UNK>``s and an increased number of tokens per word.
51da1cc6aca236cd177d473ec6f149b4
apache-2.0
['generated_from_trainer']
false
sequence_classification

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:

- Loss: 0.7738
- Accuracy: 0.8529
- F1: 0.8944
3016262ff9040a459cdbe467eff160de
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3519 | 0.8627 | 0.9 |
| 0.4872 | 2.0 | 918 | 0.6387 | 0.8333 | 0.8893 |
| 0.2488 | 3.0 | 1377 | 0.7738 | 0.8529 | 0.8944 |
c3396d91d4438183bbb6ad740d369366
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'science']
false
DreamBooth model for the pprotein concept trained by jonathang on the jonathang/dreambooth-hackathon-images-protein3 dataset. This is a Stable Diffusion model fine-tuned on the pprotein concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a 3d model of pprotein** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
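A hedged text-to-image sketch with `diffusers`; the repository id below is a placeholder (the card does not state the model path):

```python
from diffusers import StableDiffusionPipeline
import torch

# Hypothetical repository id; substitute the actual model path.
pipe = StableDiffusionPipeline.from_pretrained(
    "jonathang/pprotein-dreambooth", torch_dtype=torch.float16
).to("cuda")

# The instance prompt the model was fine-tuned on.
image = pipe("a 3d model of pprotein").images[0]
image.save("pprotein.png")
```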
12df94ac749561f75012137acf48bffd
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'science']
false
Examples <table> <tr> <td>Generated Image of "a 3d diagram of pprotein in the style of "<br>"Kandinsky"</td> <td>Generated Image of "a 3d diagram of pprotein in the style of "<br>"Van Gogh"</td> <td>Generated Image of "a 3d diagram of pprotein in the style of "<br>"Warhol"</td> </tr> <tr> <td align="center"><img src="https://imgur.com/lhDA041.png" style="height:200px"> </td> <td align="center"><img src="https://imgur.com/iug4k7D.png" style="height:200px"> </td> <td align="center"><img src="https://imgur.com/eIMiTVG.png" style="height:200px"> </td> </tr> <tr> <td>Generated Image of "a 3d diagram of pprotein in the style of "<br>"Leonardo da Vinci"</td> <td>Generated Image of "a 3d diagram of pprotein in the style of "<br>"Frida Kahlo"</td> <td>Generated Image of "a 3d diagram of pprotein in the style of "<br>"Salvador Dahli"</td> </tr> <tr> <td align="center"><img src="https://imgur.com/hzKGWC2.png" style="height:200px"> </td> <td align="center"><img src="https://imgur.com/loc8rLa.png" style="height:200px"> </td> <td align="center"><img src="https://imgur.com/8nK81TA.png" style="height:200px"> </td> </tr> <tr> <td>Generated Image of "Tree in the style of"<br>"3d diagram of pprotein"</td> <td>Generated Image of "Soda Can in the style of"<br>"3d diagram of pprotein"</td> <td>Generated Image of "Vase in the style of"<br>"3d diagram of pprotein"</td> </tr> <tr> <td align="center"><img src="https://imgur.com/czOlY11.png" style="height:200px"> </td> <td align="center"><img src="https://imgur.com/uhwueGs.png" style="height:200px"> </td> <td align="center"><img src="https://imgur.com/gSIrHAh.png" style="height:200px"> </td> </tr> </table>
433eb872265aca114dc49309503c3a95
mit
['generated_from_trainer']
false
finetuned_gpt2_sst2_negation0.8_pretrainedFalse This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 5.2177
3c1d31c7b46cd0e40f6bf4f5701f0758
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7474 | 1.0 | 1111 | 5.4543 |
| 4.378 | 2.0 | 2222 | 5.2688 |
| 4.2047 | 3.0 | 3333 | 5.2177 |
0a06442ddaf38535235fc01bcd47e6e7
apache-2.0
[]
false
doc2query/msmarco-hindi-mt5-base-v1

This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for:

- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index such as Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain-specific training data generation**: It can be used to generate training data to learn an embedding model. See our [GPL paper](https://arxiv.org/abs/2112.07577) and the [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html).
614c9e1b4a358edc3450760620b3d9d8
apache-2.0
[]
false
Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model_name = 'doc2query/msmarco-hindi-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "पाइथन एक सामान्य कार्यों के लिए उपयुक्त, उच्च स्तरीय प्रोग्रामिंग भाषा (General Purpose and High Level Programming language), इन्टरैक्टिव, ऑब्जेक्ट ओरिएन्टेड, स्क्रिप्टिंग भाषा है। इस भाषा को इस तरह से डिजाइन किया गया है ताकि इसमें लिखे गए कोड आसानी से पढ़े और समझे जा सकें।"


def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # The original snippet is truncated here; the generation call below is
        # a minimal completion with illustrative sampling parameters.
        outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            num_return_sequences=5)

    for i, out in enumerate(outputs):
        query = tokenizer.decode(out, skip_special_tokens=True)
        print(f'{i + 1}: {query}')


create_queries(text)
```
2af7b7236d24e8bd5ef99b362a44d591
apache-2.0
salesken
false
Use this model to generate variations to augment the training data used for NLU systems.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = "cpu"

tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_generation")
model = AutoModelWithLMHead.from_pretrained("salesken/paraphrase_generation").to(device)

input_query = "every moment is a fresh beginning"
query = input_query + " ~~ "

input_ids = tokenizer.encode(query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
                                do_sample=True,
                                num_beams=1,
                                max_length=128,
                                temperature=0.9,
                                top_p=0.99,
                                top_k=30,
                                num_return_sequences=40)
paraphrases = []
for i in range(len(sample_outputs)):
    r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
    r = r.split(' ~~ ')[1]
    if r not in paraphrases:
        paraphrases.append(r)
print(paraphrases)
```

To evaluate whether a paraphrase is a semantic variation of the input query or just a surface-level variation, and to rank the generated paraphrases, use the following model: https://huggingface.co/salesken/paraphrase_diversity_ranker
81ee7728515b9894dccdff0965a2a6e3
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-utility-7-16-5

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.3728
- Accuracy: 0.3956
59595632fa0fdbd4281aaba7377ce43b
apache-2.0
['generated_from_trainer']
false
bart-paraphrase-finetuned-xsum-v2

This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2329
- Rouge1: 100.0
- Rouge2: 100.0
- Rougel: 100.0
- Rougelsum: 100.0
- Gen Len: 9.2619
cddf8ccd7bff9504e4fc1556d468d6d0
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
45613a20db9712d1fd5d717a77b21304
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 21 | 1.2954 | 66.7012 | 60.8612 | 66.5163 | 66.4352 | 13.2857 |
| No log | 2.0 | 42 | 0.6866 | 86.8284 | 82.7835 | 86.7208 | 86.784 | 9.5238 |
| No log | 3.0 | 63 | 0.4652 | 95.1892 | 93.5619 | 95.2567 | 95.1657 | 10.3095 |
| No log | 4.0 | 84 | 0.4280 | 97.7463 | 97.1782 | 97.8708 | 97.718 | 9.5 |
| No log | 5.0 | 105 | 0.3712 | 99.6435 | 99.5767 | 99.6435 | 99.6435 | 9.3571 |
| No log | 6.0 | 126 | 0.4451 | 99.2695 | 98.9418 | 99.1883 | 99.3506 | 9.3095 |
| No log | 7.0 | 147 | 0.3169 | 99.246 | 99.0232 | 99.246 | 99.4048 | 9.619 |
| No log | 8.0 | 168 | 0.2942 | 100.0 | 100.0 | 100.0 | 100.0 | 9.4048 |
| No log | 9.0 | 189 | 0.3105 | 100.0 | 100.0 | 100.0 | 100.0 | 9.1667 |
| No log | 10.0 | 210 | 0.3035 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2619 |
| No log | 11.0 | 231 | 0.2983 | 100.0 | 100.0 | 100.0 | 100.0 | 10.5714 |
| No log | 12.0 | 252 | 0.2497 | 100.0 | 100.0 | 100.0 | 100.0 | 9.4286 |
| No log | 13.0 | 273 | 0.2911 | 100.0 | 100.0 | 100.0 | 100.0 | 9.1667 |
| No log | 14.0 | 294 | 0.2619 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2143 |
| No log | 15.0 | 315 | 0.2510 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2381 |
| No log | 16.0 | 336 | 0.2647 | 100.0 | 100.0 | 100.0 | 100.0 | 9.9048 |
| No log | 17.0 | 357 | 0.2438 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2143 |
| No log | 18.0 | 378 | 0.2324 | 100.0 | 100.0 | 100.0 | 100.0 | 9.3095 |
| No log | 19.0 | 399 | 0.2296 | 100.0 | 100.0 | 100.0 | 100.0 | 9.3095 |
| No log | 20.0 | 420 | 0.2329 | 100.0 | 100.0 | 100.0 | 100.0 | 9.2619 |
1ac988541d93fd16d85d515a7cfac386
openrail
[]
false
XLM-RoBERTa (base) fine-tuned on HC3 for ChatGPT text detection **XLM-RoBERTa** (base) fine-tuned on [Hello-SimpleAI](https://huggingface.co/Hello-SimpleAI) **HC3** corpus for **ChatGPT** text detection. All credit to [Hello-SimpleAI](https://huggingface.co/Hello-SimpleAI) for their huge work!
e5739e436a9d3fc811a5fa12b00a4f4e
openrail
[]
false
The model

XLM-RoBERTa is a model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper "Unsupervised Cross-lingual Representation Learning at Scale" by Conneau et al. and first released in this repository.
c4e14ea0f4d22fa7566f95e130f718cd
openrail
[]
false
Human ChatGPT Comparison Corpus (HC3) The first human-ChatGPT comparison corpus, named **HC3** dataset by [Hello-SimpleAI](https://huggingface.co/Hello-SimpleAI) This dataset is introduced in the paper: - Paper: [***How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection***](https://arxiv.org/abs/2301.07597)
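A quick way to inspect the corpus with the `datasets` library; the config name `"all"` is an assumption (the dataset also ships per-domain configs):

```python
from datasets import load_dataset

# Each row pairs a question with human and ChatGPT answers.
ds = load_dataset("Hello-SimpleAI/HC3", "all")
print(ds["train"][0])
```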
c1268f1837cdf306c217d7144dbdab86
openrail
[]
false
Usage

```py
from transformers import pipeline

ckpt = "mrm8488/xlm-roberta-base-finetuned-HC3-mix"
detector = pipeline('text-classification', model=ckpt)

text = "Here your text..."
result = detector(text)
print(result)
```
81754d72c17910aa77b7456f132d5aba
openrail
[]
false
Citation

```
@misc{manuel_romero_2023,
    author = { {Manuel Romero} },
    title = { xlm-roberta-base-finetuned-HC3-mix (Revision b18de48) },
    year = 2023,
    url = { https://huggingface.co/mrm8488/xlm-roberta-base-finetuned-HC3-mix },
    doi = { 10.57967/hf/0306 },
    publisher = { Hugging Face }
}
```
bcccec11dc4ff8bbb7a914931adf8f27
mit
['generated_from_trainer']
false
twitter-data-microsoft-deberta-base-mnli-sentiment-finetuned-memes

This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggingface.co/microsoft/deberta-base-mnli) on the None dataset. It achieves the following results on the evaluation set:

- Loss: 0.2438
- Accuracy: 0.9296
- Precision: 0.9301
- Recall: 0.9296
- F1: 0.9296
0804b78110c001af8b631edd70518109
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
b52a2d0884d05b5d393a24a3bf652d4d
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3622 | 1.0 | 1762 | 0.2933 | 0.9060 | 0.9065 | 0.9060 | 0.9057 |
| 0.2601 | 2.0 | 3524 | 0.2593 | 0.9194 | 0.9196 | 0.9194 | 0.9192 |
| 0.2282 | 3.0 | 5286 | 0.2365 | 0.9279 | 0.9287 | 0.9279 | 0.9280 |
| 0.1977 | 4.0 | 7048 | 0.2325 | 0.9293 | 0.9298 | 0.9293 | 0.9293 |
| 0.181 | 5.0 | 8810 | 0.2421 | 0.9291 | 0.9301 | 0.9291 | 0.9292 |
| 0.1629 | 6.0 | 10572 | 0.2438 | 0.9296 | 0.9301 | 0.9296 | 0.9296 |
631a702d9ad6ccc866c374c6b5c2528b
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
DreamBooth model for the flyfood concept trained by innovation64. This is a Stable Diffusion model fine-tuned on the flyfood concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of flyfood pet** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
3f96a2dcc1df88e6d07330f7b80d7062
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
Description

This is a Stable Diffusion model fine-tuned on `派蒙` (Paimon) images for the wildcard theme, built for the Hugging Face DreamBooth Hackathon by the HF CN Community in collaboration with HeyWhale.
87c8f4a6750d848991eaa529659fd964
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:

- Loss: 0.7588
- Matthews Correlation: 0.5230
c74ed90f277cb6f7dbcc267bc8653026
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5125 | 0.4124 |
| 0.3502 | 2.0 | 1070 | 0.5439 | 0.5076 |
| 0.2378 | 3.0 | 1605 | 0.6629 | 0.4946 |
| 0.1809 | 4.0 | 2140 | 0.7588 | 0.5230 |
| 0.1309 | 5.0 | 2675 | 0.8901 | 0.5056 |
0fbdc4d96aeec82f1404938b15e1b295
openrail
['generated_from_trainer']
false
gpt2-shikoto This model was trained on a dataset I obtained from an online novel site. **Please be aware that the stories (training data) might contain inappropriate content. This model is intended for research purposes only.** The base model can be found [here](https://huggingface.co/jed351/gpt2-tiny-zh-hk), which was obtained by patching a [GPT2 Chinese model](https://huggingface.co/ckiplab/gpt2-tiny-chinese) and its tokenizer with Cantonese characters. Refer to the base model for info on the patching process.
c48ea5a5b7b2e2c1acd7f0fbf1b70cb2
openrail
['generated_from_trainer']
false
Training procedure Please refer to the [script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) provided by Huggingface. The model was trained for 400,000 steps on 2 NVIDIA Quadro RTX6000 for around 15 hours at the Research Computing Services of Imperial College London.
14b6ffc381b0806e850df4a4ea0a46ef
openrail
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 40
- total_eval_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400000
- mixed_precision_training: Native AMP
b1c206c0830759bd5dce13a3eef01f5d
openrail
['generated_from_trainer']
false
How to use it?

```
from transformers import AutoTokenizer
from transformers import TextGenerationPipeline, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-tiny-zh-hk")
model = AutoModelForCausalLM.from_pretrained("jed351/gpt2_tiny_zh-hk-shikoto")

# The original snippet is truncated here; a minimal completion that wraps the
# model in the imported pipeline and generates a short continuation.
generator = TextGenerationPipeline(model, tokenizer)
print(generator("一天", max_length=50, do_sample=True))
```
32d126ac2569be3c1dd35f9f5d29909c
apache-2.0
['generated_from_trainer']
false
distilbart-podimo-data-eval-2

This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.5823
- Rouge1: 34.3971
- Rouge2: 7.95
- Rougel: 18.7271
- Rougelsum: 30.9024
- Gen Len: 131.919
0fea409db3abf63c6e064b5d1b786357
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 4.1512 | 0.98 | 44 | 3.7806 | 32.727 | 6.5788 | 17.5196 | 29.3777 | 137.2905 |
| 3.6342 | 1.98 | 88 | 3.6421 | 32.709 | 6.7877 | 17.8668 | 29.4636 | 134.6648 |
| 3.3512 | 2.98 | 132 | 3.5819 | 33.5128 | 7.519 | 18.6614 | 30.1142 | 132.2961 |
| 3.141 | 3.98 | 176 | 3.5552 | 33.4795 | 7.3242 | 18.396 | 30.0854 | 132.757 |
| 2.9787 | 4.98 | 220 | 3.5583 | 33.5862 | 7.391 | 18.3568 | 30.2461 | 132.4078 |
| 2.8555 | 5.98 | 264 | 3.5650 | 34.1111 | 7.8008 | 18.7159 | 30.6055 | 131.3603 |
| 2.7648 | 6.98 | 308 | 3.5729 | 34.0981 | 7.6556 | 18.6373 | 30.6269 | 131.2821 |
| 2.6645 | 7.98 | 352 | 3.5823 | 34.3971 | 7.95 | 18.7271 | 30.9024 | 131.919 |
5ef3cb67f943ee4b89514575e5f14927
apache-2.0
['automatic-speech-recognition', 'de', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
Evaluation results:

- WER: 0.20161578657865786
- CER: 0.05062357805269733
be18b5d7c91ae1507e7dd095df6bf0bb
apache-2.0
['automatic-speech-recognition', 'de', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. It achieves the following results on the evaluation set:

- Loss: 0.1768
- Wer: 0.2016
7ada1bb610a89f247a45e4b2b19792a4
apache-2.0
['automatic-speech-recognition', 'de', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.4
- mixed_precision_training: Native AMP
76c29972ebc939018be4f998b6332ee8
apache-2.0
['automatic-speech-recognition', 'de', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.7531 | 0.04 | 500 | 5.4564 | 1.0 |
| 2.9882 | 0.08 | 1000 | 3.0041 | 1.0 |
| 2.1953 | 0.13 | 1500 | 1.1723 | 0.7121 |
| 1.2406 | 0.17 | 2000 | 0.3656 | 0.3623 |
| 1.1294 | 0.21 | 2500 | 0.2843 | 0.2926 |
| 1.0731 | 0.25 | 3000 | 0.2554 | 0.2664 |
| 1.051 | 0.3 | 3500 | 0.2387 | 0.2535 |
| 1.0479 | 0.34 | 4000 | 0.2345 | 0.2512 |
| 1.0026 | 0.38 | 4500 | 0.2270 | 0.2452 |
| 0.9921 | 0.42 | 5000 | 0.2212 | 0.2353 |
| 0.9839 | 0.47 | 5500 | 0.2141 | 0.2330 |
| 0.9907 | 0.51 | 6000 | 0.2122 | 0.2334 |
| 0.9788 | 0.55 | 6500 | 0.2114 | 0.2270 |
| 0.9687 | 0.59 | 7000 | 0.2066 | 0.2323 |
| 0.9777 | 0.64 | 7500 | 0.2033 | 0.2237 |
| 0.9476 | 0.68 | 8000 | 0.2020 | 0.2194 |
| 0.9625 | 0.72 | 8500 | 0.1977 | 0.2191 |
| 0.9497 | 0.76 | 9000 | 0.1976 | 0.2175 |
| 0.9781 | 0.81 | 9500 | 0.1956 | 0.2159 |
| 0.9552 | 0.85 | 10000 | 0.1958 | 0.2191 |
| 0.9345 | 0.89 | 10500 | 0.1964 | 0.2158 |
| 0.9528 | 0.93 | 11000 | 0.1926 | 0.2154 |
| 0.9502 | 0.98 | 11500 | 0.1953 | 0.2149 |
| 0.9358 | 1.02 | 12000 | 0.1927 | 0.2167 |
| 0.941 | 1.06 | 12500 | 0.1901 | 0.2115 |
| 0.9287 | 1.1 | 13000 | 0.1936 | 0.2090 |
| 0.9491 | 1.15 | 13500 | 0.1900 | 0.2104 |
| 0.9478 | 1.19 | 14000 | 0.1931 | 0.2120 |
| 0.946 | 1.23 | 14500 | 0.1914 | 0.2134 |
| 0.9499 | 1.27 | 15000 | 0.1931 | 0.2173 |
| 0.9346 | 1.32 | 15500 | 0.1913 | 0.2105 |
| 0.9509 | 1.36 | 16000 | 0.1902 | 0.2137 |
| 0.9294 | 1.4 | 16500 | 0.1895 | 0.2086 |
| 0.9418 | 1.44 | 17000 | 0.1913 | 0.2183 |
| 0.9302 | 1.49 | 17500 | 0.1884 | 0.2114 |
| 0.9418 | 1.53 | 18000 | 0.1894 | 0.2108 |
| 0.9363 | 1.57 | 18500 | 0.1886 | 0.2132 |
| 0.9338 | 1.61 | 19000 | 0.1856 | 0.2078 |
| 0.9185 | 1.66 | 19500 | 0.1852 | 0.2056 |
| 0.9216 | 1.7 | 20000 | 0.1874 | 0.2095 |
| 0.9176 | 1.74 | 20500 | 0.1873 | 0.2078 |
| 0.9288 | 1.78 | 21000 | 0.1865 | 0.2097 |
| 0.9278 | 1.83 | 21500 | 0.1869 | 0.2100 |
| 0.9295 | 1.87 | 22000 | 0.1878 | 0.2095 |
| 0.9221 | 1.91 | 22500 | 0.1852 | 0.2121 |
| 0.924 | 1.95 | 23000 | 0.1855 | 0.2042 |
| 0.9104 | 2.0 | 23500 | 0.1858 | 0.2105 |
| 0.9284 | 2.04 | 24000 | 0.1850 | 0.2080 |
| 0.9162 | 2.08 | 24500 | 0.1839 | 0.2045 |
| 0.9111 | 2.12 | 25000 | 0.1838 | 0.2080 |
| 0.91 | 2.17 | 25500 | 0.1889 | 0.2106 |
| 0.9152 | 2.21 | 26000 | 0.1856 | 0.2026 |
| 0.9209 | 2.25 | 26500 | 0.1891 | 0.2133 |
| 0.9094 | 2.29 | 27000 | 0.1857 | 0.2089 |
| 0.9065 | 2.34 | 27500 | 0.1840 | 0.2052 |
| 0.9156 | 2.38 | 28000 | 0.1833 | 0.2062 |
| 0.8986 | 2.42 | 28500 | 0.1789 | 0.2001 |
| 0.9045 | 2.46 | 29000 | 0.1769 | 0.2022 |
| 0.9039 | 2.51 | 29500 | 0.1819 | 0.2073 |
| 0.9145 | 2.55 | 30000 | 0.1828 | 0.2063 |
| 0.9081 | 2.59 | 30500 | 0.1811 | 0.2049 |
| 0.9252 | 2.63 | 31000 | 0.1833 | 0.2086 |
| 0.8957 | 2.68 | 31500 | 0.1795 | 0.2083 |
| 0.891 | 2.72 | 32000 | 0.1809 | 0.2058 |
| 0.9023 | 2.76 | 32500 | 0.1812 | 0.2061 |
| 0.8918 | 2.8 | 33000 | 0.1775 | 0.1997 |
| 0.8852 | 2.85 | 33500 | 0.1790 | 0.1997 |
| 0.8928 | 2.89 | 34000 | 0.1767 | 0.2013 |
| 0.9079 | 2.93 | 34500 | 0.1735 | 0.1986 |
| 0.9032 | 2.97 | 35000 | 0.1793 | 0.2024 |
| 0.9018 | 3.02 | 35500 | 0.1778 | 0.2027 |
| 0.8846 | 3.06 | 36000 | 0.1776 | 0.2046 |
| 0.8848 | 3.1 | 36500 | 0.1812 | 0.2064 |
| 0.9062 | 3.14 | 37000 | 0.1800 | 0.2018 |
| 0.9011 | 3.19 | 37500 | 0.1783 | 0.2049 |
| 0.8996 | 3.23 | 38000 | 0.1810 | 0.2036 |
| 0.893 | 3.27 | 38500 | 0.1805 | 0.2056 |
| 0.897 | 3.31 | 39000 | 0.1773 | 0.2035 |
| 0.8992 | 3.36 | 39500 | 0.1804 | 0.2054 |
| 0.8987 | 3.4 | 40000 | 0.1768 | 0.2016 |
0501d3d98d3b1ef2fd74f95130df2202
apache-2.0
['automatic-speech-recognition', 'de', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
Evaluation Commands

1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`:

```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset mozilla-foundation/common_voice_7_0 --config de --split test --log_outputs
```

2. To evaluate on the `speech-recognition-community-v2/dev_data` dev set:

```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
4942408a59fe72a84292205d1a10116c
mit
['generated_from_trainer']
false
xlnet-base-cased_fold_6_binary_v1

This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set:

- Loss: 1.6214
- F1: 0.8352
64da35a12e901450cc71f1c230a7a5e8
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4174 | 0.7980 |
| 0.4661 | 2.0 | 580 | 0.4118 | 0.8142 |
| 0.4661 | 3.0 | 870 | 0.5152 | 0.8331 |
| 0.2714 | 4.0 | 1160 | 0.6901 | 0.8242 |
| 0.2714 | 5.0 | 1450 | 0.6853 | 0.8451 |
| 0.1542 | 6.0 | 1740 | 0.8570 | 0.8399 |
| 0.0935 | 7.0 | 2030 | 1.1342 | 0.8401 |
| 0.0935 | 8.0 | 2320 | 1.1763 | 0.8397 |
| 0.037 | 9.0 | 2610 | 1.3530 | 0.8215 |
| 0.037 | 10.0 | 2900 | 1.3826 | 0.8402 |
| 0.0351 | 11.0 | 3190 | 1.4057 | 0.8374 |
| 0.0351 | 12.0 | 3480 | 1.4259 | 0.8455 |
| 0.0159 | 13.0 | 3770 | 1.4270 | 0.8431 |
| 0.0249 | 14.0 | 4060 | 1.4215 | 0.8442 |
| 0.0249 | 15.0 | 4350 | 1.4245 | 0.8408 |
| 0.0197 | 16.0 | 4640 | 1.4171 | 0.8353 |
| 0.0197 | 17.0 | 4930 | 1.4537 | 0.8383 |
| 0.0137 | 18.0 | 5220 | 1.4786 | 0.8430 |
| 0.0068 | 19.0 | 5510 | 1.5635 | 0.8443 |
| 0.0068 | 20.0 | 5800 | 1.5527 | 0.8378 |
| 0.0062 | 21.0 | 6090 | 1.5917 | 0.8460 |
| 0.0062 | 22.0 | 6380 | 1.6317 | 0.8318 |
| 0.005 | 23.0 | 6670 | 1.6226 | 0.8340 |
| 0.005 | 24.0 | 6960 | 1.6378 | 0.8310 |
| 0.007 | 25.0 | 7250 | 1.6214 | 0.8352 |
237f9e3635528d19d92fdebd698de5a8
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
false
LoRA DreamBooth - onefish51/dog_w_prior-preservation

These are LoRA adaptation weights for /data2/home/tyu/stable_diffusion/diffusers/stable-diffusion-v1-4. The weights were trained on "a photo of sks panda" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
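A minimal loading sketch with `diffusers`; it assumes the public `CompVis/stable-diffusion-v1-4` checkpoint in place of the local training path above:

```python
from diffusers import StableDiffusionPipeline
import torch

# Load the base model, then attach the LoRA adaptation weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("onefish51/dog_w_prior-preservation")

image = pipe("a photo of sks panda", num_inference_steps=25).images[0]
image.save("sks_panda.png")
```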
b6b1c39540db6a87949ba8a77b513d31
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Es - Sanchit Gandhi

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Multilingual LibriSpeech dataset. It achieves the following results on the evaluation set:

- Loss: 1.2668
- Wer: 60.1623
c17e4b8471e9f6285c24eddfe0a2d051
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-08
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
68a2b7a5457544a69cd3037bb4c06d0e
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 2.2112 | 0.2 | 500 | 1.7394 | 61.1126 |
| 1.4913 | 0.4 | 1000 | 1.3758 | 62.8143 |
| 1.6651 | 0.6 | 1500 | 1.3100 | 61.3261 |
| 1.7031 | 0.8 | 2000 | 1.2752 | 60.5261 |
| 1.4289 | 1.0 | 2500 | 1.2668 | 60.1623 |
28f2a899863a1f5aae1cba3e3ac527c6
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-russian-colab-beam_search_test

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set:

- Loss: 0.7619
- Wer: 0.4680
1088f85141cc60f46df5944e55be3a52
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 100
- mixed_precision_training: Native AMP
e5fab8c7e494cbc747241b981d9c2a6f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.0158 | 4.16 | 100 | 5.4134 | 1.0 |
| 4.0394 | 8.33 | 200 | 3.4304 | 1.0 |
| 3.2721 | 12.49 | 300 | 3.2273 | 1.0 |
| 3.1277 | 16.66 | 400 | 2.8023 | 0.9984 |
| 1.3791 | 20.82 | 500 | 0.9888 | 0.8546 |
| 0.3659 | 24.99 | 600 | 0.7602 | 0.6304 |
| 0.1858 | 29.16 | 700 | 0.7965 | 0.6156 |
| 0.1403 | 33.33 | 800 | 0.7998 | 0.5839 |
| 0.1173 | 37.49 | 900 | 0.8353 | 0.5941 |
| 0.0917 | 41.66 | 1000 | 0.8272 | 0.5522 |
| 0.0743 | 45.82 | 1100 | 0.8342 | 0.5471 |
| 0.063 | 49.99 | 1200 | 0.7988 | 0.5352 |
| 0.0528 | 54.16 | 1300 | 0.7740 | 0.5201 |
| 0.0456 | 58.33 | 1400 | 0.7636 | 0.5165 |
| 0.0389 | 62.49 | 1500 | 0.7922 | 0.5161 |
| 0.0329 | 66.66 | 1600 | 0.8035 | 0.5158 |
| 0.0283 | 70.82 | 1700 | 0.7873 | 0.4832 |
| 0.0255 | 74.99 | 1800 | 0.7853 | 0.4870 |
| 0.0236 | 79.16 | 1900 | 0.8236 | 0.5045 |
| 0.0202 | 83.33 | 2000 | 0.7661 | 0.4796 |
| 0.0165 | 87.49 | 2100 | 0.7584 | 0.4680 |
| 0.0156 | 91.66 | 2200 | 0.7685 | 0.4772 |
| 0.0149 | 95.82 | 2300 | 0.7519 | 0.4696 |
| 0.0126 | 99.99 | 2400 | 0.7619 | 0.4680 |
73b32627dc4b90619df4b020ab32973e
mit
['vision', 'image-to-text', 'image-captioning']
false
GIT (GenerativeImage2Text), base-sized GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team.
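A minimal captioning sketch, assuming this card describes the `microsoft/git-base` checkpoint:

```python
from transformers import pipeline

# GIT generates a caption conditioned on the input image.
captioner = pipeline("image-to-text", model="microsoft/git-base")
print(captioner("http://images.cocodataset.org/val2017/000000039769.jpg"))
```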
3d43d65cf5721aae778d44e2a9863cf8
mit
['vision', 'image-to-text', 'image-captioning']
false
Training data

From the paper:

> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a).

Note, however, that this describes the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", a smaller variant of GIT trained on 10 million image-text pairs. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.
86db43297105b56bf18e4eb0c7e46083
apache-2.0
['generated_from_trainer', 'translation']
false
mt-ru-sv-finetuned

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-sv](https://huggingface.co/Helsinki-NLP/opus-mt-ru-sv) on an unspecified dataset. It achieves the following results on the Tatoeba.rus.swe evaluation set:
- eval_loss: 0.6998
- eval_bleu: 54.4473
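A minimal translation sketch; the fine-tuned checkpoint's Hub id is not given in this card, so the base model is shown as a stand-in:

```python
from transformers import MarianMTModel, MarianTokenizer

# Swap in the fine-tuned checkpoint's Hub id once known.
model_name = "Helsinki-NLP/opus-mt-ru-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Привет, мир!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```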
ad2f00e306a2b5a8e7fb5bcb6986e99e
mit
['historic german']
false
German Europeana BERT We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/) that were provided by *The European Library*. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert).
7dfcbe4a99b707ae578b1182f5e7d4dc
mit
['historic german']
false
Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!

| Model | Downloads |
| ------------------------------------------ | --------------------------------------------------------------------------------------------------------------- |
| `dbmdz/bert-base-german-europeana-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/vocab.txt) |
0bf9c92894d689b9d948edcdd5025fb6
mit
['historic german']
false
Usage With Transformers >= 2.3 our German Europeana BERT models can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-cased") ```
ce4a4d080643ce4b5542a981f120fb30
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3251 - Accuracy: 0.8767 - F1: 0.8787
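A minimal usage sketch via the sentiment-analysis pipeline; the Hub namespace is not stated in this card, so the model id is a placeholder:

```python
from transformers import pipeline

# Replace <namespace> with the account hosting this fine-tune on the Hub.
classifier = pipeline(
    "sentiment-analysis",
    model="<namespace>/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was a pleasant surprise."))
```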
f21c2e12693322f91e2eefbf36883560
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2t_de_wav2vec2_s144 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
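A short transcription sketch with the HuggingSound tool mentioned above; the repo namespace is assumed from the HuggingSound author and may need adjusting:

```python
from huggingsound import SpeechRecognitionModel

# Repo id assumed from the card title; adjust the namespace if needed.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_wav2vec2_s144")
transcriptions = model.transcribe(["sample1.wav", "sample2.wav"])  # 16 kHz audio
print(transcriptions[0]["transcription"])
```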
55a59a6562f2e046771e6369b0f5cedc
apache-2.0
['deep-narrow']
false
T5-Efficient-XXL (Deep-Narrow version)

T5-Efficient-XXL is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased before considering any other forms of uniform scaling across other dimensions. This is largely due to how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, a tall base model might also generally be more efficient compared to a large model. We generally find that, regardless of size, even if absolute performance might increase as we continue to stack layers, the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
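A quick way to inspect the depth just defined; this sketch assumes the checkpoint lives at `google/t5-efficient-xxl` on the Hub, and uses the `num_layers`/`num_decoder_layers` fields T5 configs expose for the encoder and decoder block counts:

```python
from transformers import AutoConfig

# "Depth" = number of stacked transformer blocks per stack.
config = AutoConfig.from_pretrained("google/t5-efficient-xxl")
print(config.num_layers, config.num_decoder_layers, config.d_model)
```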
6e62419c5fb63dc007660a69d7d1edca
apache-2.0
['deep-narrow']
false
Model architecture details

This model checkpoint - **t5-efficient-xxl** - is of model type **Xxl** with no variations. It has **11307.38** million parameters and thus requires *ca.* **45229.52 MB** of memory in full precision (*fp32*) or **22614.76 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
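The memory figures follow directly from the parameter count; a quick sketch of the arithmetic, assuming 1 MB = 10^6 bytes as the card's numbers imply:

```python
# bytes = params * bytes_per_param; expressed in million-parameter units,
# MB = params_million * bytes_per_param.
params_million = 11307.38
fp32_mb = params_million * 4  # 4 bytes per fp32 parameter -> 45229.52 MB
fp16_mb = params_million * 2  # 2 bytes per fp16/bf16 parameter -> 22614.76 MB
print(fp32_mb, fp16_mb)
```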
3c1526dcae552e535f23f36ae55bb13c
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Model description This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli). As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf). If you are looking for a smaller, faster (but less performant) model, you can try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
a6e2bad728f19913bbadd41c3bc024b0
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Simple zero-shot classification pipeline ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli") sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU" candidate_labels = ["politics", "economy", "entertainment", "environment"] output = classifier(sequence_to_classify, candidate_labels, multi_label=False) print(output) ```
491f9c6da582a3b990a4fba9600363bf
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
NLI use-case

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Move the model to the selected device so it matches the inputs below
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
```
213848d593dfb9aa1f14c43b34035560
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
```python
# device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
1fc8497aad2ce48a3173f360d7add680
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Training data

This model was trained on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of 2490 professionally translated texts from English into 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)). Note that XNLI also contains a training set of 15 machine-translated versions of the MNLI dataset, but due to quality issues with these machine translations, this model was only trained on the professional translations from the XNLI development set and the original English MNLI training set (392,702 texts). Avoiding machine-translated texts prevents overfitting to the 15 XNLI languages, avoids catastrophic forgetting of the other 85 languages mDeBERTa was pre-trained on, and significantly reduces training costs.
7364f44377cc9ca6322988bbdbaaf5d2
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Training procedure

mDeBERTa-v3-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=2,
    # remaining hyperparameters truncated in this excerpt
)
```
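For context, a minimal sketch of how such a `TrainingArguments` object is typically wired into a `Trainer`; everything beyond `num_train_epochs=2` is an illustrative assumption, since the card's excerpt truncates the remaining hyperparameters:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./mdeberta-nli",     # hypothetical output path
    num_train_epochs=2,              # from the card
    per_device_train_batch_size=16,  # assumed, not from the card
    learning_rate=2e-5,              # assumed, not from the card
    warmup_ratio=0.1,                # assumed, not from the card
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```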
164d1c1800357fc3a649289fc6ee3bf9
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Eval results

The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75150 in total). Note that multilingual NLI models can classify NLI texts without receiving NLI training data in the specific language (cross-lingual transfer). This means that the model is also able to do NLI on the other 85 languages mDeBERTa was trained on, but performance is most likely lower than for the languages available in XNLI. Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).

| average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh |
|---------|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|
| 0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116 |
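A rough sketch of how such a per-language XNLI check can be run (German shown; the small sample size and the loop are illustrative choices, not the authors' evaluation script):

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# XNLI labels: 0 = entailment, 1 = neutral, 2 = contradiction,
# matching this model's logit order.
xnli_de = load_dataset("xnli", "de", split="test").select(range(100))  # sample
correct = 0
for ex in xnli_de:
    inputs = tokenizer(ex["premise"], ex["hypothesis"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    correct += pred == ex["label"]
print(f"accuracy on sample: {correct / 100:.3f}")
```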
a04627ed927a06a61ae2bd5c8e51d05c
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Citation If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
f1718ec3f377b74c480d2c86933d5596
mit
['zero-shot-classification', 'text-classification', 'nli', 'pytorch']
false
Debugging and issues Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 or higher might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
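A quick environment check for the version issue mentioned above; note that DeBERTa-v3 tokenizers also need sentencepiece installed:

```python
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.13"), \
    "upgrade: pip install -U transformers sentencepiece"
# mDeBERTa does not support FP16 training -- keep fp16=False in TrainingArguments.
```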
27d7cb664d840d3047cfae100ca3974c
apache-2.0
['generated_from_trainer']
false
mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__google_mt5_base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.7721 - Rouge2: 0.0701 - Rougel: 0.7721 - Rougelsum: 0.7718 - Gen Len: 6.329
05ae682095b2e504a36bcc3a7832e155
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 131773 | nan | 0.7721 | 0.0701 | 0.7721 | 0.7718 | 6.329 |
40924b22846f16d385def48d05f337a5
mit
['generated_from_trainer']
false
bert-base-german-cased-finetuned

This model is a fine-tuned version of [ogimgio/bert-base-german-cased-issues-128](https://huggingface.co/ogimgio/bert-base-german-cased-issues-128) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4083
- Micro f1: 0.5637
- Macro f1: 0.5041
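A hypothetical usage sketch; the micro/macro F1 metrics above suggest a multi-label setup, so sigmoid plus a threshold is assumed here rather than a plain argmax, and the model id is inferred from the card title:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id assumed from the card title and base-model owner; adjust if needed.
model_name = "ogimgio/bert-base-german-cased-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Die Anwendung stürzt beim Start ab.", return_tensors="pt")
probs = torch.sigmoid(model(**inputs).logits)[0]  # multi-label: per-class sigmoid
print([(model.config.id2label[i], round(p.item(), 3))
       for i, p in enumerate(probs) if p > 0.5])
```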
5ef177bfa32dd77a907abb3696e33ff0
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 0.4609 | 1.0 | 103 | 0.4403 | 0.5551 | 0.4453 | | 0.362 | 2.0 | 206 | 0.4083 | 0.5637 | 0.5041 |
72c57780243944e8823503728c793432
apache-2.0
['automatic-speech-recognition', 'sv-SE']
false
exp_w2v2t_sv-se_wav2vec2_s732 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
45d29e6c234b00c92ee66ac51c784b0c
apache-2.0
['generated_from_trainer']
false
TweetEval_ALBERT_5E This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.1990 - Accuracy: 0.9267
6b82ba6c4abcc201e7386ea3e0c3ca07
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
74a361b3a82cf6308c8280c766ca9a68