Dataset columns: license (string, 2-30 chars), tags (string, 2-513 chars), is_nc (bool, 1 class), readme_section (string, 201-597k chars), hash (string, 32 chars).
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 23 | 1.1760 | 54.8264 | 32.0931 | 40.5826 | 52.2503 | 99.4505 | | No log | 2.0 | 46 | 0.9005 | 59.7325 | 38.3487 | 45.8861 | 56.9922 | 108.3846 | | No log | 3.0 | 69 | 0.8053 | 62.0348 | 41.9592 | 49.1046 | 59.4965 | 101.2747 |
468e540b86f3129fc5bcaaaa20110a06
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3
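These bullets map one-to-one onto 🤗 Transformers `TrainingArguments`; a minimal sketch reconstructing them (`output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

# Sketch of the card's hyperparameters; output_dir is hypothetical.
# adam_beta1/adam_beta2/adam_epsilon default to 0.9/0.999/1e-8, matching the card.
training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
)
```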
49856221bb4eda89e23087b8fea620c2
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-fin This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3931 - Accuracy: 0.8873 - F1: 0.8902
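A quick way to try a classifier like this is the `text-classification` pipeline; a sketch, with the hub id as a placeholder since the card does not give the full repository path:

```python
from transformers import pipeline

# "<user>/bert-base-uncased-finetuned-fin" is a hypothetical hub id; substitute the real one.
classifier = pipeline("text-classification", model="<user>/bert-base-uncased-finetuned-fin")
print(classifier("Quarterly revenue beat expectations."))  # e.g. [{'label': ..., 'score': ...}]
```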
563e1de50be0b602e14a26d2b37ffde5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6478 | 1.0 | 134 | 0.4118 | 0.8293 | 0.8309 | | 0.3304 | 2.0 | 268 | 0.3315 | 0.8653 | 0.8694 | | 0.2221 | 3.0 | 402 | 0.3229 | 0.8756 | 0.8781 | | 0.1752 | 4.0 | 536 | 0.3192 | 0.8891 | 0.8921 | | 0.1457 | 5.0 | 670 | 0.3700 | 0.8840 | 0.8880 | | 0.1315 | 6.0 | 804 | 0.3774 | 0.8854 | 0.8882 | | 0.1172 | 7.0 | 938 | 0.3883 | 0.8849 | 0.8877 | | 0.112 | 8.0 | 1072 | 0.3931 | 0.8873 | 0.8902 |
35ea70bdb77ddeb4abe5cf89852245c5
cc0-1.0
['kaggle']
false
PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
f955f7060e3d2344fe06ee2f42aa5b78
apache-2.0
['translation']
false
opus-mt-lue-fi * source languages: lue * target languages: fi * OPUS readme: [lue-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.eval.txt)
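OPUS-MT checkpoints load through the Marian classes in 🤗 Transformers; a sketch assuming the usual `Helsinki-NLP/opus-mt-lue-fi` hub id for this pair:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lue-fi"  # assumed hub id for this card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a (placeholder) Lue sentence into Finnish.
batch = tokenizer(["..."], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```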
ae2dc91303edd07f49e7b97e992f2ae3
apache-2.0
['translation']
false
zho-bul * source group: Chinese * target group: Bulgarian * OPUS readme: [zho-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-bul/README.md) * model: transformer * source language(s): cmn cmn_Hans cmn_Hant zho zho_Hans zho_Hant * target language(s): bul * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.eval.txt)
c65a6c953ed6aa90b36115aa6b3eb257
apache-2.0
['translation']
false
System Info: - hf_name: zho-bul - source_languages: zho - target_languages: bul - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-bul/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['zh', 'bg'] - src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'} - tgt_constituents: {'bul', 'bul_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.test.txt - src_alpha3: zho - tgt_alpha3: bul - short_pair: zh-bg - chrF2_score: 0.497 - bleu: 29.6 - brevity_penalty: 0.883 - ref_len: 3113.0 - src_name: Chinese - tgt_name: Bulgarian - train_date: 2020-07-03 - src_alpha2: zh - tgt_alpha2: bg - prefer_old: False - long_pair: zho-bul - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
a03f977d5dce4bc532e6b1a8ee4305cd
apache-2.0
['generated_from_trainer']
false
distilroberta-base-finetuned-billy-ray-cyrus This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6282
6101d222f363711faf98a9c42e743010
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 47 | 2.5714 | | No log | 2.0 | 94 | 2.5574 | | No log | 3.0 | 141 | 2.6282 |
2cac3a86fd3975fdf3dd1b04f1b9a078
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion-test-01 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 1.7510 - Accuracy: 0.39 - F1: 0.2188
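For an emotion classifier like this, passing `top_k=None` at call time returns a score for every label, which makes the low F1 above easy to inspect; the hub id below is a placeholder:

```python
from transformers import pipeline

# "<user>/distilbert-base-uncased-finetuned-emotion-test-01" is a hypothetical hub id.
classifier = pipeline(
    "text-classification",
    model="<user>/distilbert-base-uncased-finetuned-emotion-test-01",
)
# top_k=None returns the score for every emotion label, not just the argmax.
print(classifier("I am thrilled with these results!", top_k=None))
```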
2b43e7cd168be2349791f9273c938285
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 2 | 1.7634 | 0.39 | 0.2188 | | No log | 2.0 | 4 | 1.7510 | 0.39 | 0.2188 |
e90790ba3f69433cdbc28980261b386d
apache-2.0
['generated_from_trainer']
false
MTL-bert-base-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9283
b63cd9f7d95edce6b0a093cd9bb54d61
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4409 | 1.0 | 99 | 2.1982 | | 2.2905 | 2.0 | 198 | 2.1643 | | 2.1974 | 3.0 | 297 | 2.1168 | | 2.15 | 4.0 | 396 | 2.0023 | | 2.0823 | 5.0 | 495 | 2.0199 | | 2.0752 | 6.0 | 594 | 1.9061 | | 2.0408 | 7.0 | 693 | 1.9770 | | 1.9984 | 8.0 | 792 | 1.9322 | | 1.9933 | 9.0 | 891 | 1.9167 | | 1.9806 | 10.0 | 990 | 1.9652 | | 1.9436 | 11.0 | 1089 | 1.9308 | | 1.9491 | 12.0 | 1188 | 1.9064 | | 1.929 | 13.0 | 1287 | 1.8831 | | 1.9096 | 14.0 | 1386 | 1.8927 | | 1.9032 | 15.0 | 1485 | 1.9117 |
d7ba2689931aef12dd607a37eee7ad54
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-auto_and_commute-7-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2614 - Accuracy: 0.4289
8879afd03527bb37848aeb85d8d5107a
apache-2.0
[]
false
ALBERT Large v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team.
f8e179b0bb50c81aa107a5266e11138d
apache-2.0
[]
false
Model description ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, or from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs. ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers. This is the first version of the large model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. This model has the following configuration: - 24 repeating layers - 128 embedding dimension - 1024 hidden dimension - 16 attention heads - 17M parameters
d3e1d5b915a021922e317c9386bdd5aa
apache-2.0
[]
false
How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-large-v1') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] hello i'm a modeling model.[SEP]", "score":0.05816134437918663, "token":12807, "token_str":"▁modeling" }, { "sequence":"[CLS] hello i'm a modelling model.[SEP]", "score":0.03748830780386925, "token":23089, "token_str":"▁modelling" }, { "sequence":"[CLS] hello i'm a model model.[SEP]", "score":0.033725276589393616, "token":1061, "token_str":"▁model" }, { "sequence":"[CLS] hello i'm a runway model.[SEP]", "score":0.017313428223133087, "token":8014, "token_str":"▁runway" }, { "sequence":"[CLS] hello i'm a lingerie model.[SEP]", "score":0.014405295252799988, "token":29104, "token_str":"▁lingerie" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1') model = AlbertModel.from_pretrained("albert-large-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import AlbertTokenizer, TFAlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-large-v1') model = TFAlbertModel.from_pretrained("albert-large-v1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
1175b48d0d996fccae7d6180da739397
apache-2.0
[]
false
Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='albert-large-v1') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] the man worked as a chauffeur.[SEP]", "score":0.029577180743217468, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the man worked as a janitor.[SEP]", "score":0.028865724802017212, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the man worked as a shoemaker.[SEP]", "score":0.02581118606030941, "token":29024, "token_str":"▁shoemaker" }, { "sequence":"[CLS] the man worked as a blacksmith.[SEP]", "score":0.01849772222340107, "token":21238, "token_str":"▁blacksmith" }, { "sequence":"[CLS] the man worked as a lawyer.[SEP]", "score":0.01820771023631096, "token":3672, "token_str":"▁lawyer" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] the woman worked as a receptionist.[SEP]", "score":0.04604868218302727, "token":25331, "token_str":"▁receptionist" }, { "sequence":"[CLS] the woman worked as a janitor.[SEP]", "score":0.028220869600772858, "token":29477, "token_str":"▁janitor" }, { "sequence":"[CLS] the woman worked as a paramedic.[SEP]", "score":0.0261906236410141, "token":23386, "token_str":"▁paramedic" }, { "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", "score":0.024797942489385605, "token":28744, "token_str":"▁chauffeur" }, { "sequence":"[CLS] the woman worked as a waitress.[SEP]", "score":0.024124596267938614, "token":13678, "token_str":"▁waitress" } ] ``` This bias will also affect all fine-tuned versions of this model.
a09ca6bb2e9b0c8bef0365a236dec59e
apache-2.0
['generated_from_trainer']
false
roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
02f247211b242bbd93638dcdf7ce45dc
mit
[]
false
Info >Model Used: Waifu Diffusion 1.2 >Steps: 3000 >Keyword: SAKURA-SABER (Use this in the prompt) >Class Phrase: 1girl_short_blonde_hair_black_scarf_blue_yukata_anime ![Sak](https://c4.wallpaperflare.com/wallpaper/829/114/563/fate-series-fate-grand-order-okita-souji-wallpaper-preview.jpg)
3ca0afd7458bc97e21257178982f6cb9
mit
[]
false
🇹🇷 BERTurk BERTurk is a community-driven cased BERT model for Turkish. Some datasets used for pretraining and evaluation are contributed from the awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
60551987190efa8bc6a22ed52fb96f2b
mit
[]
false
Stats The current version of the model is trained on a filtered and sentence-segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 4,404,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model on a TPU v3-8 for 2M steps.
6e66babe87bd2376ee1aa259645f7a17
mit
[]
false
Model weights Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue! | Model | Downloads | --------------------------------- | --------------------------------------------------------------------------------------------------------------- | `dbmdz/bert-base-turkish-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/vocab.txt)
f237c1bbf762a39c6718f3897a1bb173
mit
[]
false
Usage With Transformers >= 2.3 our BERTurk cased model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased") model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased") ```
66f0c6b73620670ef8c3f76acf622a84
mit
['generated_from_trainer']
false
microsoft_deberta-base_squad This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the **squadV1** dataset. - eval_exact_match: 86.3009 - eval_f1: 92.6850 - eval_samples: 10788
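An extractive QA checkpoint like this plugs into the `question-answering` pipeline; a sketch with a placeholder hub id, since the card does not give the full repository path:

```python
from transformers import pipeline

# "<user>/microsoft_deberta-base_squad" is a hypothetical hub id.
qa = pipeline("question-answering", model="<user>/microsoft_deberta-base_squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of microsoft/deberta-base on SQuAD v1.",
)
print(result["answer"], result["score"])
```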
eec86bb7e4cad7fb1b76d192358aa12b
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0
5bda79a417c77e538ab5f9c82a25636d
apache-2.0
['generated_from_trainer']
false
small-mlm-tweet This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.8171
70c8199e455ac874d537d3281eab4ed2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4028 | 11.11 | 500 | 3.4323 | | 2.8952 | 22.22 | 1000 | 3.4180 | | 2.6035 | 33.33 | 1500 | 3.6851 | | 2.3349 | 44.44 | 2000 | 3.4708 | | 2.1048 | 55.56 | 2500 | 3.8171 |
be365b81bd1da1f3d1e5cbf761ce7fd0
apache-2.0
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
false
FastSpeech trained on LJSpeech (Eng) This repository provides a pretrained [FastSpeech](https://arxiv.org/abs/1905.09263) trained on LJSpeech dataset (ENG). For a detail of the model, we encourage you to read more about [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
81c25f36435af1e61642f616c8e62a29
apache-2.0
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
false
Converting your Text to Mel Spectrogram ```python import numpy as np import soundfile as sf import yaml import tensorflow as tf from tensorflow_tts.inference import AutoProcessor from tensorflow_tts.inference import TFAutoModel processor = AutoProcessor.from_pretrained("tensorspeech/tts-fastspeech-ljspeech-en") fastspeech = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech-ljspeech-en") text = "How are you?" input_ids = processor.text_to_sequence(text) mel_before, mel_after, duration_outputs = fastspeech.inference( input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0), speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32), speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32), ) ```
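FastSpeech stops at mel spectrograms, so a separate vocoder is needed to get audio. A sketch assuming the companion MelGAN checkpoint from the same project (`tensorspeech/tts-melgan-ljspeech-en`), continuing from the variables above:

```python
import soundfile as sf
from tensorflow_tts.inference import TFAutoModel

# Assumed companion vocoder from the TensorFlowTTS project.
melgan = TFAutoModel.from_pretrained("tensorspeech/tts-melgan-ljspeech-en")

# Turn the mel spectrogram produced above into a waveform and save it (LJSpeech is 22.05 kHz).
audio = melgan.inference(mel_after)[0, :, 0]
sf.write("audio.wav", audio, 22050, "PCM_16")
```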
f34ef8cffecff3efd06ad19701782829
apache-2.0
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
false
Referencing FastSpeech ``` @article{DBLP:journals/corr/abs-1905-09263, author = {Yi Ren and Yangjun Ruan and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie{-}Yan Liu}, title = {FastSpeech: Fast, Robust and Controllable Text to Speech}, journal = {CoRR}, volume = {abs/1905.09263}, year = {2019}, url = {http://arxiv.org/abs/1905.09263}, archivePrefix = {arXiv}, eprint = {1905.09263}, timestamp = {Wed, 11 Nov 2020 08:48:07 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1905-09263.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
a820c8b6656217a55e215dd4af85b65a
cc-by-sa-4.0
['generated_from_trainer']
false
legal-bert-small-uncased-filtered-filtered-cuad This model is a fine-tuned version of [nlpaueb/legal-bert-small-uncased](https://huggingface.co/nlpaueb/legal-bert-small-uncased) on the cuad dataset. It achieves the following results on the evaluation set: - Loss: 0.0604
6fc0c9b64cfc3bac6d09f03e69e3d1e6
cc-by-sa-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0768 | 1.0 | 2571 | 0.0701 | | 0.0667 | 2.0 | 5142 | 0.0638 | | 0.0548 | 3.0 | 7713 | 0.0604 |
533d2f7e0108e6b3ec687deb489155e9
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for Pseudagrilus trained by johnowhitaker on the johnowhitaker/Pseudagrilus dataset. This is a Stable Diffusion model fine-tuned on the Pseudagrilus concept taught to Stable Diffusion with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of Pseudagrilus beetle** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
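DreamBooth checkpoints load like any Stable Diffusion model in `diffusers`; a sketch where the repo id is a placeholder for wherever this model is hosted:

```python
import torch
from diffusers import StableDiffusionPipeline

# "johnowhitaker/pseudagrilus" is a hypothetical repo id; substitute the real one.
pipe = StableDiffusionPipeline.from_pretrained(
    "johnowhitaker/pseudagrilus", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of Pseudagrilus beetle on a leaf").images[0]
image.save("pseudagrilus.png")
```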
27a6868516bf006dbe1672d2ca057036
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/massive_transport-roberta-large-v1-3-3 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer.
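Inference follows the usual SetFit pattern: load the checkpoint and call it on raw strings; a minimal sketch:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("fathyshalab/massive_transport-roberta-large-v1-3-3")
# The model embeds the texts with the fine-tuned Sentence Transformer,
# then applies the trained classification head.
preds = model(["Book me a taxi to the airport", "When does the next train leave?"])
print(preds)
```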
7782f57b7aecf0644f0b5240e2c9c2bc
mit
['generated_from_trainer']
false
bertimbau-base-finetuned-lener-br-finetuned-peticoes-assuntos This model is a fine-tuned version of [Luciano/bertimbau-base-finetuned-lener-br](https://huggingface.co/Luciano/bertimbau-base-finetuned-lener-br) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9930 - Accuracy: 0.3575
1dbd47fd2149aab6effacdbf727dfef3
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.7305 | 1.0 | 898 | 3.6586 | 0.2533 | | 3.4793 | 2.0 | 1796 | 3.2827 | 0.3029 | | 3.0791 | 3.0 | 2694 | 3.0938 | 0.3427 | | 2.83 | 4.0 | 3592 | 3.0101 | 0.3477 | | 2.7427 | 5.0 | 4490 | 2.9930 | 0.3575 |
2f2ebda2f3558134aabf90bc50ec5fd5
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7713 - Accuracy: 0.9174
d6baeb9155d98487813fef03ed22d302
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2892 | 1.0 | 318 | 3.2831 | 0.7426 | | 2.6244 | 2.0 | 636 | 1.8739 | 0.8335 | | 1.5442 | 3.0 | 954 | 1.1525 | 0.8926 | | 1.0096 | 4.0 | 1272 | 0.8569 | 0.91 | | 0.793 | 5.0 | 1590 | 0.7713 | 0.9174 |
f347b1354f6ccbcf6385bd2adb573fde
apache-2.0
['generated_from_trainer']
false
fnet-large-finetuned-qqp This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.5515 - Accuracy: 0.8943 - F1: 0.8557 - Combined Score: 0.8750
ac029d0f210266f3ae0ccf8d6f01222a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:| | 0.4574 | 1.0 | 90962 | 0.4946 | 0.8694 | 0.8297 | 0.8496 | | 0.3387 | 2.0 | 181924 | 0.4745 | 0.8874 | 0.8437 | 0.8655 | | 0.2029 | 3.0 | 272886 | 0.5515 | 0.8943 | 0.8557 | 0.8750 |
35916dfb2010b8702820c60a310ccfe2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:| | No log | 1.0 | 436 | 0.0001 | 1.0 | 0.0 |
3dc24184b7065874ef79cffd939d3ad3
creativeml-openrail-m
['text-to-image']
false
Saad Dreambooth model trained by HusseinHE with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: sksaad (use that in your prompt) ![sksaad 0](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%281%29.jpg)![sksaad 1](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%282%29.jpg)![sksaad 2](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%283%29.jpg)![sksaad 3](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%284%29.jpg)![sksaad 4](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%285%29.jpg)![sksaad 5](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%286%29.jpg)![sksaad 6](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%287%29.jpg)![sksaad 7](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%288%29.jpg)![sksaad 8](https://huggingface.co/HusseinHE/saad/resolve/main/concept_images/sksaad_%289%29.jpg)
041e840a071411bce11fc61f93e73bc1
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1468
02f6ed12039a8611df7f3b4e4a47d91e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2257 | 1.0 | 5533 | 1.1557 | | 0.9632 | 2.0 | 11066 | 1.1215 | | 0.762 | 3.0 | 16599 | 1.1468 |
91525be12717ddc644ee3474de7c1897
mit
['text-classification']
false
Load model and tokenizer ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer model_name = "aychang/distilbert-base-cased-trec-coarse" model = AutoModelForSequenceClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ```
8934c285f967e17115977274a59c7c7f
mit
['text-classification']
false
Use pipeline ```python from transformers import pipeline model_name = "aychang/distilbert-base-cased-trec-coarse" nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name) results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]) ```
77d04b5302103aed91ed0d811e44e5b3
mit
['text-classification']
false
AdaptNLP ```python from adaptnlp import EasySequenceClassifier model_name = "aychang/distilbert-base-cased-trec-coarse" texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"] classifier = EasySequenceClassifier() results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2) ```
acf1ae34f88efe6526c1daf0d42bf67d
mit
['text-classification']
false
Hyperparameters and Training Args ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir='./models', overwrite_output_dir=False, num_train_epochs=2, per_device_train_batch_size=16, per_device_eval_batch_size=16, warmup_steps=500, weight_decay=0.01, evaluation_strategy="steps", logging_dir='./logs', fp16=False, eval_steps=500, save_steps=300000 ) ```
ea4b83c8b423886134f74d066f9312ab
mit
['text-classification']
false
Eval results ``` {'epoch': 2.0, 'eval_accuracy': 0.97, 'eval_f1': array([0.98220641, 0.91620112, 1. , 0.97709924, 0.98678414, 0.97560976]), 'eval_loss': 0.14275787770748138, 'eval_precision': array([0.96503497, 0.96470588, 1. , 0.96969697, 0.98245614, 0.96385542]), 'eval_recall': array([1. , 0.87234043, 1. , 0.98461538, 0.99115044, 0.98765432]), 'eval_runtime': 0.9731, 'eval_samples_per_second': 513.798} ```
bc12d26f284884eb45f408393df39b43
apache-2.0
['generated_from_trainer']
false
xlsr-wav2vec2-2 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5884 - Wer: 0.4301
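A fine-tuned XLSR checkpoint can be exercised through the ASR pipeline; a sketch with a placeholder hub id and a local audio file:

```python
from transformers import pipeline

# "<user>/xlsr-wav2vec2-2" is a hypothetical hub id; "sample.wav" is a local recording.
asr = pipeline("automatic-speech-recognition", model="<user>/xlsr-wav2vec2-2")
print(asr("sample.wav")["text"])  # decoding audio files requires ffmpeg
```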
b7d0f42018981ae8cea60db1e2bbf6b0
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 60 - mixed_precision_training: Native AMP
2b8567f4a9fd3003b710f0662eba1c24
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.6058 | 1.38 | 400 | 3.1894 | 1.0 | | 2.3145 | 2.76 | 800 | 0.7193 | 0.7976 | | 0.6737 | 4.14 | 1200 | 0.5338 | 0.6056 | | 0.4651 | 5.52 | 1600 | 0.5699 | 0.6007 | | 0.3968 | 6.9 | 2000 | 0.4608 | 0.5221 | | 0.3281 | 8.28 | 2400 | 0.5264 | 0.5209 | | 0.2937 | 9.65 | 2800 | 0.5366 | 0.5096 | | 0.2619 | 11.03 | 3200 | 0.4902 | 0.5021 | | 0.2394 | 12.41 | 3600 | 0.4706 | 0.4908 | | 0.2139 | 13.79 | 4000 | 0.5526 | 0.4871 | | 0.2034 | 15.17 | 4400 | 0.5396 | 0.5108 | | 0.1946 | 16.55 | 4800 | 0.4959 | 0.4866 | | 0.1873 | 17.93 | 5200 | 0.4898 | 0.4877 | | 0.1751 | 19.31 | 5600 | 0.5488 | 0.4932 | | 0.1668 | 20.69 | 6000 | 0.5645 | 0.4986 | | 0.1638 | 22.07 | 6400 | 0.5367 | 0.4946 | | 0.1564 | 23.45 | 6800 | 0.5282 | 0.4898 | | 0.1566 | 24.83 | 7200 | 0.5489 | 0.4841 | | 0.1522 | 26.21 | 7600 | 0.5439 | 0.4821 | | 0.1378 | 27.59 | 8000 | 0.5796 | 0.4866 | | 0.1459 | 28.96 | 8400 | 0.5603 | 0.4875 | | 0.1406 | 30.34 | 8800 | 0.6773 | 0.5005 | | 0.1298 | 31.72 | 9200 | 0.5858 | 0.4827 | | 0.1268 | 33.1 | 9600 | 0.6007 | 0.4790 | | 0.1204 | 34.48 | 10000 | 0.5716 | 0.4734 | | 0.113 | 35.86 | 10400 | 0.5866 | 0.4748 | | 0.1088 | 37.24 | 10800 | 0.5790 | 0.4752 | | 0.1074 | 38.62 | 11200 | 0.5966 | 0.4721 | | 0.1018 | 40.0 | 11600 | 0.5720 | 0.4668 | | 0.0968 | 41.38 | 12000 | 0.5826 | 0.4698 | | 0.0874 | 42.76 | 12400 | 0.5937 | 0.4634 | | 0.0843 | 44.14 | 12800 | 0.6056 | 0.4640 | | 0.0822 | 45.52 | 13200 | 0.5531 | 0.4569 | | 0.0806 | 46.9 | 13600 | 0.5669 | 0.4484 | | 0.072 | 48.28 | 14000 | 0.5683 | 0.4484 | | 0.0734 | 49.65 | 14400 | 0.5735 | 0.4437 | | 0.0671 | 51.03 | 14800 | 0.5455 | 0.4394 | | 0.0617 | 52.41 | 15200 | 0.5838 | 0.4365 | | 0.0607 | 53.79 | 15600 | 0.6233 | 0.4397 | | 0.0593 | 55.17 | 16000 | 0.5649 | 0.4340 | | 0.0551 | 56.55 | 16400 | 0.5923 | 0.4392 | | 0.0503 | 57.93 | 16800 | 0.5858 | 0.4325 | | 0.0496 | 59.31 | 17200 | 0.5884 | 0.4301 |
cf00ce8cdc2e298bab8d0e20b1783016
apache-2.0
[]
false
distilbert-base-en-fr-ar-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
4dbae0882556855ed5870c250fb54fc6
apache-2.0
[]
false
How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-ar-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-ar-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
60919f749840c5aee899e2198f90f677
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-bbc-news This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0107 - Accuracy: 0.9955 - F1: 0.9955
2ec74006d28f01eadb670510cddb6dad
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
491e2d6c5a50eb01321a007fcc22157a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3463 | 0.84 | 500 | 0.0392 | 0.9865 | 0.9865 | | 0.0447 | 1.68 | 1000 | 0.0107 | 0.9955 | 0.9955 |
42b1100f046a6bbfe727a0b9b333e851
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1500
0ddc9c793fe3a4b4f93ffb65d9dcd303
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3149 | 1.0 | 2767 | 1.2079 | | 1.053 | 2.0 | 5534 | 1.1408 | | 0.8809 | 3.0 | 8301 | 1.1500 |
eba75496e3071e3432782469d2c02c97
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.7339 - Accuracy: 0.6567 - F1: 0.6979
125f76a9a7e6dd8ed980690897cfa247
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - eval_loss: 0.5180 - eval_matthews_correlation: 0.4063 - eval_runtime: 0.8532 - eval_samples_per_second: 1222.419 - eval_steps_per_second: 77.353 - epoch: 1.0 - step: 535
ff064a9ea32b141a5e4885eb92d57897
cc-by-4.0
['question generation']
false
Model Card of `lmqg/mbart-large-cc25-squad-qg` This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
9e77ebfda3cf9b2c05859b9f7d0e198c
cc-by-4.0
['question generation']
false
Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
167a0c70eae0315f0adf69c5eebeb8c4
cc-by-4.0
['question generation']
false
Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation) ```python from lmqg import TransformersQG model = TransformersQG(language="en", model="lmqg/mbart-large-cc25-squad-qg") questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-squad-qg") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
88d35882da2a5b729ea9e025f441bf0a
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 90.36 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 56 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 39.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 29.76 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 23.03 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 25.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 63.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 50.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metrics (Question Generation, Out-of-Domain)*** | Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link | |:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:| | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | default | 11.05 | 0.0 | 1.05 | 44.94 | 3.4 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_dequad.default.json) | | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | default | 60.73 | 0.57 | 5.27 | 48.76 | 18.99 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json) | | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | default | 16.47 | 0.02 | 1.55 | 45.35 | 5.13 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json) | | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | default | 41.46 | 0.48 | 3.84 | 47.28 | 13.25 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) | | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | default | 19.89 | 0.06 | 1.74 | 45.51 | 6.11 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json) | | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | default | 31.67 | 0.38 | 3.06 | 46.59 | 10.34 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json) | | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | default | 26.19 | 0.18 | 2.65 | 46.09 | 8.34 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_ruquad.default.json) |
e72e80ea8368bcf16baaf6c263b73ced
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 6 - batch: 32 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/trainer_config.json).
97554240015c48217f3afc206f35f2fd
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_mnli_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.9477 - Accuracy: 0.5655
9783a523161dd74576120a278e71cc8d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.9142 | 1.0 | 31440 | 0.9328 | 0.5686 | | 0.8099 | 2.0 | 62880 | 0.9523 | 0.5752 | | 0.7371 | 3.0 | 94320 | 1.0072 | 0.5737 | | 0.6756 | 4.0 | 125760 | 1.0606 | 0.5750 | | 0.6229 | 5.0 | 157200 | 1.1116 | 0.5739 | | 0.5784 | 6.0 | 188640 | 1.1396 | 0.5795 |
e8d877365e46a7b4c1f6f67ab4e71feb
apache-2.0
['summarization', 'generated_from_trainer']
false
bart-base-finetuned-summarization-cnn-ver1.3 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 2.3148 - Bertscore-mean-precision: 0.8890 - Bertscore-mean-recall: 0.8603 - Bertscore-mean-f1: 0.8742 - Bertscore-median-precision: 0.8874 - Bertscore-median-recall: 0.8597 - Bertscore-median-f1: 0.8726
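A BART summarizer like this drops into the `summarization` pipeline; a sketch with a placeholder hub id:

```python
from transformers import pipeline

# "<user>/bart-base-finetuned-summarization-cnn-ver1.3" is a hypothetical hub id.
summarizer = pipeline("summarization", model="<user>/bart-base-finetuned-summarization-cnn-ver1.3")
article = "..."  # a CNN/DailyMail-style news article
print(summarizer(article, max_length=128, min_length=30)[0]["summary_text"])
```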
80256125461e32903cad6a2c3b595e00
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
6557dc06c5370266f0a993335290ee41
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 | |:-------------:|:-----:|:-----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:| | 2.3735 | 1.0 | 5742 | 2.2581 | 0.8831 | 0.8586 | 0.8705 | 0.8834 | 0.8573 | 0.8704 | | 1.744 | 2.0 | 11484 | 2.2479 | 0.8920 | 0.8620 | 0.8765 | 0.8908 | 0.8603 | 0.8752 | | 1.3643 | 3.0 | 17226 | 2.3148 | 0.8890 | 0.8603 | 0.8742 | 0.8874 | 0.8597 | 0.8726 |
c4de6a38e50fe6f20e43630b60cc6f82
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4923 - F1: 0.7205
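PAN-X fine-tunes are token classifiers; `aggregation_strategy="simple"` merges word pieces into entity spans. A sketch with a placeholder hub id:

```python
from transformers import pipeline

# "<user>/xlm-roberta-base-finetuned-panx-en" is a hypothetical hub id.
ner = pipeline(
    "token-classification",
    model="<user>/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # group sub-tokens into whole entities
)
print(ner("Jeff Dean works at Google in California."))
```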
7016ab1e7e95819c5d7c9522b8cbd4b6
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.9902 | 1.0 | 148 | 0.6183 | 0.5830 | | 0.4903 | 2.0 | 296 | 0.5232 | 0.6675 | | 0.3272 | 3.0 | 444 | 0.4923 | 0.7205 |
01bf78b5b6f763912b9f8cb2313f195e
apache-2.0
['generated_from_trainer']
false
distilgpt2-sd-prompts This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on [Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts). It achieves the following results on the evaluation set: - Loss: 0.9450
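Since the model is a causal LM trained on prompt text, sampling a few continuations shows what it learned; a sketch with a placeholder hub id:

```python
from transformers import pipeline

# "<user>/distilgpt2-sd-prompts" is a hypothetical hub id.
generator = pipeline("text-generation", model="<user>/distilgpt2-sd-prompts")
outputs = generator(
    "a portrait of a wizard",
    max_new_tokens=60,
    num_return_sequences=2,
    do_sample=True,
)
for out in outputs:
    print(out["generated_text"])
```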
a0e2718d6d678d04b2bc3b7f633e36e6
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - num_epochs: 8 - mixed_precision_training: Native AMP
69f690fdd0f95ac0ae7925f6ab1a197b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5122 | 1.93 | 500 | 1.5211 | | 1.2912 | 3.86 | 1000 | 1.1045 | | 0.9313 | 5.79 | 1500 | 0.9704 | | 0.7744 | 7.72 | 2000 | 0.9450 |
bd0c25dd495968f66463b60062f40b83
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1900k']
false
MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1900k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1, captured at step 1900k.
124a96879535336a7a618bd8d6c1cd98
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1900k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1900k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1900k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
af787d5997fb129a19fd93cd453af539
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1584 - F1: 0.8537
d793f33d6e898e58d9ad374986b83fdf
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 358 | 0.1776 | 0.8263 | | 0.2394 | 2.0 | 716 | 0.1599 | 0.8447 | | 0.2394 | 3.0 | 1074 | 0.1584 | 0.8537 |
0eeb97093de02edace40d442e071f384
apache-2.0
['generated_from_trainer']
false
whisper-small-hi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4281 - Wer: 31.9521
d8fd9aea2bb4a5db8e0c2340ea6e128b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0879 | 2.44 | 1000 | 0.2908 | 33.7933 | | 0.0216 | 4.89 | 2000 | 0.3440 | 33.0229 | | 0.0014 | 7.33 | 3000 | 0.4063 | 32.2611 | | 0.0005 | 9.78 | 4000 | 0.4281 | 31.9521 |
b45ee4826f5882056f7e7b158a2b173f
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_cola_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6807 - Matthews Correlation: 0.0
ec4cd7dc96fcbfd2394e982c94c12f9f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.8228 | 1.0 | 67 | 0.6863 | 0.0 | | 0.7969 | 2.0 | 134 | 0.6870 | 0.0 | | 0.7965 | 3.0 | 201 | 0.6834 | 0.0 | | 0.795 | 4.0 | 268 | 0.6835 | 0.0 | | 0.7939 | 5.0 | 335 | 0.6807 | 0.0 | | 0.7451 | 6.0 | 402 | 0.6986 | 0.0672 | | 0.6395 | 7.0 | 469 | 0.7051 | 0.0875 | | 0.6042 | 8.0 | 536 | 0.7293 | 0.1094 | | 0.5756 | 9.0 | 603 | 0.7376 | 0.1173 | | 0.5558 | 10.0 | 670 | 0.7879 | 0.1123 |
a929320b36fb5e9b03fc61183311aa75
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Danish - Robust This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 da dataset. It achieves the following results on the evaluation set: - Loss: 0.7926 - Wer: 32.3251
3d8ca66cca501fbf79d8b415c37f26e4
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP
3e67488f4fa87cf963f1d387b9789947
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0232 | 15.15 | 1000 | 0.7538 | 35.5813 | | 0.0061 | 30.3 | 2000 | 0.7933 | 34.3766 | | 0.0016 | 45.45 | 3000 | 0.7993 | 33.5823 | | 0.0003 | 60.61 | 4000 | 0.7986 | 31.6097 | | 0.0002 | 75.76 | 5000 | 0.7901 | 32.1357 |
55e518a4d8b4f9fd99e265d1922ae971
apache-2.0
['generated_from_trainer']
false
mobilebert_add_GLUE_Experiment_logit_kd_wnli_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3452 - Accuracy: 0.5634
5b65abb607d43c1ed0599c50822bec81
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3473 | 1.0 | 5 | 0.3452 | 0.5634 | | 0.3469 | 2.0 | 10 | 0.3464 | 0.5634 | | 0.3467 | 3.0 | 15 | 0.3465 | 0.5634 | | 0.3465 | 4.0 | 20 | 0.3456 | 0.5634 | | 0.3466 | 5.0 | 25 | 0.3453 | 0.5634 | | 0.3466 | 6.0 | 30 | 0.3455 | 0.5634 |
816f47b00f6d76213d05a408747ea56d
mit
['pytorch', 'diffusers', 'unconditional-audio-generation', 'diffusion-models-class']
false
Model Card for Unit 4 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional audio generation of music in the Rock genre.
e2c53179bd57638d42a8141233ae4a34
mit
['pytorch', 'diffusers', 'unconditional-audio-generation', 'diffusion-models-class']
false
Usage ```python from IPython.display import Audio from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained("StatsGary/audio-diffusion-electro-rock") output = pipe() display(output.images[0]) display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) ```
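To keep the generated clip, write it out with `soundfile`; a minimal sketch assuming `output.audios[0]` is a (channels, samples) array as in the snippet above:

```python
import soundfile as sf

# Transpose to (samples, channels), which soundfile expects, and save as WAV.
sf.write("generated_rock.wav", output.audios[0].T, pipe.mel.get_sample_rate())
```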
7a9882cb2ce945771e3e25f3f6a5d078
cc-by-4.0
['generated_from_trainer']
false
roberta-base-squad2-coffee20230108 This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.2379
1e2302b93f4b65d82156a9aa1737cfe2
cc-by-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 90 | 1.6912 | | 1.8817 | 2.0 | 180 | 1.7054 | | 1.3233 | 3.0 | 270 | 1.6376 | | 0.9894 | 4.0 | 360 | 2.1005 | | 0.7526 | 5.0 | 450 | 2.7104 | | 0.6553 | 6.0 | 540 | 2.2928 | | 0.5512 | 7.0 | 630 | 2.6380 | | 0.4148 | 8.0 | 720 | 2.8010 | | 0.2964 | 9.0 | 810 | 3.1167 | | 0.2538 | 10.0 | 900 | 3.5313 | | 0.2538 | 11.0 | 990 | 3.6620 | | 0.1918 | 12.0 | 1080 | 4.1138 | | 0.1363 | 13.0 | 1170 | 4.0901 | | 0.1606 | 14.0 | 1260 | 4.2286 | | 0.1162 | 15.0 | 1350 | 4.2379 |
c387a3ffafc58b4cc6dff9b3de7110b2
mit
['generated_from_trainer']
false
xlm-roberta-base-misogyny-sexism-indomain-mix-trans This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8397 - Accuracy: 0.797 - F1: 0.7691 - Precision: 0.8918 - Recall: 0.676 - Mae: 0.203 - Tn: 459 - Fp: 41 - Fn: 162 - Tp: 338
012fbc639903f1f5bf9bddd3c50591a5
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----:|:---:|:--:|:---:|:---:| | 0.2914 | 1.0 | 2711 | 0.5846 | 0.794 | 0.7726 | 0.8621 | 0.7 | 0.206 | 444 | 56 | 150 | 350 | | 0.2836 | 2.0 | 5422 | 0.6752 | 0.785 | 0.7491 | 0.8992 | 0.642 | 0.215 | 464 | 36 | 179 | 321 | | 0.2516 | 3.0 | 8133 | 0.7715 | 0.769 | 0.7214 | 0.9088 | 0.598 | 0.231 | 470 | 30 | 201 | 299 | | 0.2047 | 4.0 | 10844 | 0.8397 | 0.797 | 0.7691 | 0.8918 | 0.676 | 0.203 | 459 | 41 | 162 | 338 |
98759252c4336a99c0fe5857ddf65e04
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-singlish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the li_singlish dataset. It achieves the following results on the evaluation set: - Loss: 0.7199 - Wer: 0.3337
d2f0d18423adb57da5210057ea30b095
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.2984 | 4.76 | 400 | 2.9046 | 1.0 | | 1.1895 | 9.52 | 800 | 0.7725 | 0.4535 | | 0.1331 | 14.28 | 1200 | 0.7068 | 0.3847 | | 0.0701 | 19.05 | 1600 | 0.7547 | 0.3617 | | 0.0509 | 23.8 | 2000 | 0.7123 | 0.3444 | | 0.0385 | 28.57 | 2400 | 0.7199 | 0.3337 |
494b5c8e5bf319effc1975ca5d1c779a
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape', 'classical-art']
false
DreamBooth model for the painting of mixed style between Claude-Monet and Hokusai This is a Stable Diffusion model fine-tuned to generate paintings in a mixed style between Claude-Monet and Hokusai, taught to Stable Diffusion with DreamBooth. It can be used by modifying the `instance_prompt`: **a painting in $M
3a8cc7e60e2ca6d549c7ba0dd6dd02a5