license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
[]
false
Citation Info

```text
@article{sun2021ernie,
  title={Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation},
  author={Sun, Yu and Wang, Shuohuan and Feng, Shikun and Ding, Siyu and Pang, Chao and Shang, Junyuan and Liu, Jiaxiang and Chen, Xuyi and Zhao, Yanbin and Lu, Yuxiang and others},
  journal={arXiv preprint arXiv:2107.02137},
  year={2021}
}

@article{su2021ernie,
  title={Ernie-tiny: A progressive distillation framework for pretrained transformer compression},
  author={Su, Weiyue and Chen, Xuyi and Feng, Shikun and Liu, Jiaxiang and Liu, Weixin and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
  journal={arXiv preprint arXiv:2106.02241},
  year={2021}
}

@article{wang2021ernie,
  title={Ernie 3.0 titan: Exploring larger-scale knowledge enhanced pre-training for language understanding and generation},
  author={Wang, Shuohuan and Sun, Yu and Xiang, Yang and Wu, Zhihua and Ding, Siyu and Gong, Weibao and Feng, Shikun and Shang, Junyuan and Zhao, Yanbin and Pang, Chao and others},
  journal={arXiv preprint arXiv:2112.12731},
  year={2021}
}
```
9f21c373aa7ad853aa18478e84847e74
apache-2.0
[]
false
Intended Use

* Intended to be used for a wide range of use cases such as supporting human moderation and extracting polarity of review comments.
* Not intended for fully automated moderation.
* Not intended to make judgments about specific individuals.
1596c4fc15fa8796219a1cf40d678d6d
mit
['generated_from_trainer']
false
bart-large-cnn-finetuned-roundup-2-4

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0908
- Rouge1: 51.9961
- Rouge2: 32.3963
- Rougel: 32.1774
- Rougelsum: 50.1033
- Gen Len: 141.0
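The metrics above come from evaluation; at inference time a checkpoint like this is typically driven through the `summarization` pipeline. A minimal sketch, with two assumptions flagged: `MODEL_ID` is a placeholder for wherever this checkpoint is hosted, and `chunk_text` is a hypothetical helper (long round-ups exceed BART's input window, so some chunking is needed).

```python
# Sketch: summarizing a round-up with this fine-tuned BART checkpoint.
# MODEL_ID is a placeholder; replace it with this model's actual repository id.
MODEL_ID = "your-username/bart-large-cnn-finetuned-roundup-2-4"


def chunk_text(text: str, max_chars: int = 3500) -> list[str]:
    """Naive paragraph-based chunking so long inputs fit the encoder window."""
    paragraphs, chunks, current = text.split("\n\n"), [], ""
    for p in paragraphs:
        if len(current) + len(p) > max_chars and current:
            chunks.append(current.strip())
            current = ""
        current += p + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks


def summarize_roundup(path: str) -> None:
    """Downloads the checkpoint and prints one summary per chunk (network required)."""
    from transformers import pipeline

    summarizer = pipeline("summarization", model=MODEL_ID)
    for chunk in chunk_text(open(path).read()):
        print(summarizer(chunk, max_length=142, min_length=56)[0]["summary_text"])

# summarize_roundup("roundup.txt") would print one summary per chunk.
```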
035065a38fd67f34fbb68635d63b8f44
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 167 | 1.2152 | 52.234 | 33.1104 | 33.308 | 49.5516 | 142.0 |
| No log | 2.0 | 334 | 1.1054 | 52.7096 | 33.4698 | 33.9595 | 49.8736 | 140.3333 |
| 1.0437 | 3.0 | 501 | 1.0796 | 51.699 | 32.4255 | 34.0294 | 49.5276 | 141.7143 |
| 1.0437 | 4.0 | 668 | 1.0908 | 51.9961 | 32.3963 | 32.1774 | 50.1033 | 141.0 |
b467808452524624db6ffd7ecc2cf7c8
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Estonian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
406738915a4549a1bcc71452d3c055f0
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Requirement packages

```bash
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
```

**Prediction**

```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import numpy as np
import re
import string
import IPython.display as ipd

chars_to_ignore = [
    ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "
1499fd2bff0149d3eafd90ec45e0479f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device) dataset = load_dataset("common_voice", "et", split="test[:1%]") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": 
chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) max_items = np.random.randint(0, len(result), 10).tolist() for i in max_items: reference, predicted = result["sentence"][i], result["predicted"][i] print("reference:", reference) print("predicted:", predicted) print('---') ``` **Output:** ```text reference: õhulossid lagunevad ning ees ootab maapind predicted: õhulassid lagunevad ning ees ootab maapind --- reference: milliseks kiievisse pääsemise nimel võistlev muusik soome muusikamaastiku hetkeseisu hindab ning kas ta ka ennast sellel tulevikus tegutsemas näeb kuuled videost predicted: milliseks gievisse pääsemise nimel võitlev muusiks soome muusikama aastiku hetke seisu hindab ning kas ta ennast selle tulevikus tegutsemast näeb kuulad videost --- reference: näiteks kui pool seina on tehtud tekib tunne et tahaks tegelikult natuke teistsugust ja hakkame otsast peale predicted: näiteks kui pool seine on tehtud tekib tunnetahaks tegelikult matuka teistsugust jahappanna otsast peane --- reference: neuroesteetilised katsed näitavad et just nägude vaatlemine aktiveerib inimese aju esteetilist keskust predicted: neuroaisteetiliselt katsed näitaval et just nägude vaatlemine aptiveerid inimese aju est eedilist keskust --- reference: paljud inimesed kindlasti kadestavad teid kuid ei julge samamoodi vabalt võtta predicted: paljud inimesed kindlasti kadestavadteid kuid ei julge sama moodi vabalt võtta --- reference: parem on otsida pileteid inkognito veebi kaudu predicted: parem on otsida pileteid ning kognitu veebikaudu --- reference: ja vot siin ma jäin vaikseks predicted: ja vat siisma ja invaikseks --- reference: mida sa iseendale juubeli puhul soovid predicted: mida saise endale jubeli puhul soovid --- reference: kuumuse ja kõrge temperatuuri tõttu kuivas tühjadel karjamaadel rohi mis muutus kergesti süttivaks predicted: kuumuse ja kõrge temperatuuri 
tõttu kuivast ühjadal karjamaadel rohi mis muutus kergesti süttivaks --- reference: ilmselt on inimesi kelle jaoks on see hea lahendus predicted: ilmselt on inimesi kelle jaoks on see hea lahendus --- ```
13827239f53ead587b6c9649a1620b83
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Estonian test data of Common Voice.

```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import numpy as np
import re
import string

chars_to_ignore = [
    ",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "
13d5c8a7a37591c965dcbb3913f2a07c
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
", "!", "?", "«", "»", "(", ")", "؛", ",", "?", ".", "!", "-", ";", ":", '"', "“", "%", "‘", "�", "–", "…", "_", "”", '“', '„' ] chars_to_mapping = { "\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ", } def multiple_replace(text, chars_to_mapping): pattern = "|".join(map(re.escape, chars_to_mapping.keys())) return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text)) def remove_special_characters(text, chars_to_ignore_regex): text = re.sub(chars_to_ignore_regex, '', text).lower() + " " return text def normalizer(batch, chars_to_ignore, chars_to_mapping): chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]""" text = batch["sentence"].lower().strip() text = text.replace("\u0307", " ").strip() text = multiple_replace(text, chars_to_mapping) text = remove_special_characters(text, chars_to_ignore_regex) batch["sentence"] = text return batch def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array = speech_array.squeeze().numpy() speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000) batch["speech"] = speech_array return batch def predict(batch): features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids)[0] return batch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian") model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-estonian").to(device) dataset = load_dataset("common_voice", "et", split="test") dataset = dataset.map( normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": 
chars_to_mapping}, remove_columns=list(set(dataset.column_names) - set(['sentence', 'path'])) ) dataset = dataset.map(speech_file_to_array_fn) result = dataset.map(predict) wer = load_metric("wer") print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"]))) ``` **Test Result**: - WER: 33.93%
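The card computes WER with `datasets.load_metric("wer")`; the metric itself is word-level edit distance normalized by the reference length. A tiny pure-Python illustration of that computation, for intuition only (not the library implementation):

```python
# Minimal word-level WER via classic dynamic-programming edit distance.
# The card uses datasets.load_metric("wer"), which computes the same quantity at scale.
def wer(reference: str, prediction: str) -> float:
    ref, hyp = reference.split(), prediction.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("parem on otsida pileteid", "parem on otsida pileteid"))  # 0.0
```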
b265d60be3f095ba7420ccb9bc7f23b2
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training & Report

The Common Voice `train` and `validation` splits were used for training. You can see the training stats [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_estonian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Estonian--Vmlldzo1NjA1MTI?accessToken=k2b2g3a2i12m1sdwf13q8b226pplmmyw12joxo6vk38eb4djellfzmn9fp2725fw). The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Estonian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb).
f37800fa04227b72e2bee258eab38bc8
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 1360794382
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
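These values map directly onto `transformers.TrainingArguments`. A sketch of how they would be passed back to the `Trainer` API (the argument names are the standard `TrainingArguments` ones; the exact invocation used originally is not recorded in this card):

```python
# The hyperparameters reported above, keyed by their TrainingArguments names.
# Values are copied verbatim from the card.
HPARAMS = {
    "learning_rate": 0.0001372,
    "per_device_train_batch_size": 1,
    "per_device_eval_batch_size": 8,
    "seed": 1360794382,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 1.0,
}


def build_args(output_dir: str = "out"):
    """Constructs TrainingArguments from the card's reported hyperparameters."""
    from transformers import TrainingArguments

    return TrainingArguments(output_dir=output_dir, **HPARAMS)

# trainer = Trainer(model=model, args=build_args(), ...) would resume this setup.
```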
da533d37b3c7c9ff6dc91ee4740d6a28
cc-by-4.0
[]
false
Readability benchmark (ES): mbert-es-paragraphs-3class This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish". You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
ade5ef70f86a3efd0f639541df066863
cc-by-4.0
[]
false
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| **[mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class)** | **paragraphs** | **3** |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |

For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
3749159e98dd0d39b47dbaa199179201
apache-2.0
['doe2vec', 'exploratory-landscape-analysis', 'autoencoders']
false
Model description

DoE2Vec is a model that transforms any design of experiments (function landscape) into a feature vector. Different input dimensions or sample sizes require different models. Each model name is built up as doe2vec-d{dimension}-m{sample size}-ls{latent size}-{AE or VAE}-kl{KL loss weight}.

Example code for loading this Hugging Face model using the doe2vec package. First install the package:

```zsh
pip install doe2vec
```

Then import and load the model:

```python
from doe2vec import doe_model

obj = doe_model(
    5,
    8,
    latent_dim=24,
    kl_weight=0.001,
    model_type="VAE"
)
obj.load_from_huggingface()
```
20e622805a5a97c64cd630c708c20f7e
apache-2.0
['doe2vec', 'exploratory-landscape-analysis', 'autoencoders']
false
Intended uses & limitations The model is intended to be used to generate feature representations for optimization function landscapes. The representations can then be used for downstream tasks such as automatic optimization pipelines and meta-learning.
1cfea6992d9ab076c3349d09030d67bb
apache-2.0
['doe2vec', 'exploratory-landscape-analysis', 'autoencoders']
false
Training procedure

The model is trained using a weighted KL loss and a mean squared error reconstruction loss, on 250,000 randomly generated functions (see the dataset) over 100 epochs.
- **Hardware:** 1x Tesla T4 GPU
- **Optimizer:** Adam
d03e8161b9ad976f262e5370e03c9aa4
mit
['zero-shot-classification', 'nli', 'pytorch']
false
XLM-RoBERTa-large-XNLI-ANLI

XLM-RoBERTa-large model fine-tuned on several NLI datasets, ready to use for zero-shot classification. Here are the accuracies for several test datasets:

| | XNLI-es | XNLI-fr | ANLI-R1 | ANLI-R2 | ANLI-R3 |
|-----------------------------|---------|---------|---------|---------|---------|
| xlm-roberta-large-xnli-anli | 93.7% | 93.2% | 68.5% | 53.6% | 49.0% |

The model can be loaded with the zero-shot-classification pipeline like so:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="vicgalle/xlm-roberta-large-xnli-anli")
```

You can then use this pipeline to classify sequences into any of the class names you specify:

```python
sequence_to_classify = "Algún día iré a ver el mundo"  # "Some day I will go to see the world"
candidate_labels = ['viaje', 'cocina', 'danza']  # 'travel', 'cooking', 'dance'
classifier(sequence_to_classify, candidate_labels)
```
851578d8b50d137857a232d4ef803674
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_no-pretraining_s208

Randomly initialized wav2vec2 model fine-tuned for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
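Since the checkpoint was fine-tuned with HuggingSound, it can also be run through that library's `SpeechRecognitionModel` wrapper. A sketch, with the caveat that `MODEL_ID` is inferred from the card's title and should be verified against the actual repository:

```python
# Sketch: transcription via the HuggingSound wrapper mentioned in the card.
# MODEL_ID is inferred from the card title; verify it before use.
MODEL_ID = "jonatasgrosman/exp_w2v2t_fr_no-pretraining_s208"


def pick_wav_files(paths: list[str]) -> list[str]:
    """Keep only .wav inputs; transcribe() expects a list of audio file paths."""
    return [p for p in paths if p.lower().endswith(".wav")]


def transcribe_files(paths: list[str]) -> None:
    """Downloads the checkpoint and prints one transcription per file."""
    from huggingsound import SpeechRecognitionModel

    model = SpeechRecognitionModel(MODEL_ID)
    for t in model.transcribe(pick_wav_files(paths)):
        print(t["transcription"])

# transcribe_files(["sample1.wav", "sample2.wav"]) would print the transcripts.
```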
932e50c4b08860cdc4dab4bae1e421d0
mit
[]
false
gen.py

```python
import contextlib
import sys

import torch
from transformers import GPTNeoForCausalLM, AutoTokenizer

model_name = sys.argv[1]
model = GPTNeoForCausalLM.from_pretrained(model_name).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate(model, text, temperature=0.9, min_length=256, max_length=256,
             no_grad=True, use_cache=False, do_sample=True, match_mesh_tf=False, **kwargs):
    ids = tokenizer(text, return_tensors="pt").input_ids.to("cuda")
    # Generation needs no gradients; skip the no_grad context only when asked to.
    ctx = torch.no_grad() if no_grad else contextlib.nullcontext()
    with ctx:
        gen_tokens = model.generate(
            ids,
            do_sample=do_sample,
            min_length=min_length,
            max_length=max_length,
            temperature=temperature,
            use_cache=use_cache,
            **kwargs
        )
    gen_text = tokenizer.batch_decode(gen_tokens)[0]
    print(gen_text)
```

```
python -i gen.py spamtontalk_gpt_neo_xl_v9
>>> text = """Talk (anything): Example dialogue"""
>>> generate(model, text, temperature=0.92)
```
795a26b6f43fc46d4e5027176af7b774
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0679
- Precision: 0.9364
- Recall: 0.9488
- F1: 0.9426
- Accuracy: 0.9855
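At inference time a CoNLL-2003 checkpoint like this is typically used through the `token-classification` pipeline. A sketch; `MODEL_ID` is a placeholder for wherever this checkpoint is hosted, and `group_by_entity` is a hypothetical post-processing helper:

```python
# Sketch: tagging entities with the fine-tuned CoNLL-2003 checkpoint.
# MODEL_ID is a placeholder; replace it with this model's actual repository id.
MODEL_ID = "your-username/bert-finetuned-ner"


def group_by_entity(predictions: list[dict]) -> dict[str, list[str]]:
    """Collect predicted words per entity group (PER, ORG, LOC, MISC)."""
    grouped: dict[str, list[str]] = {}
    for p in predictions:
        grouped.setdefault(p["entity_group"], []).append(p["word"])
    return grouped


def tag(text: str) -> dict[str, list[str]]:
    """Downloads the checkpoint and returns entities grouped by type."""
    from transformers import pipeline

    ner = pipeline("token-classification", model=MODEL_ID, aggregation_strategy="simple")
    return group_by_entity(ner(text))

# tag("Sylvain works at Hugging Face in Brooklyn.") would group PER/ORG/LOC spans.
```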
0557474048100e2e31b7b9716c287184
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0884 | 1.0 | 1756 | 0.0662 | 0.9083 | 0.9317 | 0.9198 | 0.9824 |
| 0.04 | 2.0 | 3512 | 0.0613 | 0.9341 | 0.9493 | 0.9417 | 0.9856 |
| 0.0187 | 3.0 | 5268 | 0.0679 | 0.9364 | 0.9488 | 0.9426 | 0.9855 |
c44859d1d4e545f971eb499783ae2666
apache-2.0
['generated_from_trainer']
false
beit-base-patch16-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0067
- Accuracy: 1.0
d9fdfd69093cf5d9aa631b11e4cb1ccf
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4792 | 0.95 | 15 | 0.0402 | 0.985 |
| 0.0481 | 1.95 | 30 | 0.0067 | 1.0 |
| 0.0561 | 2.95 | 45 | 0.0086 | 0.995 |
e48bd767f2eb126daf097f4556c3e2de
apache-2.0
['translation']
false
opus-mt-fr-mfe

* source languages: fr
* target languages: mfe
* OPUS readme: [fr-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.eval.txt)
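OPUS-MT checkpoints like this one load through the Marian classes in `transformers`. A sketch, assuming the usual `Helsinki-NLP/opus-mt-fr-mfe` Hub naming for this converted checkpoint:

```python
# Sketch: fr -> mfe translation with the Marian classes from transformers.
# The repository id follows the usual Helsinki-NLP naming; verify before use.
MODEL_ID = "Helsinki-NLP/opus-mt-fr-mfe"


def translate(sentences: list[str]) -> list[str]:
    """Downloads the checkpoint and translates a batch of French sentences."""
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(MODEL_ID)
    model = MarianMTModel.from_pretrained(MODEL_ID)
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

# translate(["Bonjour, comment allez-vous ?"]) would return the Morisien output.
```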
f9b7c172699de75f64dd32473eefa750
mit
[]
false
green-tent on Stable Diffusion

This is the `<green-tent>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<green-tent> 0](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/1.jpeg)
![<green-tent> 1](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/5.jpeg)
![<green-tent> 2](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/0.jpeg)
![<green-tent> 3](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/4.jpeg)
![<green-tent> 4](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/2.jpeg)
![<green-tent> 5](https://huggingface.co/sd-concepts-library/green-tent/resolve/main/concept_images/3.jpeg)
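Outside the notebooks, recent `diffusers` versions can load the embedding directly. A sketch, assuming a `diffusers` release that provides `load_textual_inversion` and using `runwayml/stable-diffusion-v1-5` as the base model:

```python
# Sketch: loading the <green-tent> embedding with diffusers' textual-inversion loader.
# Assumes a diffusers version that provides StableDiffusionPipeline.load_textual_inversion.
CONCEPT_REPO = "sd-concepts-library/green-tent"
PROMPT = "a photo of a <green-tent> by a mountain lake"


def render(out_path: str = "green_tent.png") -> None:
    """Downloads the base model plus the concept and renders one image."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_textual_inversion(CONCEPT_REPO)  # registers the <green-tent> token
    pipe(PROMPT).images[0].save(out_path)

# render() would save one sampled image using the learned <green-tent> token.
```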
38ce1f96fd7bcf64e9133a63162569b8
apache-2.0
['translation']
false
opus-mt-de-fi

* source languages: de
* target languages: fi
* OPUS readme: [de-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-fi/opus-2020-01-08.eval.txt)
678c3b446dfe34b569e19e3c16247a5b
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8658
9f43f64cb144e2112601346215917de3
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.254 | 1.0 | 525 | 0.1647 | 0.8200 |
| 0.1285 | 2.0 | 1050 | 0.1454 | 0.8443 |
| 0.0808 | 3.0 | 1575 | 0.1348 | 0.8658 |
cbe034e7eee0beb7b280a7876990ad97
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1664
- F1: 0.8556
612066482228b710bdf5520cf2382629
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2846 | 1.0 | 715 | 0.1837 | 0.8247 |
| 0.1446 | 2.0 | 1430 | 0.1617 | 0.8409 |
| 0.0923 | 3.0 | 2145 | 0.1664 | 0.8556 |
713d93aa6a9bd592cb40003855ed9299
apache-2.0
[]
false
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to our paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). This repository contains code to pre-train ELECTRA, including small ELECTRA models on a single GPU. It also supports fine-tuning ELECTRA on downstream tasks including classification tasks (e.g., [GLUE](https://gluebenchmark.com/)), QA tasks (e.g., [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)), and sequence tagging tasks (e.g., [text chunking](https://www.clips.uantwerpen.be/conll2000/chunking/)).
063671491f06e307b4ed8b7748dfe217
apache-2.0
[]
false
How to use the discriminator in `transformers`

```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")

sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
# Map raw logits to 0/1 predictions: 1 means the token is flagged as replaced.
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()]
```
96c620ddab2eb78583e0800b8ad64469
creativeml-openrail-m
['text-to-image']
false
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/MultiversexPeeps/JemandtheHolograms)
17e7d00af4dee032b4f32ebfbd314544
creativeml-openrail-m
['text-to-image']
false
Jem and the Holograms Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew

If you want to support the EARTH & DUSK media projects monthly, and not just AI: https://www.patreon.com/earthndusk

duskgem (use that in your prompt)
8c24013a84b03030c460e0cdbba506d9
apache-2.0
['tapas', 'sequence-classification']
false
TAPAS base model fine-tuned on Tabular Fact Checking (TabFact) This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is the one with absolute position embeddings: - `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_base` Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors.
cd173e7c35a70ad4b103220c524a764e
apache-2.0
['tapas', 'sequence-classification']
false
Model description

TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.

This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then jointly training this randomly initialized classification head with the base model on TabFact.
5096e630d29f093c957d9dc15885c4f5
apache-2.0
['tapas', 'sequence-classification']
false
Intended uses & limitations You can use this model for classifying whether a sentence is supported or refuted by the contents of a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
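As a complement to the documentation pointer, here is a hedged sketch of table fact verification with this checkpoint. Two assumptions: the repository id `google/tapas-base-finetuned-tabfact` is inferred from the checkpoint naming, and TAPAS additionally requires the `torch-scatter` package to be installed.

```python
# Sketch: checking whether a sentence is supported by a table with TAPAS.
# Repository id inferred from the checkpoint naming; TAPAS also needs torch-scatter.
MODEL_ID = "google/tapas-base-finetuned-tapfact".replace("tapfact", "tabfact")


def verify(table_dict: dict, sentence: str) -> str:
    """Downloads the checkpoint and returns the predicted label for the claim."""
    import pandas as pd
    import torch
    from transformers import TapasForSequenceClassification, TapasTokenizer

    tokenizer = TapasTokenizer.from_pretrained(MODEL_ID)
    model = TapasForSequenceClassification.from_pretrained(MODEL_ID)

    # TapasTokenizer expects a DataFrame of strings.
    table = pd.DataFrame(table_dict)
    inputs = tokenizer(table=table, queries=[sentence], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Check model.config.id2label to confirm which index means "supported".
    return model.config.id2label[int(logits.argmax(-1))]

# verify({"City": ["Paris", "Berlin"], "Population": ["2.1M", "3.6M"]},
#        "Berlin has a larger population than Paris.") would return the label.
```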
582e0f4b4bf23421f9b6b37129389cc2
apache-2.0
['tapas', 'sequence-classification']
false
Fine-tuning The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
0f86907008bb4842c705f22c3b91d57c
apache-2.0
['tapas', 'sequence-classification']
false
BibTeX entry and citation info

```bibtex
@misc{herzig2020tapas,
  title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
  author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
  year={2020},
  eprint={2004.02349},
  archivePrefix={arXiv},
  primaryClass={cs.IR}
}
```

```bibtex
@misc{eisenschlos2020understanding,
  title={Understanding tables with intermediate pre-training},
  author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
  year={2020},
  eprint={2010.00571},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```bibtex
@inproceedings{2019TabFactA,
  title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
  author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
  booktitle={International Conference on Learning Representations (ICLR)},
  address={Addis Ababa, Ethiopia},
  month={April},
  year={2020}
}
```
fe63eecb1c4786c6360ad376691ebb65
mit
['generated_from_trainer']
false
practical_panini This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
9123cfe358228ddb8fbc3efdb940bb4e
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': 
{'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 4096}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'gpt3_kwargs': {'model_name': 'davinci'}, 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'practical_panini', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
a5f06ac4aeafe29d38166d4fba363205
apache-2.0
['generated_from_trainer']
false
wav2vec2-libri-train360-colab This model is a fine-tuned version of [GW12/wav2vec2-libri-train100-colab](https://huggingface.co/GW12/wav2vec2-libri-train100-colab) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.1101 - Wer: 0.1002
3aeac6e031047c2b355716c4e41aa0be
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 - mixed_precision_training: Native AMP
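The `linear` scheduler with 1,000 warmup steps ramps the learning rate from 0 up to the base rate, then decays it linearly to 0 over the remaining steps. A minimal pure-Python sketch of that shape (the run itself used the Hugging Face scheduler; the ~104,000 total optimization steps are an assumption taken from the final row of the training-results table):

```python
def linear_warmup_decay_lr(step: int, base_lr: float = 1e-4,
                           warmup_steps: int = 1000,
                           total_steps: int = 104_000) -> float:
    """LR at a given step: ramp 0 -> base_lr over warmup_steps,
    then decay linearly back to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Peak LR is reached exactly at the end of warmup:
print(linear_warmup_decay_lr(1000))
```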
d2fb3aa871e8db8567339334759c0117
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:------:|:---------------:|:------:| | 3.1196 | 0.02 | 500 | 0.2020 | 0.1494 | | 0.1695 | 0.04 | 1000 | 0.1600 | 0.1462 | | 0.1726 | 0.06 | 1500 | 0.1996 | 0.1457 | | 0.1654 | 0.08 | 2000 | 0.1531 | 0.1448 | | 0.1665 | 0.1 | 2500 | 0.1582 | 0.1491 | | 0.1555 | 0.12 | 3000 | 0.1566 | 0.1478 | | 0.1562 | 0.13 | 3500 | 0.1555 | 0.1501 | | 0.1604 | 0.15 | 4000 | 0.1465 | 0.1422 | | 0.1522 | 0.17 | 4500 | 0.1423 | 0.1452 | | 0.1534 | 0.19 | 5000 | 0.1375 | 0.1431 | | 0.1576 | 0.21 | 5500 | 0.1872 | 0.1421 | | 0.1543 | 0.23 | 6000 | 0.1547 | 0.1381 | | 0.1501 | 0.25 | 6500 | 0.1446 | 0.1381 | | 0.1508 | 0.27 | 7000 | 0.2108 | 0.1507 | | 0.1479 | 0.29 | 7500 | 0.1495 | 0.1364 | | 0.1474 | 0.31 | 8000 | 0.1571 | 0.1406 | | 0.1475 | 0.33 | 8500 | 0.1570 | 0.1390 | | 0.1453 | 0.35 | 9000 | 0.1547 | 0.1377 | | 0.1465 | 0.37 | 9500 | 0.1633 | 0.1336 | | 0.1424 | 0.38 | 10000 | 0.1344 | 0.1358 | | 0.1417 | 0.4 | 10500 | 0.2518 | 0.1515 | | 0.1427 | 0.42 | 11000 | 0.1697 | 0.1409 | | 0.1434 | 0.44 | 11500 | 0.1649 | 0.1373 | | 0.1384 | 0.46 | 12000 | 0.1743 | 0.1403 | | 0.1394 | 0.48 | 12500 | 0.1485 | 0.1407 | | 0.1392 | 0.5 | 13000 | 0.1421 | 0.1352 | | 2.3614 | 0.52 | 13500 | 0.9494 | 0.1673 | | 0.1621 | 0.54 | 14000 | 0.4273 | 0.1539 | | 0.1454 | 0.56 | 14500 | 0.1764 | 0.1399 | | 0.1453 | 0.58 | 15000 | 0.1750 | 0.1414 | | 0.1375 | 0.6 | 15500 | 0.1845 | 0.1410 | | 0.1436 | 0.62 | 16000 | 0.1583 | 0.1413 | | 0.1405 | 0.63 | 16500 | 0.1893 | 0.1413 | | 0.139 | 0.65 | 17000 | 0.2281 | 0.1619 | | 0.1374 | 0.67 | 17500 | 0.1863 | 0.1413 | | 0.1386 | 0.69 | 18000 | 0.2301 | 0.1479 | | 0.1435 | 0.71 | 18500 | 0.2349 | 0.1579 | | 0.1293 | 0.73 | 19000 | 0.1878 | 0.1461 | | 0.1311 | 0.75 | 19500 | 0.2092 | 0.1342 | | 0.1357 | 0.77 | 20000 | 0.1788 | 0.1421 | | 0.1258 | 0.79 | 20500 | 0.1336 | 0.1302 | | 0.1284 | 0.81 | 21000 | 0.1459 | 0.1306 | | 0.1452 | 0.83 | 21500 | 0.1316 | 
0.1319 | | 0.1241 | 0.85 | 22000 | 0.1497 | 0.1285 | | 0.1292 | 0.87 | 22500 | 0.1417 | 0.1318 | | 0.1255 | 0.88 | 23000 | 0.1262 | 0.1305 | | 0.1239 | 0.9 | 23500 | 0.1417 | 0.1302 | | 0.1237 | 0.92 | 24000 | 0.1704 | 0.1309 | | 0.1231 | 0.94 | 24500 | 0.1466 | 0.1308 | | 0.1303 | 0.96 | 25000 | 0.2085 | 0.1392 | | 0.1252 | 0.98 | 25500 | 0.1514 | 0.1441 | | 0.1244 | 1.0 | 26000 | 0.1353 | 0.1282 | | 0.1034 | 1.02 | 26500 | 0.1306 | 0.1279 | | 0.1035 | 1.04 | 27000 | 0.1785 | 0.1288 | | 0.1063 | 1.06 | 27500 | 0.1742 | 0.1311 | | 0.1065 | 1.08 | 28000 | 0.1505 | 0.1269 | | 0.1093 | 1.1 | 28500 | 0.1394 | 0.1264 | | 0.1115 | 1.12 | 29000 | 0.1490 | 0.1325 | | 0.1044 | 1.13 | 29500 | 0.5477 | 0.1736 | | 0.1003 | 1.15 | 30000 | 0.2347 | 0.1351 | | 0.1049 | 1.17 | 30500 | 0.2001 | 0.1347 | | 0.1068 | 1.19 | 31000 | 0.1528 | 0.1255 | | 0.1069 | 1.21 | 31500 | 0.1528 | 0.1266 | | 0.1042 | 1.23 | 32000 | 0.2272 | 0.1318 | | 0.1073 | 1.25 | 32500 | 0.5753 | 0.1869 | | 0.1021 | 1.27 | 33000 | 0.3459 | 0.1477 | | 0.1023 | 1.29 | 33500 | 0.2412 | 0.1362 | | 0.0988 | 1.31 | 34000 | 0.2124 | 0.1319 | | 0.1047 | 1.33 | 34500 | 0.3733 | 0.1497 | | 0.1078 | 1.35 | 35000 | 0.1553 | 0.1281 | | 0.0988 | 1.37 | 35500 | 0.1364 | 0.1239 | | 0.0957 | 1.38 | 36000 | 0.1484 | 0.1278 | | 0.1038 | 1.4 | 36500 | 0.1723 | 0.1253 | | 0.1001 | 1.42 | 37000 | 0.3668 | 0.1648 | | 0.101 | 1.44 | 37500 | 0.2136 | 0.1339 | | 0.1022 | 1.46 | 38000 | 0.1140 | 0.1162 | | 0.0989 | 1.48 | 38500 | 0.1628 | 0.1265 | | 0.0982 | 1.5 | 39000 | 0.2204 | 0.1376 | | 0.1012 | 1.52 | 39500 | 0.1716 | 0.1297 | | 0.1067 | 1.54 | 40000 | 0.1362 | 0.1234 | | 0.1022 | 1.56 | 40500 | 0.1170 | 0.1178 | | 0.1011 | 1.58 | 41000 | 0.1578 | 0.1240 | | 0.0845 | 1.6 | 41500 | 0.1659 | 0.1243 | | 0.0929 | 1.62 | 42000 | 0.1813 | 0.1310 | | 0.0904 | 1.63 | 42500 | 0.1309 | 0.1215 | | 0.0885 | 1.65 | 43000 | 0.1964 | 0.1359 | | 0.0895 | 1.67 | 43500 | 0.1309 | 0.1179 | | 0.0855 | 1.69 | 44000 | 0.1472 | 0.1258 | | 0.0876 | 1.71 | 
44500 | 0.1189 | 0.1190 | | 0.0925 | 1.73 | 45000 | 0.1477 | 0.1209 | | 0.0866 | 1.75 | 45500 | 0.2537 | 0.1428 | | 0.0938 | 1.77 | 46000 | 0.1406 | 0.1240 | | 0.0901 | 1.79 | 46500 | 0.1416 | 0.1201 | | 0.0839 | 1.81 | 47000 | 0.1323 | 0.1201 | | 0.0866 | 1.83 | 47500 | 0.1176 | 0.1149 | | 0.0876 | 1.85 | 48000 | 0.1141 | 0.1139 | | 0.0857 | 1.87 | 48500 | 0.2148 | 0.1297 | | 0.089 | 1.88 | 49000 | 0.1707 | 0.1231 | | 0.0861 | 1.9 | 49500 | 0.1457 | 0.1183 | | 0.0855 | 1.92 | 50000 | 0.4576 | 0.1654 | | 0.0808 | 1.94 | 50500 | 0.2264 | 0.1285 | | 0.0859 | 1.96 | 51000 | 0.1630 | 0.1201 | | 0.0859 | 1.98 | 51500 | 0.1613 | 0.1165 | | 0.086 | 2.0 | 52000 | 0.1529 | 0.1196 | | 0.0769 | 2.02 | 52500 | 0.1258 | 0.1139 | | 0.0783 | 2.04 | 53000 | 0.1105 | 0.1136 | | 0.0775 | 2.06 | 53500 | 0.1177 | 0.1128 | | 0.08 | 2.08 | 54000 | 0.1328 | 0.1156 | | 0.0765 | 2.1 | 54500 | 0.1229 | 0.1137 | | 0.0791 | 2.12 | 55000 | 0.1218 | 0.1121 | | 0.0831 | 2.13 | 55500 | 0.1106 | 0.1135 | | 0.0769 | 2.15 | 56000 | 0.1466 | 0.1166 | | 0.0761 | 2.17 | 56500 | 0.1177 | 0.1126 | | 0.0779 | 2.19 | 57000 | 0.1249 | 0.1120 | | 0.0749 | 2.21 | 57500 | 0.1258 | 0.1130 | | 0.0746 | 2.23 | 58000 | 0.1268 | 0.1122 | | 0.074 | 2.25 | 58500 | 0.1141 | 0.1153 | | 0.0726 | 2.27 | 59000 | 0.1231 | 0.1107 | | 0.0771 | 2.29 | 59500 | 0.1393 | 0.1125 | | 0.0776 | 2.31 | 60000 | 0.1224 | 0.1115 | | 0.0756 | 2.33 | 60500 | 0.1071 | 0.1085 | | 0.0753 | 2.35 | 61000 | 0.1072 | 0.1089 | | 0.0698 | 2.37 | 61500 | 0.1129 | 0.1094 | | 0.0726 | 2.38 | 62000 | 0.1109 | 0.1106 | | 0.0758 | 2.4 | 62500 | 0.1052 | 0.1103 | | 0.0743 | 2.42 | 63000 | 0.1079 | 0.1106 | | 0.0765 | 2.44 | 63500 | 0.1248 | 0.1108 | | 0.0724 | 2.46 | 64000 | 0.1248 | 0.1076 | | 0.0659 | 2.48 | 64500 | 0.1099 | 0.1088 | | 0.0674 | 2.5 | 65000 | 0.1156 | 0.1098 | | 0.0691 | 2.52 | 65500 | 0.1122 | 0.1093 | | 0.0677 | 2.54 | 66000 | 0.1228 | 0.1082 | | 0.0695 | 2.56 | 66500 | 0.1049 | 0.1066 | | 0.0687 | 2.58 | 67000 | 0.1025 | 0.1062 | | 
0.0682 | 2.6 | 67500 | 0.1080 | 0.1064 | | 0.0663 | 2.61 | 68000 | 0.1009 | 0.1058 | | 0.0654 | 2.63 | 68500 | 0.1145 | 0.1071 | | 0.0641 | 2.65 | 69000 | 0.1178 | 0.1082 | | 0.0662 | 2.67 | 69500 | 0.1106 | 0.1084 | | 0.0623 | 2.69 | 70000 | 0.1086 | 0.1057 | | 0.0692 | 2.71 | 70500 | 0.1048 | 0.1071 | | 0.0663 | 2.73 | 71000 | 0.1119 | 0.1069 | | 0.0639 | 2.75 | 71500 | 0.1147 | 0.1062 | | 0.0597 | 2.77 | 72000 | 0.1121 | 0.1072 | | 0.0688 | 2.79 | 72500 | 0.1149 | 0.1060 | | 0.0616 | 2.81 | 73000 | 0.1126 | 0.1069 | | 0.0633 | 2.83 | 73500 | 0.1302 | 0.1074 | | 0.0651 | 2.85 | 74000 | 0.1260 | 0.1066 | | 0.0637 | 2.86 | 74500 | 0.1233 | 0.1075 | | 0.0641 | 2.88 | 75000 | 0.1199 | 0.1066 | | 0.0655 | 2.9 | 75500 | 0.1249 | 0.1075 | | 0.065 | 2.92 | 76000 | 0.1192 | 0.1061 | | 0.0626 | 2.94 | 76500 | 0.1267 | 0.1069 | | 0.0622 | 2.96 | 77000 | 0.1289 | 0.1094 | | 0.0608 | 2.98 | 77500 | 0.1502 | 0.1096 | | 0.0631 | 3.0 | 78000 | 0.1493 | 0.1099 | | 0.0535 | 3.02 | 78500 | 0.1220 | 0.1064 | | 0.0582 | 3.04 | 79000 | 0.1274 | 0.1077 | | 0.052 | 3.06 | 79500 | 0.1296 | 0.1072 | | 0.0562 | 3.08 | 80000 | 0.1160 | 0.1050 | | 0.0533 | 3.1 | 80500 | 0.1066 | 0.1031 | | 0.0564 | 3.11 | 81000 | 0.1300 | 0.1078 | | 0.0589 | 3.13 | 81500 | 0.1167 | 0.1056 | | 0.0582 | 3.15 | 82000 | 0.1129 | 0.1025 | | 0.0594 | 3.17 | 82500 | 0.1255 | 0.1054 | | 0.0559 | 3.19 | 83000 | 0.1258 | 0.1045 | | 0.0535 | 3.21 | 83500 | 0.1150 | 0.1029 | | 0.0538 | 3.23 | 84000 | 0.1043 | 0.1017 | | 0.0537 | 3.25 | 84500 | 0.1073 | 0.1028 | | 0.0534 | 3.27 | 85000 | 0.1011 | 0.1011 | | 0.0527 | 3.29 | 85500 | 0.0987 | 0.1010 | | 0.0549 | 3.31 | 86000 | 0.1008 | 0.1015 | | 0.0516 | 3.33 | 86500 | 0.1031 | 0.1017 | | 0.0549 | 3.35 | 87000 | 0.1103 | 0.1028 | | 0.056 | 3.36 | 87500 | 0.0980 | 0.1008 | | 0.0528 | 3.38 | 88000 | 0.1045 | 0.1020 | | 0.0555 | 3.4 | 88500 | 0.0979 | 0.1005 | | 0.0517 | 3.42 | 89000 | 0.0948 | 0.0992 | | 0.0495 | 3.44 | 89500 | 0.0974 | 0.1002 | | 0.0496 | 3.46 | 90000 | 
0.1035 | 0.1013 | | 0.0497 | 3.48 | 90500 | 0.1167 | 0.1035 | | 0.0485 | 3.5 | 91000 | 0.1098 | 0.1009 | | 0.0465 | 3.52 | 91500 | 0.1168 | 0.1009 | | 0.05 | 3.54 | 92000 | 0.1088 | 0.1005 | | 0.0514 | 3.56 | 92500 | 0.1116 | 0.1000 | | 0.0467 | 3.58 | 93000 | 0.1053 | 0.0998 | | 0.045 | 3.6 | 93500 | 0.1099 | 0.1012 | | 0.0507 | 3.61 | 94000 | 0.1186 | 0.1012 | | 0.0452 | 3.63 | 94500 | 0.1119 | 0.0998 | | 0.0452 | 3.65 | 95000 | 0.1099 | 0.1002 | | 0.0452 | 3.67 | 95500 | 0.1228 | 0.1015 | | 0.0448 | 3.69 | 96000 | 0.1271 | 0.1025 | | 0.0485 | 3.71 | 96500 | 0.1338 | 0.1037 | | 0.048 | 3.73 | 97000 | 0.1288 | 0.1030 | | 0.0476 | 3.75 | 97500 | 0.1183 | 0.1012 | | 0.0457 | 3.77 | 98000 | 0.1171 | 0.1007 | | 0.0492 | 3.79 | 98500 | 0.1142 | 0.1004 | | 0.049 | 3.81 | 99000 | 0.1141 | 0.1006 | | 0.046 | 3.83 | 99500 | 0.1165 | 0.1007 | | 0.0444 | 3.85 | 100000 | 0.1173 | 0.1010 | | 0.0456 | 3.86 | 100500 | 0.1150 | 0.1004 | | 0.0467 | 3.88 | 101000 | 0.1130 | 0.1003 | | 0.0465 | 3.9 | 101500 | 0.1137 | 0.1003 | | 0.0451 | 3.92 | 102000 | 0.1127 | 0.1004 | | 0.0445 | 3.94 | 102500 | 0.1118 | 0.1003 | | 0.0453 | 3.96 | 103000 | 0.1112 | 0.1002 | | 0.0458 | 3.98 | 103500 | 0.1103 | 0.1002 | | 0.0454 | 4.0 | 104000 | 0.1101 | 0.1002 |
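The Wer column is the word error rate: the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words. A minimal pure-Python sketch of the metric (the actual evaluation presumably used a library implementation such as `jiwer`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate = (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # edit-distance DP, rolling rows
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,             # deletion
                         cur[j - 1] + 1,          # insertion
                         prev[j - 1] + (r != h))  # substitution (0 if words match)
        prev = cur
    return prev[len(hyp)] / max(len(ref), 1)

# One substituted word out of six:
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```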
92b346f6735b108ccaa6dd709643f22c
mit
[]
false
Danish Offensive Text Detection based on XLM-Roberta-Base This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a dataset consisting of approximately 5 million Facebook comments on [DR](https://dr.dk/)'s public Facebook pages. The labels have been automatically generated using weak supervision, based on the [Snorkel](https://www.snorkel.org/) framework. The model achieves SOTA on a test set consisting of 600 Facebook comments annotated using majority vote by three annotators, of which 35.8% were labelled as offensive: | **Model** | **Precision** | **Recall** | **F1-score** | **F2-score** | | :-------- | :------------ | :--------- | :----------- | :----------- | | `alexandrainst/da-offensive-detection-base` (this) | 74.81% | **89.77%** | **81.61%** | **86.32%** | | [`alexandrainst/da-offensive-detection-base`](https://huggingface.co/alexandrainst/da-offensive-detection-base) | 74.13% | 89.30% | 81.01% | 85.79% | | [`A&ttack`](https://github.com/ogtal/A-ttack) | **97.32%** | 50.70% | 66.67% | 56.07% | | [`alexandrainst/da-hatespeech-detection-small`](https://huggingface.co/alexandrainst/da-hatespeech-detection-small) | 86.43% | 56.28% | 68.17% | 60.50% | | [`Guscode/DKbert-hatespeech-detection`](https://huggingface.co/Guscode/DKbert-hatespeech-detection) | 75.41% | 42.79% | 54.60% | 46.84% |
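The F1 and F2 columns follow directly from precision and recall; F2 weights recall more heavily, which suits moderation support where missing offensive content is costlier than a false alarm. A quick sanity check against the A&ttack row of the table (values in percent):

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-beta score from precision and recall; beta=1 gives F1, beta=2 favors recall."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A&ttack row: precision 97.32, recall 50.70
print(round(f_beta(97.32, 50.70, beta=1), 2))  # F1-score column
print(round(f_beta(97.32, 50.70, beta=2), 2))  # F2-score column
```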
fcc0fc85ae2bb80fb45c5931b438c26a
mit
[]
false
Using the model You can use the model simply by running the following: ```python >>> from transformers import pipeline >>> offensive_text_pipeline = pipeline(model="alexandrainst/da-offensive-detection-base") >>> offensive_text_pipeline("Din store idiot") [{'label': 'Offensive', 'score': 0.9997463822364807}] ``` Processing multiple documents at the same time can be done as follows: ```python >>> offensive_text_pipeline(["Din store idiot", "ej hvor godt :)"]) [{'label': 'Offensive', 'score': 0.9997463822364807}, {'label': 'Not offensive', 'score': 0.9996451139450073}] ```
5fd66bdc741ea2270735bb3bfabaa55d
mit
[]
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - gradient_accumulation_steps: 1 - total_train_batch_size: 32 - seed: 4242 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - max_steps: 500000 - fp16: True - eval_steps: 1000 - early_stopping_patience: 100
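With `eval_steps: 1000` and `early_stopping_patience: 100`, training halts once 100 consecutive evaluations pass without the monitored metric improving. A sketch of that logic, assuming the standard "no improvement for N evaluations" semantics of the patience parameter:

```python
def early_stop_index(eval_losses, patience=100):
    """Index of the evaluation at which patience-based early stopping fires,
    or None if the metric keeps improving often enough to finish training."""
    best, evals_since_best = float("inf"), 0
    for i, loss in enumerate(eval_losses):
        if loss < best:
            best, evals_since_best = loss, 0  # new best: reset the patience counter
        else:
            evals_since_best += 1
            if evals_since_best >= patience:
                return i
    return None

# Loss improves twice, then stagnates for 3 evaluations (patience=3):
print(early_stop_index([0.5, 0.4, 0.41, 0.42, 0.43], patience=3))
```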
75ca1cfc2d055cf3273fdfd035e33113
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large v2 Lithuanian This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3421 - Wer: 29.9321
90bad81f8374afbaa38431520dc83b27
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.4255 | 0.09 | 100 | 0.4323 | 37.0310 | | 0.2976 | 0.18 | 200 | 0.3421 | 29.9321 |
0734700c9f3e8145fc9b0f20f0afa6fa
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_vp-it_s132 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
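The 16kHz requirement matters because wav2vec 2.0 was pretrained on 16kHz audio. In practice you would resample with a proper filter (e.g. `torchaudio` or `librosa`); as an illustration only, the idea as a naive pure-Python linear-interpolation sketch:

```python
def resample_linear(samples, src_rate, dst_rate=16_000):
    """Naive linear-interpolation resampler (illustration only; real pipelines
    use a filtered resampler such as torchaudio.functional.resample)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate            # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

print(len(resample_linear([0.0] * 8000, 8000)))  # one second of 8 kHz audio -> 16000 samples
```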
1207489c3ce2d4f3777cd8ebaf4d52a6
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6236 - Accuracy: 0.8480 - F1: 0.8946
d263bd2214c2e5f4c18ec12aeae21626
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.4371 | 0.8137 | 0.8746 | | No log | 2.0 | 460 | 0.4117 | 0.8431 | 0.8940 | | 0.4509 | 3.0 | 690 | 0.3943 | 0.8431 | 0.8908 | | 0.4509 | 4.0 | 920 | 0.5686 | 0.8382 | 0.8893 | | 0.1915 | 5.0 | 1150 | 0.6236 | 0.8480 | 0.8946 |
ae565170a181ab24e5bbe68dd5155c5d
mit
['generated_from_trainer']
false
xlm-r-base-leyzer-en-intent This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.1995 - Accuracy: 0.9624 - F1: 0.9624
40b9cd72b789f9531d919d826c3ec033
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.9235 | 1.0 | 1061 | 1.5991 | 0.6680 | 0.6680 | | 0.8738 | 2.0 | 2122 | 0.7982 | 0.8359 | 0.8359 | | 0.4406 | 3.0 | 3183 | 0.4689 | 0.9132 | 0.9132 | | 0.2534 | 4.0 | 4244 | 0.3165 | 0.9360 | 0.9360 | | 0.1593 | 5.0 | 5305 | 0.2434 | 0.9507 | 0.9507 | | 0.108 | 6.0 | 6366 | 0.2104 | 0.9599 | 0.9599 | | 0.0914 | 7.0 | 7427 | 0.1995 | 0.9624 | 0.9624 |
68769dcbdc3503cf4b9feec8abc04b2d
apache-2.0
['generated_from_trainer']
false
roberta-base-bne-clasificacion-de-texto-supervisado This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2290 - Accuracy: 0.9337
5969d651c093224adca02dbc09331d80
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1955 | 1.0 | 1250 | 0.1809 | 0.9307 | | 0.0979 | 2.0 | 2500 | 0.2290 | 0.9337 |
c5505bd14971314ebb7c802258d13266
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
Intro This is a collection of models related to the "Picture of the Week" contest on the Stable Diffusion Discord. I try to make a model out of all the submissions so people can continue enjoying the theme after the event, and see a little of their designs in other people's creations. The token stays "PoW Style" and I balance the learning on the low side, so that it doesn't just replicate creations. I also make smaller, lower-quality models to help make pictures for the contest itself, based on the theme.
832bf40cc9f6c6c241ada849c5707b6e
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
Theme : Burgers and Fries Welcome to the VERY FIRST edition of the most Stable Kitchen in the universe! On today’s menu will be Sandwiches & Fries. Since you’re here for the first time, I will explain how it works! You can generate your orders and we will make them for you. Take a seat, flip through the menu, bring all of your favorite ingredients~ * The sandwich with the most cheddar? 5 beef burgers? An infinite fries generator? * Serve us your best sandwich and fries combo! Not even the sky's the limit, my friend. You want it? You have it! As long as it's delicious, of course! We’ll see you on the chopping block for this week’s Stable Kitchen! ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/images/theme.png)
1d70eabe0b42a1ac5f8a3d69bdf4f9ad
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
Burgy ![Burgy](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/images/showcase_burgy.jpg) * Burgers, burgers burgers * training: 40 pictures, 6 epochs of 40 repeats, batch size 6, LR1e-6, EveryDream * balance : Strong, burgers * **Activation token :** `Burgy` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/ckpts/Burgy.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/291122/dataset_Burgy.zip)
05708b5be3a39a6b336e0c40aa6f2024
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
Theme : Imaginary Friend Do you remember putting your hands into what seemed as if it were just plain air and giggling like a child? Having conversations with someone who “wasn’t there”? Nowadays the term “Imaginary Friend” isn’t as frequently used as it used to be, right? Let’s bring it back. * Can you build your Imaginary Friends, actualized? * What traits do you recall of them? Are they still young? Have they grown up now? Do they resemble you, or a creature that isn’t human? * Where would you find this Imaginary Friend? Where do they reside? What do they stand for? Our prompt for this event was created by @Andrekerygma "a boy drinking tea with a cute monster on the bedroom, disney infinity character design, pixar, artstation, vinyl, toy, figurine, 3 d model, cinema 4 d, substance 3 d painter, vray, unreal engine 5, octane render, cinematic" ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/images/theme.png)
f3ff2863527e6a61635a8f40b02e4d18
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
PoW ArtStyle 22-11-22 ![PoW ArtStyle](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/images/showcase_pow_imaginary_friend.jpg) * based on all the submissions to the PoW * training: 73 pictures, 6000 steps on batch 6, 1e-6 polynomial LR. * balance : a little lighter on the style than last week, still manages to reproduce most participants * **Activation token :** `PoW ArtStyle` * Other noticeable tokens : Your Discord username, if you participated. Also TMNT, NikeAir Shoes and Sid, Ice Age movie * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/ckpts/PoWArtStyle_ImaginaryFriend.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/221122/PoW_221122_dataset.zip)
f91831d941252e570a3726922d9fd46a
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
CharacterChan Style ![CharacterChan Style](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_CharacterChanStyle-v1.jpg) * based on the "Character" dreamer community of the Stable Diffusion Discord * training: 50 pictures, 160 total repeats, LR1e-6 * balance : correct, but some sub-concepts have overtrained a little, like the clown. * **Activation token :** `CharacterChan Style` * [CKPT](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CharacterChanStyle-v1.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CharacterChanStyle-v1.zip) * [Model page](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection)
b3354e57af049c6cd8852b11322a1ee7
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
CreatureChan Style ![CreatureChan Style](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_CreatureChanStyle-v1.jpg) * based on the "Creature" dreamer community of the Stable Diffusion Discord * training: 50 pictures, 160 total repeats, LR1e-6 * balance : good * **Activation token :** `CreatureChan Style` * [CKPT](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CreatureChanStyle-v1.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CreatureChanStyle-v1.zip) * [Model page](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection)
becc768d4ee2e3c41069ef92acdbe713
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
Theme : The Never-Ending Loop It is a passed-down proverb that lines represent the flow of time itself. They converge and take shape. They twist, tangle, sometimes unravel, break, and then connect again. * Without words, how are we able to accurately represent this flow of time with only lines? geometrically, intricately, asymmetrically, seamlessly, ornately... * Think of a never-ending pattern, texture, or shape– looping on and on for what feels infinite. * Just how detailed are you able to get with your patterns? Our prompt for this event was created by @Asukii ! "the fractal flow of time stretches towards the horizon, surreal fractal intertwined looping pathways, dramatic cinematic perspective, detailed delicate intricate ornate linework, geometric abstract masterwork digital art, quantum wavetracing, ink drawing, optical illusion" ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/theme1.png) ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/theme2.png)
f44d50c6e59dc995df28a2f0f1f661a5
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
PoW Style 14-11-22 ![PoW Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/showcase_PoW_neverendingloop.jpg) * based on all the submissions to the PoW * training: 101 pictures, 9000 steps on batch 6, 1e-6 polynomial LR. * balance : a little strong on the style but it made it possible to differentiate each participant * **Activation token :** `PoW Style` * Other noticeable tokens : Your Discord username, if you participated. Also Rick Roll and "fullbody shot" * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/ckpts/PoWStyle_NeverEndingLoop.ckpt) * [Diffusers : Guizmus/SD_PoW_Collection/141122/diffusers](https://huggingface.co/Guizmus/SD_PoW_Collection/tree/main/141122/diffusers/) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/PoW_141122_2_dataset.zip)
eb61c12529d5ed0d39ed52bb3fc99a3f
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
Fractime Style ![Fractime Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/images/showcase_FractimeStyle.jpg) * based on the suggested prompt and theme * training: 50 pictures, 1750 steps on batch 6, 1e-6 polynomial LR. * balance : correct, but the style doesn't apply to every subject * **Activation token :** `Fractime Style` * Other noticable tokens : intricate, nebula, illusion, person, road, tree, boat * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/ckpts/FractimeStyle.ckpt) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/141122/PoW_141122_1_dataset.zip)
b26f8e80aa8ce717175ad8d2d67d14d2
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
Theme : Abstract Realities Glitch, warp, static, shape, flicker, break, bend, mend Have you ever felt your reality shift out from under your feet? Our perception falters and repairs itself in the blink of an eye. Just how much do our brains influence what we perceive? How much control do we have over molding these realities? With the introduction of AI and its rapid pace taking the world by storm, we are seeing firsthand just how these realities can bring worlds into fruition. * Can you show us your altered reality? * Are these realities truly broken, or only bent? Our example prompt for this event was created by @Aether ! "household objects floating in space, bedroom, furniture, home living, warped reality, cosmic horror, nightmare, retrofuturism, surrealism, abstract, illustrations by alan nasmith" ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/AETHER.png) ![PoW](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/aether2.png)
6e583b431539c9738012bf6e6c523939
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
PoW Style 09-11-22 ![PoW Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/showcase_pow_final.jpg) * Main model based on all the results from the PoW * training: 51 pictures, 3000 steps on 1e-6 polynomial LR. * balanced on the light side; add attention/weight to the activation token * **Activation token :** `PoW Style` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/PoWStyle_Abstralities.ckpt) * [Diffusers : Guizmus/SD_PoW_Collection/091122/diffusers](https://huggingface.co/Guizmus/SD_PoW_Collection/tree/main/091122/diffusers/) * [Dataset](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/dataset.zip)
552bb0c46c6ab676615e0ca200d9601f
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
Bendstract Style ![Bendstract Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/showcase_bendstract.jpg) * based on the suggested prompt * training: 100 pictures, 7500 steps on 1e-6 polynomial LR. overtrained * **Activation token :** `Bendstract Style` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/Bendstract-v1.ckpt)
08f23644b28633e70a145b87b030ebcb
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
BendingReality Style ![BendingReality Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/showcase_bendingreality.jpg) * based on the suggested prompt * training: 68 pictures, 6000 steps on 1e-6 polynomial LR. overtrained * **Activation token :** `BendingReality Style` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/BendingReality_Style-v1.ckpt)
c97671ea6720690088ad554475ccb709
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
PoW Style mid-submissions 09-11-22 ![PoW Style](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/images/showcase_pow_midrun.jpg) * based on the first few submissions * training: 24 pictures, 2400 steps on 1e-6 polynomial LR. a little overtrained * **Activation token :** `PoW Style` * [CKPT](https://huggingface.co/Guizmus/SD_PoW_Collection/resolve/main/091122/ckpts/PoWStyle_midrun.ckpt)
29c7839e199fa50088f231e3d67c85bf
creativeml-openrail-m
['stable-diffusion', 'text-to-image', 'image-to-image']
false
License These models are open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
1875cf4287ab1f2a0d06c2cc1cbe36df
mit
['generated_from_trainer']
false
deberta-v3-base-goemotions This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7610 - F1: 0.4468
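GoEmotions is a multi-label task, so the F1 above is typically computed by comparing predicted and gold label sets per example. As a rough illustration of the metric (not the exact evaluation script used for this model), here is a minimal micro-averaged F1 over label sets:

```python
def micro_f1(true_labels, pred_labels):
    """Micro-averaged F1 for multi-label data.

    true_labels / pred_labels: lists of label sets, one set per example.
    """
    tp = sum(len(t & p) for t, p in zip(true_labels, pred_labels))
    fp = sum(len(p - t) for t, p in zip(true_labels, pred_labels))
    fn = sum(len(t - p) for t, p in zip(true_labels, pred_labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Toy example: one missed label ("fear") and one spurious label ("sadness").
score = micro_f1([{"joy"}, {"anger", "fear"}],
                 [{"joy", "sadness"}, {"anger"}])
print(score)  # 2 true positives, 1 false positive, 1 false negative
```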
c524a61f3d6a50dbe9e4942d9be5a442
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.5709 | 1.0 | 6164 | 1.5211 | 0.4039 | | 1.3689 | 2.0 | 12328 | 1.5466 | 0.4198 | | 1.1819 | 3.0 | 18492 | 1.5670 | 0.4520 | | 1.0059 | 4.0 | 24656 | 1.6673 | 0.4479 | | 0.8129 | 5.0 | 30820 | 1.7610 | 0.4468 |
067c20964d956ecfb8ff1153421f1137
apache-2.0
['text-classification', 'generated_from_trainer']
false
categorizacion_comercios_v_0.0.7 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 0.4673 - Accuracy: 0.9125
098dd4f607f3c92b9f0b128bc94ba00b
mit
[]
false
Lolo on Stable Diffusion This is the `<lolo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<lolo> 0](https://huggingface.co/sd-concepts-library/lolo/resolve/main/concept_images/1.jpeg) ![<lolo> 1](https://huggingface.co/sd-concepts-library/lolo/resolve/main/concept_images/2.jpeg) ![<lolo> 2](https://huggingface.co/sd-concepts-library/lolo/resolve/main/concept_images/3.jpeg) ![<lolo> 3](https://huggingface.co/sd-concepts-library/lolo/resolve/main/concept_images/0.jpeg)
967112b2484c806412a7fe6d213a2615
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1800k']
false
MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with hyper-parameters similar to [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the seed 1 model, captured at checkpoint step 1800k.
db1d480ac0aab3dde46b6976bd4e53dd
apache-2.0
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1800k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1800k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1800k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
cfa0cc52eeec489c2487357ae55fc51f
mit
[]
false
cologne on Stable Diffusion This is the `<cologne-dom>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cologne-dom> 0](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/3.jpeg) ![<cologne-dom> 1](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/0.jpeg) ![<cologne-dom> 2](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/2.jpeg) ![<cologne-dom> 3](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/1.jpeg) ![<cologne-dom> 4](https://huggingface.co/sd-concepts-library/cologne/resolve/main/concept_images/4.jpeg)
3f74fad58496827fa049208d0cf2226f
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0294 - Rouge1: 16.497 - Rouge2: 8.0618 - Rougel: 16.2979 - Rougelsum: 16.1465
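The ROUGE scores above measure n-gram overlap between generated and reference summaries. As a rough illustration of what Rouge1 counts (real evaluations use a library such as `rouge_score`, which also handles stemming and other details), here is a minimal unigram-overlap F1:

```python
from collections import Counter


def rouge1_f(reference, candidate):
    """Unigram-overlap F1 between a reference and a candidate summary."""
    ref = Counter(reference.split())
    cand = Counter(candidate.split())
    # Clipped overlap: each shared word counts at most min(ref, cand) times.
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)


score = rouge1_f("the cat sat on the mat", "the cat sat")
print(score)  # recall 3/6, precision 3/3
```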
d6bd84dba43277a1bd21abb6ca60a0c6
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 6.5928 | 1.0 | 1209 | 3.3005 | 14.7843 | 6.5518 | 14.2805 | 14.2951 | | 3.9024 | 2.0 | 2418 | 3.1399 | 16.8202 | 8.6739 | 16.1194 | 16.0844 | | 3.5806 | 3.0 | 3627 | 3.0869 | 18.1223 | 9.3051 | 17.7533 | 17.7254 | | 3.4201 | 4.0 | 4836 | 3.0590 | 17.654 | 9.0154 | 17.1853 | 17.1769 | | 3.3202 | 5.0 | 6045 | 3.0598 | 17.612 | 8.6707 | 17.4662 | 17.2963 | | 3.2436 | 6.0 | 7254 | 3.0409 | 16.7938 | 8.3054 | 16.6141 | 16.4853 | | 3.2079 | 7.0 | 8463 | 3.0332 | 16.7246 | 8.2362 | 16.5065 | 16.3611 | | 3.1801 | 8.0 | 9672 | 3.0294 | 16.497 | 8.0618 | 16.2979 | 16.1465 |
4c0304ce336471794636974e2086605e
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Colorjizz-512px v.1.0 for Stable Diffusion 1.5 Colorjizz image pack, trained on 130 images (512 resolution) for 8000 training steps with 30% training text. Modeled with permission using creations inspired by Destiny K (Twitter: @destinykrainbow)
4bf22b0b50f511e5cd9133c3dc1a43cc
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
NOTE: Colorjizz-768px version recommended for higher resolution and available [HERE](https://huggingface.co/plasmo/colorjizz-768px) Sample pictures of this concept (512px model): ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00223.jpg) ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00224.jpg) ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00225.jpg) ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00226.jpg) ![0](https://huggingface.co/plasmo/colorjizz-512px/resolve/main/sample_images/00227.jpg)
2c03d93d6304d84884f47dfb31ca6a30
cc-by-4.0
[]
false
Readability benchmark (ES): mbert-es-paragraphs-2class This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish". You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
c44c2b6ba05794b7f35daf0efdb6eb8b
cc-by-4.0
[]
false
| Model | Granularity | # classes | |-----------------------------------------------------------------------------------------------------------|----------------|:---------:| | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 | | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 | | **[mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class)** | **paragraphs** | **2** | | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 | | [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 | | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 | | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 | | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 | | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 | | [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 | For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
635d008fd4d2e5974ea829ba45dafde2
mit
[]
false
naf on Stable Diffusion This is the `<nal>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<nal> 0](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/3.jpeg) ![<nal> 1](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/0.jpeg) ![<nal> 2](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/2.jpeg) ![<nal> 3](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/1.jpeg) ![<nal> 4](https://huggingface.co/sd-concepts-library/naf/resolve/main/concept_images/4.jpeg)
b80847748158b2c7303387b1c1291c7d
apache-2.0
['generated_from_trainer']
false
t5-end2end-questions-generation-full This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5588
7adaecc742bfad3b6583e9b02d757959
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7
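The `total_train_batch_size` of 64 follows from the per-device batch size and gradient accumulation: gradients from 16 consecutive batches of 4 examples are accumulated before each optimizer step. A quick check of that relationship:

```python
# Values from the hyperparameters above.
train_batch_size = 4
gradient_accumulation_steps = 16

# One optimizer step sees this many examples in total.
effective_batch_size = train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # matches the reported total_train_batch_size: 64
```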
a075678103cb47416f29497f7622f612
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5811 | 0.34 | 100 | 1.8916 | | 1.9668 | 0.68 | 200 | 1.7116 | | 1.8274 | 1.02 | 300 | 1.6512 | | 1.7424 | 1.36 | 400 | 1.6294 | | 1.7076 | 1.69 | 500 | 1.6024 | | 1.7001 | 2.03 | 600 | 1.5916 | | 1.6266 | 2.37 | 700 | 1.5881 | | 1.6275 | 2.71 | 800 | 1.5772 | | 1.6146 | 3.05 | 900 | 1.5824 | | 1.5699 | 3.39 | 1000 | 1.5776 | | 1.5635 | 3.73 | 1100 | 1.5710 | | 1.5484 | 4.07 | 1200 | 1.5698 | | 1.5199 | 4.41 | 1300 | 1.5616 | | 1.5352 | 4.75 | 1400 | 1.5661 | | 1.5174 | 5.08 | 1500 | 1.5633 | | 1.4955 | 5.42 | 1600 | 1.5603 | | 1.4904 | 5.76 | 1700 | 1.5631 | | 1.5033 | 6.1 | 1800 | 1.5572 | | 1.4853 | 6.44 | 1900 | 1.5588 | | 1.4679 | 6.78 | 2000 | 1.5588 |
9fe0ebade9c1c03b8929d2b3fda74c1c
other
['generated_from_trainer']
false
6.7b-dalio-book-handwritten-io-constant-3e-7-v2 This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5293 - Accuracy: 0.2725
0811fbde8e067f97a0b716d2e2a4439a
other
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0
50a72d2c5beff552b33f360646098931
other
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5856 | 0.08 | 6 | 2.5957 | 0.2697 | | 2.6027 | 0.16 | 12 | 2.5938 | 0.2698 | | 2.619 | 0.24 | 18 | 2.5879 | 0.2700 | | 2.6121 | 0.32 | 24 | 2.5840 | 0.2702 | | 2.6024 | 0.4 | 30 | 2.5762 | 0.2706 | | 2.5878 | 0.48 | 36 | 2.5703 | 0.2707 | | 2.5541 | 0.56 | 42 | 2.5625 | 0.2710 | | 2.5207 | 0.64 | 48 | 2.5566 | 0.2713 | | 2.4577 | 0.72 | 54 | 2.5488 | 0.2715 | | 2.5614 | 0.8 | 60 | 2.5430 | 0.2718 | | 2.6959 | 0.88 | 66 | 2.5352 | 0.2722 | | 2.5084 | 0.96 | 72 | 2.5293 | 0.2725 |
b0f36dde3e6ce00086f099c8a243567c
apache-2.0
['generated_from_trainer']
false
test_trainer This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4375 - Rmse: 0.6614
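The two numbers above are consistent with the RMSE simply being the square root of an MSE-style validation loss; a quick sanity check:

```python
import math

validation_loss = 0.4375  # reported eval loss (MSE-style)
rmse = math.sqrt(validation_loss)
print(round(rmse, 4))  # → 0.6614, the reported Rmse
```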
c99c08528e1b1e3d62b33eb355993efb
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0663 | 1.0 | 2639 | 0.5119 | 0.7155 | | 0.3704 | 2.0 | 5278 | 0.4375 | 0.6614 |
6b0196b782724aa3062a1f289b74927c
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'pt', 'robust-speech-event']
false
sew-tiny-portuguese-cv7 This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4232 - Wer: 0.2745
508bcacd5132f07de5dd64fc936009ec
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'pt', 'robust-speech-event']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 40000 - mixed_precision_training: Native AMP
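With a linear scheduler and 1000 warmup steps, the learning rate ramps from 0 up to the peak of 1e-4 over the warmup phase, then decays linearly back to 0 by step 40000. A minimal sketch of that schedule (an illustration, not the Trainer's internal implementation):

```python
def lr_at(step, peak_lr=1e-4, warmup_steps=1000, total_steps=40000):
    """Linear warmup to peak_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)


print(lr_at(0))      # start of warmup
print(lr_at(1000))   # peak, end of warmup
print(lr_at(40000))  # fully decayed
```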
cab31171817eb9b2d028f36f9e23eebb
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'pt', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | No log | 2.6 | 1000 | 1.0034 | 0.7308 | | 4.1307 | 5.19 | 2000 | 0.6274 | 0.4721 | | 4.1307 | 7.79 | 3000 | 0.5541 | 0.4130 | | 1.3117 | 10.39 | 4000 | 0.5302 | 0.3880 | | 1.3117 | 12.99 | 5000 | 0.5082 | 0.3644 | | 1.2047 | 15.58 | 6000 | 0.4818 | 0.3539 | | 1.2047 | 18.18 | 7000 | 0.4822 | 0.3477 | | 1.14 | 20.78 | 8000 | 0.4781 | 0.3428 | | 1.14 | 23.38 | 9000 | 0.4840 | 0.3401 | | 1.0818 | 25.97 | 10000 | 0.4613 | 0.3251 | | 1.0818 | 28.57 | 11000 | 0.4569 | 0.3257 | | 1.0451 | 31.17 | 12000 | 0.4494 | 0.3132 | | 1.0451 | 33.77 | 13000 | 0.4560 | 0.3201 | | 1.011 | 36.36 | 14000 | 0.4687 | 0.3174 | | 1.011 | 38.96 | 15000 | 0.4397 | 0.3122 | | 0.9785 | 41.56 | 16000 | 0.4605 | 0.3173 | | 0.9785 | 44.16 | 17000 | 0.4380 | 0.3064 | | 0.9458 | 46.75 | 18000 | 0.4372 | 0.3048 | | 0.9458 | 49.35 | 19000 | 0.4426 | 0.3039 | | 0.9126 | 51.95 | 20000 | 0.4317 | 0.2962 | | 0.9126 | 54.54 | 21000 | 0.4345 | 0.2960 | | 0.8926 | 57.14 | 22000 | 0.4365 | 0.2948 | | 0.8926 | 59.74 | 23000 | 0.4306 | 0.2940 | | 0.8654 | 62.34 | 24000 | 0.4303 | 0.2928 | | 0.8654 | 64.93 | 25000 | 0.4351 | 0.2915 | | 0.8373 | 67.53 | 26000 | 0.4340 | 0.2909 | | 0.8373 | 70.13 | 27000 | 0.4279 | 0.2907 | | 0.83 | 72.73 | 28000 | 0.4214 | 0.2867 | | 0.83 | 75.32 | 29000 | 0.4256 | 0.2849 | | 0.8062 | 77.92 | 30000 | 0.4281 | 0.2826 | | 0.8062 | 80.52 | 31000 | 0.4398 | 0.2865 | | 0.7846 | 83.12 | 32000 | 0.4218 | 0.2812 | | 0.7846 | 85.71 | 33000 | 0.4227 | 0.2791 | | 0.7697 | 88.31 | 34000 | 0.4200 | 0.2767 | | 0.7697 | 90.91 | 35000 | 0.4285 | 0.2791 | | 0.7539 | 93.51 | 36000 | 0.4238 | 0.2777 | | 0.7539 | 96.1 | 37000 | 0.4288 | 0.2757 | | 0.7413 | 98.7 | 38000 | 0.4205 | 0.2748 | | 0.7413 | 101.3 | 39000 | 0.4241 | 0.2761 | | 0.7348 | 103.89 | 40000 | 0.4232 | 0.2745 |
759aa68e4ab166c1bbbd4ac00d14ae5b
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
base Turkish Whisper (bTW) This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Ermetal Meetings dataset. It achieves the following results on the evaluation set: - Loss: 0.8800 - Wer: 0.8060 - Cer: 0.7585
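The WER and CER above are edit-distance metrics: the number of word-level (or character-level) substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal illustration (production evaluations typically use a library such as `jiwer`):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]


def wer(reference, hypothesis):
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)


def cer(reference, hypothesis):
    return edit_distance(list(reference), list(hypothesis)) / len(reference)


print(wer("the cat sat", "the cat sat down"))  # one insertion over 3 words
print(cer("kitten", "sitting"))                # 3 edits over 6 characters
```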
e24b940e25c44a913787b0ca9cf876a8
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 1.8904 | 1.32 | 100 | 1.5873 | 0.8893 | 0.5437 | | 0.8039 | 2.63 | 200 | 0.9239 | 0.9076 | 0.5721 | | 0.5988 | 3.95 | 300 | 0.7970 | 0.7850 | 0.4821 | | 0.384 | 5.26 | 400 | 0.7586 | 0.7164 | 0.5206 | | 0.2643 | 6.58 | 500 | 0.7578 | 0.9130 | 0.6843 | | 0.2026 | 7.89 | 600 | 0.7627 | 0.9147 | 0.7228 | | 0.1091 | 9.21 | 700 | 0.8043 | 0.8363 | 0.8283 | | 0.0623 | 10.53 | 800 | 0.8342 | 0.7615 | 0.7619 | | 0.0436 | 11.84 | 900 | 0.8577 | 0.7079 | 0.6824 | | 0.0348 | 13.16 | 1000 | 0.8800 | 0.8060 | 0.7585 |
917565c5219cb96a6d27a830ca904b9e
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Model The DreamBooth concept Noah_Titan_5000_8e-7 was trained by hr16 using the [Shinja Zero SoTA DreamBooth_Stable_Diffusion](https://colab.research.google.com/drive/1G7qx6M_S1PDDlsWIMdbZXwdZik6sUlEh) notebook <br> Test the concept with the [Shinja Zero no Notebook](https://colab.research.google.com/drive/1Hp1ZIjPbsZKlCtomJVmt2oX7733W44b0) <br> Or test it with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample images of the concept: WIP
8c7c6d49c812fb009fe355982afa9950