license: string (2–30 chars)
tags: string (2–513 chars)
is_nc: bool (1 class)
readme_section: string (201–597k chars)
hash: string (32 chars)
mit
['indogpt', 'indobenchmark', 'indonlg']
false
IndoBART-v2 Model [IndoBART-v2](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the BART model. The pretrained model is trained using the BART training objective.
b462bb4388f10ba4f9a2f712d157481e
mit
['indogpt', 'indobenchmark', 'indonlg']
false
| Model | #params | Training data |
|--------------------------------|---------|-----------------------------|
| `indobenchmark/indobart-v2` | 132M | Indo4B-Plus (26 GB of text) |
fee5adef5af70c111604f07734b19d3c
mit
['indogpt', 'indobenchmark', 'indonlg']
false
Authors <b>IndoBART</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
17f5c7810f923888237de5a779f853c1
mit
['indogpt', 'indobenchmark', 'indonlg']
false
Citation

If you use our work, please cite:

```bibtex
@article{cahyawijaya2021indonlg,
  title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
  author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
  journal={arXiv preprint arXiv:2104.08200},
  year={2021}
}
```
7feb2596466e4a0195678a16ce26e421
mit
['conversational']
false
Finetuned DialoGPT model on Spanish Conversations

This model was finetuned from the original [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model on subtitles from Spanish movies and telenovelas from the awesome [OpenSubtitle dataset](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles).

DialoGPT paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)

Some example dialog from the model:

| Role | Response |
|------|----------|
| User | ¿Cómo está el clima hoy? |
| Bot  | Muy bien. |
| User | Eso es bueno escuchar. ¿Vas a ir a la playa? |
| Bot  | No, también puedo regresar. |
| User | Oh bien, bueno, iré a la playa más tarde. |
| Bot  | ¡No! |
| User | ¿Qué está mal con eso? |
| Bot  | ¡Cómo que no! |
| User | Estoy confundido, ¿por qué no puedo ir a la playa? |
| Bot  | ¡Cómo que no! |
| User | Explícamelo por favor. |
| Bot  | ¡No! |
c80622cfb9f8db7125450304c66be4b2
mit
['conversational']
false
Using the model

Example code for trying out the model (taken directly from the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) model card):

```python
from transformers import AutoModelWithLMHead, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("ncoop57/DiGPTame-medium")
model = AutoModelWithLMHead.from_pretrained("ncoop57/DiGPTame-medium")
```
d63737faf6510353372a9b9aa489a46f
mit
['conversational']
false
Training your own model

If you would like to finetune your own model or finetune this Spanish model, please check out my blog post on that exact topic: https://nathancooper.io/i-am-a-nerd/chatbot/deep-learning/gpt2/2020/05/12/chatbot-part-1.html
2ffd9f6e1f1a10ef5aa537acaa09473d
apache-2.0
['automatic-speech-recognition', 'pl']
false
exp_w2v2t_pl_xls-r_s235 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
e75f366932c3ab7dd6cc92938b536c93
apache-2.0
['generated_from_trainer']
false
google_electra-small-discriminator_squad

This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the **squadV1** dataset.

- "eval_exact_match": 76.95364238410596
- "eval_f1": 84.98869246841396
- "eval_samples": 10784
f53e9d3b5141dac59a90991cbc898ba1
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
306aff88cb3e57b34127b4df976d6459
apache-2.0
['translation']
false
opus-mt-kwn-en

* source languages: kwn
* target languages: en
* OPUS readme: [kwn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.eval.txt)
fdb734bb80c1bb63f8967c9279276f13
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab11

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.6269
- Wer: 0.7418
0e4787cacd35c62d970d13bfd3a8664f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6439 | 7.04 | 500 | 3.3083 | 1.0 |
| 2.3763 | 14.08 | 1000 | 1.5059 | 0.8146 |
| 1.0161 | 21.13 | 1500 | 1.5101 | 0.7488 |
| 0.6195 | 28.17 | 2000 | 1.6269 | 0.7418 |
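The Wer column is the standard word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal sketch of that computation (the actual scores above come from the evaluation pipeline, not this code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# one substitution (sat -> sit) and one deletion (the) over 6 reference words
print(wer("the cat sat on the mat", "the cat sit on mat"))  # → 0.3333333333333333
```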
f20ba01de8becb77cc0b95c63ba2099d
creativeml-openrail-m
['text-to-image']
false
A Stable Diffusion model used to generate Marco's pictures with the prompt **'mkmk woman'**.

Based on runwayml/stable-diffusion-v1-5, trained with Dreambooth on 39 pics for 3000 steps.

What is Marco like?

<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/IMG_2683.jpeg" width="512" height="512"/>
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/IMG_0537.jpeg" width="512" height="512"/>

Some samples generated by this model:

<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/0.png" width="512" height="512"/>
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/1.png" width="512" height="512"/>
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/2.png" width="512" height="512"/>
<img src="https://huggingface.co/AkiKagura/mkgen-diffusion/resolve/main/samples/3.png" width="512" height="512"/>
6591fac356e34dc6adc7fe3ef6568cc5
mit
[]
false
Trigger Studio on Stable Diffusion

This is the `<Trigger Studio>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<Trigger Studio> 0](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/8.jpeg)
![<Trigger Studio> 1](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/14.jpeg)
![<Trigger Studio> 2](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/10.jpeg)
![<Trigger Studio> 3](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/1.jpeg)
![<Trigger Studio> 4](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/16.jpeg)
![<Trigger Studio> 5](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/15.jpeg)
![<Trigger Studio> 6](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/12.jpeg)
![<Trigger Studio> 7](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/11.jpeg)
![<Trigger Studio> 8](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/9.jpeg)
![<Trigger Studio> 9](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/5.jpeg)
![<Trigger Studio> 10](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/0.jpeg)
![<Trigger Studio> 11](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/4.jpeg)
![<Trigger Studio> 12](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/13.jpeg)
![<Trigger Studio> 13](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/2.jpeg)
![<Trigger Studio> 14](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/3.jpeg)
![<Trigger Studio> 15](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/6.jpeg)
![<Trigger Studio> 16](https://huggingface.co/sd-concepts-library/trigger-studio/resolve/main/concept_images/7.jpeg)
55a2957cb61cbf83bc12e16f292613ba
apache-2.0
['image-classification', 'pytorch']
false
Usage instructions

```python
from PIL import Image
import torch
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/cspdarknet53").eval()

img = Image.open(path_to_an_image).convert("RGB")

# Preprocessing (the snippet was truncated here; reading the input size and
# normalization stats from the model config is an assumption — check your holocron version)
config = model.default_cfg
transform = Compose([
    Resize(config["input_shape"][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config["mean"], config["std"]),
])
input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.no_grad():
    output = model(input_tensor)
```
fa8d7f1e3cd1dd6c1ad3cd5594ba8616
apache-2.0
['image-classification', 'pytorch']
false
Citation

Original paper

```bibtex
@article{DBLP:journals/corr/abs-1911-11929,
  author     = {Chien{-}Yao Wang and Hong{-}Yuan Mark Liao and I{-}Hau Yeh and Yueh{-}Hua Wu and Ping{-}Yang Chen and Jun{-}Wei Hsieh},
  title      = {CSPNet: {A} New Backbone that can Enhance Learning Capability of {CNN}},
  journal    = {CoRR},
  volume     = {abs/1911.11929},
  year       = {2019},
  url        = {http://arxiv.org/abs/1911.11929},
  eprinttype = {arXiv},
  eprint     = {1911.11929},
  timestamp  = {Tue, 03 Dec 2019 20:41:07 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1911-11929.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

Source of this implementation

```bibtex
@software{Fernandez_Holocron_2020,
  author = {Fernandez, François-Guillaume},
  month = {5},
  title = {{Holocron}},
  url = {https://github.com/frgfm/Holocron},
  year = {2020}
}
```
da27b0add15e0623d1850a21fbaed058
apache-2.0
['generated_from_trainer']
false
Tagged_One_50v6_NER_Model_3Epochs_AUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one50v6_wikigold_split dataset. It achieves the following results on the evaluation set:
- Loss: 0.6728
- Precision: 0.0625
- Recall: 0.0005
- F1: 0.0010
- Accuracy: 0.7775
ade5de7a6b1123f98a61bc450175bbd7
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 16 | 0.7728 | 0.0 | 0.0 | 0.0 | 0.7773 |
| No log | 2.0 | 32 | 0.6898 | 0.04 | 0.0002 | 0.0005 | 0.7774 |
| No log | 3.0 | 48 | 0.6728 | 0.0625 | 0.0005 | 0.0010 | 0.7775 |
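The F1 column is the harmonic mean of precision and recall, which explains why near-zero recall drags F1 to near zero even with non-zero precision. A quick sketch checking the final-epoch figures:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (0.0 when both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Final epoch above: precision 0.0625, recall 0.0005
print(round(f1_score(0.0625, 0.0005), 4))  # → 0.001
```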
8241b0a4f4edb24f36b01fef6976f32a
cc-by-sa-4.0
['generated_from_trainer']
false
t5-base-TEDxJP-10front-1body-10rear

This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set:
- Loss: 0.4366
- Wer: 0.1693
- Mer: 0.1636
- Wil: 0.2493
- Wip: 0.7507
- Hits: 55904
- Substitutions: 6304
- Deletions: 2379
- Insertions: 2249
- Cer: 0.1332
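The error rates follow from the hit/edit counts. Assuming the usual jiwer-style definitions, WER = (S+D+I)/(H+S+D) and MER = (S+D+I)/(H+S+D+I); the counts above reproduce the reported rates:

```python
def rates(hits: int, subs: int, dels: int, ins: int) -> tuple[float, float]:
    """WER and MER from alignment counts (jiwer-style definitions, an assumption here)."""
    errors = subs + dels + ins
    wer = errors / (hits + subs + dels)        # denominator: reference length
    mer = errors / (hits + subs + dels + ins)  # denominator: all aligned tokens
    return wer, mer

wer, mer = rates(hits=55904, subs=6304, dels=2379, ins=2249)
print(round(wer, 4), round(mer, 4))  # → 0.1693 0.1636
```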
b1c28fd9fa21d89d76b07dcb774937d7
cc-by-sa-4.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
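A linear scheduler with a 0.1 warmup ratio ramps the learning rate from 0 to the base value over the first 10% of steps, then decays it linearly to 0. The actual schedule comes from the training framework; this sketch only illustrates the shape:

```python
def linear_schedule_with_warmup(step: int, total_steps: int, base_lr: float,
                                warmup_ratio: float = 0.1) -> float:
    """LR at a given step: linear ramp over the warmup phase, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# 10 epochs x 1457 steps/epoch (from the training results) = 14570 total steps
total, lr = 14570, 1e-4
print(linear_schedule_with_warmup(1457, total, lr))   # end of warmup: at the base LR
print(linear_schedule_with_warmup(total, total, lr))  # fully decayed to 0
```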
e27f315ffac778a7a2de389a41270179
cc-by-sa-4.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.6166 | 1.0 | 1457 | 0.4595 | 0.2096 | 0.1979 | 0.2878 | 0.7122 | 54866 | 6757 | 2964 | 3819 | 0.1793 |
| 0.4985 | 2.0 | 2914 | 0.4190 | 0.1769 | 0.1710 | 0.2587 | 0.7413 | 55401 | 6467 | 2719 | 2241 | 0.1417 |
| 0.4787 | 3.0 | 4371 | 0.4130 | 0.1728 | 0.1670 | 0.2534 | 0.7466 | 55677 | 6357 | 2553 | 2249 | 0.1368 |
| 0.4299 | 4.0 | 5828 | 0.4085 | 0.1726 | 0.1665 | 0.2530 | 0.7470 | 55799 | 6381 | 2407 | 2357 | 0.1348 |
| 0.3855 | 5.0 | 7285 | 0.4130 | 0.1702 | 0.1644 | 0.2501 | 0.7499 | 55887 | 6309 | 2391 | 2292 | 0.1336 |
| 0.3109 | 6.0 | 8742 | 0.4182 | 0.1732 | 0.1668 | 0.2525 | 0.7475 | 55893 | 6317 | 2377 | 2494 | 0.1450 |
| 0.3027 | 7.0 | 10199 | 0.4256 | 0.1691 | 0.1633 | 0.2486 | 0.7514 | 55949 | 6273 | 2365 | 2283 | 0.1325 |
| 0.2729 | 8.0 | 11656 | 0.4252 | 0.1709 | 0.1649 | 0.2503 | 0.7497 | 55909 | 6283 | 2395 | 2362 | 0.1375 |
| 0.2531 | 9.0 | 13113 | 0.4329 | 0.1696 | 0.1639 | 0.2499 | 0.7501 | 55870 | 6322 | 2395 | 2235 | 0.1334 |
| 0.2388 | 10.0 | 14570 | 0.4366 | 0.1693 | 0.1636 | 0.2493 | 0.7507 | 55904 | 6304 | 2379 | 2249 | 0.1332 |
9d703124663008b8ec6749436795004b
mit
[]
false
kairuno on Stable Diffusion

This is the `kairuno` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![kairuno 0](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/10.jpeg)
![kairuno 1](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/6.jpeg)
![kairuno 2](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/11.jpeg)
![kairuno 3](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/8.jpeg)
![kairuno 4](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/3.jpeg)
![kairuno 5](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/2.jpeg)
![kairuno 6](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/4.jpeg)
![kairuno 7](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/7.jpeg)
![kairuno 8](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/12.jpeg)
![kairuno 9](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/9.jpeg)
![kairuno 10](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/5.jpeg)
![kairuno 11](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/0.jpeg)
![kairuno 12](https://huggingface.co/sd-concepts-library/kairuno/resolve/main/concept_images/1.jpeg)
a9ad20a55798497ccfc129eb7a871b17
apache-2.0
['translation']
false
```python
# Download the pretrained model for English-Vietnamese available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-mixed")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-mixed")
```
3c4eefa610d09f7381d0a56beee329bc
apache-2.0
['translation']
false
```python
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
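The `<2indo>` control token is just a plain-text prefix prepended to the source sentence. A tiny helper makes the convention explicit (only `<2indo>` appears in this card; any other tag names would be assumptions following the same pattern):

```python
LANG_TOKENS = {"indo": "<2indo>"}  # only <2indo> is documented here

def add_target_token(sentence: str, target_lang: str) -> str:
    """Prepend the target-language control token expected by the model."""
    return f"{LANG_TOKENS[target_lang]} {sentence}"

print(add_target_token("How are you?", "indo"))  # → <2indo> How are you?
```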
c753be371652ef76c3e7173349e1a5e0
apache-2.0
['translation']
false
Training results

MIXED

| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 24.2579 |
| 2.0 | 30.6287 |
| 3.0 | 34.4417 |
| 4.0 | 36.2577 |
| 5.0 | 37.3488 |

FINETUNING

| Epoch | Bleu |
|:-----:|:-------:|
| 6.0 | 34.1676 |
| 7.0 | 35.2320 |
| 8.0 | 36.7110 |
| 9.0 | 37.3195 |
| 10.0 | 37.9461 |
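The scores above are presumably corpus-level BLEU from the evaluation toolkit; the core of the metric is the geometric mean of clipped n-gram precisions times a brevity penalty. A minimal sentence-level sketch:

```python
import math
from collections import Counter

def bleu(reference: list[str], hypothesis: list[str], max_n: int = 4) -> float:
    """Sentence-level BLEU: geometric mean of n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        hyp_ngrams = Counter(tuple(hypothesis[i:i + n]) for i in range(len(hypothesis) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped n-gram matches
        if overlap == 0:
            return 0.0
        precisions.append(overlap / sum(hyp_ngrams.values()))
    # brevity penalty punishes hypotheses shorter than the reference
    bp = 1.0 if len(hypothesis) >= len(reference) else math.exp(1 - len(reference) / len(hypothesis))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

sentence = "saya suka minum kopi di pagi hari".split()
print(round(100 * bleu(sentence, sentence), 1))  # a perfect match scores 100.0
```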
80833135065d3acc029c49e4b823faab
mit
['generated_from_trainer']
false
paraphraser-spanish-t5-small

This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on the None dataset. It achieves the following results on the evaluation set:
- eval_loss: 1.1079
- eval_runtime: 4.9573
- eval_samples_per_second: 365.924
- eval_steps_per_second: 36.713
- epoch: 0.83
- step: 43141
5f76881cfc061bb2c45729c0657a37f6
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
35400bdc5d0db1bde9a28a542dc4850a
apache-2.0
['text2text-generation']
false
TL;DR

If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks, also covering more languages. As mentioned in the first few lines of the abstract:

> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large).
a0c3330108d3e559cb4855a55ab0c0fb
apache-2.0
['text2text-generation']
false
Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
6974152edc627d90403f6fe5888a4462
apache-2.0
['text2text-generation']
false
- **Resources for more information:**
  - [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
  - [GitHub Repo](https://github.com/google-research/t5x)
  - [Hugging Face FLAN-T5 Docs (Similar to T5)](https://huggingface.co/docs/transformers/model_doc/t5)
b3ffb41287eea97a15ff90b56c9a9742
apache-2.0
['text2text-generation']
false
Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl")

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>
fa5749823d9aa001ced4f693818608f7
apache-2.0
['text2text-generation']
false
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto")

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>
8e766aad92520348749aed96de83cbdb
apache-2.0
['text2text-generation']
false
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", torch_dtype=torch.float16)

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>
ab2b82ec81492b11850637fc15f632f3
apache-2.0
['text2text-generation']
false
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", load_in_8bit=True)

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>
e539dba7a31902435f5f4f83b1ae718a
apache-2.0
['text2text-generation']
false
Direct Use and Downstream Use The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that: > The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
a8f17f0f8cb4cfcf535f80f4d564ffc1
apache-2.0
['text2text-generation']
false
Bias, Risks, and Limitations

The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):

> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
1a2c222a686143d3b784293577c34957
apache-2.0
['text2text-generation']
false
Ethical considerations and risks > Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
f6969a18d07ffef4e18389d5791f0904
apache-2.0
['text2text-generation']
false
Training Data

The model was trained on a mixture of tasks, including those described in the table below (from the original paper, figure 2):

![table.png](https://s3.amazonaws.com/moonup/production/uploads/1666363265279-62441d1d9fdefb55a0b7d12c.png)
0f4d86bae2f890c269b4413e1d1dcbb7
apache-2.0
['text2text-generation']
false
Training Procedure According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf): > These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size. The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
d44e0bf60fcc81cd01322f1e3d8f3b11
apache-2.0
['text2text-generation']
false
Testing Data, Factors & Metrics The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation: ![image.png](https://s3.amazonaws.com/moonup/production/uploads/1668072995230-62441d1d9fdefb55a0b7d12c.png) For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
e2b526659eb83c373b52a9821edab3a6
apache-2.0
['text2text-generation']
false
compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
8455f15ea3ecfaf4002185e59983163c
apache-2.0
['text2text-generation']
false
Citation

**BibTeX:**

```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
  doi = {10.48550/ARXIV.2210.11416},
  url = {https://arxiv.org/abs/2210.11416},
  author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
  keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Scaling Instruction-Finetuned Language Models},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
8c27ed0b0f16758233655b7f7ebf16b0
apache-2.0
['image-classification', 'generated_from_trainer']
false
vit-base-beans-demo-v5 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0427 - Accuracy: 0.9925
e19e2e26b4850f83608361bcb3a15242
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1378 | 1.54 | 100 | 0.1444 | 0.9549 |
| 0.0334 | 3.08 | 200 | 0.0427 | 0.9925 |
e456076a1ac2ee87c978eb4c0c4f52ce
mit
['image-to-text']
false
Vit2-DistilGPT2

This model takes in an image and outputs a caption. It was trained using the COCO dataset, and the full training script can be found in [this kaggle kernel](https://www.kaggle.com/sachin/visionencoderdecoder-model-training).
b1b0009af7e2384b9d3b393a31ad5580
mit
['image-to-text']
false
Usage

```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, GPT2Tokenizer, ViTFeatureExtractor

# VisionEncoderDecoderModel (rather than the original snippet's AutoModel) is the
# class that matches this image-captioning checkpoint
model = VisionEncoderDecoderModel.from_pretrained("sachin/vit2distilgpt2")
vit_feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
```
a4b2e13d527fe9e0c3b342c3be312e05
mit
['image-to-text']
false
```python
# make sure GPT2 appends EOS at the beginning and end
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
    outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
    return outputs

GPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
```
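The patch simply wraps every encoded sequence in BOS/EOS ids. Its effect can be seen with a stub tokenizer, with no model download needed (50256 is GPT-2's real `<|endoftext|>` id, where BOS == EOS; the token ids 11/22/33 are illustrative):

```python
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
    return [self.bos_token_id] + token_ids_0 + [self.eos_token_id]

class StubTokenizer:
    """Stand-in with GPT-2's special-token ids, for demonstration only."""
    bos_token_id = 50256  # for GPT-2, bos == eos == <|endoftext|>
    eos_token_id = 50256

StubTokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens

tok = StubTokenizer()
print(tok.build_inputs_with_special_tokens([11, 22, 33]))  # → [50256, 11, 22, 33, 50256]
```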
3d3e0b256da08dc2d5ade93424f2e021
mit
['image-to-text']
false
```python
# set pad_token_id to unk_token_id -> be careful here as unk_token_id == eos_token_id == bos_token_id
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token

# run the feature extractor to get pixel values (batch dimension included)
image = vit_feature_extractor(Image.open(image_path).convert("RGB"), return_tensors="pt").pixel_values

encoder_outputs = model.generate(image)
generated_sentences = gpt2_tokenizer.batch_decode(encoder_outputs, skip_special_tokens=True)
```

Note that the output sentence may be repeated, hence a post-processing step may be required.
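As a sketch of such a post-processing step, consecutive duplicate sentences can be collapsed (a hypothetical helper, not part of the model card):

```python
def collapse_repeats(text: str) -> str:
    """Drop consecutive duplicate sentences from a generated caption."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    deduped = [s for i, s in enumerate(sentences) if i == 0 or s != sentences[i - 1]]
    return ". ".join(deduped) + "."

print(collapse_repeats("A dog on grass. A dog on grass. A dog runs."))  # → A dog on grass. A dog runs.
```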
0a736959438db4dec0d35ffe2c4fadee
agpl-3.0
['art']
false
Introduction:
- It's an AI art model for converting text to images, images to images, inpainting, and outpainting using Stable Diffusion.
- It was developed with a focus on drawing anime characters relatively well, via fine-tuning with Dreambooth.
- The model is aimed at everyone and has limitless usage potential.
4fbc4e4fda6f97576f95ac27fdd2b7db
agpl-3.0
['art']
false
Usage:
- You can use it with any supported library; I recommend the "stable-diffusion-webui" by Automatic1111.
- You should use it as a supportive tool for creating works of art, and not rely on it completely.
- It can be used as a tool for upscaling or rendering anime-style images from 3D modeling software (Blender).
- Create an image from a sketch you made in a plain drawing program (MS Paint).
- The `masterpiece` and `best quality` tags are not necessary, as they sometimes lead to contradictory results, but if the output is distorted or discolored, add them.
488e110829fcc6d1d370f622990ef127
agpl-3.0
['art']
false
Training:
- **Data**: The model is trained on a database of various sources from the Internet provided by my friend, plus images created by another AI.
- **Scheduler**: Euler Ancestral Discrete.
- **Optimizer**: AdamW.
- **Precision**: BF16.
- **Hardware**: Google Colaboratory Pro - NVIDIA A100 40GB VRAM.
cfa31f7a815ef0548b6030d10443235c
agpl-3.0
['art']
false
**Limitations:**
- Loss of detail, errors, malformed human details (e.g. six-fingered hands), deformation, blurring, and unclear images are inevitable.
- Complex tasks cannot be handled.
- ⚠️ Content may not be appropriate for all ages: as the model is trained on data that includes adult content, the generated images may contain content not suitable for children (depending on your country, specific regulations may apply). If you do not want adult content to appear, make sure you have additional safety measures in place, such as adding "nsfw" to the negative prompt.
- The results generated by the model can be impressive, but unfortunately it currently only supports English; for other languages, consider using third-party translation programs.
- The model is trained on the `Danbooru` and `Nai` tagging systems, so long free-form text may give poor results.
- Dark, grayscale, and white-balance issues can occur. The fix is to use image-editing software like Photoshop. In the future, I need a more colorful dataset.
- My amount of money: 0 USD =((. ![](money-wallet.gif)
d2a6142efaa92cd31afd1b8e1757ad07
agpl-3.0
['art']
false
<p style="color:red">⚠️ Prohibited behaviors:</p>

- Using it for political, terrorist, subversive, racist, or otherwise lawless purposes.
- Stealing, copying, or reproducing someone else's work without permission for commercial or malicious purposes.
- Spreading false information.
ccbddc407405001b7ec7d66aa77e65f4
agpl-3.0
['art']
false
**Desires:** As this version was made only by myself and my small circle of associates, the model will not be perfect and may differ from what people expect. Any contributions from everyone will be respected. Want to support me? Thank you, please help me make it better. ❤️
1d9e91c4737dcc4bc3806df0b1be62b8
agpl-3.0
['art']
false
Special Thanks:

This wouldn't have happened without their breakthroughs.
- [Runwayml](https://huggingface.co/runwayml/): base model.
- [d8ahazard](https://github.com/d8ahazard/.sd_dreambooth_extension): Dreambooth.
- [Automatic1111](https://github.com/AUTOMATIC1111/): Web UI.
- [Mikubill](https://github.com/Mikubill/): where my ideas started.
- Chat-GPT: helped me do crazy things I thought I would never do.
- Novel AI: dataset images. An AI made me thousands of pictures without worrying about copyright or disputes.
- Danbooru: helped me write the correct tags.
- My friends and others.
- YOU! Yes, you 🫵
5b1624d5c4f52a03aaa5b1b2cef4aa7a
agpl-3.0
['art']
false
Copyright:

This license allows anyone to copy, modify, publish, and commercialize the model, provided they follow the terms of the GNU General Public License. You can learn more about the GNU General Public License [here](LICENSE.txt). If any part of the model does not comply with the terms of the GNU General Public License, the copyright and other rights of the model will still be valid. We will not be held responsible for any legal issues you cause. Don't forget me.
a3c87d4bb96b31b769049555dcb6c486
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Usage

```sh
pip install transformers "accelerate>=0.14.0" "diffusers>=0.7.2"
```

```python
import torch
from diffusers import StableDiffusionPipeline

repo = "Bingsu/my-k-anything-v3-0"
pipe = StableDiffusionPipeline.from_pretrained(
    repo,
    torch_dtype=torch.float16,
)
pipe.to("cuda")
pipe.safety_checker = None
```

```python
from typing import Optional

import torch


def gen_image(
    prompt: str,
    negative_prompt: Optional[str] = None,
    seed: Optional[int] = None,
    scale: float = 7.5,
    steps: int = 30,
):
    if seed is not None:
        generator = torch.Generator("cuda").manual_seed(seed)
    else:
        generator = None
    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        generator=generator,
        guidance_scale=scale,
        num_inference_steps=steps,
    ).images[0]
    return image
```

```python
prompt = "blue ponytail hair, brooch, adult woman in a suit, high quality, best quality"
negative = "low quality, worst quality, text"
seed = 42467781
scale = 12.0
gen_image(prompt, negative, seed, scale)
```

![Imgur](https://i.imgur.com/24G8n1m.png)
340b8782c9d29e8e3a63a87c545957e9
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights to the outputs you generate; you are free to use them, but you are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
d6e50bc1c017dbfcc434ccc32ab91dbb
apache-2.0
[]
false
Model Description

This model is a fine-tuned version of [ruRoberta-large](https://huggingface.co/sberbank-ai/ruRoberta-large). The code for the fine-tuning process can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/spellchecker/ml_ranging/models/med_ru_roberta_large/fine_tune_ru_roberta_large.py). The model is fine-tuned on a specially collected dataset of over 30,000 medical anamneses in Russian. The collected dataset can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/data/anamnesis/processed/all_anamnesis.csv).

This model was created as part of a master's project to develop a method for correcting typos in medical histories that uses BERT models to rank candidate corrections. The project is open source and can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker).
8330856140b283034dac4f3e5676d004
apache-2.0
[]
false
How to Get Started With the Model

You can use the model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> pipeline = pipeline('fill-mask', model='DmitryPogrebnoy/MedRuRobertaLarge')
>>> pipeline("У пациента <mask> боль в грудине.")
[{'score': 0.2467374950647354,
  'token': 9233,
  'token_str': ' сильный',
  'sequence': 'У пациента сильный боль в грудине.'},
 {'score': 0.16476310789585114,
  'token': 27876,
  'token_str': ' постоянный',
  'sequence': 'У пациента постоянный боль в грудине.'},
 {'score': 0.07211139053106308,
  'token': 19551,
  'token_str': ' острый',
  'sequence': 'У пациента острый боль в грудине.'},
 {'score': 0.0616639070212841,
  'token': 18840,
  'token_str': ' сильная',
  'sequence': 'У пациента сильная боль в грудине.'},
 {'score': 0.029712719842791557,
  'token': 40176,
  'token_str': ' острая',
  'sequence': 'У пациента острая боль в грудине.'}]
```

Or you can load the model and tokenizer and do what you need to do:

```python
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("DmitryPogrebnoy/MedRuRobertaLarge")
>>> model = AutoModelForMaskedLM.from_pretrained("DmitryPogrebnoy/MedRuRobertaLarge")
```
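The candidate-ranking idea behind the project can be sketched without running the model: given fill-mask scores for several spelling candidates (the `rank_candidates` helper below is hypothetical, not part of MedSpellChecker's actual API, and the scores echo the pipeline output shown above), the highest-scoring candidate is chosen as the correction.

```python
# Illustrative sketch of ranking typo-correction candidates by
# masked-LM scores; this helper is hypothetical and not part of
# MedSpellChecker's real API.
def rank_candidates(scores: dict) -> list:
    """Sort candidate corrections from most to least likely."""
    return sorted(scores, key=scores.get, reverse=True)


# Fill-mask scores like those above for "У пациента <mask> боль".
candidate_scores = {
    "сильный": 0.2467,
    "постоянный": 0.1648,
    "острый": 0.0721,
}
best = rank_candidates(candidate_scores)[0]
```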
6037ffb0dd9fdb8ac2679b331eb8733f
apache-2.0
[]
false
Zabanshenas - Language Detector

Zabanshenas is a Transformer-based solution for identifying the most likely language of a written document/text. Zabanshenas is a Persian word that has two meanings:

- A person who studies linguistics.
- A way to identify the type of written language.
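The detector reports languages by ISO 639-3-style codes, as in the evaluation tables below. A minimal sketch of mapping a few detected codes to display names (the dictionary is a small illustrative excerpt, not the model's full label set):

```python
# Small illustrative excerpt of a code-to-name lookup for the
# detector's output labels; the real label set covers far more
# Wikipedia languages than shown here.
LANG_NAMES = {
    "fas": "Persian",
    "eng": "English",
    "deu": "German",
    "zho": "Standard Chinese",
}


def describe(code: str) -> str:
    """Human-readable name for a detected language code."""
    return LANG_NAMES.get(code, f"unknown ({code})")
```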
a7880e1008a059063a24f39c463f2078
apache-2.0
[]
false
By Paragraph

| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 1.000000 | 0.982143 | 0.990991 |
| Afrikaans (afr) | 1.000000 | 1.000000 | 1.000000 |
| Alemannic German (als) | 1.000000 | 0.946429 | 0.972477 |
| Amharic (amh) | 1.000000 | 0.982143 | 0.990991 |
| Old English (ang) | 0.981818 | 0.964286 | 0.972973 |
| Arabic (ara) | 0.846154 | 0.982143 | 0.909091 |
| Aragonese (arg) | 1.000000 | 1.000000 | 1.000000 |
| Egyptian Arabic (arz) | 0.979592 | 0.857143 | 0.914286 |
| Assamese (asm) | 0.981818 | 0.964286 | 0.972973 |
| Asturian (ast) | 0.964912 | 0.982143 | 0.973451 |
| Avar (ava) | 0.941176 | 0.905660 | 0.923077 |
| Aymara (aym) | 0.964912 | 0.982143 | 0.973451 |
| South Azerbaijani (azb) | 0.965517 | 1.000000 | 0.982456 |
| Azerbaijani (aze) | 1.000000 | 1.000000 | 1.000000 |
| Bashkir (bak) | 1.000000 | 0.978261 | 0.989011 |
| Bavarian (bar) | 0.843750 | 0.964286 | 0.900000 |
| Central Bikol (bcl) | 1.000000 | 0.982143 | 0.990991 |
| Belarusian (Taraschkewiza) (be-tarask) | 1.000000 | 0.875000 | 0.933333 |
| Belarusian (bel) | 0.870968 | 0.964286 | 0.915254 |
| Bengali (ben) | 0.982143 | 0.982143 | 0.982143 |
| Bhojpuri (bho) | 1.000000 | 0.928571 | 0.962963 |
| Banjar (bjn) | 0.981132 | 0.945455 | 0.962963 |
| Tibetan (bod) | 1.000000 | 0.982143 | 0.990991 |
| Bosnian (bos) | 0.552632 | 0.375000 | 0.446809 |
| Bishnupriya (bpy) | 1.000000 | 0.982143 | 0.990991 |
| Breton (bre) | 1.000000 | 0.964286 | 0.981818 |
| Bulgarian (bul) | 1.000000 | 0.964286 | 0.981818 |
| Buryat (bxr) | 0.946429 | 0.946429 | 0.946429 |
| Catalan (cat) | 0.982143 | 0.982143 | 0.982143 |
| Chavacano (cbk) | 0.914894 | 0.767857 | 0.834951 |
| Min Dong (cdo) | 1.000000 | 0.982143 | 0.990991 |
| Cebuano (ceb) | 1.000000 | 1.000000 | 1.000000 |
| Czech (ces) | 1.000000 | 1.000000 | 1.000000 |
| Chechen (che) | 1.000000 | 1.000000 | 1.000000 |
| Cherokee (chr) | 1.000000 | 0.963636 | 0.981481 |
| Chuvash (chv) | 0.938776 | 0.958333 | 0.948454 |
| Central Kurdish (ckb) | 1.000000 | 1.000000 | 1.000000 |
| Cornish (cor) | 1.000000 | 1.000000 | 1.000000 |
| Corsican (cos) | 1.000000 | 0.982143 | 0.990991 |
| Crimean Tatar (crh) | 1.000000 | 0.946429 | 0.972477 |
| Kashubian (csb) | 1.000000 | 0.963636 | 0.981481 |
| Welsh (cym) | 1.000000 | 1.000000 | 1.000000 |
| Danish (dan) | 1.000000 | 1.000000 | 1.000000 |
| German (deu) | 0.828125 | 0.946429 | 0.883333 |
| Dimli (diq) | 0.964912 | 0.982143 | 0.973451 |
| Dhivehi (div) | 1.000000 | 1.000000 | 1.000000 |
| Lower Sorbian (dsb) | 1.000000 | 0.982143 | 0.990991 |
| Doteli (dty) | 0.940000 | 0.854545 | 0.895238 |
| Emilian (egl) | 1.000000 | 0.928571 | 0.962963 |
| Modern Greek (ell) | 1.000000 | 1.000000 | 1.000000 |
| English (eng) | 0.588889 | 0.946429 | 0.726027 |
| Esperanto (epo) | 1.000000 | 0.982143 | 0.990991 |
| Estonian (est) | 0.963636 | 0.946429 | 0.954955 |
| Basque (eus) | 1.000000 | 0.982143 | 0.990991 |
| Extremaduran (ext) | 0.982143 | 0.982143 | 0.982143 |
| Faroese (fao) | 1.000000 | 1.000000 | 1.000000 |
| Persian (fas) | 0.948276 | 0.982143 | 0.964912 |
| Finnish (fin) | 1.000000 | 1.000000 | 1.000000 |
| French (fra) | 0.710145 | 0.875000 | 0.784000 |
| Arpitan (frp) | 1.000000 | 0.946429 | 0.972477 |
| Western Frisian (fry) | 0.982143 | 0.982143 | 0.982143 |
| Friulian (fur) | 1.000000 | 0.982143 | 0.990991 |
| Gagauz (gag) | 0.981132 | 0.945455 | 0.962963 |
| Scottish Gaelic (gla) | 0.982143 | 0.982143 | 0.982143 |
| Irish (gle) | 0.949153 | 1.000000 | 0.973913 |
| Galician (glg) | 1.000000 | 1.000000 | 1.000000 |
| Gilaki (glk) | 0.981132 | 0.945455 | 0.962963 |
| Manx (glv) | 1.000000 | 1.000000 | 1.000000 |
| Guarani (grn) | 1.000000 | 0.964286 | 0.981818 |
| Gujarati (guj) | 1.000000 | 0.982143 | 0.990991 |
| Hakka Chinese (hak) | 0.981818 | 0.964286 | 0.972973 |
| Haitian Creole (hat) | 1.000000 | 1.000000 | 1.000000 |
| Hausa (hau) | 1.000000 | 0.945455 | 0.971963 |
| Serbo-Croatian (hbs) | 0.448276 | 0.464286 | 0.456140 |
| Hebrew (heb) | 1.000000 | 0.982143 | 0.990991 |
| Fiji Hindi (hif) | 0.890909 | 0.890909 | 0.890909 |
| Hindi (hin) | 0.981481 | 0.946429 | 0.963636 |
| Croatian (hrv) | 0.500000 | 0.636364 | 0.560000 |
| Upper Sorbian (hsb) | 0.955556 | 1.000000 | 0.977273 |
| Hungarian (hun) | 1.000000 | 1.000000 | 1.000000 |
| Armenian (hye) | 1.000000 | 0.981818 | 0.990826 |
| Igbo (ibo) | 0.918033 | 1.000000 | 0.957265 |
| Ido (ido) | 1.000000 | 1.000000 | 1.000000 |
| Interlingue (ile) | 1.000000 | 0.962264 | 0.980769 |
| Iloko (ilo) | 0.947368 | 0.964286 | 0.955752 |
| Interlingua (ina) | 1.000000 | 1.000000 | 1.000000 |
| Indonesian (ind) | 0.761905 | 0.872727 | 0.813559 |
| Icelandic (isl) | 1.000000 | 1.000000 | 1.000000 |
| Italian (ita) | 0.861538 | 1.000000 | 0.925620 |
| Jamaican Patois (jam) | 1.000000 | 0.946429 | 0.972477 |
| Javanese (jav) | 0.964912 | 0.982143 | 0.973451 |
| Lojban (jbo) | 1.000000 | 1.000000 | 1.000000 |
| Japanese (jpn) | 1.000000 | 1.000000 | 1.000000 |
| Karakalpak (kaa) | 0.965517 | 1.000000 | 0.982456 |
| Kabyle (kab) | 1.000000 | 0.964286 | 0.981818 |
| Kannada (kan) | 0.982143 | 0.982143 | 0.982143 |
| Georgian (kat) | 1.000000 | 0.964286 | 0.981818 |
| Kazakh (kaz) | 0.980769 | 0.980769 | 0.980769 |
| Kabardian (kbd) | 1.000000 | 0.982143 | 0.990991 |
| Central Khmer (khm) | 0.960784 | 0.875000 | 0.915888 |
| Kinyarwanda (kin) | 0.981132 | 0.928571 | 0.954128 |
| Kirghiz (kir) | 1.000000 | 1.000000 | 1.000000 |
| Komi-Permyak (koi) | 0.962264 | 0.910714 | 0.935780 |
| Konkani (kok) | 0.964286 | 0.981818 | 0.972973 |
| Komi (kom) | 1.000000 | 0.962264 | 0.980769 |
| Korean (kor) | 1.000000 | 1.000000 | 1.000000 |
| Karachay-Balkar (krc) | 1.000000 | 0.982143 | 0.990991 |
| Ripuarisch (ksh) | 1.000000 | 0.964286 | 0.981818 |
| Kurdish (kur) | 1.000000 | 0.964286 | 0.981818 |
| Ladino (lad) | 1.000000 | 1.000000 | 1.000000 |
| Lao (lao) | 0.961538 | 0.909091 | 0.934579 |
| Latin (lat) | 0.877193 | 0.943396 | 0.909091 |
| Latvian (lav) | 0.963636 | 0.946429 | 0.954955 |
| Lezghian (lez) | 1.000000 | 0.964286 | 0.981818 |
| Ligurian (lij) | 1.000000 | 0.964286 | 0.981818 |
| Limburgan (lim) | 0.938776 | 1.000000 | 0.968421 |
| Lingala (lin) | 0.980769 | 0.927273 | 0.953271 |
| Lithuanian (lit) | 0.982456 | 1.000000 | 0.991150 |
| Lombard (lmo) | 1.000000 | 1.000000 | 1.000000 |
| Northern Luri (lrc) | 1.000000 | 0.928571 | 0.962963 |
| Latgalian (ltg) | 1.000000 | 0.982143 | 0.990991 |
| Luxembourgish (ltz) | 0.949153 | 1.000000 | 0.973913 |
| Luganda (lug) | 1.000000 | 1.000000 | 1.000000 |
| Literary Chinese (lzh) | 1.000000 | 1.000000 | 1.000000 |
| Maithili (mai) | 0.931034 | 0.964286 | 0.947368 |
| Malayalam (mal) | 1.000000 | 0.982143 | 0.990991 |
| Banyumasan (map-bms) | 0.977778 | 0.785714 | 0.871287 |
| Marathi (mar) | 0.949153 | 1.000000 | 0.973913 |
| Moksha (mdf) | 0.980000 | 0.890909 | 0.933333 |
| Eastern Mari (mhr) | 0.981818 | 0.964286 | 0.972973 |
| Minangkabau (min) | 1.000000 | 1.000000 | 1.000000 |
| Macedonian (mkd) | 1.000000 | 0.981818 | 0.990826 |
| Malagasy (mlg) | 0.981132 | 1.000000 | 0.990476 |
| Maltese (mlt) | 0.982456 | 1.000000 | 0.991150 |
| Min Nan Chinese (nan) | 1.000000 | 1.000000 | 1.000000 |
| Mongolian (mon) | 1.000000 | 0.981818 | 0.990826 |
| Maori (mri) | 1.000000 | 1.000000 | 1.000000 |
| Western Mari (mrj) | 0.982456 | 1.000000 | 0.991150 |
| Malay (msa) | 0.862069 | 0.892857 | 0.877193 |
| Mirandese (mwl) | 1.000000 | 0.982143 | 0.990991 |
| Burmese (mya) | 1.000000 | 1.000000 | 1.000000 |
| Erzya (myv) | 0.818182 | 0.964286 | 0.885246 |
| Mazanderani (mzn) | 0.981481 | 1.000000 | 0.990654 |
| Neapolitan (nap) | 1.000000 | 0.981818 | 0.990826 |
| Navajo (nav) | 1.000000 | 1.000000 | 1.000000 |
| Classical Nahuatl (nci) | 0.981481 | 0.946429 | 0.963636 |
| Low German (nds) | 0.982143 | 0.982143 | 0.982143 |
| West Low German (nds-nl) | 1.000000 | 1.000000 | 1.000000 |
| Nepali (macrolanguage) (nep) | 0.881356 | 0.928571 | 0.904348 |
| Newari (new) | 1.000000 | 0.909091 | 0.952381 |
| Dutch (nld) | 0.982143 | 0.982143 | 0.982143 |
| Norwegian Nynorsk (nno) | 1.000000 | 1.000000 | 1.000000 |
| Bokmål (nob) | 1.000000 | 1.000000 | 1.000000 |
| Narom (nrm) | 0.981818 | 0.964286 | 0.972973 |
| Northern Sotho (nso) | 1.000000 | 1.000000 | 1.000000 |
| Occitan (oci) | 0.903846 | 0.839286 | 0.870370 |
| Livvi-Karelian (olo) | 0.982456 | 1.000000 | 0.991150 |
| Oriya (ori) | 0.964912 | 0.982143 | 0.973451 |
| Oromo (orm) | 0.982143 | 0.982143 | 0.982143 |
| Ossetian (oss) | 0.982143 | 1.000000 | 0.990991 |
| Pangasinan (pag) | 0.980000 | 0.875000 | 0.924528 |
| Pampanga (pam) | 0.928571 | 0.896552 | 0.912281 |
| Panjabi (pan) | 1.000000 | 1.000000 | 1.000000 |
| Papiamento (pap) | 1.000000 | 0.964286 | 0.981818 |
| Picard (pcd) | 0.849057 | 0.849057 | 0.849057 |
| Pennsylvania German (pdc) | 0.854839 | 0.946429 | 0.898305 |
| Palatine German (pfl) | 0.946429 | 0.946429 | 0.946429 |
| Western Panjabi (pnb) | 0.981132 | 0.962963 | 0.971963 |
| Polish (pol) | 0.933333 | 1.000000 | 0.965517 |
| Portuguese (por) | 0.774648 | 0.982143 | 0.866142 |
| Pushto (pus) | 1.000000 | 0.910714 | 0.953271 |
| Quechua (que) | 0.962963 | 0.928571 | 0.945455 |
| Tarantino dialect (roa-tara) | 1.000000 | 0.964286 | 0.981818 |
| Romansh (roh) | 1.000000 | 0.928571 | 0.962963 |
| Romanian (ron) | 0.965517 | 1.000000 | 0.982456 |
| Rusyn (rue) | 0.946429 | 0.946429 | 0.946429 |
| Aromanian (rup) | 0.962963 | 0.928571 | 0.945455 |
| Russian (rus) | 0.859375 | 0.982143 | 0.916667 |
| Yakut (sah) | 1.000000 | 0.982143 | 0.990991 |
| Sanskrit (san) | 0.982143 | 0.982143 | 0.982143 |
| Sicilian (scn) | 1.000000 | 1.000000 | 1.000000 |
| Scots (sco) | 0.982143 | 0.982143 | 0.982143 |
| Samogitian (sgs) | 1.000000 | 0.982143 | 0.990991 |
| Sinhala (sin) | 0.964912 | 0.982143 | 0.973451 |
| Slovak (slk) | 1.000000 | 0.982143 | 0.990991 |
| Slovene (slv) | 1.000000 | 0.981818 | 0.990826 |
| Northern Sami (sme) | 0.962264 | 0.962264 | 0.962264 |
| Shona (sna) | 0.933333 | 1.000000 | 0.965517 |
| Sindhi (snd) | 1.000000 | 1.000000 | 1.000000 |
| Somali (som) | 0.948276 | 1.000000 | 0.973451 |
| Spanish (spa) | 0.739130 | 0.910714 | 0.816000 |
| Albanian (sqi) | 0.982143 | 0.982143 | 0.982143 |
| Sardinian (srd) | 1.000000 | 0.982143 | 0.990991 |
| Sranan (srn) | 1.000000 | 1.000000 | 1.000000 |
| Serbian (srp) | 1.000000 | 0.946429 | 0.972477 |
| Saterfriesisch (stq) | 1.000000 | 0.964286 | 0.981818 |
| Sundanese (sun) | 1.000000 | 0.977273 | 0.988506 |
| Swahili (macrolanguage) (swa) | 1.000000 | 1.000000 | 1.000000 |
| Swedish (swe) | 1.000000 | 1.000000 | 1.000000 |
| Silesian (szl) | 1.000000 | 0.981481 | 0.990654 |
| Tamil (tam) | 0.982143 | 1.000000 | 0.990991 |
| Tatar (tat) | 1.000000 | 1.000000 | 1.000000 |
| Tulu (tcy) | 0.982456 | 1.000000 | 0.991150 |
| Telugu (tel) | 1.000000 | 0.920000 | 0.958333 |
| Tetum (tet) | 1.000000 | 0.964286 | 0.981818 |
| Tajik (tgk) | 1.000000 | 1.000000 | 1.000000 |
| Tagalog (tgl) | 1.000000 | 1.000000 | 1.000000 |
| Thai (tha) | 0.932203 | 0.982143 | 0.956522 |
| Tongan (ton) | 1.000000 | 0.964286 | 0.981818 |
| Tswana (tsn) | 1.000000 | 1.000000 | 1.000000 |
| Turkmen (tuk) | 1.000000 | 0.982143 | 0.990991 |
| Turkish (tur) | 0.901639 | 0.982143 | 0.940171 |
| Tuvan (tyv) | 1.000000 | 0.964286 | 0.981818 |
| Udmurt (udm) | 1.000000 | 0.982143 | 0.990991 |
| Uighur (uig) | 1.000000 | 0.982143 | 0.990991 |
| Ukrainian (ukr) | 0.963636 | 0.946429 | 0.954955 |
| Urdu (urd) | 1.000000 | 0.982143 | 0.990991 |
| Uzbek (uzb) | 1.000000 | 1.000000 | 1.000000 |
| Venetian (vec) | 1.000000 | 0.982143 | 0.990991 |
| Veps (vep) | 0.982456 | 1.000000 | 0.991150 |
| Vietnamese (vie) | 0.964912 | 0.982143 | 0.973451 |
| Vlaams (vls) | 1.000000 | 0.982143 | 0.990991 |
| Volapük (vol) | 1.000000 | 1.000000 | 1.000000 |
| Võro (vro) | 0.964286 | 0.964286 | 0.964286 |
| Waray (war) | 1.000000 | 0.982143 | 0.990991 |
| Walloon (wln) | 1.000000 | 1.000000 | 1.000000 |
| Wolof (wol) | 0.981481 | 0.963636 | 0.972477 |
| Wu Chinese (wuu) | 0.981481 | 0.946429 | 0.963636 |
| Xhosa (xho) | 1.000000 | 0.964286 | 0.981818 |
| Mingrelian (xmf) | 1.000000 | 0.964286 | 0.981818 |
| Yiddish (yid) | 1.000000 | 1.000000 | 1.000000 |
| Yoruba (yor) | 0.964912 | 0.982143 | 0.973451 |
| Zeeuws (zea) | 1.000000 | 0.982143 | 0.990991 |
| Cantonese (zh-yue) | 0.981481 | 0.946429 | 0.963636 |
| Standard Chinese (zho) | 0.932203 | 0.982143 | 0.956522 |
| accuracy | 0.963055 | 0.963055 | 0.963055 |
| macro avg | 0.966424 | 0.963216 | 0.963891 |
| weighted avg | 0.966040 | 0.963055 | 0.963606 |
194af85c38752debe1c396b97953b784
apache-2.0
[]
false
By Sentence

| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 0.754545 | 0.873684 | 0.809756 |
| Afrikaans (afr) | 0.708955 | 0.940594 | 0.808511 |
| Alemannic German (als) | 0.870130 | 0.752809 | 0.807229 |
| Amharic (amh) | 1.000000 | 0.820000 | 0.901099 |
| Old English (ang) | 0.966667 | 0.906250 | 0.935484 |
| Arabic (ara) | 0.907692 | 0.967213 | 0.936508 |
| Aragonese (arg) | 0.921569 | 0.959184 | 0.940000 |
| Egyptian Arabic (arz) | 0.964286 | 0.843750 | 0.900000 |
| Assamese (asm) | 0.964286 | 0.870968 | 0.915254 |
| Asturian (ast) | 0.880000 | 0.795181 | 0.835443 |
| Avar (ava) | 0.864198 | 0.843373 | 0.853659 |
| Aymara (aym) | 1.000000 | 0.901961 | 0.948454 |
| South Azerbaijani (azb) | 0.979381 | 0.989583 | 0.984456 |
| Azerbaijani (aze) | 0.989899 | 0.960784 | 0.975124 |
| Bashkir (bak) | 0.837209 | 0.857143 | 0.847059 |
| Bavarian (bar) | 0.741935 | 0.766667 | 0.754098 |
| Central Bikol (bcl) | 0.962963 | 0.928571 | 0.945455 |
| Belarusian (Taraschkewiza) (be-tarask) | 0.857143 | 0.733333 | 0.790419 |
| Belarusian (bel) | 0.775510 | 0.752475 | 0.763819 |
| Bengali (ben) | 0.861111 | 0.911765 | 0.885714 |
| Bhojpuri (bho) | 0.965517 | 0.933333 | 0.949153 |
| Banjar (bjn) | 0.891566 | 0.880952 | 0.886228 |
| Tibetan (bod) | 1.000000 | 1.000000 | 1.000000 |
| Bosnian (bos) | 0.375000 | 0.323077 | 0.347107 |
| Bishnupriya (bpy) | 0.986301 | 1.000000 | 0.993103 |
| Breton (bre) | 0.951613 | 0.893939 | 0.921875 |
| Bulgarian (bul) | 0.945055 | 0.877551 | 0.910053 |
| Buryat (bxr) | 0.955556 | 0.843137 | 0.895833 |
| Catalan (cat) | 0.692308 | 0.750000 | 0.720000 |
| Chavacano (cbk) | 0.842857 | 0.641304 | 0.728395 |
| Min Dong (cdo) | 0.972973 | 1.000000 | 0.986301 |
| Cebuano (ceb) | 0.981308 | 0.954545 | 0.967742 |
| Czech (ces) | 0.944444 | 0.915385 | 0.929687 |
| Chechen (che) | 0.875000 | 0.700000 | 0.777778 |
| Cherokee (chr) | 1.000000 | 0.970588 | 0.985075 |
| Chuvash (chv) | 0.875000 | 0.836957 | 0.855556 |
| Central Kurdish (ckb) | 1.000000 | 0.983051 | 0.991453 |
| Cornish (cor) | 0.979592 | 0.969697 | 0.974619 |
| Corsican (cos) | 0.986842 | 0.925926 | 0.955414 |
| Crimean Tatar (crh) | 0.958333 | 0.907895 | 0.932432 |
| Kashubian (csb) | 0.920354 | 0.904348 | 0.912281 |
| Welsh (cym) | 0.971014 | 0.943662 | 0.957143 |
| Danish (dan) | 0.865169 | 0.777778 | 0.819149 |
| German (deu) | 0.721311 | 0.822430 | 0.768559 |
| Dimli (diq) | 0.915966 | 0.923729 | 0.919831 |
| Dhivehi (div) | 1.000000 | 0.991228 | 0.995595 |
| Lower Sorbian (dsb) | 0.898876 | 0.879121 | 0.888889 |
| Doteli (dty) | 0.821429 | 0.638889 | 0.718750 |
| Emilian (egl) | 0.988095 | 0.922222 | 0.954023 |
| Modern Greek (ell) | 0.988636 | 0.966667 | 0.977528 |
| English (eng) | 0.522727 | 0.784091 | 0.627273 |
| Esperanto (epo) | 0.963855 | 0.930233 | 0.946746 |
| Estonian (est) | 0.922222 | 0.873684 | 0.897297 |
| Basque (eus) | 1.000000 | 0.941176 | 0.969697 |
| Extremaduran (ext) | 0.925373 | 0.885714 | 0.905109 |
| Faroese (fao) | 0.855072 | 0.887218 | 0.870849 |
| Persian (fas) | 0.879630 | 0.979381 | 0.926829 |
| Finnish (fin) | 0.952830 | 0.943925 | 0.948357 |
| French (fra) | 0.676768 | 0.943662 | 0.788235 |
| Arpitan (frp) | 0.867925 | 0.807018 | 0.836364 |
| Western Frisian (fry) | 0.956989 | 0.890000 | 0.922280 |
| Friulian (fur) | 1.000000 | 0.857143 | 0.923077 |
| Gagauz (gag) | 0.939024 | 0.802083 | 0.865169 |
| Scottish Gaelic (gla) | 1.000000 | 0.879121 | 0.935673 |
| Irish (gle) | 0.989247 | 0.958333 | 0.973545 |
| Galician (glg) | 0.910256 | 0.922078 | 0.916129 |
| Gilaki (glk) | 0.964706 | 0.872340 | 0.916201 |
| Manx (glv) | 1.000000 | 0.965517 | 0.982456 |
| Guarani (grn) | 0.983333 | 1.000000 | 0.991597 |
| Gujarati (guj) | 1.000000 | 0.991525 | 0.995745 |
| Hakka Chinese (hak) | 0.955224 | 0.955224 | 0.955224 |
| Haitian Creole (hat) | 0.833333 | 0.666667 | 0.740741 |
| Hausa (hau) | 0.936709 | 0.913580 | 0.925000 |
| Serbo-Croatian (hbs) | 0.452830 | 0.410256 | 0.430493 |
| Hebrew (heb) | 0.988235 | 0.976744 | 0.982456 |
| Fiji Hindi (hif) | 0.936709 | 0.840909 | 0.886228 |
| Hindi (hin) | 0.965517 | 0.756757 | 0.848485 |
| Croatian (hrv) | 0.443820 | 0.537415 | 0.486154 |
| Upper Sorbian (hsb) | 0.951613 | 0.830986 | 0.887218 |
| Hungarian (hun) | 0.854701 | 0.909091 | 0.881057 |
| Armenian (hye) | 1.000000 | 0.816327 | 0.898876 |
| Igbo (ibo) | 0.974359 | 0.926829 | 0.950000 |
| Ido (ido) | 0.975000 | 0.987342 | 0.981132 |
| Interlingue (ile) | 0.880597 | 0.921875 | 0.900763 |
| Iloko (ilo) | 0.882353 | 0.821918 | 0.851064 |
| Interlingua (ina) | 0.952381 | 0.895522 | 0.923077 |
| Indonesian (ind) | 0.606383 | 0.695122 | 0.647727 |
| Icelandic (isl) | 0.978261 | 0.882353 | 0.927835 |
| Italian (ita) | 0.910448 | 0.910448 | 0.910448 |
| Jamaican Patois (jam) | 0.988764 | 0.967033 | 0.977778 |
| Javanese (jav) | 0.903614 | 0.862069 | 0.882353 |
| Lojban (jbo) | 0.943878 | 0.929648 | 0.936709 |
| Japanese (jpn) | 1.000000 | 0.764706 | 0.866667 |
| Karakalpak (kaa) | 0.940171 | 0.901639 | 0.920502 |
| Kabyle (kab) | 0.985294 | 0.837500 | 0.905405 |
| Kannada (kan) | 0.975806 | 0.975806 | 0.975806 |
| Georgian (kat) | 0.953704 | 0.903509 | 0.927928 |
| Kazakh (kaz) | 0.934579 | 0.877193 | 0.904977 |
| Kabardian (kbd) | 0.987952 | 0.953488 | 0.970414 |
| Central Khmer (khm) | 0.928571 | 0.829787 | 0.876404 |
| Kinyarwanda (kin) | 0.953125 | 0.938462 | 0.945736 |
| Kirghiz (kir) | 0.927632 | 0.881250 | 0.903846 |
| Komi-Permyak (koi) | 0.750000 | 0.776786 | 0.763158 |
| Konkani (kok) | 0.893491 | 0.872832 | 0.883041 |
| Komi (kom) | 0.734177 | 0.690476 | 0.711656 |
| Korean (kor) | 0.989899 | 0.989899 | 0.989899 |
| Karachay-Balkar (krc) | 0.928571 | 0.917647 | 0.923077 |
| Ripuarisch (ksh) | 0.915789 | 0.896907 | 0.906250 |
| Kurdish (kur) | 0.977528 | 0.935484 | 0.956044 |
| Ladino (lad) | 0.985075 | 0.904110 | 0.942857 |
| Lao (lao) | 0.896552 | 0.812500 | 0.852459 |
| Latin (lat) | 0.741935 | 0.831325 | 0.784091 |
| Latvian (lav) | 0.710526 | 0.878049 | 0.785455 |
| Lezghian (lez) | 0.975309 | 0.877778 | 0.923977 |
| Ligurian (lij) | 0.951807 | 0.897727 | 0.923977 |
| Limburgan (lim) | 0.909091 | 0.921053 | 0.915033 |
| Lingala (lin) | 0.942857 | 0.814815 | 0.874172 |
| Lithuanian (lit) | 0.892857 | 0.925926 | 0.909091 |
| Lombard (lmo) | 0.766234 | 0.951613 | 0.848921 |
| Northern Luri (lrc) | 0.972222 | 0.875000 | 0.921053 |
| Latgalian (ltg) | 0.895349 | 0.865169 | 0.880000 |
| Luxembourgish (ltz) | 0.882353 | 0.750000 | 0.810811 |
| Luganda (lug) | 0.946429 | 0.883333 | 0.913793 |
| Literary Chinese (lzh) | 1.000000 | 1.000000 | 1.000000 |
| Maithili (mai) | 0.893617 | 0.823529 | 0.857143 |
| Malayalam (mal) | 1.000000 | 0.975000 | 0.987342 |
| Banyumasan (map-bms) | 0.924242 | 0.772152 | 0.841379 |
| Marathi (mar) | 0.874126 | 0.919118 | 0.896057 |
| Moksha (mdf) | 0.771242 | 0.830986 | 0.800000 |
| Eastern Mari (mhr) | 0.820000 | 0.860140 | 0.839590 |
| Minangkabau (min) | 0.973684 | 0.973684 | 0.973684 |
| Macedonian (mkd) | 0.895652 | 0.953704 | 0.923767 |
| Malagasy (mlg) | 1.000000 | 0.966102 | 0.982759 |
| Maltese (mlt) | 0.987952 | 0.964706 | 0.976190 |
| Min Nan Chinese (nan) | 0.975000 | 1.000000 | 0.987342 |
| Mongolian (mon) | 0.954545 | 0.933333 | 0.943820 |
| Maori (mri) | 0.985294 | 1.000000 | 0.992593 |
| Western Mari (mrj) | 0.966292 | 0.914894 | 0.939891 |
| Malay (msa) | 0.770270 | 0.695122 | 0.730769 |
| Mirandese (mwl) | 0.970588 | 0.891892 | 0.929577 |
| Burmese (mya) | 1.000000 | 0.964286 | 0.981818 |
| Erzya (myv) | 0.535714 | 0.681818 | 0.600000 |
| Mazanderani (mzn) | 0.968750 | 0.898551 | 0.932331 |
| Neapolitan (nap) | 0.892308 | 0.865672 | 0.878788 |
| Navajo (nav) | 0.984375 | 0.984375 | 0.984375 |
| Classical Nahuatl (nci) | 0.901408 | 0.761905 | 0.825806 |
| Low German (nds) | 0.896226 | 0.913462 | 0.904762 |
| West Low German (nds-nl) | 0.873563 | 0.835165 | 0.853933 |
| Nepali (macrolanguage) (nep) | 0.704545 | 0.861111 | 0.775000 |
| Newari (new) | 0.920000 | 0.741935 | 0.821429 |
| Dutch (nld) | 0.925926 | 0.872093 | 0.898204 |
| Norwegian Nynorsk (nno) | 0.847059 | 0.808989 | 0.827586 |
| Bokmål (nob) | 0.861386 | 0.852941 | 0.857143 |
| Narom (nrm) | 0.966667 | 0.983051 | 0.974790 |
| Northern Sotho (nso) | 0.897436 | 0.921053 | 0.909091 |
| Occitan (oci) | 0.958333 | 0.696970 | 0.807018 |
| Livvi-Karelian (olo) | 0.967742 | 0.937500 | 0.952381 |
| Oriya (ori) | 0.933333 | 1.000000 | 0.965517 |
| Oromo (orm) | 0.977528 | 0.915789 | 0.945652 |
| Ossetian (oss) | 0.958333 | 0.841463 | 0.896104 |
| Pangasinan (pag) | 0.847328 | 0.909836 | 0.877470 |
| Pampanga (pam) | 0.969697 | 0.780488 | 0.864865 |
| Panjabi (pan) | 1.000000 | 1.000000 | 1.000000 |
| Papiamento (pap) | 0.876190 | 0.920000 | 0.897561 |
| Picard (pcd) | 0.707317 | 0.568627 | 0.630435 |
| Pennsylvania German (pdc) | 0.827273 | 0.827273 | 0.827273 |
| Palatine German (pfl) | 0.882353 | 0.914634 | 0.898204 |
| Western Panjabi (pnb) | 0.964286 | 0.931034 | 0.947368 |
| Polish (pol) | 0.859813 | 0.910891 | 0.884615 |
| Portuguese (por) | 0.535714 | 0.833333 | 0.652174 |
| Pushto (pus) | 0.989362 | 0.902913 | 0.944162 |
| Quechua (que) | 0.979167 | 0.903846 | 0.940000 |
| Tarantino dialect (roa-tara) | 0.964912 | 0.901639 | 0.932203 |
| Romansh (roh) | 0.914894 | 0.895833 | 0.905263 |
| Romanian (ron) | 0.880597 | 0.880597 | 0.880597 |
| Rusyn (rue) | 0.932584 | 0.805825 | 0.864583 |
| Aromanian (rup) | 0.783333 | 0.758065 | 0.770492 |
| Russian (rus) | 0.517986 | 0.765957 | 0.618026 |
| Yakut (sah) | 0.954023 | 0.922222 | 0.937853 |
| Sanskrit (san) | 0.866667 | 0.951220 | 0.906977 |
| Sicilian (scn) | 0.984375 | 0.940299 | 0.961832 |
| Scots (sco) | 0.851351 | 0.900000 | 0.875000 |
| Samogitian (sgs) | 0.977011 | 0.876289 | 0.923913 |
| Sinhala (sin) | 0.406154 | 0.985075 | 0.575163 |
| Slovak (slk) | 0.956989 | 0.872549 | 0.912821 |
| Slovene (slv) | 0.907216 | 0.854369 | 0.880000 |
| Northern Sami (sme) | 0.949367 | 0.892857 | 0.920245 |
| Shona (sna) | 0.936508 | 0.855072 | 0.893939 |
| Sindhi (snd) | 0.984962 | 0.992424 | 0.988679 |
| Somali (som) | 0.949153 | 0.848485 | 0.896000 |
| Spanish (spa) | 0.584158 | 0.746835 | 0.655556 |
| Albanian (sqi) | 0.988095 | 0.912088 | 0.948571 |
| Sardinian (srd) | 0.957746 | 0.931507 | 0.944444 |
| Sranan (srn) | 0.985714 | 0.945205 | 0.965035 |
| Serbian (srp) | 0.950980 | 0.889908 | 0.919431 |
| Saterfriesisch (stq) | 0.962500 | 0.875000 | 0.916667 |
| Sundanese (sun) | 0.778846 | 0.910112 | 0.839378 |
| Swahili (macrolanguage) (swa) | 0.915493 | 0.878378 | 0.896552 |
| Swedish (swe) | 0.989247 | 0.958333 | 0.973545 |
| Silesian (szl) | 0.944444 | 0.904255 | 0.923913 |
| Tamil (tam) | 0.990000 | 0.970588 | 0.980198 |
| Tatar (tat) | 0.942029 | 0.902778 | 0.921986 |
| Tulu (tcy) | 0.980519 | 0.967949 | 0.974194 |
| Telugu (tel) | 0.965986 | 0.965986 | 0.965986 |
| Tetum (tet) | 0.898734 | 0.855422 | 0.876543 |
| Tajik (tgk) | 0.974684 | 0.939024 | 0.956522 |
| Tagalog (tgl) | 0.965909 | 0.934066 | 0.949721 |
| Thai (tha) | 0.923077 | 0.882353 | 0.902256 |
| Tongan (ton) | 0.970149 | 0.890411 | 0.928571 |
| Tswana (tsn) | 0.888889 | 0.926316 | 0.907216 |
| Turkmen (tuk) | 0.968000 | 0.889706 | 0.927203 |
| Turkish (tur) | 0.871287 | 0.926316 | 0.897959 |
| Tuvan (tyv) | 0.948454 | 0.859813 | 0.901961 |
| Udmurt (udm) | 0.989362 | 0.894231 | 0.939394 |
| Uighur (uig) | 1.000000 | 0.953333 | 0.976109 |
| Ukrainian (ukr) | 0.893617 | 0.875000 | 0.884211 |
| Urdu (urd) | 1.000000 | 1.000000 | 1.000000 |
| Uzbek (uzb) | 0.636042 | 0.886700 | 0.740741 |
| Venetian (vec) | 1.000000 | 0.941176 | 0.969697 |
| Veps (vep) | 0.858586 | 0.965909 | 0.909091 |
| Vietnamese (vie) | 1.000000 | 0.940476 | 0.969325 |
| Vlaams (vls) | 0.885714 | 0.898551 | 0.892086 |
| Volapük (vol) | 0.975309 | 0.975309 | 0.975309 |
| Võro (vro) | 0.855670 | 0.864583 | 0.860104 |
| Waray (war) | 0.972222 | 0.909091 | 0.939597 |
| Walloon (wln) | 0.742138 | 0.893939 | 0.810997 |
| Wolof (wol) | 0.882979 | 0.954023 | 0.917127 |
| Wu Chinese (wuu) | 0.961538 | 0.833333 | 0.892857 |
| Xhosa (xho) | 0.934066 | 0.867347 | 0.899471 |
| Mingrelian (xmf) | 0.958333 | 0.929293 | 0.943590 |
| Yiddish (yid) | 0.984375 | 0.875000 | 0.926471 |
| Yoruba (yor) | 0.868421 | 0.857143 | 0.862745 |
| Zeeuws (zea) | 0.879518 | 0.793478 | 0.834286 |
| Cantonese (zh-yue) | 0.896552 | 0.812500 | 0.852459 |
| Standard Chinese (zho) | 0.906250 | 0.935484 | 0.920635 |
| accuracy | 0.881051 | 0.881051 | 0.881051 |
| macro avg | 0.903245 | 0.880618 | 0.888996 |
| weighted avg | 0.894174 | 0.881051 | 0.884520 |
4e2da79c41dfb8f8e9815ed3543e4ee7
apache-2.0
[]
false
By Token (3 to 5)

| language | precision | recall | f1-score |
|:--------------------------------------:|:---------:|:--------:|:--------:|
| Achinese (ace) | 0.873846 | 0.827988 | 0.850299 |
| Afrikaans (afr) | 0.638060 | 0.732334 | 0.681954 |
| Alemannic German (als) | 0.673780 | 0.547030 | 0.603825 |
| Amharic (amh) | 0.997743 | 0.954644 | 0.975717 |
| Old English (ang) | 0.840816 | 0.693603 | 0.760148 |
| Arabic (ara) | 0.768737 | 0.840749 | 0.803132 |
| Aragonese (arg) | 0.493671 | 0.505181 | 0.499360 |
| Egyptian Arabic (arz) | 0.823529 | 0.741935 | 0.780606 |
| Assamese (asm) | 0.948454 | 0.893204 | 0.920000 |
| Asturian (ast) | 0.490000 | 0.508299 | 0.498982 |
| Avar (ava) | 0.813636 | 0.655678 | 0.726166 |
| Aymara (aym) | 0.795833 | 0.779592 | 0.787629 |
| South Azerbaijani (azb) | 0.832836 | 0.863777 | 0.848024 |
| Azerbaijani (aze) | 0.867470 | 0.800000 | 0.832370 |
| Bashkir (bak) | 0.851852 | 0.750000 | 0.797688 |
| Bavarian (bar) | 0.560897 | 0.522388 | 0.540958 |
| Central Bikol (bcl) | 0.708229 | 0.668235 | 0.687651 |
| Belarusian (Taraschkewiza) (be-tarask) | 0.615635 | 0.526462 | 0.567568 |
| Belarusian (bel) | 0.539952 | 0.597855 | 0.567430 |
| Bengali (ben) | 0.830275 | 0.885086 | 0.856805 |
| Bhojpuri (bho) | 0.723118 | 0.691517 | 0.706965 |
| Banjar (bjn) | 0.619586 | 0.726269 | 0.668699 |
| Tibetan (bod) | 0.999537 | 0.991728 | 0.995617 |
| Bosnian (bos) | 0.330849 | 0.403636 | 0.363636 |
| Bishnupriya (bpy) | 0.941634 | 0.949020 | 0.945312 |
| Breton (bre) | 0.772222 | 0.745308 | 0.758527 |
| Bulgarian (bul) | 0.771505 | 0.706897 | 0.737789 |
| Buryat (bxr) | 0.741935 | 0.753149 | 0.747500 |
| Catalan (cat) | 0.528716 | 0.610136 | 0.566516 |
| Chavacano (cbk) | 0.409449 | 0.312625 | 0.354545 |
| Min Dong (cdo) | 0.951264 | 0.936057 | 0.943599 |
| Cebuano (ceb) | 0.888298 | 0.876640 | 0.882431 |
| Czech (ces) | 0.806045 | 0.758294 | 0.781441 |
| Chechen (che) | 0.857143 | 0.600000 | 0.705882 |
| Cherokee (chr) | 0.997840 | 0.952577 | 0.974684 |
| Chuvash (chv) | 0.874346 | 0.776744 | 0.822660 |
| Central Kurdish (ckb) | 0.984848 | 0.953545 | 0.968944 |
| Cornish (cor) | 0.747596 | 0.807792 | 0.776529 |
| Corsican (cos) | 0.673913 | 0.708571 | 0.690808 |
| Crimean Tatar (crh) | 0.498801 | 0.700337 | 0.582633 |
| Kashubian (csb) | 0.797059 | 0.794721 | 0.795888 |
| Welsh (cym) | 0.829609 | 0.841360 | 0.835443 |
| Danish (dan) | 0.649789 | 0.622222 | 0.635707 |
| German (deu) | 0.559406 | 0.763514 | 0.645714 |
| Dimli (diq) | 0.835580 | 0.763547 | 0.797941 |
| Dhivehi (div) | 1.000000 | 0.980645 | 0.990228 |
| Lower Sorbian (dsb) | 0.740484 | 0.694805 | 0.716918 |
| Doteli (dty) | 0.616314 | 0.527132 | 0.568245 |
| Emilian (egl) | 0.822993 | 0.769625 | 0.795414 |
| Modern Greek (ell) | 0.972043 | 0.963753 | 0.967880 |
| English (eng) | 0.260492 | 0.724346 | 0.383183 |
| Esperanto (epo) | 0.766764 | 0.716621 | 0.740845 |
| Estonian (est) | 0.698885 | 0.673835 | 0.686131 |
| Basque (eus) | 0.882716 | 0.841176 | 0.861446 |
| Extremaduran (ext) | 0.570605 | 0.511628 | 0.539510 |
| Faroese (fao) | 0.773987 | 0.784017 | 0.778970 |
| Persian (fas) | 0.709836 | 0.809346 | 0.756332 |
| Finnish (fin) | 0.866261 | 0.796089 | 0.829694 |
| French (fra) | 0.496263 | 0.700422 | 0.580927 |
| Arpitan (frp) | 0.663366 | 0.584302 | 0.621329 |
| Western Frisian (fry) | 0.750000 | 0.756148 | 0.753061 |
| Friulian (fur) | 0.713555 | 0.675545 | 0.694030 |
| Gagauz (gag) | 0.728125 | 0.677326 | 0.701807 |
| Scottish Gaelic (gla) | 0.831601 | 0.817996 | 0.824742 |
| Irish (gle) | 0.868852 | 0.801296 | 0.833708 |
| Galician (glg) | 0.469816 | 0.454315 | 0.461935 |
| Gilaki (glk) | 0.703883 | 0.687204 | 0.695444 |
| Manx (glv) | 0.873047 | 0.886905 | 0.879921 |
| Guarani (grn) | 0.848580 | 0.793510 | 0.820122 |
| Gujarati (guj) | 0.995643 | 0.926978 | 0.960084 |
| Hakka Chinese (hak) | 0.898403 | 0.904971 | 0.901675 |
| Haitian Creole (hat) | 0.719298 | 0.518987 | 0.602941 |
| Hausa (hau) | 0.815353 | 0.829114 | 0.822176 |
| Serbo-Croatian (hbs) | 0.343465 | 0.244589 | 0.285714 | | Hebrew (heb) | 0.891304 | 0.933941 | 0.912125 | | Fiji Hindi (hif) | 0.662577 | 0.664615 | 0.663594 | | Hindi (hin) | 0.782301 | 0.778169 | 0.780229 | | Croatian (hrv) | 0.360308 | 0.374000 | 0.367026 | | Upper Sorbian (hsb) | 0.745763 | 0.611111 | 0.671756 | | Hungarian (hun) | 0.876812 | 0.846154 | 0.861210 | | Armenian (hye) | 0.988201 | 0.917808 | 0.951705 | | Igbo (ibo) | 0.825397 | 0.696429 | 0.755448 | | Ido (ido) | 0.760479 | 0.814103 | 0.786378 | | Interlingue (ile) | 0.701299 | 0.580645 | 0.635294 | | Iloko (ilo) | 0.688356 | 0.844538 | 0.758491 | | Interlingua (ina) | 0.577889 | 0.588235 | 0.583016 | | Indonesian (ind) | 0.415879 | 0.514019 | 0.459770 | | Icelandic (isl) | 0.855263 | 0.790754 | 0.821745 | | Italian (ita) | 0.474576 | 0.561247 | 0.514286 | | Jamaican Patois (jam) | 0.826087 | 0.791667 | 0.808511 | | Javanese (jav) | 0.670130 | 0.658163 | 0.664093 | | Lojban (jbo) | 0.896861 | 0.917431 | 0.907029 | | Japanese (jpn) | 0.931373 | 0.848214 | 0.887850 | | Karakalpak (kaa) | 0.790393 | 0.827744 | 0.808637 | | Kabyle (kab) | 0.828571 | 0.759162 | 0.792350 | | Kannada (kan) | 0.879357 | 0.847545 | 0.863158 | | Georgian (kat) | 0.916399 | 0.907643 | 0.912000 | | Kazakh (kaz) | 0.900901 | 0.819672 | 0.858369 | | Kabardian (kbd) | 0.923345 | 0.892256 | 0.907534 | | Central Khmer (khm) | 0.976667 | 0.816156 | 0.889226 | | Kinyarwanda (kin) | 0.824324 | 0.726190 | 0.772152 | | Kirghiz (kir) | 0.674766 | 0.779698 | 0.723447 | | Komi-Permyak (koi) | 0.652830 | 0.633700 | 0.643123 | | Konkani (kok) | 0.778865 | 0.728938 | 0.753075 | | Komi (kom) | 0.737374 | 0.572549 | 0.644592 | | Korean (kor) | 0.984615 | 0.967603 | 0.976035 | | Karachay-Balkar (krc) | 0.869416 | 0.857627 | 0.863481 | | Ripuarisch (ksh) | 0.709859 | 0.649485 | 0.678331 | | Kurdish (kur) | 0.883777 | 0.862884 | 0.873206 | | Ladino (lad) | 0.660920 | 0.576441 | 0.615797 | | Lao (lao) | 0.986175 | 0.918455 | 0.951111 | | Latin 
(lat) | 0.581250 | 0.636986 | 0.607843 | | Latvian (lav) | 0.824513 | 0.797844 | 0.810959 | | Lezghian (lez) | 0.898955 | 0.793846 | 0.843137 | | Ligurian (lij) | 0.662903 | 0.677100 | 0.669927 | | Limburgan (lim) | 0.615385 | 0.581818 | 0.598131 | | Lingala (lin) | 0.836207 | 0.763780 | 0.798354 | | Lithuanian (lit) | 0.756329 | 0.804714 | 0.779772 | | Lombard (lmo) | 0.556818 | 0.536986 | 0.546722 | | Northern Luri (lrc) | 0.838574 | 0.753296 | 0.793651 | | Latgalian (ltg) | 0.759531 | 0.755102 | 0.757310 | | Luxembourgish (ltz) | 0.645062 | 0.614706 | 0.629518 | | Luganda (lug) | 0.787535 | 0.805797 | 0.796562 | | Literary Chinese (lzh) | 0.921951 | 0.949749 | 0.935644 | | Maithili (mai) | 0.777778 | 0.761658 | 0.769634 | | Malayalam (mal) | 0.993377 | 0.949367 | 0.970874 | | Banyumasan (map-bms) | 0.531429 | 0.453659 | 0.489474 | | Marathi (mar) | 0.748744 | 0.818681 | 0.782152 | | Moksha (mdf) | 0.728745 | 0.800000 | 0.762712 | | Eastern Mari (mhr) | 0.790323 | 0.760870 | 0.775316 | | Minangkabau (min) | 0.953271 | 0.886957 | 0.918919 | | Macedonian (mkd) | 0.816399 | 0.849722 | 0.832727 | | Malagasy (mlg) | 0.925187 | 0.918317 | 0.921739 | | Maltese (mlt) | 0.869421 | 0.890017 | 0.879599 | | Min Nan Chinese (nan) | 0.743707 | 0.820707 | 0.780312 | | Mongolian (mon) | 0.852194 | 0.838636 | 0.845361 | | Maori (mri) | 0.934726 | 0.937173 | 0.935948 | | Western Mari (mrj) | 0.818792 | 0.827119 | 0.822934 | | Malay (msa) | 0.508065 | 0.376119 | 0.432247 | | Mirandese (mwl) | 0.650407 | 0.685225 | 0.667362 | | Burmese (mya) | 0.995968 | 0.972441 | 0.984064 | | Erzya (myv) | 0.475783 | 0.503012 | 0.489019 | | Mazanderani (mzn) | 0.775362 | 0.701639 | 0.736661 | | Neapolitan (nap) | 0.628993 | 0.595349 | 0.611708 | | Navajo (nav) | 0.955882 | 0.937500 | 0.946602 | | Classical Nahuatl (nci) | 0.679758 | 0.589005 | 0.631136 | | Low German (nds) | 0.669789 | 0.690821 | 0.680143 | | West Low German (nds-nl) | 0.513889 | 0.504545 | 0.509174 | | Nepali (macrolanguage) 
(nep) | 0.640476 | 0.649758 | 0.645084 | | Newari (new) | 0.928571 | 0.745902 | 0.827273 | | Dutch (nld) | 0.553763 | 0.553763 | 0.553763 | | Norwegian Nynorsk (nno) | 0.569277 | 0.519231 | 0.543103 | | Bokmål (nob) | 0.519856 | 0.562500 | 0.540338 | | Narom (nrm) | 0.691275 | 0.605882 | 0.645768 | | Northern Sotho (nso) | 0.950276 | 0.815166 | 0.877551 | | Occitan (oci) | 0.483444 | 0.366834 | 0.417143 | | Livvi-Karelian (olo) | 0.816850 | 0.790780 | 0.803604 | | Oriya (ori) | 0.981481 | 0.963636 | 0.972477 | | Oromo (orm) | 0.885714 | 0.829218 | 0.856536 | | Ossetian (oss) | 0.822006 | 0.855219 | 0.838284 | | Pangasinan (pag) | 0.842105 | 0.715655 | 0.773748 | | Pampanga (pam) | 0.770000 | 0.435028 | 0.555957 | | Panjabi (pan) | 0.996154 | 0.984791 | 0.990440 | | Papiamento (pap) | 0.674672 | 0.661670 | 0.668108 | | Picard (pcd) | 0.407895 | 0.356322 | 0.380368 | | Pennsylvania German (pdc) | 0.487047 | 0.509485 | 0.498013 | | Palatine German (pfl) | 0.614173 | 0.570732 | 0.591656 | | Western Panjabi (pnb) | 0.926267 | 0.887417 | 0.906426 | | Polish (pol) | 0.797059 | 0.734417 | 0.764457 | | Portuguese (por) | 0.500914 | 0.586724 | 0.540434 | | Pushto (pus) | 0.941489 | 0.898477 | 0.919481 | | Quechua (que) | 0.854167 | 0.797665 | 0.824950 | | Tarantino dialect (roa-tara) | 0.669794 | 0.724138 | 0.695906 | | Romansh (roh) | 0.745527 | 0.760649 | 0.753012 | | Romanian (ron) | 0.805486 | 0.769048 | 0.786845 | | Rusyn (rue) | 0.718543 | 0.645833 | 0.680251 | | Aromanian (rup) | 0.288482 | 0.730245 | 0.413580 | | Russian (rus) | 0.530120 | 0.690583 | 0.599805 | | Yakut (sah) | 0.853521 | 0.865714 | 0.859574 | | Sanskrit (san) | 0.931343 | 0.896552 | 0.913616 | | Sicilian (scn) | 0.734139 | 0.618321 | 0.671271 | | Scots (sco) | 0.571429 | 0.540816 | 0.555701 | | Samogitian (sgs) | 0.829167 | 0.748120 | 0.786561 | | Sinhala (sin) | 0.909474 | 0.935065 | 0.922092 | | Slovak (slk) | 0.738235 | 0.665782 | 0.700139 | | Slovene (slv) | 0.671123 | 0.662269 | 0.666667 | | 
Northern Sami (sme) | 0.800676 | 0.825784 | 0.813036 | | Shona (sna) | 0.761702 | 0.724696 | 0.742739 | | Sindhi (snd) | 0.950172 | 0.946918 | 0.948542 | | Somali (som) | 0.849462 | 0.802030 | 0.825065 | | Spanish (spa) | 0.325234 | 0.413302 | 0.364017 | | Albanian (sqi) | 0.875899 | 0.832479 | 0.853637 | | Sardinian (srd) | 0.750000 | 0.711061 | 0.730012 | | Sranan (srn) | 0.888889 | 0.771084 | 0.825806 | | Serbian (srp) | 0.824561 | 0.814356 | 0.819427 | | Saterfriesisch (stq) | 0.790087 | 0.734417 | 0.761236 | | Sundanese (sun) | 0.764192 | 0.631769 | 0.691700 | | Swahili (macrolanguage) (swa) | 0.763496 | 0.796247 | 0.779528 | | Swedish (swe) | 0.838284 | 0.723647 | 0.776758 | | Silesian (szl) | 0.819788 | 0.750809 | 0.783784 | | Tamil (tam) | 0.985765 | 0.955172 | 0.970228 | | Tatar (tat) | 0.469780 | 0.795349 | 0.590674 | | Tulu (tcy) | 0.893300 | 0.873786 | 0.883436 | | Telugu (tel) | 1.000000 | 0.913690 | 0.954899 | | Tetum (tet) | 0.765116 | 0.744344 | 0.754587 | | Tajik (tgk) | 0.828418 | 0.813158 | 0.820717 | | Tagalog (tgl) | 0.751468 | 0.757396 | 0.754420 | | Thai (tha) | 0.933884 | 0.807143 | 0.865900 | | Tongan (ton) | 0.920245 | 0.923077 | 0.921659 | | Tswana (tsn) | 0.873397 | 0.889070 | 0.881164 | | Turkmen (tuk) | 0.898438 | 0.837887 | 0.867107 | | Turkish (tur) | 0.666667 | 0.716981 | 0.690909 | | Tuvan (tyv) | 0.857143 | 0.805063 | 0.830287 | | Udmurt (udm) | 0.865517 | 0.756024 | 0.807074 | | Uighur (uig) | 0.991597 | 0.967213 | 0.979253 | | Ukrainian (ukr) | 0.771341 | 0.702778 | 0.735465 | | Urdu (urd) | 0.877647 | 0.855505 | 0.866434 | | Uzbek (uzb) | 0.655652 | 0.797040 | 0.719466 | | Venetian (vec) | 0.611111 | 0.527233 | 0.566082 | | Veps (vep) | 0.672862 | 0.688213 | 0.680451 | | Vietnamese (vie) | 0.932406 | 0.914230 | 0.923228 | | Vlaams (vls) | 0.594427 | 0.501305 | 0.543909 | | Volapük (vol) | 0.765625 | 0.942308 | 0.844828 | | Võro (vro) | 0.797203 | 0.740260 | 0.767677 | | Waray (war) | 0.930876 | 0.930876 | 0.930876 | | Walloon 
(wln) | 0.636804 | 0.693931 | 0.664141 | | Wolof (wol) | 0.864220 | 0.845601 | 0.854809 | | Wu Chinese (wuu) | 0.848921 | 0.830986 | 0.839858 | | Xhosa (xho) | 0.837398 | 0.759214 | 0.796392 | | Mingrelian (xmf) | 0.943396 | 0.874126 | 0.907441 | | Yiddish (yid) | 0.955729 | 0.897311 | 0.925599 | | Yoruba (yor) | 0.812010 | 0.719907 | 0.763190 | | Zeeuws (zea) | 0.617737 | 0.550409 | 0.582133 | | Cantonese (zh-yue) | 0.859649 | 0.649007 | 0.739623 | | Standard Chinese (zho) | 0.845528 | 0.781955 | 0.812500 | | accuracy | 0.749527 | 0.749527 | 0.749527 | | macro avg | 0.762866 | 0.742101 | 0.749261 | | weighted avg | 0.762006 | 0.749527 | 0.752910 |
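As a quick sanity check on the table above, each f1-score is the harmonic mean of the corresponding precision and recall. A minimal sketch (plain Python, no dependencies) verifying a couple of rows:

```python
# f1 is the harmonic mean of precision and recall, as reported per language above.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Achinese (ace) row from the table.
f1 = f1_score(0.873846, 0.827988)
print(round(f1, 6))  # ~0.850299, matching the reported f1-score
```

The `macro avg` row is the unweighted mean of these per-language scores, while `weighted avg` weights each language by its support.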
fa9f60aaf78c2b83520e854c1097df68
creativeml-openrail-m
['text-to-image', 'isometric', 'art', 'stable diffusion', 'stable diffusion 1.5', 'duskfallcrew']
false
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Duskfallcrew/isometric-dreams-sd-1-5)
35ee6623996d14090f606e469da7c247
creativeml-openrail-m
['text-to-image', 'isometric', 'art', 'stable diffusion', 'stable diffusion 1.5', 'duskfallcrew']
false
Isometric Dreams SD 1.5, trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
8d79f8725c452fead7328273132fdb76
creativeml-openrail-m
['text-to-image', 'isometric', 'art', 'stable diffusion', 'stable diffusion 1.5', 'duskfallcrew']
false
If you want to support the EARTH & DUSK media projects (and not just AI) on a monthly basis: https://www.patreon.com/earthndusk

duskametrick15 (use it in your prompt)
0e0042af2a0a99f5a5fc0fcf456cede2
mit
['generated_from_keras_callback']
false
nandysoham/Pub-clustered

This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.3449
- Train End Logits Accuracy: 0.9097
- Train Start Logits Accuracy: 0.875
- Validation Loss: 0.8311
- Validation End Logits Accuracy: 0.7692
- Validation Start Logits Accuracy: 0.8462
- Epoch: 0
629ce181366b482015eb83c7387f5352
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3449     | 0.9097                    | 0.875                       | 0.8311          | 0.7692                         | 0.8462                           | 0     |
be94841eb878f3dc2f2e55b48bfb9809
cc-by-4.0
['espnet', 'audio', 'speech-translation']
false
Demo: How to use in ESPnet2

```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/st1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix
```
bb6838f3909743780072ae53223bdd9e
cc-by-4.0
['espnet', 'audio', 'speech-translation']
false
Environments

- date: `Tue Feb 8 13:29:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14`
- Commit date: `Tue Feb 8 10:48:10 2022 -0500`
80a28c2e7055f7360971909b2d4435d7
cc-by-4.0
['espnet', 'audio', 'speech-translation']
false
ST config <details><summary>expand</summary> ``` config: conf/tuning/transformer_fisherlike_4gpu_bbins16m_fix.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 36641 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 3 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 16000000 valid_batch_bins: null train_shape_file: - exp/st_stats_raw_bpe1000_sp/train/speech_shape - exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe - exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe valid_shape_file: - exp/st_stats_raw_bpe1000_sp/valid/speech_shape - exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe - exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 
train_data_path_and_name_and_type: - - /scratch/iwslt22dump//raw/train_sp/wav.scp - speech - kaldi_ark - - /scratch/iwslt22dump//raw/train_sp/text.tc.en - text - text - - /scratch/iwslt22dump//raw/train_sp/text.tc.rm.ta - src_text - text valid_data_path_and_name_and_type: - - /scratch/iwslt22dump//raw/dev/wav.scp - speech - kaldi_ark - - /scratch/iwslt22dump//raw/dev/text.tc.en - text - text - - /scratch/iwslt22dump//raw/dev/text.tc.rm.ta - src_text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 12.5 scheduler: noamlr scheduler_conf: model_size: 256 warmup_steps: 25000 token_list: - <blank> - <unk> - s - ▁ - apo - '&' - ; - ▁i - ▁you - t - ▁it - ▁the - ▁and - ▁to - ▁that - ▁a - n - a - ▁he - ▁me - m - d - ▁yes - ▁she - ▁no - ▁in - ▁what - ▁for - ▁we - ing - ll - ▁they - re - ▁are - ▁did - ▁god - ▁is - e - ed - ▁so - ▁her - ▁do - ▁have - ▁of - ▁with - ▁go - ▁know - ▁not - ▁was - ▁on - ▁don - y - ▁him - ▁one - ▁like - ▁there - '%' - ▁pw - ▁be - ▁at - ▁told - ▁good - ▁will - ▁my - ▁all - ▁or - c - er - p - ▁how - ▁ah - r - ▁but - ▁them - ▁see - ▁get - ▁can - i - ▁when - ▁going - ▁about - ▁mean - ▁this - k - ▁your - ▁by - ▁if - u - ▁come - ▁up - ▁tell - g - ▁said - ▁then - ▁now - ▁yeah - o - ▁out - al - ra - ▁because - ▁time - ▁well - ▁would - ▁p - ▁from - h - ar - f - ▁swear - ▁went - b - ▁really - or - ▁want - ri - ▁home - ▁work - ve - ▁take - ▁got - ▁just - l - ▁uh - ▁why - en - ▁even - ▁am - ▁who - ▁make - ▁day - '-' - in - ▁something - ▁some - ou - ▁us - ▁okay - ▁where - ▁does - ▁has - ▁thank - ▁c - ▁his - th - ▁back - ▁fine - ▁today - ly - ▁b - ▁oh - ▁doing - ▁everything - ▁here - le - ▁thing - ▁two - ▁anyway - li - ▁had - ▁still - ▁say - ro - ▁after - ce - ▁hello - ▁ma - ▁call - w - ▁listen - il - ▁should - ▁girl - ▁f - z - ▁too - ▁let - ▁understand - ▁may - ▁much - ▁think - ch - ir - ha - ▁other - ▁tomorrow - ▁were - ▁people - es - ▁year - di - ba - ▁right - el - ▁things - 
▁house - v - ▁actually - un - ▁an - ▁give - ▁only - ▁better - pe - ▁need - ▁buy - ▁de - ne - ▁ha - ur - ion - ▁made - la - ▁willing - ▁nothing - ▁called - ▁night - ▁yesterday - se - ▁came - ▁lot - ter - ▁g - po - ▁find - ry - ▁car - ▁over - ic - ▁stay - ▁eat - ent - ▁always - ▁very - 'on' - ▁put - ▁ramadan - ▁those - ▁hear - is - ▁talk - ▁three - ▁anything - ▁mo - ▁little - ▁been - ▁already - fi - ation - ke - ▁first - ▁look - it - ▁won - ▁mom - ▁way - ▁before - ▁ok - ▁last - fa - ▁cook - vi - ▁hi - ▁same - ▁thought - ▁also - um - ate - ▁money - ▁start - ▁place - us - ▁morning - ▁could - ▁ask - ▁bring - ▁bit - ▁lo - ▁leave - ▁man - ▁left - ine - ▁days - ge - ▁la - ▁week - ▁friend - ▁problem - ▁sister - ▁allah - ▁feel - ▁every - ▁more - fe - ▁long - ▁hundred - ▁j - ▁eh - ho - ca - em - ▁talking - ▁exam - ▁next - ▁new - ▁fun - ▁took - ▁alright - co - ▁w - ▁um - ▁eid - ▁brother - ▁our - gh - ow - ▁o - ▁four - ni - wa - ▁else - ▁finish - bo - ▁sleep - ▁bless - ▁dear - ▁since - ▁play - ▁name - hi - ▁coming - ▁many - et - ▁usual - ▁con - ▁maybe - ▁off - bi - ▁than - ▁any - ▁mother - ▁son - om - ▁their - ▁keep - ▁dinner - ▁ten - ▁half - ▁help - ▁bad - and - ▁pass - ▁hot - ▁guy - ▁least - ▁down - ▁bought - ▁dinars - ▁working - ▁around - ▁normal - ▁poor - ▁stuff - ▁hope - ▁used - ▁again - ▁bro - ul - ▁phone - ▁ex - ▁done - ▁six - ▁na - ▁month - ▁tired - ▁check - ▁show - ▁together - oo - ▁later - ▁past - ▁five - ▁watch - ya - ▁coffee - ment - ut - ▁plan - ▁great - ▁daughter - j - ▁another - side - ▁change - ▁yet - ting - ▁until - ▁honestly - ▁whole - ol - ▁care - ▁sure - able - id - ▁big - ▁spend - ▁exactly - ▁boy - ▁course - ▁end - ▁please - ▁started - he - up - ▁found - ▁saw - ▁family - ▁asked - ▁enough - ▁during - ▁rest - ▁which - ▁gave - ▁true - ▁while - ▁job - ▁el - ▁each - ▁away - ▁kids - ▁goes - less - ▁twenty - ▁eight - ▁someone - ▁cha - ▁clothes - ah - ▁myself - ▁nice - ▁late - ▁old - ▁real - age - ant - ▁fast - ▁add - ▁hard - ▁these - ful - im - ▁close - ive - ▁dad 
- ▁pay - ies - ▁dude - ▁alone - ▁far - ance - ▁dis - ▁seven - ▁isn - ▁pro - our - ▁thousand - ▁break - ▁hour - ▁wait - ▁brought - ▁open - ▁un - ▁wedding - ▁walk - ▁father - ▁ka - ▁second - x - ▁saturday - ▁salad - ▁win - ▁everyone - ▁water - ▁tunis - ▁remember - ity - ▁wake - ▁minute - ▁school - ▁sunday - ▁own - ▁shop - ▁cold - ▁meet - ▁wear - ever - ▁send - ▁early - ▁gra - tic - ▁short - ▁use - ▁sometimes - hou - ▁love - ▁prepare - ▁sea - ▁study - ure - ▁com - qui - ▁hand - ▁both - ja - ▁summer - ▁wrong - ▁wanted - che - ▁miss - ▁try - ▁iftar - ▁yourself - q - ▁live - war - ▁expensive - ▁getting - ▁waiting - ▁once - ▁kh - ▁forgot - ▁nine - ▁anymore - ▁soup - ▁uncle - ▁beach - ▁saying - ▁into - ▁having - ▁brik - ▁room - ▁food - ▁visit - ▁matter - ▁thirty - ▁taking - ▁rain - ▁aunt - ▁never - ▁pick - ▁tunisia - ▁health - ▁head - ▁cut - ▁fasting - ▁sick - ▁friday - ▁forget - ▁monday - ▁become - ▁dress - ated - ▁most - wi - ▁hang - ▁life - ▁fish - ▁happy - ▁delicious - ▁deal - ▁finished - ble - ▁studying - ▁weather - ▁making - ▁cost - ▁bl - ▁stayed - ▁guess - ▁teach - ▁stop - ▁near - ▁watching - ▁without - ▁imagine - ▁seriously - fl - ▁speak - ▁idea - ▁must - ▁normally - ▁turn - ize - ▁clean - ▁tv - ▁meat - ▁woke - ▁example - ▁easy - ▁sent - ▁sell - over - ▁fifty - ▁amazing - ▁beautiful - ▁whatever - ▁enjoy - ▁talked - ▁believe - ▁thinking - ▁count - ▁almost - ▁longer - ▁afternoon - ▁hair - ▁front - ▁earlier - ▁mind - ▁kind - ▁tea - ▁best - ▁rent - ▁picture - ▁cooked - ▁price - ight - ▁soon - ▁woman - ▁otherwise - ▁happened - ▁story - ▁luck - ▁high - ▁happen - ▁arrive - ▁paper - ga - ▁quickly - ▁looking - ub - ▁number - ▁staying - ▁sit - man - ack - ▁important - ▁either - ▁person - ▁small - ▁free - ▁crazy - ▁playing - ▁kept - ▁part - ▁game - law - ▁till - uck - ▁ready - ▁might - ▁gone - ▁full - ▁fix - ▁subject - ▁laugh - ▁doctor - ▁welcome - ▁eleven - ▁sleeping - ▁heat - ▁probably - ▁such - ▁café - ▁fat - ▁sweet - ▁married - ▁drink - ▁move - ▁outside - ▁especially - 
▁group - ji - ▁market - ▁through - ▁train - ▁protect - ▁turned - ▁red - ▁busy - ▁light - ▁noise - ▁street - ▁manage - ▁piece - ▁sitting - gue - ▁sake - ▁party - ish - ▁young - ▁case - ▁cool - huh - ▁marwa - ▁drive - ▁pray - clock - ▁couscous - ▁spent - ▁felt - ▁hopefully - ▁everybody - ▁living - ▁pain - line - ▁between - ▁match - ▁prayer - que - ian - ▁facebook - ▁spi - ▁eye - ▁children - ▁tonight - ▁mohamed - ▁understood - ▁black - ▁husband - ▁rid - ▁kitchen - ▁face - ▁swim - ▁kid - ▁invite - ▁cup - ▁grilled - ▁wife - ▁cousin - ▁drop - ▁wow - ▁table - ▁du - ▁bored - ▁neighborhood - ▁agree - ▁bread - ▁hamma - ▁straight - ▁tuesday - ▁anyone - ▁lunch - ade - ▁himself - ▁gather - ▁wish - ▁fifteen - ▁wednesday - ▁die - ▁thursday - ▁color - ▁asleep - ▁different - ▁whether - ▁ago - ▁middle - ▁class - ▁cake - shirt - ▁fight - ▁clear - ▁test - ▁plus - ▁sousse - ▁beginning - ▁result - ▁learn - ▁crowded - ▁slept - ▁shoes - ▁august - ▁pretty - ▁white - ▁apparently - ▁reach - ▁mariem - ▁return - ▁road - ▁million - ▁stand - ▁paid - ▁word - ious - ▁few - ▁breakfast - ▁post - ▁kilo - ▁chicken - ▁grade - ▁read - ▁accept - ▁birthday - ▁exhaust - ▁point - ▁july - ▁patience - ▁studies - ▁trouble - ▁along - ▁worry - ▁follow - ▁hurt - ▁afraid - ▁trip - ▁ahmed - ▁remain - ▁succeed - ▁mercy - ▁difficult - ▁weekend - ▁answer - ▁cheap - ▁repeat - ▁auntie - ▁sign - ▁hold - ▁under - ▁olive - ▁mahdi - ▁sfax - ▁annoy - ▁dishes - ▁message - ▁business - ▁french - ▁serious - ▁travel - ▁office - ▁wonder - ▁student - ▁internship - ▁pepper - ▁knew - ▁kill - ▁sauce - ▁herself - ▁hammamet - ▁damn - ▁mix - ▁suit - ▁medicine - ▁remove - ▁gonna - ▁company - ▁quarter - ▁shopping - ▁correct - ▁throw - ▁grow - ▁voice - ▁series - gotten - ▁taste - ▁driving - ▁hospital - ▁sorry - ▁aziz - ▁milk - ▁green - ▁baccalaureate - ▁running - ▁lord - ▁explain - ▁angry - ▁build - ▁fruit - ▁photo - é - ▁crying - ▁baby - ▁store - ▁project - ▁france - ▁twelve - ▁decide - ▁swimming - ▁world - ▁preparing - ▁special - ▁session 
- ▁behind - ▁vegetable - ▁strong - ▁fatma - ▁treat - ▁cream - ▁situation - ▁settle - ▁totally - ▁stopped - ▁book - ▁honest - ▁solution - ▁vacation - ▁cheese - ▁ahead - ▁sami - ▁focus - ▁scared - ▁club - ▁consider - ▁final - ▁naturally - ▁barely - ▁issue - ▁floor - ▁birth - ▁almighty - ▁engagement - ▁blue - ▁empty - ▁soccer - ▁prophet - ▁ticket - ▁indeed - ▁write - ▁present - ▁patient - ▁available - ▁holiday - ▁leaving - ▁became - ▁reason - ▁apart - ▁impossible - ▁shame - ▁worried - ▁body - ▁continue - ▁program - ▁stress - ▁arabic - ▁round - ▁taxi - ▁transport - ▁third - ▁certain - ▁downstairs - ▁neighbor - ▁directly - ▁giving - ▁june - ▁mini - ▁upstairs - ▁mistake - ▁period - ▁catch - ▁buddy - ▁success - ▁tajine - ▁excuse - ▁organize - ▁question - ▁suffer - ▁remind - ▁university - ▁downtown - ▁sugar - ▁twice - ▁women - ▁couple - ▁everyday - ▁condition - ▁obvious - ▁nobody - ▁complete - ▁stomach - ▁account - ▁september - ▁choose - ▁bottle - ▁figure - ▁instead - ▁salary - '0' - '1' - '3' - '2' - '5' - '7' - '4' - '9' - '8' - / - ° - '6' - è - $ - ï - <sos/eos> src_token_list: - <blank> - <unk> - ّ - ي - ا - ِ - ل - َ - و - ه - ة - م - ر - ك - ▁ما - ُ - ب - ش - د - ت - ▁في - َّ - ▁ن - ▁ي - ▁ت - ن - ▁لا - ح - ▁ه - س - وا - ▁م - ف - ▁إي - ع - ▁ب - ها - ط - ى - ق - ▁الل - ▁أ - ج - ▁والل - ▁و - ▁إيه - ▁ا - ▁يا - ز - ▁تو - ▁بش - ص - ▁أه - خ - ات - ▁إنت - ▁أنا - نا - ▁شن - ▁ق - ▁ش - ▁ك - يت - ين - ▁ف - ار - ▁قال - ▁باهي - ▁ع - ▁من - ▁ل - ▁مش - ▁كان - ▁حت - ▁ول - هم - ▁ر - ان - ▁س - ض - ني - ▁بال - ▁على - ▁متاع - ▁كي - ▁ال - ▁ح - ▁كل - ▁آنا - ▁الم - ▁خ - ▁الس - ▁وال - ون - ور - ▁أم - ▁هك - ▁آش - ▁الد - ▁عاد - ▁ج - ▁معناها - ▁مع - اش - ▁الص - ▁نهار - ▁لل - لها - ▁تي - ▁رب - ▁خاطر - ▁أكهو - غ - ▁شي - الل - ام - تها - ▁ون - ▁آك - ▁فهمت - وم - ▁موش - مشي - ▁ص - ▁اليوم - ▁مر - ست - ▁الب - ▁لاباس - تلي - ▁الكل - ▁عال - ذ - ▁فم - ▁الك - ▁حاجة - ▁شوي - اكا - ▁ياخي - ▁هاني - ▁صح - اس - ▁آه - ▁برشة - ▁الن - ▁وت - ▁الج - لك - ▁راهو - سم - ▁الح - مت - ▁الت - ▁بعد - اج - عد - ▁انشا - وش 
- لت - ▁وين - ث - ▁ولا - ▁باش - ▁فيها - نت - ▁إ - ▁الأ - ▁الف - ▁إم - ▁واحد - ▁ألو - ▁عندي - ▁أك - ▁خل - ▁وي - ▁تعمل - أ - ▁ريت - ▁وأ - ▁تعرف - بت - ▁الع - ▁مشيت - ▁وه - ▁حاصيلو - ▁بالل - ▁نعمل - ▁غ - ▁تجي - ▁يجي - ▁كيفاش - ▁عملت - ظ - اك - ▁هاو - ▁اش - ▁قد - ▁نق - ▁د - ▁زادا - ▁فيه - رة - ▁بر - ▁الش - ▁ز - ▁كيما - ▁الا - ند - عم - ▁نح - ▁بنتي - ▁نمشي - ▁عليك - ▁نعرفش - ▁كهو - ▁وم - ▁ط - تي - ▁خير - ▁آ - مش - ▁عليه - له - حت - ▁إيا - ▁أحنا - ▁تع - الا - عب - ▁ديما - ▁تت - ▁جو - ▁مالا - ▁أو - ▁قلتلك - ▁معنتها - لنا - ▁شكون - ▁تحب - بر - ▁الر - ▁وا - ▁الق - اء - ▁عل - ▁البارح - ▁وخ - ▁سافا - ▁هوما - ▁ولدي - ▁ - ▁نعرف - يف - رت - ▁وب - ▁روح - ▁علاش - ▁هاذاك - ▁رو - وس - ▁جا - ▁كيف - طر - ▁غادي - يكا - عمل - ▁نحب - ▁عندك - ▁وما - ▁فر - اني - ▁قلتله - ▁الط - فر - ▁دار - ▁عليها - ▁يعمل - ▁نت - ▁تح - باح - ▁ماهو - ▁وكل - ▁وع - قت - ▁فهمتك - عر - ▁وس - ▁تر - ▁سي - يلة - ▁قلت - ▁رمضان - صل - ▁آما - ▁الواحد - ▁بيه - ▁ثلاثة - ▁فهمتني - ▁ها - بط - ▁مازال - قل - ▁بالك - ▁معناتها - ▁ور - ▁قلتلها - ▁يس - رب - ▁ام - ▁وبعد - ▁الث - ▁وإنت - ▁بحذا - ▁لازم - ْ - ▁بن - قرا - سك - ▁يت - خل - ▁فه - عت - ▁هاك - ▁تق - ▁قبل - ▁وك - ▁نقول - ▁الز - حم - ▁عادش - حكي - وها - بة - نس - طل - ▁علاه - ذا - ▁سا - ▁طل - الي - ▁يق - ▁دو - حوا - حد - ▁نشوف - نة - ▁لي - ▁تك - ▁نا - ▁هاذ - ▁خويا - ▁المر - ▁وينك - ▁البر - ▁أتو - ينا - ▁حل - ولي - ▁ثم - ▁عم - ▁آي - ▁قر - از - ▁وح - كش - بعة - ▁كيفاه - ▁نع - ▁الحمدلله - ▁ياسر - ▁الخ - ▁معاك - ▁معاه - ▁تقول - دة - ▁حكاية - تش - ▁حس - ▁غدوا - ▁بالحق - روا - وز - ▁تخ - ▁العيد - رجع - ▁بالي - ▁جات - ▁وج - حة - ▁وش - ▁آخر - ▁طا - ▁مت - لقا - تك - ▁مس - ▁راني - كون - ▁صاحب - ▁هاكا - ▁قول - ▁عر - ▁عنده - ▁يلزم - ▁هاذا - ▁يخ - ▁وقتاش - ▁وقت - بع - ▁العش - ▁هاذي - هاش - ينة - ▁هاذاكا - عطي - ▁تنج - ▁باهية - نيا - فت - ▁يحب - ▁تف - ▁أهلا - وف - ▁غدوة - ▁بيك - ▁بد - عن - ▁در - ▁ننج - هار - ▁الحكاية - مون - وق - ▁نورمال - ▁عندها - خر - ▁بو - ▁حب - ▁آكا - ▁وف - ▁هاذيكا - ▁ديجا - ▁وق - ▁طي - لتل - بعث - ▁تص - رك - ▁مانيش - ▁العادة - ▁شوف - ضر - ▁يمشي - ▁نعملوا - ▁عرفت - 
▁زال - ▁متع - ▁عمل - ▁بيها - ▁نحكي - اع - ▁نج - معة - ▁والكل - عناها - ▁يعي - ▁نجي - ستن - ▁هاذيك - ▁عام - ▁فلوس - قة - تين - ▁بالقدا - لهم - ▁تخدم - ▁ٱ - ▁شيء - ▁راهي - ▁جاب - ولاد - ابل - ▁ماك - عة - ▁نمشيوا - وني - شري - بار - انس - ▁وقتها - ▁جديد - ▁يز - ▁كر - ▁حاسيلو - ▁شق - ▁اه - ▁سايي - ▁انشالل - رج - مني - ▁بلا - ▁صحيح - ▁غير - ▁يخدم - مان - وكا - ▁عند - ▁قاعدة - ▁تس - ربة - ▁راس - ▁حط - ▁نكل - تني - ▁الو - سيون - ▁عندنا - ▁لو - ▁ست - صف - ▁ض - ▁كامل - ▁نخدم - ▁يبدا - ▁دونك - ▁أمور - رات - ▁تونس - بدا - ▁تحكي - ▁سو - ▁جاي - ▁وحدة - ▁ساعة - حنا - ▁بكري - ▁إل - ▁وبر - ▁كم - ▁تبدا - ارة - ادي - رق - لوا - ▁يمكن - ▁خاط - ▁وص - جين - ▁هاذاي - ▁هز - قد - ▁قل - ▁وكهو - ▁نص - ▁دي - لقى - ▁وأنا - سين - ▁يح - ▁ماشي - ▁شو - ▁خذيت - امات - ▁كنت - خرج - ▁لقيت - رتاح - كس - ▁حاجات - ▁مريق - ▁مل - ليفون - اوا - ▁شفت - ▁عاملة - ▁تن - ▁والا - سأل - ▁حد - ▁قاللك - ▁العباد - ▁عالاخ - ▁وآك - ▁ماني - ▁ناخذ - ▁حم - ▁الإ - ▁ماضي - ▁ث - الة - ▁أخرى - رين - ▁تشوف - ▁نخرج - ▁أربعة - ▁ألف - نيش - ▁هاي - آ - ▁فيك - رشة - ولة - فلة - ▁بابا - ▁أما - ▁روحي - ▁فيهم - ▁رج - ▁ليك - ونس - يرة - ▁وأكهو - ندي - ▁صار - شك - ▁نرو - ▁آكهو - ▁تش - ▁غاديكا - ▁معاها - ▁لب - ▁أذاكا - ▁آني - ▁يوم - عملوا - ▁نقعد - دوا - ▁عد - سمع - متني - ▁الخدمة - ▁مازلت - ▁قعدت - ايا - ▁برك - قعد - ▁خرجت - ضح - ▁قالل - ▁يقول - ▁وفي - ▁حق - ختي - ▁يعني - خدم - ▁جيت - ▁نرمال - طف - ▁عجب - ▁تقعد - ▁مشينا - اية - ▁خدمة - لدي - روف - ▁الفطر - ▁مشكل - ▁سل - ▁وآنا - الط - ▁بالس - ▁هانا - ▁أوه - ▁أذيكا - ▁وإ - ▁عليهم - ▁حالة - جت - قضي - ▁لق - ▁ونصف - سعة - عطيه - عاو - خانة - ▁مخ - ▁شبيك - بيعة - ▁أهوك - يني - ▁تعد - ▁خال - ▁قريب - ▁راك - ▁قالت - ▁لتو - ▁أكثر - اعة - ▁يظهرلي - ▁ماشية - سمعني - ▁نسيت - ▁ينج - ▁الحمدلل - هدي - ▁وشن - ▁تطي - ▁هنا - ▁نسمع - ▁إنتوما - ▁نحكيلك - ▁قاعد - ▁اسمعني - خرين - إ - ماعة - ▁بالر - ▁دا - ▁عمر - ▁نشري - ▁قهوة - ▁تبارك - ▁صب - ▁مشات - غر - ▁شريت - ▁عامل - ▁زوج - ثنين - ▁برب - ريق - ▁نكم - ▁لم - بيب - ▁مياة - ▁مالل - ▁قعد - ▁سخون - قس - ▁وحده - ▁اسمع - ▁خمسة - ▁غالي - ▁الأو - رلي - ▁العظيم - 
▁ترو - تهم - كري - ▁نجيب - ▁جملة - قول - ▁قلتلي - ▁إيجا - ▁يقعد - ▁إيام - ▁يعطيك - ▁نخل - ▁دب - يمة - رهبة - ▁نهز - ▁محم - ▁بين - غار - ▁نحنا - ▁بون - ▁الغ - ▁شهر - ▁بار - رقة - ▁نطي - ئ - ترو - ▁ملا - ▁الكرهبة - ▁باه - ▁عالإخ - ▁عباد - ▁بلاصة - ▁مشى - بيع - ▁نفس - ▁عملنا - ▁واح - ▁أحلاه - ▁بحذاك - ▁لأ - ▁دخ - باب - ▁ودر - ▁غالب - ▁ناكل - ▁مثلا - ء - ▁راقد - ▁تفر - ▁الوقت - ▁تاخذ - حذا - نتر - ▁نبدا - ▁حال - ▁مريم - الم - ▁جمعة - رجول - ▁معايا - ▁تخرج - ▁باس - ▁ساعات - ▁عندهم - ▁نتفر - مسة - ▁الجمعة - بعين - ▁أكاهو - ▁ميش - مراة - ▁خذا - ▁ظ - ▁سيدي - ▁معاي - ▁شبيه - ▁حكا - ▁سف - ▁بعضنا - ▁بالض - ▁ليلة - ▁زعما - ▁الحق - مضان - ▁صعيب - ▁قالتلك - ً - ملة - ▁بق - عرف - لاطة - ▁خرج - ▁أخت - ▁تقوللي - ▁معانا - ▁صغير - ▁إسمه - ▁بعض - ▁العام - ▁علينا - ▁يتع - ▁فاش - ▁شع - ▁معاهم - ▁يسالش - ▁لهنا - ▁سمعت - ▁البار - ▁نتصو - ▁الاخ - ▁وكان - وبة - دمة - ▁كون - ▁مبعد - ▁تسمع - ▁بعيد - ▁تاكل - ▁نلقا - لامة - لاثة - ▁ذ - ▁تحس - ▁الواح - ▁لدار - ▁فاتت - ▁تاو - ▁أحوالك - ▁عاملين - ▁كبيرة - عجب - ▁بنت - ▁بيدي - ▁حكيت - ▁تحط - ▁مسكينة - ▁هاذوكم - ▁نزيد - لاث - ▁عشرة - ▁عيني - ▁تعب - ▁ياكل - ▁وزيد - ▁طول - ▁حمدلله - ▁وقتاه - ▁معناه - ▁وآش - ▁ووه - ▁وواحد - ▁نشوفوا - ▁عيد - ▁بصراحة - ▁بحذانا - ▁قاعدين - ▁راجل - ▁وحدي - ▁وعشرين - ▁لين - ▁خايب - ▁قالتله - ▁تهز - عيد - ▁كبير - ▁يعرف - ▁عارف - ▁الفلوس - ▁زايد - ▁خدمت - ▁هاذوما - ▁سلاطة - ▁فارغة - ▁ساعتين - ▁تبد - ▁راو - ▁مائة - ▁بعضهم - ▁ظاهرلي - ▁الفازة - كتب - ▁القهوة - سبوك - ▁زاد - ▁ضرب - حكيلي - ▁فوق - ▁عاود - ▁راي - ▁ومبعد - ▁حوايج - ▁دخلت - ▁يقوللك - ▁زيد - ▁زلت - لفزة - ▁وقال - ▁يهب - ▁يلزمني - ▁الحمد - ▁أذي - طبيعت - ▁دورة - ▁عالأقل - ▁آذاك - ▁وبال - ▁الجاي - عطيني - ▁ياخذ - ▁احكيلي - ▁نهبط - ▁رقدت - بلاصة - ▁عزيز - ▁صغار - ▁أقسم - ▁جيب - ▁وصلت - ▁أحوال - ▁جيست - ▁جماعة - سئل - ▁خوذ - ▁يهز - ▁الأخرى - ▁آلاف - ▁إسمع - ▁الحقيقة - ▁ناقص - ▁حاط - ▁موجود - عباد - ▁آذيك - ▁خارج - ▁الخير - ▁البنات - بقى - ▁طرف - ▁سينون - ▁ماذاب - ▁البحر - ▁نرقد - مدلله - ▁إيجى - ▁خالتي - ▁فازة - ▁بريك - ▁شريبتك - ▁تطلع - ؤ - ▁المشكلة - ▁طري - ▁مادام - 
▁طلبت - ▁يلعب - ▁نعاود - ▁وحدك - ▁ظاهر - ٱ - ژ - ٍ - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true model_conf: asr_weight: 0.3 mt_weight: 0.0 mtlalpha: 1.0 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe src_token_type: bpe bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: transformer encoder_conf: input_layer: conv2d num_blocks: 12 linear_units: 2048 dropout_rate: 0.1 output_size: 256 attention_heads: 4 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: input_layer: embed num_blocks: 6 linear_units: 2048 dropout_rate: 0.1 extra_asr_decoder: transformer extra_asr_decoder_conf: input_layer: embed num_blocks: 2 linear_units: 2048 dropout_rate: 0.1 extra_mt_decoder: transformer extra_mt_decoder_conf: input_layer: embed num_blocks: 2 linear_units: 2048 dropout_rate: 0.1 required: - output_dir - src_token_list - token_list version: 0.10.6a1 distributed: true ``` </details>
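For reference, the `scheduler: noamlr` / `scheduler_conf` entries above (`model_size: 256`, `warmup_steps: 25000`), with `lr: 12.5` acting as a scale factor, correspond to the Noam schedule from the original Transformer paper: linear warmup followed by inverse-square-root decay. A minimal sketch of that formula — the exact ESPnet implementation may differ in constants:

```python
def noam_lr(step: int, model_size: int = 256, warmup_steps: int = 25000,
            factor: float = 12.5) -> float:
    """Noam schedule: rate rises linearly for warmup_steps, then decays as step**-0.5."""
    return factor * model_size ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The learning rate ramps up during warmup and decays afterwards, peaking at warmup_steps.
for step in (1000, 25000, 100000):
    print(step, noam_lr(step))
```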
040698fe7f35875fda9c2e028dda0602
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
MultiBERTs Seed 2 Checkpoint 40k (uncased) This is the 40k-step intermediate checkpoint of the seed-2 MultiBERTs model, a BERT model pretrained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).
61993b4de7185a326073be421e415683
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-40k') model = BertModel.from_pretrained("multiberts-seed-2-40k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
d41228264ca9aa5bef3346fd5ca46fe4
apache-2.0
['translation']
false
fi-en * source group: Finnish * target group: English * OPUS readme: [fin-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md) * model: transformer-align * source language(s): fin * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opusTCv20210807+bt-2021-08-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip) * test set translations: [opusTCv20210807+bt-2021-08-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt) * test set scores: [opusTCv20210807+bt-2021-08-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.eval.txt)
5d6a370b63d54a5dcb566f4582d8448c
apache-2.0
['translation']
false
| testset | BLEU | chr-F | #sent | #words | BP | |---------|-------|-------|-------|--------|----| | newsdev2015-enfi.fin-eng | 27.1 | 0.550 | 1500 | 32104 | 0.988 | | newstest2015-enfi.fin-eng | 28.5 | 0.560 | 1370 | 27356 | 0.980 | | newstest2016-enfi.fin-eng | 31.7 | 0.586 | 3000 | 63043 | 1.000 | | newstest2017-enfi.fin-eng | 34.6 | 0.610 | 3002 | 61936 | 0.988 | | newstest2018-enfi.fin-eng | 25.4 | 0.530 | 3000 | 62325 | 0.981 | | newstest2019-fien.fin-eng | 30.6 | 0.577 | 1996 | 36227 | 0.994 | | newstestB2016-enfi.fin-eng | 25.8 | 0.538 | 3000 | 63043 | 0.987 | | newstestB2017-enfi.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 | | newstestB2017-fien.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 | | Tatoeba-test-v2021-08-07.fin-eng | 54.1 | 0.700 | 10000 | 75212 | 0.988 |
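The BP column above is the BLEU brevity penalty, which is 1.0 unless the system output is shorter than the reference. A minimal sketch of the standard formula follows; note that the hypothesis length used below is a hypothetical value chosen only for illustration (the table reports reference word counts and the resulting BP, not hypothesis lengths):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 if the hypothesis is at least as long
    as the reference, exp(1 - ref_len / hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# Hypothetical hypothesis length, for illustration only.
print(round(brevity_penalty(31720, 32104), 3))  # -> 0.988
print(brevity_penalty(32104, 32104))            # -> 1.0
```

A BP of 0.988, as on several rows above, thus indicates output only slightly shorter than the reference.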
a63597b129473ae19d27de464116cb97
apache-2.0
['translation']
false
System Info: - hf_name: fi-en - source_languages: fin - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['fi', 'en'] - src_constituents: ('Finnish', {'fin'}) - tgt_constituents: ('English', {'eng'}) - src_multilingual: False - tgt_multilingual: False - long_pair: fin-eng - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt - src_alpha3: fin - tgt_alpha3: eng - chrF2_score: 0.7 - bleu: 54.1 - src_name: Finnish - tgt_name: English - train_date: 2021-08-25 00:00:00 - src_alpha2: fi - tgt_alpha2: en - prefer_old: False - short_pair: fi-en - helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002 - transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b - port_machine: LM0-400-22516.local - port_time: 2021-11-04-21:36
8fc4d23d27d85a429d3ea8066c5a476f
mit
['generated_from_trainer']
false
nbme-roberta-large This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7825
7250d2b56042c1a1684a7b167ff868a6
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1117 | 1.0 | 1850 | 0.9610 | | 0.8911 | 2.0 | 3700 | 0.8466 | | 0.8158 | 3.0 | 5550 | 0.7825 |
f1beb166cf9cb23f032a7873e7173ae7
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased_finetuned_Balance_Upsampling_SPEECH_TEXT_DISPLAY_v1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6982 - Accuracy: 0.7759 - F1: 0.7743
3478be8dc89cd5c84f0c6076afa96241
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10
04fb21edc90d4ea47c3489e1abcefd34
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.5321 | 1.0 | 7958 | 1.3225 | 0.7271 | 0.7391 | | 0.2967 | 2.0 | 15916 | 1.3868 | 0.7574 | 0.7601 | | 0.1821 | 3.0 | 23874 | 1.4753 | 0.7513 | 0.7515 | | 0.1193 | 4.0 | 31832 | 1.7028 | 0.7588 | 0.7596 | | 0.0722 | 5.0 | 39790 | 1.8155 | 0.7615 | 0.7599 | | 0.041 | 6.0 | 47748 | 2.1622 | 0.7695 | 0.7678 | | 0.0258 | 7.0 | 55706 | 2.3871 | 0.75 | 0.7462 | | 0.0149 | 8.0 | 63664 | 2.6135 | 0.7571 | 0.7524 | | 0.0076 | 9.0 | 71622 | 2.7974 | 0.7648 | 0.7617 | | 0.0051 | 10.0 | 79580 | 2.6982 | 0.7759 | 0.7743 |
2a9b12458e76929699e173f672041cae
apache-2.0
['deep-narrow']
false
T5-Efficient-SMALL-EL16-DL1 (Deep-Narrow version) T5-Efficient-SMALL-EL16-DL1 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block.
b11b2ed9b9b91272a89bf25bb7183334
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-small-el16-dl1** - is of model type **Small** with the following variations: - **el** is **16** - **dl** is **1** It has **71.01** million parameters and thus requires *ca.* **284.04 MB** of memory in full precision (*fp32*) or **142.02 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
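The memory figures above follow directly from the parameter count: full precision (fp32) stores 4 bytes per parameter and half precision (fp16/bf16) stores 2. A quick sanity check:

```python
# Reported parameter count for t5-efficient-small-el16-dl1, in millions.
params_millions = 71.01

# 4 bytes per parameter in fp32, 2 bytes in fp16/bf16.
fp32_mb = params_millions * 4
fp16_mb = params_millions * 2

print(fp32_mb)  # -> 284.04
print(fp16_mb)  # -> 142.02
```

This covers weights only; optimizer states and activations during training require additional memory.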
169a27a923e921cbb4a92f48e055a7f3
apache-2.0
['pytorch', 'text-generation', 'causal-lm', 'rwkv']
false
Model Description RWKV-4 14B is a L40-D5120 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. Use https://github.com/BlinkDL/ChatRWKV to run it. ctx_len = 1024 n_layer = 40 n_embd = 5120 Final checkpoint: RWKV-4-Pile-14B-20230213-8019.pth : Trained on the Pile for 331B tokens. * Pile loss 1.7579 * LAMBADA ppl 3.81, acc 71.05% * PIQA acc 77.42% * SC2016 acc 75.57% * Hellaswag acc_norm 70.24% * WinoGrande acc 62.98%
fbd7f778fc81f18d671803828bed087f
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
mnist-digit-classification-2022-09-04 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mnist dataset. It achieves the following results on the evaluation set: - Loss: 0.0319 - Accuracy: 0.9923
d96b1eb3d3e72914082657bca24a048c
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0
0ce2c4ce729005984f21542a4dd4cb34
apache-2.0
['translation']
false
tur-lit * source group: Turkish * target group: Lithuanian * OPUS readme: [tur-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md) * model: transformer-align * source language(s): tur * target language(s): lit * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.eval.txt)
2e4221e3e94e6356e38cb44c8474644c
apache-2.0
['translation']
false
System Info: - hf_name: tur-lit - source_languages: tur - target_languages: lit - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'lt'] - src_constituents: {'tur'} - tgt_constituents: {'lit'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt - src_alpha3: tur - tgt_alpha3: lit - short_pair: tr-lt - chrF2_score: 0.631 - bleu: 35.6 - brevity_penalty: 0.9490000000000001 - ref_len: 8285.0 - src_name: Turkish - tgt_name: Lithuanian - train_date: 2020-06-17 - src_alpha2: tr - tgt_alpha2: lt - prefer_old: False - long_pair: tur-lit - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
a954cbe4cc6c0c2a1e852cf4ea07fd77
apache-2.0
['generated_from_keras_callback']
false
NAOKITY/bert-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.1438 - Validation Loss: 0.0 - Epoch: 2
3ccd158e0467205187ac35bc0a091284
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1149, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32
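With power 1.0 and cycle=False, the PolynomialDecay config above is simply a linear ramp from 2e-05 down to 0 over 1149 steps. A minimal sketch of what that schedule computes (not the Keras implementation itself):

```python
def polynomial_decay(step: int,
                     initial_lr: float = 2e-05,
                     decay_steps: int = 1149,
                     end_lr: float = 0.0,
                     power: float = 1.0) -> float:
    """Clamp the step to decay_steps, then interpolate between
    initial_lr and end_lr with the given power (1.0 = linear)."""
    step = min(step, decay_steps)
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))     # -> 2e-05 (start of training)
print(polynomial_decay(1149))  # -> 0.0 (end of the decay)
```

Past step 1149 the learning rate stays at the end value, since cycling is disabled.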
0e14a0fab8689f4226d97a7c5c0bfa7d
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.1483 | 0.0 | 0 | | 2.1484 | 0.0 | 1 | | 2.1438 | 0.0 | 2 |
7a393ba0a13e018718768c7800ac0c0c
apache-2.0
['generated_from_trainer']
false
bert-large-cased-finetuned-lowR100-2-cased-DA-20 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.7801
6d22904c5ab921e140f64ab937890816
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 - mixed_precision_training: Native AMP
d028852d3032a3c69559b78efa8cabbc
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.4515 | 1.0 | 1 | 8.1791 | | 6.4671 | 2.0 | 2 | 6.0155 | | 6.533 | 3.0 | 3 | 5.9784 | | 5.8654 | 4.0 | 4 | 5.2092 | | 5.5458 | 5.0 | 5 | 6.1062 | | 5.1806 | 6.0 | 6 | 5.0913 | | 4.8797 | 7.0 | 7 | 4.3025 | | 4.6975 | 8.0 | 8 | 4.8598 | | 4.2859 | 9.0 | 9 | 4.2301 | | 4.3584 | 10.0 | 10 | 4.0683 | | 4.0203 | 11.0 | 11 | 2.7986 | | 3.977 | 12.0 | 12 | 4.1575 | | 3.4077 | 13.0 | 13 | 3.6507 | | 3.313 | 14.0 | 14 | 2.8674 | | 3.0962 | 15.0 | 15 | 2.5103 | | 2.8883 | 16.0 | 16 | 3.1318 | | 2.9623 | 17.0 | 17 | 2.1316 | | 2.5544 | 18.0 | 18 | 2.7741 | | 2.9957 | 19.0 | 19 | 2.9045 | | 2.749 | 20.0 | 20 | 2.8824 | | 2.291 | 21.0 | 21 | 2.7450 | | 2.3373 | 22.0 | 22 | 2.3774 | | 2.6506 | 23.0 | 23 | 2.5515 | | 2.6736 | 24.0 | 24 | 2.2106 | | 2.3845 | 25.0 | 25 | 2.3166 | | 2.3762 | 26.0 | 26 | 2.3221 | | 2.4184 | 27.0 | 27 | 2.8996 | | 2.6826 | 28.0 | 28 | 2.1793 | | 2.4678 | 29.0 | 29 | 2.4268 | | 2.2998 | 30.0 | 30 | 1.8153 | | 2.7085 | 31.0 | 31 | 2.4401 | | 2.1231 | 32.0 | 32 | 3.3329 | | 2.1349 | 33.0 | 33 | 1.9675 | | 2.4647 | 34.0 | 34 | 3.0172 | | 2.3552 | 35.0 | 35 | 1.8550 | | 2.2843 | 36.0 | 36 | 2.7737 | | 2.2164 | 37.0 | 37 | 3.4890 | | 2.2118 | 38.0 | 38 | 3.4251 | | 2.3133 | 39.0 | 39 | 2.6806 | | 1.9773 | 40.0 | 40 | 2.7801 |
775f3fc66b0bcd941d9b49b33152cd0e
mit
[]
false
Turkish ELECTRA model We release a base ELEC**TR**A model for Turkish that was trained on the same data as *BERTurk*. > ELECTRA is a new method for self-supervised language representation learning. It can be used to > pre-train transformer networks using relatively little compute. ELECTRA models are trained to > distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to > the discriminator of a GAN. More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB) or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
6c1dbc4d589a9aa2d40455b1d46c5ff7
mit
[]
false
Stats The current version of the model is trained on a filtered and sentence segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/), a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/). The final training corpus has a size of 35GB and 44,04,976,662 tokens. Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model on a TPU v3-8 for 1M steps.
f7eebe83e57c5f18e2bd3b98ea7ea329
mit
[]
false
Model weights [Transformers](https://github.com/huggingface/transformers) compatible weights for both PyTorch and TensorFlow are available. | Model | Downloads | ------------------------------------------------ | --------------------------------------------------------------------------------------------------------------- | `dbmdz/electra-base-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/vocab.txt)
1aa4bb0158b66cb29859141eecc9cce1
mit
[]
false
Usage With Transformers >= 2.8 our ELECTRA base cased model can be loaded like: ```python from transformers import AutoModelWithLMHead, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator") model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator") ```
244e4a7b14e22f991c7989455292cecf