license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 30
- mixed_precision_training: Native AMP
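For reference, the `linear` scheduler with warmup listed above ramps the learning rate up to its peak over the warmup steps and then decays it linearly to zero. A minimal pure-Python sketch of that formula (the 330 total steps and the `linear_schedule_lr` helper are illustrative assumptions, not the trainer's actual code):

```python
def linear_schedule_lr(step, base_lr=1e-5, warmup_steps=10, total_steps=330):
    """Linear warmup followed by linear decay, as in the 'linear'
    lr_scheduler_type with lr_scheduler_warmup_steps."""
    if step < warmup_steps:
        # ramp up from 0 to base_lr over the warmup steps
        return base_lr * step / max(1, warmup_steps)
    # linear decay from base_lr down to 0 over the remaining steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# Peak learning rate is reached right at the end of warmup
print(linear_schedule_lr(10))
```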
15022fee38830381937de6d65ba9acbb
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.9616 | 2.73 | 30 | 0.7717 | 0.0 | 0.0 | 0.0 | 0.8608 |
| 0.9266 | 5.45 | 60 | 0.6687 | 0.0 | 0.0 | 0.0 | 0.8608 |
| 0.8486 | 8.18 | 90 | 0.6100 | 0.2133 | 0.0488 | 0.0794 | 0.8635 |
| 0.7421 | 10.91 | 120 | 0.5922 | 0.2534 | 0.1966 | 0.2215 | 0.8542 |
| 0.6481 | 13.64 | 150 | 0.5696 | 0.2889 | 0.2378 | 0.2609 | 0.8596 |
| 0.5948 | 16.36 | 180 | 0.5798 | 0.2678 | 0.3034 | 0.2845 | 0.8472 |
| 0.5621 | 19.09 | 210 | 0.5913 | 0.2486 | 0.3293 | 0.2833 | 0.8381 |
| 0.5234 | 21.82 | 240 | 0.5816 | 0.2585 | 0.3262 | 0.2884 | 0.8404 |
| 0.5028 | 24.55 | 270 | 0.5944 | 0.2545 | 0.3476 | 0.2938 | 0.8368 |
| 0.4975 | 27.27 | 300 | 0.5923 | 0.2531 | 0.3476 | 0.2929 | 0.8368 |
| 0.4791 | 30.0 | 330 | 0.5926 | 0.2559 | 0.3460 | 0.2942 | 0.8368 |
102f241f2c16ac93766a1dd192930c8d
mit
[]
false
million-live-spade-q-object-3k on Stable Diffusion This is the `<spade_q>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<spade_q> 0](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/0.png) ![<spade_q> 1](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/1.png) ![<spade_q> 2](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/2.png) ![<spade_q> 3](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/3.png) ![<spade_q> 4](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/4.png) ![<spade_q> 5](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/5.png) ![<spade_q> 6](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/6.png) ![<spade_q> 7](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/7.png) ![<spade_q> 8](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/8.png) ![<spade_q> 9](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/9.png) ![<spade_q> 10](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/10.png) ![<spade_q> 
11](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/11.png) ![<spade_q> 12](https://huggingface.co/sd-concepts-library/million-live-spade-q-object-3k/resolve/main/concept_images/12.png)
61e672e55cc17563b323d28b02f3fbfd
apache-2.0
[]
false
Model description An XLM-RoBERTa Large reading comprehension model initialized from [nq_tydi_sq1-xlmr_large-20221110](https://huggingface.co/PrimeQA/nq_tydi_sq1-reader-xlmr_large-20221110/) with continued training on TyDi QA, using passage answer spans for the beginning and end of boolean questions.
7b63b7ba6243026bb57b35fc4ea40c8f
apache-2.0
[]
false
Intended uses & limitations You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model we used, xlm-roberta-large, may be present in our fine-tuned model.
9c6c39807d2fa5d94cbc43ff38a36c0d
apache-2.0
[]
false
Usage You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb).
a54941f3d67673d88282d30f41f670bc
apache-2.0
[]
false
BibTeX entry and citation info

```bibtex
@article{Rosenthal2021DoAT,
  title={Do Answers to Boolean Questions Need Explanations? Yes},
  author={Sara Rosenthal and Mihaela A. Bornea and Avirup Sil and Radu Florian and Scott McCarley},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.07772}
}
```

```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08441,
  author = {McCarley, Scott and Bornea, Mihaela and Rosenthal, Sara and Ferritto, Anthony and Sultan, Md Arafat and Sil, Avirup and Florian, Radu},
  title = {GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions},
  journal = {CoRR},
  publisher = {arXiv},
  year = {2022},
  url = {https://arxiv.org/abs/2206.08441},
}
```
501a082fe7c5a3c5333d214e81cbae7f
apache-2.0
['generated_from_trainer']
false
Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.000475}, 'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'], 'is_split_by_sentences': True}, 'generation': {'batch_size': 64, 'metrics_configs': [{}, {'n': 1}, {}], 'scenario_configs': [{'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 704, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 512, 'prefix': '<|aligned|>', 'use_prompt_for_scoring': False}, {'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 272, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'functions', 'num_samples': 512, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 'resources/functions_csnet.jsonl', 'use_prompt_for_scoring': True}], 'scorer_config': {}}, 'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'}, 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'codeparrot/codeparrot-small'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'kejian/final-cond-25-0.01', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0008, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000.0, 'output_dir': 'training_output', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 5000, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 
0.1}}
552de0ae1d3d3cdafc47f8119430f2b8
mit
['roberta-base', 'roberta-base-epoch_44']
false
RoBERTa, Intermediate Checkpoint - Epoch 44 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We trained this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, among other possible use cases. These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions; the work is described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_44.
0e9e3d42af3ac21e848dd125e0b3a504
mit
['generated_from_trainer']
false
spanish-t5-small-disco-poetry This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0477
b22063b0c57a16379d865d301696d7dc
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1417 | 1.0 | 1284 | 0.0577 |
| 0.0902 | 2.0 | 2568 | 0.0516 |
| 0.0803 | 3.0 | 3852 | 0.0494 |
| 0.0733 | 4.0 | 5136 | 0.0488 |
| 0.0683 | 5.0 | 6420 | 0.0480 |
| 0.067 | 6.0 | 7704 | 0.0477 |
d78e697be27c5ff108f543b7ad3e7638
apache-2.0
['translation']
false
opus-mt-prl-es

* source languages: prl
* target languages: es
* OPUS readme: [prl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/prl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.eval.txt)
4e746847152438728a8d523f24b01180
apache-2.0
['translation']
false
opus-mt-fi-lu

* source languages: fi
* target languages: lu
* OPUS readme: [fi-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-lu/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-lu/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-lu/opus-2020-01-08.eval.txt)
4670e0317e5d94df7e87745d82fad82b
apache-2.0
['generated_from_trainer']
false
NLP-sentiment-project-2001-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.0008 - Accuracy: 0.9998 - F1: 0.9998 - Precision: 0.9996
4c5805fa023f3ad8da137a5551cc3e78
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
19a2afd7cc1d02848aefd6179b1a544c
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
wav2vec2-xls-r-300m-Russian-small This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3514 - Wer: 0.4838
011980cbb5da2b60575af06622bc0459
apache-2.0
['generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.512 | 1.32 | 400 | 3.2207 | 1.0 |
| 3.1562 | 2.65 | 800 | 3.0166 | 1.0 |
| 1.5211 | 3.97 | 1200 | 0.7134 | 0.8275 |
| 0.6724 | 5.3 | 1600 | 0.4713 | 0.6402 |
| 0.4693 | 6.62 | 2000 | 0.3904 | 0.5668 |
| 0.3693 | 7.95 | 2400 | 0.3609 | 0.5121 |
| 0.3004 | 9.27 | 2800 | 0.3514 | 0.4838 |
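The Wer column above is the word error rate: the word-level edit distance between reference and hypothesis, divided by the number of reference words. A minimal pure-Python sketch for illustration (the reported numbers were presumably computed with a standard evaluation library, not this helper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

# 1 substituted word out of 3 reference words
print(wer("это тестовое предложение", "это тестовое приложение"))
```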
dc49d0dbff6e17937dbaeaa97642ac46
apache-2.0
['Quality Estimation', 'microtransquest']
false
Using Pre-trained Models

```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch

model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_de-it-smt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
690e3182881f9dee0b63fa700ed999af
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Medium Catalan This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 ca dataset. It achieves the following results on the evaluation set: - Loss: 0.2629 - Wer: 11.7313
0babc2498a5e1b8cc4a3dd42bf54a962
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
a55b6b0cfe5439633598b66313bb1a08
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2835 | 0.5 | 1000 | 0.3243 | 14.7322 |
| 0.1684 | 1.0 | 2000 | 0.2629 | 11.7313 |
59461cc91c1eee005aa20a4e800ce42c
cc-by-4.0
['text2text-generation', 'question-generation', 'answer-extraction', 'question-answering', 'text-generation']
false
mt5-small for Turkish Question Generation

Automated question generation and question answering using text-to-text transformers by OBSS AI.

```python
from core.api import GenerationAPI

generation_api = GenerationAPI('mt5-small-3task-prepend-tquad2', qg_format='prepend')
```
bbbb8c97cabe3466699225b4bbd7d41b
cc-by-4.0
['text2text-generation', 'question-generation', 'answer-extraction', 'question-answering', 'text-generation']
false
Hyperparameters

```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-small"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "prepend"
```
8aea68a2bcfdbadff4ac3536cf5d4031
cc-by-4.0
['text2text-generation', 'question-generation', 'answer-extraction', 'question-answering', 'text-generation']
false
Usage 🔥

```python
from core.api import GenerationAPI

generation_api = GenerationAPI('mt5-small-3task-prepend-tquad2', qg_format='prepend')

# Turkish example context. It says: Turkish question-answer data was used to
# train this model; with the presented method, questions and answers can be
# generated automatically from Turkish text; new academic work on Turkish
# question generation / question answering can be built on the shared source
# code; details are available at the shared GitHub and Arxiv links.
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme /
Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir. Projenin
detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
```
6194865c6526851e1b21b84ed1993b5a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-go_emotions_20220608_1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the go_emotions dataset. It achieves the following results on the evaluation set: - Loss: 0.0857 - F1: 0.5575 - Roc Auc: 0.7242 - Accuracy: 0.4364
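The F1 and Accuracy above are multi-label metrics (go_emotions can assign several emotions to one text). As an illustration of how such metrics are computed, here is a minimal pure-Python sketch of micro-averaged F1 and exact-match (subset) accuracy; the card's actual values were presumably computed with a library such as scikit-learn, and `multilabel_metrics` is a hypothetical helper:

```python
def multilabel_metrics(y_true, y_pred):
    """Micro-averaged F1 and subset accuracy for multi-label
    predictions given as lists of 0/1 label vectors."""
    tp = fp = fn = 0
    exact = 0
    for t, p in zip(y_true, y_pred):
        tp += sum(1 for a, b in zip(t, p) if a == 1 and b == 1)
        fp += sum(1 for a, b in zip(t, p) if a == 0 and b == 1)
        fn += sum(1 for a, b in zip(t, p) if a == 1 and b == 0)
        exact += int(t == p)  # subset accuracy requires all labels to match
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1, exact / len(y_true)
```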
fafdd59c3e5ede1a4d7c553d807a7c5d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.173 | 1.0 | 679 | 0.1074 | 0.4245 | 0.6455 | 0.2976 |
| 0.0989 | 2.0 | 1358 | 0.0903 | 0.5199 | 0.6974 | 0.3972 |
| 0.0865 | 3.0 | 2037 | 0.0868 | 0.5504 | 0.7180 | 0.4263 |
| 0.0806 | 4.0 | 2716 | 0.0860 | 0.5472 | 0.7160 | 0.4233 |
| 0.0771 | 5.0 | 3395 | 0.0857 | 0.5575 | 0.7242 | 0.4364 |
1bf16a5c062d70aea1e787fc0a3bc092
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-en-asr-timit This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.4525 - Wer: 0.3510
fc1f27c2aee8f79c74f53c8c4ff862b3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6253 | 3.17 | 200 | 3.0613 | 1.0 |
| 2.9038 | 6.35 | 400 | 2.7513 | 1.0 |
| 1.5048 | 9.52 | 600 | 0.6193 | 0.5702 |
| 0.4196 | 12.7 | 800 | 0.4788 | 0.4464 |
| 0.2203 | 15.87 | 1000 | 0.4743 | 0.4098 |
| 0.1439 | 19.05 | 1200 | 0.4420 | 0.3804 |
| 0.0963 | 22.22 | 1400 | 0.4587 | 0.3620 |
| 0.073 | 25.4 | 1600 | 0.4681 | 0.3588 |
| 0.0603 | 28.57 | 1800 | 0.4525 | 0.3510 |
cfc1ad3df5828244603240d6eaa8692e
apache-2.0
['tapex', 'table-question-answering']
false
TAPEX-large model fine-tuned on WTQ. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining). To load it and run inference, you can do the following:

```
from transformers import BartTokenizer, BartForConditionalGeneration
import pandas as pd

tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large-finetuned-wtq")
```
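Since this checkpoint uses a plain `BartTokenizer`, the input table has to be flattened into a string before tokenization. The TAPEX paper describes a linearization of the form `col : ... row 1 : ...`; a rough sketch of that step (the `linearize_table` helper and the example table are illustrative, not part of the repo):

```python
def linearize_table(headers, rows):
    """Flatten a table into a TAPEX-style sequence, roughly:
    'col : h1 | h2 row 1 : v11 | v12 row 2 : ...'."""
    parts = ["col : " + " | ".join(headers)]
    for i, row in enumerate(rows, start=1):
        parts.append(f"row {i} : " + " | ".join(str(v) for v in row))
    return " ".join(parts)

question = "how many movies does leonardo di caprio have?"
table = linearize_table(["actors", "number of movies"],
                        [["brad pitt", 87], ["leonardo di caprio", 53]])
# The question is prepended to the flattened table before tokenization.
print(question + " " + table)
```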
913a0cf65f561d364379ad69d1467dfc
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-utility-1-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3728 - Accuracy: 0.3956
ec4a3d18c3c6087bbd230c82eaf24abe
apache-2.0
['generated_from_trainer']
false
bert-small-finetuned-ner-to-multilabel-finer-139 This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.0019
2fe6bc9b019874044e4fe589e9c10550
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.1398 | 0.0 | 500 | 0.0244 | | 0.0164 | 0.01 | 1000 | 0.0114 | | 0.01 | 0.01 | 1500 | 0.0084 | | 0.0081 | 0.02 | 2000 | 0.0073 | | 0.0072 | 0.02 | 2500 | 0.0068 | | 0.0069 | 0.03 | 3000 | 0.0065 | | 0.0067 | 0.03 | 3500 | 0.0063 | | 0.0066 | 0.04 | 4000 | 0.0062 | | 0.0061 | 0.04 | 4500 | 0.0062 | | 0.0069 | 0.04 | 5000 | 0.0061 | | 0.0063 | 0.05 | 5500 | 0.0061 | | 0.0062 | 0.05 | 6000 | 0.0061 | | 0.006 | 0.06 | 6500 | 0.0061 | | 0.0059 | 0.06 | 7000 | 0.0056 | | 0.0058 | 0.07 | 7500 | 0.0054 | | 0.0054 | 0.07 | 8000 | 0.0054 | | 0.0057 | 0.08 | 8500 | 0.0053 | | 0.0057 | 0.08 | 9000 | 0.0052 | | 0.0056 | 0.08 | 9500 | 0.0051 | | 0.0051 | 0.09 | 10000 | 0.0050 | | 0.0054 | 0.09 | 10500 | 0.0049 | | 0.005 | 0.1 | 11000 | 0.0048 | | 0.0049 | 0.1 | 11500 | 0.0046 | | 0.0049 | 0.11 | 12000 | 0.0046 | | 0.0046 | 0.11 | 12500 | 0.0044 | | 0.0043 | 0.12 | 13000 | 0.0043 | | 0.0045 | 0.12 | 13500 | 0.0042 | | 0.0042 | 0.12 | 14000 | 0.0042 | | 0.0042 | 0.13 | 14500 | 0.0039 | | 0.0042 | 0.13 | 15000 | 0.0038 | | 0.0039 | 0.14 | 15500 | 0.0037 | | 0.004 | 0.14 | 16000 | 0.0036 | | 0.0037 | 0.15 | 16500 | 0.0035 | | 0.0036 | 0.15 | 17000 | 0.0035 | | 0.0036 | 0.16 | 17500 | 0.0035 | | 0.0035 | 0.16 | 18000 | 0.0033 | | 0.0037 | 0.16 | 18500 | 0.0033 | | 0.0035 | 0.17 | 19000 | 0.0032 | | 0.0032 | 0.17 | 19500 | 0.0031 | | 0.0032 | 0.18 | 20000 | 0.0031 | | 0.0033 | 0.18 | 20500 | 0.0030 | | 0.003 | 0.19 | 21000 | 0.0030 | | 0.0034 | 0.19 | 21500 | 0.0029 | | 0.0031 | 0.2 | 22000 | 0.0029 | | 0.003 | 0.2 | 22500 | 0.0028 | | 0.0032 | 0.2 | 23000 | 0.0028 | | 0.003 | 0.21 | 23500 | 0.0027 | | 0.0029 | 0.21 | 24000 | 0.0027 | | 0.0027 | 0.22 | 24500 | 0.0026 | | 0.0029 | 0.22 | 25000 | 0.0026 | | 0.0027 | 0.23 | 25500 | 0.0026 | | 0.0028 | 0.23 | 26000 | 0.0026 | | 0.0027 | 0.24 | 26500 | 0.0025 | | 0.0026 | 0.24 | 27000 | 0.0025 | | 0.0026 | 0.24 | 27500 
| 0.0025 | | 0.0026 | 0.25 | 28000 | 0.0024 | | 0.0025 | 0.25 | 28500 | 0.0024 | | 0.0026 | 0.26 | 29000 | 0.0024 | | 0.0025 | 0.26 | 29500 | 0.0024 | | 0.0024 | 0.27 | 30000 | 0.0024 | | 0.0026 | 0.27 | 30500 | 0.0023 | | 0.0024 | 0.28 | 31000 | 0.0023 | | 0.0025 | 0.28 | 31500 | 0.0023 | | 0.0024 | 0.28 | 32000 | 0.0023 | | 0.0023 | 0.29 | 32500 | 0.0022 | | 0.0024 | 0.29 | 33000 | 0.0022 | | 0.0024 | 0.3 | 33500 | 0.0022 | | 0.0022 | 0.3 | 34000 | 0.0022 | | 0.0023 | 0.31 | 34500 | 0.0021 | | 0.0023 | 0.31 | 35000 | 0.0021 | | 0.0024 | 0.32 | 35500 | 0.0021 | | 0.0023 | 0.32 | 36000 | 0.0021 | | 0.0023 | 0.32 | 36500 | 0.0021 | | 0.0021 | 0.33 | 37000 | 0.0021 | | 0.0021 | 0.33 | 37500 | 0.0021 | | 0.0022 | 0.34 | 38000 | 0.0021 | | 0.0022 | 0.34 | 38500 | 0.0020 | | 0.0022 | 0.35 | 39000 | 0.0020 | | 0.0022 | 0.35 | 39500 | 0.0020 | | 0.0022 | 0.36 | 40000 | 0.0022 | | 0.0022 | 0.36 | 40500 | 0.0020 | | 0.0022 | 0.36 | 41000 | 0.0020 | | 0.0021 | 0.37 | 41500 | 0.0020 | | 0.0022 | 0.37 | 42000 | 0.0020 | | 0.0021 | 0.38 | 42500 | 0.0020 | | 0.0021 | 0.38 | 43000 | 0.0019 | | 0.0022 | 0.39 | 43500 | 0.0019 | | 0.002 | 0.39 | 44000 | 0.0019 | | 0.0021 | 0.4 | 44500 | 0.0020 | | 0.0022 | 0.4 | 45000 | 0.0019 | | 0.0022 | 0.4 | 45500 | 0.0019 | | 0.002 | 0.41 | 46000 | 0.0019 | | 0.0018 | 0.41 | 46500 | 0.0019 | | 0.0022 | 0.42 | 47000 | 0.0019 |
ac88ec85dbd4028b1f8ab30de4f87600
cc-by-4.0
['generated_from_trainer']
false
out_cat_v2 This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4102 - Accuracy: 0.7145
2f7e37dfd354f67ec47be4b2765f7a4b
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'pl', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Fine-tuned XLSR-53 large model for speech recognition in Polish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
e4b17430b4b7474a095870672487e30a
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'pl', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows...

Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-polish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```

Writing your own inference script:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "pl"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-polish"
SAMPLES = 5

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
```
f84a3bd65889b03d7c0e6d6dac981f50
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'pl', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
```python
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```

| Reference | Prediction |
| ------------- | ------------- |
| """CZY DRZWI BYŁY ZAMKNIĘTE?""" | PRZY DRZWI BYŁY ZAMKNIĘTE |
| GDZIEŻ TU POWÓD DO WYRZUTÓW? | WGDZIEŻ TO POM DO WYRYDÓ |
| """O TEM JEDNAK NIE BYŁO MOWY.""" | O TEM JEDNAK NIE BYŁO MOWY |
| LUBIĘ GO. | LUBIĄ GO |
| — TO MI NIE POMAGA. | TO MNIE NIE POMAGA |
| WCIĄŻ LUDZIE WYSIADAJĄ PRZED ZAMKIEM, Z MIASTA, Z PRAGI. | WCIĄŻ LUDZIE WYSIADAJĄ PRZED ZAMKIEM Z MIASTA Z PRAGI |
| ALE ON WCALE INACZEJ NIE MYŚLAŁ. | ONY MONITCENIE PONACZUŁA NA MASU |
| A WY, CO TAK STOICIE? | A WY CO TAK STOICIE |
| A TEN PRZYRZĄD DO CZEGO SŁUŻY? | A TEN PRZYRZĄD DO CZEGO SŁUŻY |
| NA JUTRZEJSZYM KOLOKWIUM BĘDZIE PIĘĆ PYTAŃ OTWARTYCH I TEST WIELOKROTNEGO WYBORU. | NAJUTRZEJSZYM KOLOKWIUM BĘDZIE PIĘĆ PYTAŃ OTWARTYCH I TEST WIELOKROTNEGO WYBORU |
e42d64f63c8e36b06b559c6077497ede
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'pl', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-polish --dataset mozilla-foundation/common_voice_6_0 --config pl --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-polish --dataset speech-recognition-community-v2/dev_data --config pl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
a503e899b15c466b4baecef439d0d8d7
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'pl', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
false
Citation

If you want to cite this model you can use this:

```bibtex
@misc{grosman2021xlsr53-large-polish,
  title={Fine-tuned {XLSR}-53 large model for speech recognition in {P}olish},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-polish}},
  year={2021}
}
```
bd807346ed570fc647844962a44039dd
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4028 - F1: 0.6869
019a285951a9d74cab5fd194e58161f4
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1396 | 1.0 | 50 | 0.5670 | 0.5101 |
| 0.5289 | 2.0 | 100 | 0.4594 | 0.6358 |
| 0.3838 | 3.0 | 150 | 0.4028 | 0.6869 |
737f3dac9a9dda458abf26fb1c97f485
openrail
[]
false
![xy_grid-0034-666-masterpiece, best quality, 1girl, by namori, yuru yuri, toshinou kyouko, blonde hair, red hair bow, smile, blue eyes, nanamori s.jpg](https://s3.amazonaws.com/moonup/production/uploads/1672762465844-63602a9f3605bd411c18b4e0.jpeg)

All of these models were fine-tuned with kani-anime as the base model. The namori base was trained on Danbooru images by the artist namori. yryr test was trained on anime screenshots on top of the namori base. The yuruyuri prototype was trained on additional yryr art, official and unofficial, on top of yryr test. All were trained with 768 images and batch size 16 at a 1.6e-5 learning rate in the Stable Tuner trainer.

Comparison of all the models, generated with the prompt: masterpiece, best quality, 1girl, by namori, yuru yuri, toshinou kyouko, blonde hair, red hair bow, smile, blue eyes, nanamori school uniform. Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, bath, onsen, water. Steps: 18, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 666, Size: 768x1024, Model hash: 3b339a4d, Denoising strength: 0.7, First pass size: 384x512.

Adding "by namori" and "yuru yuri" to a prompt should help shift it toward this style. You can also prompt almost all of the characters on the yryrtest or yuruyuriprototype models.
eeb7eecb63945d7648ebfef856cf1bd6
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
XLMIndic Base Uniscript This model is pretrained on a subset of the [OSCAR](https://huggingface.co/datasets/oscar) corpus spanning 14 Indo-Aryan languages. **Before pretraining this model we transliterate the text to [ISO-15919](https://en.wikipedia.org/wiki/ISO_15919) format using the [Aksharamukha](https://pypi.org/project/aksharamukha/) library.** A demo of the Aksharamukha library is hosted [here](https://aksharamukha.appspot.com/converter), where you can transliterate your text and use it with our model in the inference widget.
b57cd7c4a4947b9ecc8fc3b209641382
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Model description

This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration:

- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 512 sequence length
8a97eed0a44f85e5dbc4ef2e18f1ae86
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Training data

This model was pretrained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, a medium-sized multilingual corpus containing text from 163 languages. We select a subset of 14 languages based on the following criteria:

- Belongs to the [Indo-Aryan language family](https://en.wikipedia.org/wiki/Indo-Aryan_languages).
- Uses a [Brahmic script](https://en.wikipedia.org/wiki/Brahmic_scripts).

These are the 14 languages we pretrain this model on:

- Assamese
- Bangla
- Bihari
- Bishnupriya Manipuri
- Goan Konkani
- Gujarati
- Hindi
- Maithili
- Marathi
- Nepali
- Oriya
- Panjabi
- Sanskrit
- Sinhala
19a2eba6349040b27c43010ee53a1b61
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Transliteration *The unique component of this model is that it takes in ISO-15919 transliterated text.* The motivation behind this is as follows. When two languages share vocabularies, a machine learning model can exploit that to learn good cross-lingual representations. However, if these two languages use different writing scripts, it is difficult for a model to make the connection. Thus, if we can write the two languages in a single script, it is easier for the model to learn good cross-lingual representations. For many of the scripts currently in use, there are standard transliteration schemes to convert to the Latin script. In particular, for the Indic scripts the ISO-15919 transliteration scheme is designed to consistently transliterate texts written in different Indic scripts to the Latin script.

An example of ISO-15919 transliteration for a piece of **Bangla** text is the following:

**Original:** "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক।"

**Transliterated:** 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika.'

Another example for a piece of **Hindi** text is the following:

**Original:** "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"

**Transliterated:** "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
1675fabda585725de2d6cb5a704cb34d
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Preprocessing The texts are transliterated to the ISO-15919 format using the Aksharamukha library. Then they are tokenized using SentencePiece with a vocabulary size of 50,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ```
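The `[CLS] Sentence A [SEP] Sentence B [SEP]` layout above can be sketched in plain Python. This is illustrative only: the real pipeline operates on SentencePiece token ids rather than strings, and `build_albert_input` is a hypothetical helper, not part of the released code.

```python
def build_albert_input(sentence_a, sentence_b):
    """Assemble the pretraining input in the
    [CLS] Sentence A [SEP] Sentence B [SEP] layout (string-level sketch;
    the actual pipeline works on SentencePiece token ids)."""
    return f"[CLS] {sentence_a} [SEP] {sentence_b} [SEP]"

print(build_albert_input("rabīndranātha ṭhākura chilēna kabi.",
                         "tini sāhityē nōbēla puraskāra lābha karēna."))
```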
9b7a9dad815fc1db03d6b33ca20fd086
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Training The training objective is the same as that of the original ALBERT. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the 10% remaining cases, the masked tokens are left as is. The details of the sentence order prediction example generation procedure for each sentence are the following: - Split the sentence into two parts A and B at a random index. - With 50% probability swap the two parts. The model was pretrained on a TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available in different branches of this repository. You can load these checkpoints by passing the `revision` parameter. For example, to load the checkpoint at 500k you can use the following code. ```python >>> AutoModel.from_pretrained('ibraheemmoosa/xlmindic-base-uniscript', revision='checkpoint_500k') ```
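As a rough sketch of the masking and sentence-order-prediction procedures described above (not the actual pretraining code; the function names, the per-token Bernoulli selection, and the SOP label convention are assumptions for illustration):

```python
import random

def mask_tokens(tokens, vocab, rng, mask_prob=0.15):
    """Select ~15% of tokens (per-token Bernoulli approximation); of the
    selected ones: 80% -> [MASK], 10% -> a random token, 10% unchanged."""
    out = list(tokens)
    for i in range(len(tokens)):
        if rng.random() < mask_prob:
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # remaining 10%: the token is left as is
    return out

def sop_example(tokens, rng):
    """Split at a random index and swap the two parts with 50% probability.
    The returned label (1 = swapped) is an assumed convention."""
    idx = rng.randrange(1, len(tokens))
    a, b = tokens[:idx], tokens[idx:]
    if rng.random() < 0.5:
        return b + a, 1
    return a + b, 0

rng = random.Random(0)
print(mask_tokens(["rabīndranātha", "ṭhākura", "chilēna", "kabi"], ["ō", "kabi"], rng))
print(sop_example(["rabīndranātha", "ṭhākura", "chilēna", "kabi"], rng))
```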
5576c11dd249ea4f49ab60d0e93c27fa
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Evaluation results We evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the [IndicGLUE](https://huggingface.co/datasets/indic_glue) benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model. We compare with an [ablation model](https://huggingface.co/ibraheemmoosa/xlmindic-base-multiscript) that does not use transliteration and is instead trained on the original scripts.
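A reported score such as 89.85 ± 1.14 is simply the mean and standard deviation over the nine fine-tuning runs. A minimal sketch of the aggregation, assuming the sample standard deviation is used (the population variant is also plausible) and with made-up per-seed scores:

```python
import statistics

def report(scores):
    """Format per-seed scores as 'mean ± std' (sample standard deviation)."""
    return f"{statistics.mean(scores):.2f} ± {statistics.stdev(scores):.2f}"

# nine hypothetical fine-tuning runs of one task
print(report([89.1, 90.2, 88.7, 91.0, 89.9, 90.4, 88.5, 90.8, 89.6]))
```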
f8e0aaa5dcad5bc33a6b8bb2797a41e5
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
IndicGLUE Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript (This Model) | XLMIndic-Base-Multiscript (Ablation Model) -----| ----- | ----- | ------ | ------- | -------- Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76 Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26 Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58 BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50 Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49 INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69 INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23 IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84 IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20 MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33 Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21
f5f912cebcb6e922fc01f7b7625a373c
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Intended uses & limitations This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telugu, Kannada etc. share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919). You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT2.
adea0932ddb881a6fa05b6fba1c4fbf9
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
How to use To use this model you will first need to install the [Aksharamukha](https://pypi.org/project/aksharamukha/) library. ```bash pip install aksharamukha ``` Using this library you can transliterate any text written in Indic scripts in the following way: ```python >>> from aksharamukha import transliterate >>> text = "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है" >>> transliterated_text = transliterate.process('autodetect', 'ISO', text) >>> transliterated_text "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai" ``` Then you can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> from aksharamukha import transliterate >>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-uniscript') >>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।" >>> transliterated_text = transliterate.process('Bengali', 'ISO', text) >>> transliterated_text 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli [MASK], aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama [MASK] puraskāra lābha karēna.' 
>>> unmasker(transliterated_text) [{'score': 0.39705055952072144, 'token': 1500, 'token_str': 'abhinētā', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli abhinētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}, {'score': 0.20499080419540405, 'token': 3585, 'token_str': 'kabi', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}, {'score': 0.1314290314912796, 'token': 15402, 'token_str': 'rājanētā', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli rājanētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}, {'score': 0.060830358415842056, 'token': 3212, 'token_str': 'kalākāra', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kalākāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 
1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}, {'score': 0.035522934049367905, 'token': 11586, 'token_str': 'sāhityakāra', 'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli sāhityakāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}] ```
674729a8170d08cdde3e0f883ef596e3
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Limitations and bias Even though we pretrain on a comparatively large multilingual corpus, the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important, you should take special care when relying on the model to make decisions.
b9f3998980f61ad468ed9fc5fc9b7360
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
Contact Feel free to contact us if you have any ideas or if you want to know more about our models. - Ibraheem Muhammad Moosa (ibraheemmoosa1347@gmail.com) - Mahmud Elahi Akhter (mahmud.akhter01@northsouth.edu) - Ashfia Binte Habib
5a1b3aab5ba5fa0de696e71d1e6acef5
apache-2.0
['multilingual', 'albert', 'masked-language-modeling', 'sentence-order-prediction', 'fill-mask', 'xlmindic', 'nlp', 'indoaryan', 'indicnlp', 'iso15919', 'transliteration']
false
BibTeX entry and citation info ```bibtex @article{Moosa2022DoesTH, title={Does Transliteration Help Multilingual Language Modeling?}, author={Ibraheem Muhammad Moosa and Mahmuda Akhter and Ashfia Binte Habib}, journal={ArXiv}, year={2022}, volume={abs/2201.12501} } ```
60e74e2008eb37b0402d2e7f306577e2
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
SeleStu Dreambooth model trained by ariciano with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
8daac4724dc3e60937e39b63c9fb55c4
apache-2.0
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'], 'filter_threshold': 0.002361, 'is_split_by_sentences': True}, 'generation': {'batch_size': 64, 'metrics_configs': [{}, {'n': 1}, {}], 'scenario_configs': [{'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 640, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 512}, {'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 272, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'functions', 'num_samples': 512, 'prompts_path': 'resources/functions_csnet.jsonl', 'use_prompt_for_scoring': True}], 'scorer_config': {}}, 'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'}, 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'codeparrot/codeparrot-small'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'kejian/final-filter', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0008, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000.0, 'output_dir': 'training_output', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 5000, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
e045f7a5a9a2bde51dbcb22e9174aa00
apache-2.0
['dialogue-summarization']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-5 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 - label_smoothing_factor: 0.1
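The `label_smoothing_factor: 0.1` setting above softens the one-hot training targets. A minimal sketch of one common formulation, where the smoothing mass is spread uniformly over the non-gold tokens (the exact Transformers implementation may distribute it slightly differently):

```python
def smoothed_target(vocab_size, gold_index, epsilon=0.1):
    """One-hot target softened by label smoothing: the gold token keeps
    1 - epsilon and epsilon is spread uniformly over the other tokens."""
    off_value = epsilon / (vocab_size - 1)
    target = [off_value] * vocab_size
    target[gold_index] = 1.0 - epsilon
    return target

# a toy 5-token vocabulary with the gold token at index 2
print(smoothed_target(5, 2))
```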
9b87e69fbfc2468d3dc12c3f89db2896
apache-2.0
['dialogue-summarization']
false
Results on Test Set - predict_gen_len = 23.9048 - predict_rouge1 = **47.355** - predict_rouge2 = **22.4593** - predict_rougeL = **38.694** - predict_rougeLsum = **42.98** - predict_samples = 819 - predict_samples_per_second = 9.279 - predict_steps_per_second = 2.322
fe61a3905d40ccc1a7e008025d7010f7
cc-by-4.0
['question generation']
false
Model Card of `lmqg/bart-base-squadshifts-reddit-qg` This model is fine-tuned version of [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad) for question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: reddit) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
f38be93d3fe7f27f64dcfb64d7a846f7
cc-by-4.0
['question generation']
false
Overview - **Language model:** [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad) - **Language:** en - **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (reddit) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
b71634e4f5e4736151a159f9ed91734c
cc-by-4.0
['question generation']
false
model prediction - With [`lmqg`](https://github.com/asahi417/lm-question-generation) ```python from lmqg import TransformersQG model = TransformersQG(language="en", model="lmqg/bart-base-squadshifts-reddit-qg") questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-base-squadshifts-reddit-qg") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
716793d63a548b99e66e926be5b0801d
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-squadshifts-reddit-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:---------------------------------------------------------------------------| | BERTScore | 92.32 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_1 | 27.23 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_2 | 18.19 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_3 | 12.43 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_4 | 8.78 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | METEOR | 22.57 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | MoverScore | 62.35 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | ROUGE_L | 26.03 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
46cb683b01322e64e128a0a703909216
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squadshifts - dataset_name: reddit - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: lmqg/bart-base-squad - max_length: 512 - max_length_output: 32 - epoch: 3 - batch: 8 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 16 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-squadshifts-reddit-qg/raw/main/trainer_config.json).
fb5d0f9c40454593e8a142f9c8ec719f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 2.3208
e84d881b8fb370eb431c15df80934f91
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7566 | 1.0 | 557 | 2.0440 | | 0.447 | 2.0 | 1114 | 2.0889 | | 0.3508 | 3.0 | 1671 | 2.3208 |
9d09214b32a805032421a15bc3a7f345
gpl-3.0
['distilbert', 'bert', 'tagalog', 'filipino']
false
**Deprecation Notice** This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available. Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance. ---
73f5d27abe3415094d146363bf51e548
gpl-3.0
['distilbert', 'bert', 'tagalog', 'filipino']
false
DistilBERT Tagalog Base Cased Tagalog version of DistilBERT, distilled from [`bert-tagalog-base-cased`](https://huggingface.co/jcblaise/bert-tagalog-base-cased). This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
3690409b25ef4b8bf00532da1943d142
gpl-3.0
['distilbert', 'bert', 'tagalog', 'filipino']
false
TensorFlow ```python from transformers import TFAutoModel, AutoTokenizer model = TFAutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased', from_pt=True) tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False) ```
a82706db56d3eeadca3dfdaa87045828
gpl-3.0
['distilbert', 'bert', 'tagalog', 'filipino']
false
PyTorch ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('jcblaise/distilbert-tagalog-base-cased') tokenizer = AutoTokenizer.from_pretrained('jcblaise/distilbert-tagalog-base-cased', do_lower_case=False) ``` Finetuning scripts and other utilities we use for our projects can be found in our centralized repository at https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
3efdc2987759a61e2b590ace391a8f2a
gpl-3.0
['distilbert', 'bert', 'tagalog', 'filipino']
false
Citations All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work: ``` @article{cruz2020establishing, title={Establishing Baselines for Text Classification in Low-Resource Languages}, author={Cruz, Jan Christian Blaise and Cheng, Charibeth}, journal={arXiv preprint arXiv:2005.02068}, year={2020} } @article{cruz2019evaluating, title={Evaluating Language Model Finetuning Techniques for Low-resource Languages}, author={Cruz, Jan Christian Blaise and Cheng, Charibeth}, journal={arXiv preprint arXiv:1907.00409}, year={2019} } ```
b0973b5fcc87912646ea6b7a656b5f38
apache-2.0
['generated_from_trainer']
false
w2v2-libri This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7315 - Wer: 0.5574
1d84f37efe516404e8235bc3257c806b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 - mixed_precision_training: Native AMP
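Under `lr_scheduler_type: linear` with `lr_scheduler_warmup_steps: 500` and `training_steps: 3000`, the learning rate ramps linearly up to the peak over the warmup and then decays linearly to zero. A sketch of that schedule (illustrative; the exact Transformers scheduler may differ in off-by-one details):

```python
def linear_schedule_lr(step, peak_lr=1e-4, warmup_steps=500, total_steps=3000):
    """Linear warmup from 0 to peak_lr over warmup_steps, then linear
    decay back to 0 at total_steps (values taken from the list above)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# learning rate at a few points of the schedule
for s in (0, 250, 500, 1750, 3000):
    print(s, linear_schedule_lr(s))
```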
7cf17780c6305b36d74a93ed0270a3f1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 7.1828 | 50.0 | 200 | 3.0563 | 1.0 | | 2.8849 | 100.0 | 400 | 2.9023 | 1.0 | | 1.5108 | 150.0 | 600 | 1.1468 | 0.6667 | | 0.1372 | 200.0 | 800 | 1.3749 | 0.6279 | | 0.0816 | 250.0 | 1000 | 1.3985 | 0.6224 | | 0.0746 | 300.0 | 1200 | 1.5285 | 0.6141 | | 0.0556 | 350.0 | 1400 | 1.5496 | 0.5920 | | 0.0644 | 400.0 | 1600 | 1.6263 | 0.5947 | | 0.0546 | 450.0 | 1800 | 1.6803 | 0.5906 | | 0.0491 | 500.0 | 2000 | 1.6155 | 0.5837 | | 0.0518 | 550.0 | 2200 | 1.6784 | 0.5698 | | 0.0314 | 600.0 | 2400 | 1.6050 | 0.5602 | | 0.0048 | 650.0 | 2600 | 1.7703 | 0.5546 | | 0.0042 | 700.0 | 2800 | 1.7135 | 0.5615 | | 0.0025 | 750.0 | 3000 | 1.7315 | 0.5574 |
8d098021843cd1fbbc61de0edb2ff6e9
apache-2.0
['text2text-generation']
false
Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("google/switch-base-64") model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-64") input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) >>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s> ``` </details>
9774990bb7646657496939778f38d4e4
apache-2.0
['text2text-generation']
false
```bash pip install accelerate ``` ```python from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("google/switch-base-64") model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-64", device_map="auto") input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) >>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s> ``` </details>
52f4335cf55ea6914bd555b8ed917d41
apache-2.0
['text2text-generation']
false
```bash pip install accelerate ``` ```python import torch from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("google/switch-base-64") model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-64", device_map="auto", torch_dtype=torch.float16) input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) >>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s> ``` </details>
aa86b86ab01ef5eea1c350617067649d
apache-2.0
['text2text-generation']
false
```bash pip install bitsandbytes accelerate ``` ```python from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration tokenizer = AutoTokenizer.from_pretrained("google/switch-base-64") model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-64", device_map="auto", load_in_8bit=True) input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>." input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) >>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s> ``` </details>
b6e3dbb2837f512f1ce64ae6fe2ac841
apache-2.0
['generated_from_trainer']
false
swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0613 - Accuracy: 0.9807
656db55c9d78e45d4444c0b94545ca54
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2578 | 1.0 | 190 | 0.1447 | 0.9530 | | 0.1733 | 2.0 | 380 | 0.0787 | 0.9733 | | 0.1139 | 3.0 | 570 | 0.0613 | 0.9807 |
5f8a6c167383a1553237de513fd9bb83
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
Czech wav2vec2-xls-r-300m-cs-250 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8.0 dataset as well as the other datasets listed below. It achieves the following results on the evaluation set: - Loss: 0.1271 - Wer: 0.1475 - Cer: 0.0329 The `eval.py` script results using an LM are: - WER: 0.07274312090176113 - CER: 0.021207369275558875
6e103d0d7d39fd0770bcaf7224102165
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
Model description Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "cs", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250") model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs-250") resampler = torchaudio.transforms.Resample(48_000, 16_000) ```
545f6d1bba58ae0ca1ca07c1c47284c9
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated using the attached `eval.py` script: ``` python eval.py --model_id comodoro/wav2vec2-xls-r-300m-cs-250 --dataset mozilla-foundation/common_voice_8_0 --split test --config cs ```
4b6874afc1c311ce5b9f3e22332c085b
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
Training and evaluation data The Common Voice 8.0 `train` and `validation` datasets were used for training, as well as the following datasets: - Šmídl, Luboš and Pražák, Aleš, 2013, OVM – Otázky Václava Moravce, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-000D-EC98-3. - Pražák, Aleš and Šmídl, Luboš, 2012, Czech Parliament Meetings, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11858/00-097C-0000-0005-CF9C-4. - Plátek, Ondřej; Dušek, Ondřej and Jurčíček, Filip, 2016, Vystadial 2016 – Czech data, LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University, http://hdl.handle.net/11234/1-1740.
ff293cc96bdd0f64aecb0601c9f2b306
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 5 - mixed_precision_training: Native AMP
867d4ae4314d4b28a95a38351d94665c
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event', 'xlsr-fine-tuning-week']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 3.4203 | 0.16 | 800 | 3.3148 | 1.0 | 1.0 | | 2.8151 | 0.32 | 1600 | 0.8508 | 0.8938 | 0.2345 | | 0.9411 | 0.48 | 2400 | 0.3335 | 0.3723 | 0.0847 | | 0.7408 | 0.64 | 3200 | 0.2573 | 0.2840 | 0.0642 | | 0.6516 | 0.8 | 4000 | 0.2365 | 0.2581 | 0.0595 | | 0.6242 | 0.96 | 4800 | 0.2039 | 0.2433 | 0.0541 | | 0.5754 | 1.12 | 5600 | 0.1832 | 0.2156 | 0.0482 | | 0.5626 | 1.28 | 6400 | 0.1827 | 0.2091 | 0.0463 | | 0.5342 | 1.44 | 7200 | 0.1744 | 0.2033 | 0.0468 | | 0.4965 | 1.6 | 8000 | 0.1705 | 0.1963 | 0.0444 | | 0.5047 | 1.76 | 8800 | 0.1604 | 0.1889 | 0.0422 | | 0.4814 | 1.92 | 9600 | 0.1604 | 0.1827 | 0.0411 | | 0.4471 | 2.09 | 10400 | 0.1566 | 0.1822 | 0.0406 | | 0.4509 | 2.25 | 11200 | 0.1619 | 0.1853 | 0.0432 | | 0.4415 | 2.41 | 12000 | 0.1513 | 0.1764 | 0.0397 | | 0.4313 | 2.57 | 12800 | 0.1515 | 0.1739 | 0.0392 | | 0.4163 | 2.73 | 13600 | 0.1445 | 0.1695 | 0.0377 | | 0.4142 | 2.89 | 14400 | 0.1478 | 0.1699 | 0.0385 | | 0.4184 | 3.05 | 15200 | 0.1430 | 0.1669 | 0.0376 | | 0.3886 | 3.21 | 16000 | 0.1433 | 0.1644 | 0.0374 | | 0.3795 | 3.37 | 16800 | 0.1426 | 0.1648 | 0.0373 | | 0.3859 | 3.53 | 17600 | 0.1357 | 0.1604 | 0.0361 | | 0.3762 | 3.69 | 18400 | 0.1344 | 0.1558 | 0.0349 | | 0.384 | 3.85 | 19200 | 0.1379 | 0.1576 | 0.0359 | | 0.3762 | 4.01 | 20000 | 0.1344 | 0.1539 | 0.0346 | | 0.3559 | 4.17 | 20800 | 0.1339 | 0.1525 | 0.0351 | | 0.3683 | 4.33 | 21600 | 0.1315 | 0.1518 | 0.0342 | | 0.3572 | 4.49 | 22400 | 0.1307 | 0.1507 | 0.0342 | | 0.3494 | 4.65 | 23200 | 0.1294 | 0.1491 | 0.0335 | | 0.3476 | 4.81 | 24000 | 0.1287 | 0.1491 | 0.0336 | | 0.3475 | 4.97 | 24800 | 0.1271 | 0.1475 | 0.0329 |
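The Wer and Cer columns above are word and character error rates: the edit distance between hypothesis and reference (at word or character level) divided by the reference length. A minimal pure-Python word-level sketch (evaluation toolkits such as `jiwer` are normally used instead):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# one substituted word out of three reference words -> 1/3
print(wer("dobrý den světe", "dobrý den světě"))
```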
3acf2a89e20cabaa1167379229b3ec97
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 128 | 2.9003 | 19.4784 | 2.8529 | 14.7786 | 15.0614 | 18.9825 |
486c02b14644a392d46214b9c6c30201
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set:
- Loss: 0.3925
- F1: 0.7075
3bc2868355ec6f9bfbed287657ff93a7
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1493 | 1.0 | 50 | 0.5884 | 0.4748 |
| 0.5135 | 2.0 | 100 | 0.4088 | 0.6623 |
| 0.3558 | 3.0 | 150 | 0.3925 | 0.7075 |
305e05c4d402a09e3367ecae5a5656b1
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-google-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5506
- Wer: 0.3355
8ef184469ce0452678bf63575fa90a75
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4326 | 1.0 | 500 | 1.5832 | 1.0063 |
| 0.8235 | 2.01 | 1000 | 0.5310 | 0.5134 |
| 0.4224 | 3.01 | 1500 | 0.4488 | 0.4461 |
| 0.2978 | 4.02 | 2000 | 0.4243 | 0.4191 |
| 0.232 | 5.02 | 2500 | 0.4532 | 0.4149 |
| 0.1902 | 6.02 | 3000 | 0.4732 | 0.3912 |
| 0.1628 | 7.03 | 3500 | 0.4807 | 0.3868 |
| 0.1437 | 8.03 | 4000 | 0.5295 | 0.3670 |
| 0.1241 | 9.04 | 4500 | 0.4602 | 0.3810 |
| 0.1206 | 10.04 | 5000 | 0.4691 | 0.3783 |
| 0.0984 | 11.04 | 5500 | 0.4500 | 0.3710 |
| 0.0929 | 12.05 | 6000 | 0.5247 | 0.3550 |
| 0.0914 | 13.05 | 6500 | 0.5546 | 0.3821 |
| 0.0742 | 14.06 | 7000 | 0.4874 | 0.3646 |
| 0.0729 | 15.06 | 7500 | 0.5327 | 0.3934 |
| 0.0663 | 16.06 | 8000 | 0.5769 | 0.3661 |
| 0.0575 | 17.07 | 8500 | 0.5191 | 0.3524 |
| 0.0588 | 18.07 | 9000 | 0.5155 | 0.3360 |
| 0.0456 | 19.08 | 9500 | 0.5135 | 0.3539 |
| 0.0444 | 20.08 | 10000 | 0.5380 | 0.3603 |
| 0.0419 | 21.08 | 10500 | 0.5275 | 0.3467 |
| 0.0366 | 22.09 | 11000 | 0.5072 | 0.3487 |
| 0.0331 | 23.09 | 11500 | 0.5450 | 0.3437 |
| 0.0345 | 24.1 | 12000 | 0.5138 | 0.3431 |
| 0.029 | 25.1 | 12500 | 0.5067 | 0.3413 |
| 0.0274 | 26.1 | 13000 | 0.5421 | 0.3422 |
| 0.0243 | 27.11 | 13500 | 0.5456 | 0.3392 |
| 0.0226 | 28.11 | 14000 | 0.5665 | 0.3368 |
| 0.0216 | 29.12 | 14500 | 0.5506 | 0.3355 |
bed46cf793328f2c0a4463665421186a
apache-2.0
['generated_from_trainer']
false
wav2vec2-base_toy_train_data_augment_0.1.csv

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 2.3933
- Wer: 0.9997
1f673ddbbf8b0e3e3ebf74b914920550
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
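Two of these settings are worth unpacking: `total_train_batch_size` is simply `train_batch_size * gradient_accumulation_steps` (8 × 2 = 16), and `lr_scheduler_type: linear` with `lr_scheduler_warmup_steps: 1000` means the learning rate ramps linearly from 0 to 0.0001 over the first 1000 optimizer steps and then decays linearly back to 0. A sketch of that schedule (mirroring the behavior of `transformers`' `get_linear_schedule_with_warmup`; `total_steps=8000` is a hypothetical value, not taken from this run):

```python
def linear_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=8000):
    # Linear warmup from 0 to base_lr over warmup_steps,
    # then linear decay from base_lr down to 0 at total_steps.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```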
0d234f97a0f24ceb4abc89a80c5b3f57
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2787 | 0.84 | 200 | 3.5920 | 1.0 |
| 3.0613 | 1.68 | 400 | 3.4069 | 1.0 |
| 3.0481 | 2.52 | 600 | 3.4811 | 1.0 |
| 2.896 | 3.36 | 800 | 2.3933 | 0.9997 |
1ca3a380df92c2bdeadb57bca7b4e79d
apache-2.0
['generated_from_trainer']
false
wspr-sm-ar3

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3582
- Wer: 57.7560
d9cecdac629d697a1f7d93bca0e3040e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1164 | 2.13 | 3000 | 0.3582 | 57.7560 |
f572ce7f5ed45b53fa8ccc3392f9895a
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2t_fr_unispeech_s42

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
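Since the model expects 16 kHz input, audio recorded at any other rate must be resampled before inference. A naive linear-interpolation resampler, shown purely for illustration (real pipelines would normally use a proper polyphase filter such as `librosa.resample` or `torchaudio.transforms.Resample` instead):

```python
def resample(samples, src_rate, dst_rate=16000):
    # Naive linear-interpolation resampler: for each output sample,
    # find its fractional position in the source signal and blend
    # the two neighboring source samples.
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```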
514e305be72c7ef9c75385d826b4c6e7
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.4519
- Wer: 0.3375
6cfbb3782edf098c4a01e7b1fb46a3df
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4351 | 4.0 | 500 | 1.2740 | 0.8259 |
| 0.5828 | 8.0 | 1000 | 0.4276 | 0.4403 |
| 0.2274 | 12.0 | 1500 | 0.4646 | 0.3739 |
| 0.135 | 16.0 | 2000 | 0.4320 | 0.3662 |
| 0.0962 | 20.0 | 2500 | 0.4831 | 0.3607 |
| 0.0719 | 24.0 | 3000 | 0.4506 | 0.3463 |
| 0.0556 | 28.0 | 3500 | 0.4519 | 0.3375 |
ae8d22b2ecaf3baa87369e6308754ea7
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set:
- Loss: 0.7778
- Accuracy: 0.9168
5203cd8900009103b295d34d861aa5d3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2779 | 0.7394 |
| 3.7834 | 2.0 | 636 | 1.8741 | 0.8287 |
| 3.7834 | 3.0 | 954 | 1.1619 | 0.8887 |
| 1.6892 | 4.0 | 1272 | 0.8601 | 0.9090 |
| 0.9056 | 5.0 | 1590 | 0.7778 | 0.9168 |
86dab9b769693ae6be6959a31cf870cd