license: string (lengths 2–30)
tags: string (lengths 2–513)
is_nc: bool (1 class)
readme_section: string (lengths 201–597k)
hash: string (length 32)
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|:---:|:---:|:---:|:---:| | 0.4767 | 1.0 | 1346 | 0.6853 | 0.6323 | 0.6501 | 0.5789 | 0.7413 | 0.3677 | 290 | 248 | 119 | 341 | | 0.3783 | 2.0 | 2692 | 0.7041 | 0.6653 | 0.6528 | 0.6255 | 0.6826 | 0.3347 | 350 | 188 | 146 | 314 | | 0.2803 | 3.0 | 4038 | 0.8767 | 0.7094 | 0.7184 | 0.6491 | 0.8043 | 0.2906 | 338 | 200 | 90 | 370 |
0a394ea85f262abcde2081ebd9d4a70b
apache-2.0
['generated_from_trainer']
false
patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20 This model is a fine-tuned version of [google/bigbird-pegasus-large-arxiv](https://huggingface.co/google/bigbird-pegasus-large-arxiv) on the farleyknight/big_patent_5_percent dataset. It achieves the following results on the evaluation set: - Loss: 2.2617 - Rouge1: 37.3764 - Rouge2: 13.2442 - Rougel: 26.011 - Rougelsum: 31.0145 - Gen Len: 113.8789
eb7bb2bdccee8fe0d2a99e2946ad3a03
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0
6de90adadf8aec1475c78c6bd36cd854
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 2.6121 | 0.08 | 5000 | 2.5652 | 35.0673 | 12.0073 | 24.5471 | 28.9315 | 119.9866 | | 2.5182 | 0.17 | 10000 | 2.4797 | 34.6909 | 11.6432 | 24.87 | 28.1543 | 119.2043 | | 2.5102 | 0.25 | 15000 | 2.4238 | 35.8574 | 12.2402 | 25.0712 | 29.5607 | 115.2890 | | 2.4292 | 0.33 | 20000 | 2.3869 | 36.0133 | 12.2453 | 25.4039 | 29.483 | 112.5920 | | 2.3678 | 0.41 | 25000 | 2.3594 | 35.238 | 11.6833 | 25.0449 | 28.3313 | 119.1739 | | 2.3511 | 0.5 | 30000 | 2.3326 | 36.7755 | 12.8394 | 25.7218 | 30.2594 | 110.5819 | | 2.3334 | 0.58 | 35000 | 2.3125 | 36.6317 | 12.7493 | 25.5388 | 30.094 | 115.5998 | | 2.3833 | 0.66 | 40000 | 2.2943 | 37.1219 | 13.1564 | 25.7571 | 30.8666 | 113.8222 | | 2.341 | 0.75 | 45000 | 2.2813 | 36.4962 | 12.6225 | 25.6904 | 29.9741 | 115.9845 | | 2.3179 | 0.83 | 50000 | 2.2725 | 37.3535 | 13.1596 | 25.7385 | 31.056 | 117.7754 | | 2.3164 | 0.91 | 55000 | 2.2654 | 36.9191 | 12.9316 | 25.7586 | 30.4691 | 116.1670 | | 2.3046 | 0.99 | 60000 | 2.2618 | 37.3992 | 13.2731 | 26.0327 | 31.0338 | 114.5195 |
65b9ec1ebb50598cbe64259ee4ac1fa3
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-kitchen_and_dining-2-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3560 - Accuracy: 0.2692
03a8773e42d22bb9fa5afeeac529a663
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-qqp This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5151
57945e08ac82343fea33cfbb3c3a17c5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.8517 | 0.4 | 500 | 2.7156 | | 2.8184 | 0.8 | 1000 | 2.6309 | | 2.7461 | 1.2 | 1500 | 2.5335 | | 2.5785 | 1.6 | 2000 | 2.5472 | | 2.5753 | 2.0 | 2500 | 2.5667 | | 2.4744 | 2.4 | 3000 | 2.4824 | | 2.4448 | 2.8 | 3500 | 2.5490 | | 2.476 | 3.2 | 4000 | 2.4906 | | 2.3352 | 3.6 | 4500 | 2.5151 |
e8a46679f056ca5a1d5110176733250f
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_vp-100k_accent_france-8_belgium-2_s365 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
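A minimal transcription sketch with the HuggingSound tool mentioned above; the `jonatasgrosman/` namespace and the audio file paths are assumptions for illustration:
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_vp-100k_accent_france-8_belgium-2_s365")  # assumed repo id
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # 16kHz speech recordings

transcriptions = model.transcribe(audio_paths)
```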
34a665c2dbc6f1c26a9e882d9e2a94fb
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'mr', 'robust-speech-event']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset. It achieves the following results on the mozilla-foundation/common_voice_8_0 mr test set: - Without LM + WER: 48.53 + CER: 10.63 - With LM + WER: 38.27 + CER: 8.91
8e8303f946185f1cc4d14c7acd4a4701
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'mr', 'robust-speech-event']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 400.0 - mixed_precision_training: Native AMP
c4e6e0afedebae63430724dcc4a59fc4
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'mr', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 4.2706 | 22.73 | 500 | 4.0174 | 1.0 | | 3.2492 | 45.45 | 1000 | 3.2309 | 0.9908 | | 1.9709 | 68.18 | 1500 | 1.0651 | 0.8440 | | 1.4088 | 90.91 | 2000 | 0.5765 | 0.6550 | | 1.1326 | 113.64 | 2500 | 0.4842 | 0.5760 | | 0.9709 | 136.36 | 3000 | 0.4785 | 0.6013 | | 0.8433 | 159.09 | 3500 | 0.5048 | 0.5419 | | 0.7404 | 181.82 | 4000 | 0.5052 | 0.5339 | | 0.6589 | 204.55 | 4500 | 0.5237 | 0.5897 | | 0.5831 | 227.27 | 5000 | 0.5166 | 0.5447 | | 0.5375 | 250.0 | 5500 | 0.5292 | 0.5487 | | 0.4784 | 272.73 | 6000 | 0.5480 | 0.5596 | | 0.4421 | 295.45 | 6500 | 0.5682 | 0.5467 | | 0.4047 | 318.18 | 7000 | 0.5681 | 0.5447 | | 0.3779 | 340.91 | 7500 | 0.5783 | 0.5347 | | 0.3525 | 363.64 | 8000 | 0.5856 | 0.5367 | | 0.3393 | 386.36 | 8500 | 0.5960 | 0.5359 |
639de2dd1716fd7292098b552f5a6233
apache-2.0
['translation']
false
eng-mkh * source group: English * target group: Mon-Khmer languages * OPUS readme: [eng-mkh](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md) * model: transformer * source language(s): eng * target language(s): kha khm khm_Latn mnw vie vie_Hani * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.eval.txt)
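Since this section only lists the download links, here is a minimal translation sketch with 🤗 Transformers; the Hub id `Helsinki-NLP/opus-mt-en-mkh` is an assumption, and `>>vie<<` is one of the target-language tokens the model requires:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-mkh"  # assumed Hub id for this eng-mkh release
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# the sentence-initial target-language token selects Vietnamese output
batch = tokenizer([">>vie<< How are you today?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```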
bf6cb25b9405ab8b0b664d1a15f42ccb
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng-kha.eng.kha | 0.1 | 0.015 | | Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.226 | | Tatoeba-test.eng-mnw.eng.mnw | 0.7 | 0.003 | | Tatoeba-test.eng.multi | 16.5 | 0.330 | | Tatoeba-test.eng-vie.eng.vie | 33.7 | 0.513 |
78ea76bda9cbef98814dc9069553258c
apache-2.0
['translation']
false
System Info: - hf_name: eng-mkh - source_languages: eng - target_languages: mkh - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'vi', 'km', 'mkh'] - src_constituents: {'eng'} - tgt_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt - src_alpha3: eng - tgt_alpha3: mkh - short_pair: en-mkh - chrF2_score: 0.33 - bleu: 16.5 - brevity_penalty: 1.0 - ref_len: 34734.0 - src_name: English - tgt_name: Mon-Khmer languages - train_date: 2020-07-27 - src_alpha2: en - tgt_alpha2: mkh - prefer_old: False - long_pair: eng-mkh - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
6940c60a37f5e88fca0fd54e8ca12be0
mit
['generated_from_trainer']
false
gpt2.CEBaB_confounding.observational.sa.5-class.seed_42 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE dataset. It achieves the following results on the evaluation set: - Loss: 0.9425 - Accuracy: 0.6091 - Macro-f1: 0.5206 - Weighted-macro-f1: 0.5595
bf481afad6e6e57c7d525a4e037e243c
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0
eb068a8f3a5be7ea9d6d15bda602623e
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2388 - F1: 0.8233
956b8323a75ddefd948dedbd362c95db
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8099 | 1.0 | 70 | 0.3035 | 0.7333 | | 0.2766 | 2.0 | 140 | 0.2661 | 0.7948 | | 0.1792 | 3.0 | 210 | 0.2388 | 0.8233 |
7f705979d8f54aa18a8ef1a652e8caf3
cc-by-sa-4.0
[]
false
How to use You can use this model for masked language modeling as follows: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp") model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp") sentence = '早稲田大学で自然言語処理を[MASK]する。' encoding = tokenizer(sentence, return_tensors='pt') ... ``` You can fine-tune this model on downstream tasks.
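The snippet above stops at the encoding step; a minimal sketch of one way to finish it, inspecting the top predictions for the `[MASK]` position:
```python
import torch

with torch.no_grad():
    output = model(**encoding)

# indices of the [MASK] position(s) in the input
mask_index = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
# top-5 candidate tokens for the first masked position
top_ids = output.logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```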
b9fd9da9a2eb4e891394b2a790e2c19b
cc-by-sa-4.0
[]
false
Tokenization `BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, tokenization may take a long time, since `BertJapaneseTokenizer` does not yet support fast tokenization. You can still do the Juman++ tokenization yourself and use the old model [nlp-waseda/roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese). Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into tokens by [sentencepiece](https://github.com/google/sentencepiece).
be013839c29816fd61d8cd7de75c5d07
cc-by-sa-4.0
[]
false
Vocabulary The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
d78111d5f218e09292dc274ea1970481
cc-by-sa-4.0
[]
false
Training procedure This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. It took a week using eight NVIDIA A100 GPUs. The following hyperparameters were used during pretraining: - learning_rate: 1e-4 - per_device_train_batch_size: 256 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 4096 - max_seq_length: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 700000 - warmup_steps: 10000 - mixed_precision_training: Native AMP
0beacadb1e7569bd697c8a482081d3c9
cc-by-sa-4.0
[]
false
Electra Base Japanese Irony This is an [ELECTRA](https://github.com/google-research/electra) Base model for the Japanese language finetuned for automatic irony detection. The model was based on [transformers-ud-japanese-electra-ginza](https://huggingface.co/megagonlabs/transformers-ud-japanese-electra-base-discriminator/tree/main), and later finetuned on a dataset containing ironic and sarcastic tweets.
36bde219c40baa90855d2832dd971437
cc-by-sa-4.0
[]
false
Citations Please, cite this model using the following citation. ``` @inproceedings{dan2022electra-base-irony, title={北見工業大学 テキスト情報処理研究室 ELECTRA Base 皮肉検出モデル (Megagon Labs ver.)}, author={団 俊輔 and プタシンスキ ミハウ and ジェプカ ラファウ and 桝井 文人}, publisher={HuggingFace}, year={2022}, url = "https://huggingface.co/kit-nlp/bert-base-japanese-basic-char-v2-irony" } ```
2ea90d331bd8b5c98fe5f5fa0755e7df
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4442
b5c49d96d90dfd8ed2e40dc665f36b03
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6985 | 1.0 | 157 | 2.5612 | | 2.562 | 2.0 | 314 | 2.4226 | | 2.5316 | 3.0 | 471 | 2.4218 |
856e833e272ce9a026dc6ee1bc8dcfd0
mit
[]
false
model by hans120791 This is the Stable Diffusion model fine-tuned on the metahuman rkr concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks rkr** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/8.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/3.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/1.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/5.jpeg) ![image 4](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/0.jpeg) ![image 5](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/7.jpeg) ![image 6](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/10.jpeg) ![image 7](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/6.jpeg) ![image 8](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/4.jpeg) ![image 9](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/9.jpeg) ![image 10](https://huggingface.co/sd-dreambooth-library/metahuman-rkr/resolve/main/concept_images/2.jpeg)
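A minimal generation sketch with 🧨 Diffusers, assuming the concept is loaded from `sd-dreambooth-library/metahuman-rkr` (the repository hosting the training images above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/metahuman-rkr", torch_dtype=torch.float16
).to("cuda")

# use the instance prompt the concept was trained with
image = pipe("a photo of sks rkr").images[0]
image.save("rkr.png")
```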
67765a08cc2aa74ac06ee6dd7426979d
mit
[]
false
ResNet18 model ported from [torchvision](https://pytorch.org/vision/stable/index.html) for use with [Metalhead.jl](https://github.com/FluxML/Metalhead.jl). The scripts for creating this file can be found at [this gist](https://gist.github.com/darsnack/bfb8594cf5fdc702bdacb66586f518ef). To use this model in Julia, [add the Metalhead.jl package to your environment](https://pkgdocs.julialang.org/v1/managing-packages/).
446c94a325304485aab993d0a59d3c00
apache-2.0
['lexical normalization']
false
Fine-tuned ByT5-small for MultiLexNorm (Turkish version) ![model image](https://github.com/ufal/multilexnorm2021/raw/master/img/overall.png) This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages. Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
be4a736c2a22698ebcca68d42e39cfe8
apache-2.0
['generated_from_trainer']
false
whisper-large-v2-japanese-5k-steps This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Japanese CommonVoice dataset (v11). It achieves the following results on the evaluation set: - Loss: 0.4200 - Wer: 0.7449
7bb614aedc087cf849959999be0ccdea
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 50 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP
d44b3136292fe743e6d6da386941838e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0111 | 7.63 | 1000 | 0.3210 | 0.7888 | | 0.0007 | 15.27 | 2000 | 0.3585 | 0.7478 | | 0.0003 | 22.9 | 3000 | 0.3937 | 0.7432 | | 0.0002 | 30.53 | 4000 | 0.4123 | 0.7443 | | 0.0002 | 38.17 | 5000 | 0.4200 | 0.7449 |
a049710d6ca65fad039a617bef8b9922
apache-2.0
['generated_from_trainer']
false
load the model
```python
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"

processor = WhisperProcessor.from_pretrained("clu-ling/whisper-large-v2-japanese-5k-steps")
model = WhisperForConditionalGeneration.from_pretrained("clu-ling/whisper-large-v2-japanese-5k-steps").to(device)
forced_decoder_ids = processor.get_decoder_prompt_ids(language="ja", task="transcribe")
```
9a2cb46eb961e28ffa02ae8bf6064356
apache-2.0
['generated_from_trainer']
false
load the dataset
```python
from datasets import Audio, load_dataset

commonvoice_eval = load_dataset("mozilla-foundation/common_voice_11_0", "ja", split="validation", streaming=True)
commonvoice_eval = commonvoice_eval.cast_column("audio", Audio(sampling_rate=16000))
sample = next(iter(commonvoice_eval))["audio"]
```
ba39d555d5dd4bf5f80d007855eb34e4
apache-2.0
['generated_from_trainer']
false
features and generate token ids
```python
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
predicted_ids = model.generate(input_features.to(device), forced_decoder_ids=forced_decoder_ids)
```
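The card's snippet stops at `generate`; a minimal sketch of the final decoding step, continuing from the variables above:
```python
# decode the generated token ids back to text, dropping special tokens
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription[0])
```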
f22d3cd02a087d84eebdab8549601b00
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2116 - Accuracy: 0.9295 - F1: 0.9293
1eb4f69e7dd3438d2715e9e2cb4df52f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8487 | 1.0 | 250 | 0.3135 | 0.909 | 0.9051 | | 0.2515 | 2.0 | 500 | 0.2116 | 0.9295 | 0.9293 |
5af54f98c28b4f23335a2f7c8a9f1340
apache-2.0
['generate', 'gpt2']
false
模型信息 Model Information 类似于Wenzhong2.0-GPT2-3.5B-chinese，我们实现了一个small版本的12层的Wenzhong2.0-GPT2-110M-BertTokenizer-chinese，并在悟道(300G版本)上面进行预训练。本次开源有别于之前开源的闻仲-GPT2系列，主要在于将BPE分词换成了BertTokenizer的字级别分词。 Similar to Wenzhong2.0-GPT2-3.5B-chinese, we implement a small, 12-layer version, Wenzhong2.0-GPT2-110M-BertTokenizer-chinese, which is pre-trained on the Wudao Corpus (300G version). This open-source release differs from the previously released Wenzhong-GPT2 series mainly in that BPE tokenization is replaced with BertTokenizer's character-level tokenization.
804ac62285d22a2171c5afca265b3e5c
apache-2.0
['generate', 'gpt2']
false
加载模型 Loading Models ```python from transformers import BertTokenizer,GPT2LMHeadModel hf_model_path = 'IDEA-CCNL/Wenzhong-GPT2-110M' tokenizer = BertTokenizer.from_pretrained(hf_model_path) model = GPT2LMHeadModel.from_pretrained(hf_model_path) ```
c9275e869a139767039f5c04e98e294b
apache-2.0
['generate', 'gpt2']
false
使用示例 Usage Examples 这里需要提一点，GPT在训练的时候是没有添加special_tokens的，BertTokenizer会默认补充special_tokens，所以在tokenize的时候需要将add_special_tokens设置为False，这样生成效果会更好。 One thing to note: no special tokens were added when GPT was trained, while BertTokenizer adds special tokens by default, so set add_special_tokens to False when tokenizing; this gives better generation results.
```python
def generate_word_level(input_text, n_return=5, max_length=128, top_p=0.9):
    inputs = tokenizer(input_text, return_tensors='pt', add_special_tokens=False).to(model.device)
    gen = model.generate(
        inputs=inputs['input_ids'],
        max_length=max_length,
        do_sample=True,
        top_p=top_p,
        eos_token_id=21133,
        pad_token_id=0,
        num_return_sequences=n_return)
    sentences = tokenizer.batch_decode(gen)
    for idx, sentence in enumerate(sentences):
        print(f'sentence {idx}: {sentence}')
        print('*' * 20)
    return gen

outputs = generate_word_level('西湖的景色', n_return=5, max_length=128)
```
5c316bafa4b8511fc7bcccc5e39ff602
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0
cb1719ed2ea0c0249aafc8dd318caa3a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-test2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.3055 - Precision: 0.5278 - Recall: 0.3957 - F1: 0.4523 - Accuracy: 0.9462
a9e4f3ec4e8ac85fcd4cee0e67967c34
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2889 | 0.5439 | 0.3503 | 0.4262 | 0.9453 | | No log | 2.0 | 426 | 0.2938 | 0.5236 | 0.3800 | 0.4404 | 0.9457 | | 0.0544 | 3.0 | 639 | 0.3055 | 0.5278 | 0.3957 | 0.4523 | 0.9462 |
113f129353d8da620b85694228826e21
apache-2.0
['tapas']
false
TAPAS base model fine-tuned on WikiTable Questions (WTQ) This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas). This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table). The other (non-default) version which can be used is: - `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_base` (intermediate pre-training, absolute position embeddings). Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors.
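A minimal sketch of querying a table with the default (reset) checkpoint through the `table-question-answering` pipeline; the tiny table is made up for illustration:
```python
import pandas as pd
from transformers import pipeline

tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")

# TAPAS expects every cell as a string
table = pd.DataFrame({"City": ["Paris", "London"], "Population": ["2,100,000", "8,900,000"]})
print(tqa(table=table, query="Which city has the larger population?")["answer"])
```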
239596fdb498add58ec12cde71c246e4
apache-2.0
['tapas']
false
Results Size | Reset | Dev Accuracy | Link -------- | --------| -------- | ---- LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset) LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main) **BASE** | **noreset** | **0.4525** | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset) **BASE** | **reset** | **0.4638** | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main) MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset) MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main) SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset) SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main) MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset) MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main) TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset) TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
ee6db502abb73696b6ef7f3cb25268b9
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-tr-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.1786 - Wer: 0.5933
8f079a427a5cfcb35de234179674f9b1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3421 | 14.81 | 400 | 1.1795 | 0.5922 | | 0.113 | 29.63 | 800 | 1.1786 | 0.5933 |
0cb477d9ae7bb61c7ccc873a8b93816b
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'endpoints-template']
false
Fork of [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) > Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. > For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion). For more information about the model, license and limitations check the original model card at [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4).
7bfccc8cffb47407071dc2a6199768bc
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'endpoints-template']
false
License (CreativeML OpenRAIL-M) The full license can be found here: https://huggingface.co/spaces/CompVis/stable-diffusion-license --- This repository implements a custom `handler` task for `text-to-image` for 🤗 Inference Endpoints. The code for the customized pipeline is in [handler.py](https://huggingface.co/philschmid/stable-diffusion-v1-4-endpoints/blob/main/handler.py). There is also a [notebook](https://huggingface.co/philschmid/stable-diffusion-v1-4-endpoints/blob/main/create_handler.ipynb) included on how to create the `handler.py`.
60cec841587d0aeede009363078bf7e4
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'endpoints-template']
false
helper decoder
```python
import base64
from io import BytesIO

import requests as r
from PIL import Image

def decode_base64_image(image_string):
    base64_image = base64.b64decode(image_string)
    buffer = BytesIO(base64_image)
    return Image.open(buffer)

def predict(prompt: str = None):
    # ENDPOINT_URL and HF_TOKEN are assumed to be defined earlier in the card
    response = r.post(
        ENDPOINT_URL,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": prompt}
    )
    resp = response.json()
    return decode_base64_image(resp["image"])

prediction = predict(
    prompt="the first animal on the mars"
)
```
expected output ![sample](sample.jpg)
03ed7f2fe8800059bd7f528c9e184ff0
apache-2.0
['automatic-speech-recognition', 'th']
false
exp_w2v2t_th_vp-nl_s947 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
122453428f0e1e29ee65aa58f8db96da
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
XLS-R-300M - Bulgarian This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset. It achieves the following results on the evaluation set: - Loss: 0.2473 - Wer: 0.3002
e87b4b19142e8b6d82fd3156b4b9ae57
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50.0 - mixed_precision_training: Native AMP
fda23bee2b297b6c2945cc20009f8bf0
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1589 | 3.48 | 400 | 3.0830 | 1.0 | | 2.8921 | 6.96 | 800 | 2.6605 | 0.9982 | | 1.3049 | 10.43 | 1200 | 0.5069 | 0.5707 | | 1.1349 | 13.91 | 1600 | 0.4159 | 0.5041 | | 1.0686 | 17.39 | 2000 | 0.3815 | 0.4746 | | 0.999 | 20.87 | 2400 | 0.3541 | 0.4343 | | 0.945 | 24.35 | 2800 | 0.3266 | 0.4132 | | 0.9058 | 27.83 | 3200 | 0.2969 | 0.3771 | | 0.8672 | 31.3 | 3600 | 0.2802 | 0.3553 | | 0.8313 | 34.78 | 4000 | 0.2662 | 0.3380 | | 0.8068 | 38.26 | 4400 | 0.2528 | 0.3181 | | 0.7796 | 41.74 | 4800 | 0.2537 | 0.3073 | | 0.7621 | 45.22 | 5200 | 0.2503 | 0.3036 | | 0.7611 | 48.7 | 5600 | 0.2477 | 0.2991 |
8216f6314e7faa898b33760fda2cbc07
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-bg --dataset mozilla-foundation/common_voice_8_0 --config bg --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-bg --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
273a5c86905efd326108929a2f089952
apache-2.0
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
false
Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "anuragshas/wav2vec2-large-xls-r-300m-bg"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "bg", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
c2171c254589333ec81c5eb18e479a78
apache-2.0
['synthesis', 'speech', 'speech synthesis']
false
Welcome to Crust 🍕⭕ Crust is a 168-speaker model based on uberduck's pipeline. We've noticed that having multiple speakers instead of one improves the performance of the model and lets it synthesize comparable results with only 1 minute of data. The results are surprisingly good, and because of the smaller dataset, the batch size can be lowered and the model is generally faster than other models.
63628ddcf8a253de5bdddc5c21ba8464
apache-2.0
['synthesis', 'speech', 'speech synthesis']
false
What is a multispeaker model? A multispeaker model is a model that has been trained on multiple speakers: the model first learns an "average" voice of all the speakers and then tunes the individual speakers on that average voice. If you have a lot of speakers, individual results won't be that great, as the model only has roughly 250 MB to work with, but this is great for finetuning different voices on it because the model has learned an "average" voice. This average voice carries the knowledge of all voices included in the dataset. Core: A multispeaker model is a model trained on multiple speakers.
0d0ed9b5bdda1b3fbd6751f975406fbb
apache-2.0
['synthesis', 'speech', 'speech synthesis']
false
How does this make training possible with 1 minute of training data? The model has been trained on 168 datasets, ~20 hours of data, or ~19.8 thousand audio files. This is smaller than LJ Speech, but it has far more variety in voices, which LJ Speech doesn't have. This variety allows the model to learn speech across different genders, accents, pitches, and other important factors, meaning it knows a lot more in terms of voices. Finetuning this on 1 minute of data is possible because the model already has a reasonably close match to your voice somewhere in its latent space. Core: The multispeaker model has more knowledge of multiple people speaking, making it surprisingly good at training on low-minute datasets.
d4d6375e7bb13a03cfdb52dc23b38c4b
apache-2.0
['synthesis', 'speech', 'speech synthesis']
false
What are the downsides? **-Training time.** Training sadly still takes a while. Since you might only be training on 1 minute of data, it takes less time than training on the LJ Speech model; it does not come close to CorentinJ's Real-Time Voice Cloning in speed, but it is more accurate. **-Clean datasets.** We doubt that the model can be trained on datasets with loud noise or background music in them; realistically, it would not be able to be trained on these kinds of datasets, so please use a clean dataset before you train. **-Inference.** Even though this model can be trained on 1 minute of data, we still recommend training it on more; we can't promise good results if the model doesn't have sufficient data. This would ideally be measured in syllables or phonemes, but minutes are a lot easier. **-Audio quality.** Sadly, the model has only been trained on 22050 Hz mono audio files. While this still sounds good with a HiFi-GAN vocoder, it will not have stereo sound (which would not be that useful) or 44100 Hz audio quality on its own. Sadly, the HiFi-GAN vocoder also introduces artifacts into the wav files, which makes synthesis less realistic. We used [**Uberduck's TTS Pipeline on GitHub**](https://github.com/uberduck-ai/uberduck-ml-dev) to train our model.
bcbf1569acea05ad70649ab2349696bc
apache-2.0
['generated_from_trainer']
false
german_trained This model is a fine-tuned version of [flozi00/wav2vec-xlsr-german](https://huggingface.co/flozi00/wav2vec-xlsr-german) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9367 - Wer: 1.0
709bdd579e04de3c5682714e9f10750b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 30
3fefb9a0f588969d606bae781f386738
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 12.0352 | 5.0 | 5 | 12.6165 | 1.0 | | 4.0249 | 10.0 | 10 | 6.6453 | 1.0 | | 2.6661 | 15.0 | 15 | 5.7873 | 1.0 | | 2.4123 | 20.0 | 20 | 4.3250 | 1.0 | | 1.9481 | 25.0 | 25 | 3.9899 | 1.0 | | 1.7533 | 30.0 | 30 | 3.9367 | 1.0 |
cb3cc4613a9822d8a278ec645ac72c09
apache-2.0
['generated_from_trainer', 'whisper-event']
false
luigisaetta/whisper-tiny2-it This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4686 - Wer: 25.9110
a170c0ba64a4f6b204915d896df52c8e
apache-2.0
['generated_from_trainer', 'whisper-event']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5765 | 2.01 | 1000 | 0.5728 | 32.2181 | | 0.3726 | 4.02 | 2000 | 0.5035 | 28.4606 | | 0.2789 | 6.04 | 3000 | 0.4861 | 26.7894 | | 0.2996 | 8.05 | 4000 | 0.4694 | 26.0279 | | 0.2925 | 10.06 | 5000 | 0.4686 | 25.9110 |
4a2ee018b97623d0fd88a4712e2ac58c
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8489
b0b85b68b9266b60dd46d4616ea6c47f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.7098 | 1.0 | 5681 | 2.3952 | | 2.3633 | 2.0 | 11362 | 1.9956 | | 2.1293 | 3.0 | 17043 | 1.8489 |
dcb41ba3cc3361e45e944ea94de145bd
mit
['russian', 'ukrainian']
false
A little about the model The model is trained to answer questions on health topics (open-book question answering / comprehension). For training, a compact T5 model was used: cointegrated/rut5-base-multitask. The training was conducted on a small set of 220 thousand question-answer sentence pairs, so it still does not work as well as we would like. The model is not a medical application, and using it for medical purposes is strongly discouraged!
0373fde253497d79ee22312b71ef2aed
apache-2.0
['speech']
false
Wav2Vec2-Large-Tedlium The Wav2Vec2 large model fine-tuned on the TEDLIUM corpus. The model is initialised with Facebook's [Wav2Vec2 large LV-60k](https://huggingface.co/facebook/wav2vec2-large-lv60) checkpoint pre-trained on 60,000h of audiobooks from the LibriVox project. It is fine-tuned on 452h of TED talks from the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) corpus (Release 3). When using the model, make sure that your speech input is sampled at 16kHz. The model achieves a word error rate (WER) of 8.4% on the dev set and 8.2% on the test set. [Training logs](https://wandb.ai/sanchit-gandhi/tedlium/runs/10c85yc4?workspace=user-sanchit-gandhi) document the training and evaluation progress over 50k steps of fine-tuning. See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how this model was fine-tuned.
8eff744f88730f244be2446c4013ee60
apache-2.0
['speech']
false
Usage To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
```
4067db0b54c6014888374d6576fbd471
apache-2.0
['speech']
false
load model and processor
```python
processor = Wav2Vec2Processor.from_pretrained("sanchit-gandhi/wav2vec2-large-tedlium")
model = Wav2Vec2ForCTC.from_pretrained("sanchit-gandhi/wav2vec2-large-tedlium")
```
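The loading step above and the decoding step below come from one example whose middle is not shown in this excerpt; a minimal sketch of the missing part, assuming the TEDLIUM release 3 validation split and reusing the imports from the Usage block:
```python
# fetch one TEDLIUM example and run the acoustic model to get logits
ds = load_dataset("LIUM/tedlium", "release3", split="validation")

input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
```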
7737b98b22d19f96dd9d86340dc0c0e6
apache-2.0
['speech']
false
take argmax and decode
```python
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)

print("Target: ", ds["text"][0])
print("Transcription: ", transcription[0])
```
bb63909c9fb5a6758d62f18768eb1578
apache-2.0
['speech']
false
Evaluation This code snippet shows how to evaluate **Wav2Vec2-Large-Tedlium** on the TEDLIUM test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

tedlium_eval = load_dataset("LIUM/tedlium", "release3", split="test")

model = Wav2Vec2ForCTC.from_pretrained("sanchit-gandhi/wav2vec2-large-tedlium").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("sanchit-gandhi/wav2vec2-large-tedlium")

def map_to_pred(batch):
    # batched map with batch_size=1: batch["audio"] is a list with a single example
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = tedlium_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```
cfee5d1704319ebfa647bfe327fa7cbc
mit
['zul', 'fill-mask', 'pytorch', 'roberta', 'masked-lm']
false
How to use ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_zul_roberta") model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_zul_roberta") ```
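A short follow-up sketch using the same checkpoint through the `fill-mask` pipeline; the prompt is only an illustration:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jannesg/takalane_zul_roberta")
# use the tokenizer's own mask token in the prompt
print(fill(f"Sawubona {fill.tokenizer.mask_token}")[:3])
```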
25a34ec112db495d8cfcfaff9fe82179
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab-1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9634 - Wer: 0.4398
d6c2d81a9a53c43e091658f75dccb8ff
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 6 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 50 - mixed_precision_training: Native AMP
e19519a20c8e7f355ba7a73cfd6713b2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.8991 | 5.26 | 500 | 1.4319 | 0.7522 | | 0.8555 | 10.53 | 1000 | 0.7895 | 0.5818 | | 0.4584 | 15.79 | 1500 | 0.7198 | 0.5211 | | 0.3096 | 21.05 | 2000 | 0.7983 | 0.5118 | | 0.2165 | 26.32 | 2500 | 0.7893 | 0.4745 | | 0.163 | 31.58 | 3000 | 0.8779 | 0.4589 | | 0.1144 | 36.84 | 3500 | 0.9256 | 0.4540 | | 0.0886 | 42.11 | 4000 | 0.9184 | 0.4530 | | 0.0668 | 47.37 | 4500 | 0.9634 | 0.4398 |
6f1fa521948eff9a4d434fd99576cc3c
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1636 - F1: 0.8567
ce4f589fecf7308f0b2948593eb1def9
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2905 | 1.0 | 715 | 0.1810 | 0.8263 | | 0.1477 | 2.0 | 1430 | 0.1561 | 0.8488 | | 0.095 | 3.0 | 2145 | 0.1636 | 0.8567 |
a1083e2187d88e4c25b027af922859f6
mit
['generated_from_trainer']
false
Model description This model was trained using [pile-detoxify](https://huggingface.co/datasets/tomekkorbak/pile-detoxify), which is data from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on toxicity detected by [Detoxify](https://github.com/unitaryai/detoxify).
56b3d7db3854dfe9c47e324f718188d1
mit
['generated_from_trainer']
false
Intended uses & limitations This model has been trained to generate text that receives a low score for toxicity from [Detoxify](https://github.com/unitaryai/detoxify). While we have promising results with the methods used to avoid toxic text, we cannot guarantee that it will output text that is fully aligned with non-toxicity in every situation. This model and its associated datasets are intended for research purposes only and should not be deployed anywhere. Please take care to avoid misusing the datasets used to train this model (where toxicity and personally identifiable information are annotated) or putting anybody in danger by publicizing their information.
23ab4af651665d790e97b88711737068
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'goofy_pasteur', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
ff6e6c55261d4e8012a9f5b5714e2a50
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-chaii This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4651
5f867ae0f4f17fae07b37f82266d5069
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.92 | 1.0 | 899 | 0.4482 | | 0.8055 | 2.0 | 1798 | 0.3225 | | 0.7485 | 3.0 | 2697 | 0.4651 |
9736d47c255087a80dec36b1e2a8bc22
apache-2.0
['generated_from_trainer']
false
STT_Model_4 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2311 - Wer: 0.1373
5a8276b69884fc31ff7c47c70dd382ca
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100
2866d8d0ae03afdd5dad3a8799de31be
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4196 | 5.68 | 500 | 0.9866 | 0.6983 | | 0.3696 | 11.36 | 1000 | 0.8788 | 0.4010 | | 0.1182 | 17.05 | 1500 | 0.2187 | 0.1947 | | 0.0658 | 22.73 | 2000 | 0.2578 | 0.1757 | | 0.0421 | 28.41 | 2500 | 0.2178 | 0.1609 | | 0.0346 | 34.09 | 3000 | 0.2038 | 0.1584 | | 0.0285 | 39.77 | 3500 | 0.2187 | 0.1594 | | 0.0228 | 45.45 | 4000 | 0.2114 | 0.1445 | | 0.0262 | 51.14 | 4500 | 0.2201 | 0.1631 | | 0.0162 | 56.82 | 5000 | 0.2078 | 0.1424 | | 0.0135 | 62.5 | 5500 | 0.1989 | 0.1393 | | 0.0128 | 68.18 | 6000 | 0.2118 | 0.1410 | | 0.0104 | 73.86 | 6500 | 0.2158 | 0.1361 | | 0.0081 | 79.55 | 7000 | 0.2154 | 0.1348 | | 0.0067 | 85.23 | 7500 | 0.2107 | 0.1358 | | 0.0067 | 90.91 | 8000 | 0.2161 | 0.1373 | | 0.0056 | 96.59 | 8500 | 0.2311 | 0.1373 |
c4449ad5fc0bea89ae9731cc954d2a8a
apache-2.0
['vision', 'image-classification']
false
Dataset
```python
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 329
    })
    validation: Dataset({
        features: ['image', 'label'],
        num_rows: 56
    })
})
```
36 Break and 293 Normal in train; 5 Break and 51 Normal in validation.
14c26f9bd3104866d55530bf9ef94d3a
apache-2.0
['vision', 'image-classification']
false
Load image
```python
import torch
from transformers import ViTFeatureExtractor, ViTForImageClassification, AutoModel
from PIL import Image
import requests

url = 'https://datasets-server.huggingface.co/assets/ShihTing/IsCausewayOffset/--/ShihTing--IsCausewayOffset/validation/0/image/image.jpg'
image = Image.open(requests.get(url, stream=True).raw)
```
d0ea7ea6e9c1bded9dd7d5bb8d8087f4
apache-2.0
['vision', 'image-classification']
false
Load model
```python
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

device = torch.device('cpu')
extractor = AutoFeatureExtractor.from_pretrained('ShihTing/PanJuOffset_TwoClass')
model = AutoModelForImageClassification.from_pretrained('ShihTing/PanJuOffset_TwoClass')
```
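The card stops after loading the model; a minimal sketch of the prediction step, continuing from the `image`, `extractor`, and `model` defined above:
```python
# preprocess the image and run a forward pass
inputs = extractor(images=image, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits

# map the highest-scoring class index back to its label name
predicted_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_idx])
```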
b36f1bb1ce0fa1a5429f3fe57faeec67
mit
['generated_from_trainer']
false
Bio_ClinicalBERT-zero-shot-finetuned-50cad This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1475 - Accuracy: 0.5 - F1: 0.6667
4ce4a0e157f9e2badfd1e423a69f1ae8
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.3775 | 0.06 | 500 | 0.0302 | | 0.0207 | 0.11 | 1000 | 0.0188 | | 0.0182 | 0.17 | 1500 | 0.0179 | | 0.0171 | 0.22 | 2000 | 0.0152 | | 0.0178 | 0.28 | 2500 | 0.0161 | | 0.0147 | 0.33 | 3000 | 0.0150 | | 0.0157 | 0.39 | 3500 | 0.0137 | | 0.0137 | 0.44 | 4000 | 0.0126 | | 0.0133 | 0.5 | 4500 | 0.0137 | | 0.012 | 0.56 | 5000 | 0.0120 | | 0.0122 | 0.61 | 5500 | 0.0117 | | 0.0129 | 0.67 | 6000 | 0.0118 | | 0.0113 | 0.72 | 6500 | 0.0114 | | 0.0106 | 0.78 | 7000 | 0.0109 | | 0.0119 | 0.83 | 7500 | 0.0108 | | 0.0122 | 0.89 | 8000 | 0.0102 | | 0.0105 | 0.94 | 8500 | 0.0101 | | 0.0094 | 1.0 | 9000 | 0.0098 | | 0.01 | 1.06 | 9500 | 0.0097 | | 0.0097 | 1.11 | 10000 | 0.0096 |
63beb5c6a1b62387326ac60377e4533f
apache-2.0
['generated_from_trainer']
false
T5-summarizer-simple-wiki-v2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0866
2281908ea7b5e4eab419bc3258264e4b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.2575 | 1.0 | 14719 | 2.1173 | | 2.2663 | 2.0 | 29438 | 2.0926 | | 2.2092 | 3.0 | 44157 | 2.0866 |
5bcbcb5c95f6e84c6118c2989757de74
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.1894 - Accuracy: 0.9448
ef8401576ed24d59d1bdd151eeab2209
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6133 | 1.0 | 318 | 1.0679 | 0.7290 | | 0.8231 | 2.0 | 636 | 0.5164 | 0.8652 | | 0.4289 | 3.0 | 954 | 0.3019 | 0.9168 | | 0.2722 | 4.0 | 1272 | 0.2336 | 0.9335 | | 0.214 | 5.0 | 1590 | 0.2117 | 0.94 | | 0.1914 | 6.0 | 1908 | 0.2007 | 0.9445 | | 0.1785 | 7.0 | 2226 | 0.1947 | 0.9435 | | 0.1716 | 8.0 | 2544 | 0.1919 | 0.9468 | | 0.1674 | 9.0 | 2862 | 0.1901 | 0.9452 | | 0.1659 | 10.0 | 3180 | 0.1894 | 0.9448 |
447e2b66dc72ee80dc9a76f9363b8e25
apache-2.0
['translation', 'generated_from_trainer']
false
kyoto_marian_mod_3_5 This model is a fine-tuned version of [Hoax0930/kyoto_marian_mod_2](https://huggingface.co/Hoax0930/kyoto_marian_mod_2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8052 - Bleu: 18.4305
decb86f3ca0d1d1acdcee72b590f7cd0
apache-2.0
['translation', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP
b64fc47a0260b8a8e23114b645685021
mit
['generated_from_trainer']
false
finetuning-insult-model-deberta This model is a fine-tuned version of [yangheng/deberta-v3-base-absa-v1.1](https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9472 - Accuracy: 0.7458 - F1: 0.7630 - Precision: 0.7332 - Recall: 0.7953
20aa5ad8f160f02c612eb9aac1187200
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-0.02-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 2.8160 - Bleu: 7.448 - Gen Len: 44.2241
1b2132a9ddadcd025ca190aaaa629ba8