license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_data_aug_stsb
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set:
- Loss: 2.8342
- Pearson: 0.1765
- Spearmanr: 0.1800
- Combined Score: 0.1782
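The "Combined Score" above appears to be the arithmetic mean of the Pearson and Spearman correlations ((0.1765 + 0.1800) / 2 ≈ 0.1782). A minimal pure-Python sketch of how these two statistics are computed, for illustration only (real evaluation uses `scipy.stats`):

```python
# Sketch: Pearson correlation, Spearman correlation (Pearson on ranks,
# no tie handling here), and their mean as a "combined score".

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    rank = lambda v: [sorted(v).index(e) for e in v]
    return pearson(rank(x), rank(y))

preds = [0.1, 0.4, 0.35, 0.8]   # toy predictions
labels = [0.0, 0.5, 0.3, 0.9]   # toy gold similarity scores
combined = (pearson(preds, labels) + spearman(preds, labels)) / 2
```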
08935f70a27ba13798eb3e612044d5dd
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.0254        | 1.0   | 2518  | 2.8776          | 0.1575  | 0.1742    | 0.1659         |
| 0.5854        | 2.0   | 5036  | 3.1464          | 0.1591  | 0.1679    | 0.1635         |
| 0.4255        | 3.0   | 7554  | 2.8342          | 0.1765  | 0.1800    | 0.1782         |
| 0.2765        | 4.0   | 10072 | 2.8524          | 0.1815  | 0.1838    | 0.1827         |
| 0.1862        | 5.0   | 12590 | 2.9184          | 0.1736  | 0.1768    | 0.1752         |
| 0.1339        | 6.0   | 15108 | 2.9817          | 0.1688  | 0.1728    | 0.1708         |
| 0.1029        | 7.0   | 17626 | 2.9702          | 0.1618  | 0.1643    | 0.1631         |
| 0.0806        | 8.0   | 20144 | 3.0033          | 0.1588  | 0.1624    | 0.1606         |
b89fded8d9bdb7c4d94390db31d303bd
bsd-3-clause
[]
false
Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is denoted as **CodeGen-Multi 6B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 6B* and further pre-trained on a dataset of multiple programming languages, and "6B" refers to the number of trainable parameters.
768df5c3f3b15b6e1347e3b93a3e8c13
bsd-3-clause
[]
false
Training data
This checkpoint (CodeGen-Multi 6B) was first initialized with *CodeGen-NL 6B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
f0cfc50e3ed42342fc09135365420f00
bsd-3-clause
[]
false
How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-multi")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids

generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
894e7fd5c7092a1d4dcbce0a57af6a03
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1561 - Accuracy: 0.9285
351da6f5ba1ec8753ab998be6045f53f
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 0.1635          | 0.9295   |
| 0.111         | 2.0   | 500  | 0.1515          | 0.936    |
| 0.111         | 3.0   | 750  | 0.1561          | 0.9285   |
0be4f1e560c03f667a8f057bb4b2ddfc
afl-3.0
[]
false
afro-xlmr-small
AfroXLMR-small was created by [first reducing the vocabulary token size](https://aclanthology.org/2020.sustainlp-1.16/) of XLM-R-base from 250k to 70k, followed by MLM adaptation on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Naija, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu) covering the major African language families, plus 3 high-resource languages (Arabic, French, and English).
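The vocabulary-reduction step described above amounts to keeping only the embedding rows of the retained tokens and remapping token ids. A toy sketch of that idea (illustrative only, not the authors' code; names are invented):

```python
# Vocabulary reduction sketch: given an embedding matrix (one row per
# token) and the subset of old token ids that survive into the smaller
# vocabulary, the new embedding matrix is just the kept rows, and each
# old id is remapped to its position in the kept list.

def reduce_vocab(embeddings, kept_token_ids):
    """embeddings: list of row vectors; kept_token_ids: old ids to keep."""
    new_embeddings = [embeddings[i] for i in kept_token_ids]
    old_to_new = {old: new for new, old in enumerate(kept_token_ids)}
    return new_embeddings, old_to_new

# Toy 5-token vocabulary reduced to 3 tokens.
emb = [[0.0], [1.0], [2.0], [3.0], [4.0]]
new_emb, mapping = reduce_vocab(emb, [0, 2, 4])
```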
9f805ca95c3727cbe5a25903392bc6a9
afl-3.0
[]
false
Eval results on MasakhaNER (F-score)

| language | XLM-R-miniLM | XLM-R-base | XLM-R-large | afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini |
|:--------:|:------------:|:----------:|:-----------:|:--------------:|:---------------:|:--------------:|
| amh      | 69.5         | 70.6       | 76.2        | 76.1           | 70.1            | 69.7           |
| hau      | 74.5         | 89.5       | 90.5        | 91.2           | 91.4            | 87.7           |
| ibo      | 81.9         | 84.8       | 84.1        | 87.4           | 86.6            | 83.5           |
| kin      | 68.6         | 73.3       | 73.8        | 78.0           | 77.5            | 74.1           |
| lug      | 64.7         | 79.7       | 81.6        | 82.9           | 83.2            | 77.4           |
| luo      | 11.7         | 74.9       | 73.6        | 75.1           | 75.4            | 17.5           |
| pcm      | 83.2         | 87.3       | 89.0        | 89.6           | 89.0            | 85.5           |
| swa      | 86.3         | 87.4       | 89.4        | 88.6           | 88.7            | 86.0           |
| wol      | 51.7         | 63.9       | 67.9        | 67.4           | 65.9            | 59.0           |
| yor      | 72.0         | 78.3       | 78.9        | 82.1           | 81.3            | 75.1           |
2037b6031a6a5d4c0670dd9fb05f1993
afl-3.0
[]
false
BibTeX entry and citation info
```
@inproceedings{alabi-etal-2022-adapting,
    title = "Adapting Pre-trained Language Models to {A}frican Languages via Multilingual Adaptive Fine-Tuning",
    author = "Alabi, Jesujoba O. and Adelani, David Ifeoluwa and Mosbach, Marius and Klakow, Dietrich",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.382",
    pages = "4336--4349",
    abstract = "Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT) {---} fine-tuning a multilingual PLM on monolingual texts of a language using the pre-training objective. However, adapting to target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models because they have been specialized for a single language. In this paper, we perform multilingual adaptive fine-tuning on 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we removed vocabulary tokens from the embedding layer that corresponds to non-African writing scripts before MAFT, thus reducing the model size by around 50{\%}. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive to applying LAFT on individual languages while requiring significantly less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter efficient fine-tuning methods.",
}
```
f418ed9c109c1a5ef977d3c9bdecbcde
apache-2.0
['generated_from_keras_callback']
false
bert-base-cased-finetuned-log-parser-winlogbeat_nowhitespace_large This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
4b2f2dee39f19509b8af11da0d4c7d3e
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 15321, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 15321, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-06, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
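The nested optimizer config above encodes a schedule: linear warmup to 2e-5 over `warmup_steps`, then polynomial decay (power 1.0, i.e. linear) toward 0 over `decay_steps`. A pure-Python re-implementation of that schedule, for intuition only (the assumption here is that decay is counted from the end of warmup, which may differ slightly from the serialized Keras behavior):

```python
# Learning-rate schedule sketch matching the config above:
# linear warmup for `warmup_steps`, then polynomial decay to `end_lr`.

def lr_at(step, init_lr=2e-05, warmup_steps=15321, decay_steps=15321,
          end_lr=0.0, power=1.0):
    if step < warmup_steps:
        # Linear warmup from 0 to init_lr.
        return init_lr * step / warmup_steps
    # Polynomial decay (power=1.0 -> linear) after warmup.
    decay_step = min(step - warmup_steps, decay_steps)
    frac = 1 - decay_step / decay_steps
    return (init_lr - end_lr) * frac ** power + end_lr
```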
15acaf76d05569dda7015faf4eb775b1
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_xlsr-53_s870 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
da41435f34a8dea915ef52197c01d213
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4568 - Wer: 0.3422
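The Wer metric above is the word error rate: word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal sketch of the computation (libraries like `jiwer` implement this with more features):

```python
# Word error rate via word-level Levenshtein distance.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```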
77334a71c5f1c0156ecf3286b7b62957
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3896        | 4.0   | 500  | 1.1573          | 0.8886 |
| 0.5667        | 8.0   | 1000 | 0.4841          | 0.4470 |
| 0.2126        | 12.0  | 1500 | 0.4201          | 0.3852 |
| 0.1235        | 16.0  | 2000 | 0.4381          | 0.3623 |
| 0.0909        | 20.0  | 2500 | 0.4784          | 0.3748 |
| 0.0611        | 24.0  | 3000 | 0.4390          | 0.3577 |
| 0.0454        | 28.0  | 3500 | 0.4568          | 0.3422 |
439fc53444edc630516429678e78003c
other
[]
false
https://huggingface.co/dixipi9178/MyCoolModel/resolve/main/corneos7thHeavenMix_v2.safetensors
https://huggingface.co/dixipi9178/MyCoolModel/resolve/main/novelai%20f111%20sd1.4%20add%20difference%201.0.ckpt
https://huggingface.co/dixipi9178/MyCoolModel/resolve/main/Anything-V3.0-pruned-fp16.ckpt
!gdown https://huggingface.co/dixipi9178/MyCoolModel/resolve/main/novelai%20f111%20sd1.4%20add%20difference%201.0.ckpt -O /content/stable-diffusion-webui/models/Stable-diffusion/nai_f111.ckpt
cf851e8167729142c23aa3852c7cb480
mit
['generated_from_keras_callback']
false
LeoFelix/bert-finetuned-squad This model is a fine-tuned version of [pierreguillou/bert-base-cased-squad-v1.1-portuguese](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0193 - Epoch: 2
b8c43b2159dd40f46fdd73ae584fea36
mit
['generated_from_keras_callback']
false
Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 852, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
9d2105bdaef9db8ceb1b1ae035733f28
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-kinyarwanda This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3917 - Wer: 0.3246
17fb7735b017433e856a41406a832927
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 8
- mixed_precision_training: Native AMP
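The total_train_batch_size of 24 above is the per-device batch size times the gradient accumulation steps (12 × 2): gradients from two micro-batches are combined before each optimizer update. A toy sketch of that mechanism (illustrative, not Trainer internals):

```python
# Gradient accumulation sketch: average the gradients of several
# micro-batches into one effective update, giving a larger effective
# batch size without more memory.

def accumulate(grads_per_microbatch, accumulation_steps):
    """Average a list of per-micro-batch gradient vectors."""
    assert len(grads_per_microbatch) == accumulation_steps
    n = accumulation_steps
    return [sum(gs) / n for gs in zip(*grads_per_microbatch)]

train_batch_size, accumulation_steps = 12, 2
total_train_batch_size = train_batch_size * accumulation_steps
update = accumulate([[1.0, 2.0], [3.0, 4.0]], accumulation_steps)
```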
d571ee571e10ba58d985bfca7473cc09
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 9.0634        | 0.12  | 400   | 3.0554          | 1.0    |
| 2.8009        | 0.24  | 800   | 1.5927          | 0.9554 |
| 0.9022        | 0.36  | 1200  | 0.7328          | 0.6445 |
| 0.6213        | 0.48  | 1600  | 0.6138          | 0.5510 |
| 0.5299        | 0.6   | 2000  | 0.6072          | 0.5223 |
| 0.4999        | 0.72  | 2400  | 0.5449          | 0.4969 |
| 0.4731        | 0.84  | 2800  | 0.5261          | 0.4828 |
| 0.458         | 0.96  | 3200  | 0.5058          | 0.4607 |
| 0.4158        | 1.09  | 3600  | 0.4892          | 0.4463 |
| 0.4037        | 1.21  | 4000  | 0.4759          | 0.4429 |
| 0.4021        | 1.33  | 4400  | 0.4615          | 0.4330 |
| 0.3934        | 1.45  | 4800  | 0.4593          | 0.4315 |
| 0.3808        | 1.57  | 5200  | 0.4736          | 0.4344 |
| 0.3838        | 1.69  | 5600  | 0.4569          | 0.4249 |
| 0.3726        | 1.81  | 6000  | 0.4473          | 0.4140 |
| 0.3623        | 1.93  | 6400  | 0.4403          | 0.4097 |
| 0.3517        | 2.05  | 6800  | 0.4389          | 0.4061 |
| 0.333         | 2.17  | 7200  | 0.4383          | 0.4104 |
| 0.3354        | 2.29  | 7600  | 0.4360          | 0.3955 |
| 0.3257        | 2.41  | 8000  | 0.4226          | 0.3942 |
| 0.3275        | 2.53  | 8400  | 0.4206          | 0.4040 |
| 0.3262        | 2.65  | 8800  | 0.4172          | 0.3875 |
| 0.3206        | 2.77  | 9200  | 0.4209          | 0.3877 |
| 0.323         | 2.89  | 9600  | 0.4177          | 0.3825 |
| 0.3099        | 3.01  | 10000 | 0.4101          | 0.3691 |
| 0.3008        | 3.14  | 10400 | 0.4055          | 0.3709 |
| 0.2918        | 3.26  | 10800 | 0.4085          | 0.3800 |
| 0.292         | 3.38  | 11200 | 0.4089          | 0.3713 |
| 0.292         | 3.5   | 11600 | 0.4092          | 0.3730 |
| 0.2785        | 3.62  | 12000 | 0.4151          | 0.3687 |
| 0.2941        | 3.74  | 12400 | 0.4004          | 0.3639 |
| 0.2838        | 3.86  | 12800 | 0.4108          | 0.3703 |
| 0.2854        | 3.98  | 13200 | 0.3911          | 0.3596 |
| 0.2683        | 4.1   | 13600 | 0.3944          | 0.3575 |
| 0.2647        | 4.22  | 14000 | 0.3836          | 0.3538 |
| 0.2704        | 4.34  | 14400 | 0.4006          | 0.3540 |
| 0.2664        | 4.46  | 14800 | 0.3974          | 0.3553 |
| 0.2662        | 4.58  | 15200 | 0.3890          | 0.3470 |
| 0.2615        | 4.7   | 15600 | 0.3856          | 0.3507 |
| 0.2553        | 4.82  | 16000 | 0.3814          | 0.3497 |
| 0.2587        | 4.94  | 16400 | 0.3837          | 0.3440 |
| 0.2522        | 5.06  | 16800 | 0.3834          | 0.3486 |
| 0.2451        | 5.19  | 17200 | 0.3897          | 0.3414 |
| 0.2423        | 5.31  | 17600 | 0.3864          | 0.3481 |
| 0.2434        | 5.43  | 18000 | 0.3808          | 0.3416 |
| 0.2525        | 5.55  | 18400 | 0.3795          | 0.3408 |
| 0.2427        | 5.67  | 18800 | 0.3841          | 0.3411 |
| 0.2411        | 5.79  | 19200 | 0.3804          | 0.3366 |
| 0.2404        | 5.91  | 19600 | 0.3800          | 0.3328 |
| 0.2372        | 6.03  | 20000 | 0.3749          | 0.3335 |
| 0.2244        | 6.15  | 20400 | 0.3820          | 0.3327 |
| 0.2381        | 6.27  | 20800 | 0.3789          | 0.3325 |
| 0.2294        | 6.39  | 21200 | 0.3867          | 0.3298 |
| 0.2378        | 6.51  | 21600 | 0.3843          | 0.3281 |
| 0.2312        | 6.63  | 22000 | 0.3813          | 0.3277 |
| 0.2411        | 6.75  | 22400 | 0.3780          | 0.3268 |
| 0.2315        | 6.87  | 22800 | 0.3790          | 0.3280 |
| 0.241         | 6.99  | 23200 | 0.3776          | 0.3281 |
| 0.2313        | 7.11  | 23600 | 0.3929          | 0.3283 |
| 0.2423        | 7.24  | 24000 | 0.3905          | 0.3280 |
| 0.2337        | 7.36  | 24400 | 0.3979          | 0.3249 |
| 0.2368        | 7.48  | 24800 | 0.3980          | 0.3257 |
| 0.2409        | 7.6   | 25200 | 0.3937          | 0.3229 |
| 0.2416        | 7.72  | 25600 | 0.3867          | 0.3237 |
| 0.2364        | 7.84  | 26000 | 0.3912          | 0.3253 |
| 0.234         | 7.96  | 26400 | 0.3917          | 0.3246 |
0d2bf19e47706f9c76c66700f3b4c5f5
mit
[]
false
Model description
This SLED model is based on the BART model, which is described in its [model card](https://huggingface.co/facebook/bart-base). BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). When used as a BART-SLED model, it can be applied to long-text tasks. This model was fine-tuned on the [GovReport](https://arxiv.org/abs/2104.02112) dataset.
ad927b026c1eeb720200e0001201c2b8
mit
[]
false
BibTeX entry and citation info
Please cite both the SLED [paper](https://arxiv.org/abs/2208.00748.pdf) and the BART [paper](https://arxiv.org/abs/1910.13461) by Lewis et al., as well as GovReport by Huang et al.:

```bibtex
@inproceedings{Ivgi2022EfficientLU,
  title={Efficient Long-Text Understanding with Short-Text Models},
  author={Maor Ivgi and Uri Shaham and Jonathan Berant},
  year={2022}
}
```

```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
  author    = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer},
  title     = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension},
  journal   = {CoRR},
  volume    = {abs/1910.13461},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint    = {1910.13461},
  timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

```bibtex
@inproceedings{huang2021govreport,
  title = "Efficient Attentions for Long Document Summarization",
  author = "Huang, Luyang and Cao, Shuyang and Parulian, Nikolaus and Ji, Heng and Wang, Lu",
  booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
  month = jun,
  year = "2021",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2021.naacl-main.112",
  doi = "10.18653/v1/2021.naacl-main.112",
  pages = "1419--1436"
}
```
855cef2d434896ca6f02521f2096b532
other
['generated_from_trainer']
false
opt-125m-finetuned-wikitext2 This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3409
8cd13772cb220576c103c94bb8092952
other
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4123        | 1.0   | 2370 | 3.3621          |
| 3.2096        | 2.0   | 4740 | 3.3452          |
| 3.0822        | 3.0   | 7110 | 3.3409          |
bd7a3bbbe31e9e251fcce638bcc86a85
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0594
- Precision: 0.9331
- Recall: 0.9529
- F1: 0.9429
- Accuracy: 0.9872
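The F1 above is the harmonic mean of the reported precision and recall; checking the arithmetic:

```python
# F1 = harmonic mean of precision and recall.

def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

score = f1(0.9331, 0.9529)  # matches the reported 0.9429 (to 4 decimals)
```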
399b1ed0f7bac7247ee19966162b86e4
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0872        | 1.0   | 1756 | 0.0631          | 0.9128    | 0.9359 | 0.9242 | 0.9827   |
| 0.0338        | 2.0   | 3512 | 0.0578          | 0.9322    | 0.9510 | 0.9415 | 0.9867   |
| 0.0174        | 3.0   | 5268 | 0.0594          | 0.9331    | 0.9529 | 0.9429 | 0.9872   |
1d0ce2eca033c77f651bc39ff675caa6
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-squad This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1433
998ad2ec60b0cc2c1e9d748ba4b3eb22
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4107        | 1.0   | 3693  | 2.2321          |
| 2.1359        | 2.0   | 7386  | 2.1499          |
| 1.9214        | 3.0   | 11079 | 2.1433          |
5cbc527912643e373c2dabd740323bac
apache-2.0
['generated_from_keras_callback']
false
pedramyamini/distilbert-base-multilingual-cased-finetuned-mobile-banks-cafebazaar2022-09-12-08-14-58
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.4986
- Validation Loss: 0.7589
- Epoch: 7
806aaa4196b4fb210e03abbc286e23db
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 21392, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
b9220bd777dafdff3835d924595e0151
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7544     | 0.7034          | 0     |
| 0.6815     | 0.6905          | 1     |
| 0.6463     | 0.6960          | 2     |
| 0.6135     | 0.6896          | 3     |
| 0.5764     | 0.7041          | 4     |
| 0.5447     | 0.7340          | 5     |
| 0.5170     | 0.7562          | 6     |
| 0.4986     | 0.7589          | 7     |
9bde80bfa9288cfbfa0b827b754745bd
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7170 - Wer: 0.4784
fb3c6a4e23ae0719ebd6f5839a950f31
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
a123a6abe3046dfe1951568eb9bbb208
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1915        | 13.89 | 500  | 3.1318          | 1.0    |
| 1.4993        | 27.78 | 1000 | 0.6736          | 0.5485 |
| 0.3416        | 41.67 | 1500 | 0.7111          | 0.5092 |
| 0.1937        | 55.56 | 2000 | 0.7170          | 0.4784 |
720821d241ff5f94e5a1c8dd8ffa003a
openrail
['generated_from_trainer', 'bash', 'shell', 'code', 'codegen']
false
SantaCoder 🎅 fine-tuned on bash/shell 🐚 scripts This model is a fine-tuned version of [BigCode/SantaCoder](https://huggingface.co/bigcode/santacoder) on The Stack [bash/shell scripts](https://huggingface.co/datasets/bigcode/the-stack-dedup). It achieves the following results on the evaluation set: - Loss: 1.2272
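A cross-entropy loss like the 1.2272 reported above maps directly to perplexity via exponentiation, which gives a rough intuition for language-model quality:

```python
import math

# Perplexity is exp(cross-entropy loss in nats); the evaluation loss of
# 1.2272 above corresponds to a perplexity of about 3.41.

def perplexity(cross_entropy_loss):
    return math.exp(cross_entropy_loss)

ppl = perplexity(1.2272)
```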
fc3722d0563197744234a246d580822c
openrail
['generated_from_trainer', 'bash', 'shell', 'code', 'codegen']
false
Model description
The [SantaCoder](https://huggingface.co/bigcode/santacoder) models are a series of 1.1B-parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests). The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150) and was trained with near-deduplication and comment-to-code ratio as filtering criteria, using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255). In addition, there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
278ad3c4ec5917172028cbff809d747d
openrail
['generated_from_trainer', 'bash', 'shell', 'code', 'codegen']
false
Intended uses & limitations
The model has been trained on source code in Python, Java, and JavaScript and fine-tuned on bash/shell scripts. The predominant natural language in the source is English, although other languages are also present. As such, the model can generate code snippets given some context, but the generated code is not guaranteed to work as intended: it can be inefficient and contain bugs or exploits.
6afda4dca092ca933a59c8a7783c2afc
openrail
['generated_from_trainer', 'bash', 'shell', 'code', 'codegen']
false
Training and evaluation data
The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets. **This is the near-deduplicated version with 3TB data.**
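Near-deduplication, mentioned above, is commonly approximated by comparing token shingles of documents with Jaccard similarity and dropping pairs above a threshold. A minimal sketch of the idea (not the BigCode pipeline itself, which uses MinHash for scalability):

```python
# Toy near-duplicate detection: Jaccard similarity over word trigrams.

def shingles(text, n=3):
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(max(len(toks) - n + 1, 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def near_duplicates(docs, threshold=0.85):
    """Return index pairs of documents whose similarity meets the threshold."""
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if jaccard(docs[i], docs[j]) >= threshold:
                pairs.append((i, j))
    return pairs

docs = ["echo hello world now", "echo hello world now", "rm -rf tmp dir x"]
dupes = near_duplicates(docs)
```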
3fda5b8428996a762aadf8f028349dd0
openrail
['generated_from_trainer', 'bash', 'shell', 'code', 'codegen']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 10000
0a77cdb0e189c2f7270e5e85b2875040
openrail
['generated_from_trainer', 'bash', 'shell', 'code', 'codegen']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6101        | 0.05  | 500   | 1.5078          |
| 1.6156        | 0.1   | 1000  | 1.4687          |
| 1.4916        | 0.15  | 1500  | 1.4728          |
| 1.4027        | 0.2   | 2000  | 1.4237          |
| 1.499         | 0.25  | 2500  | 1.4067          |
| 1.4378        | 0.3   | 3000  | 1.3838          |
| 1.3698        | 0.35  | 3500  | 1.3767          |
| 1.3021        | 0.4   | 4000  | 1.3562          |
| 4.0521        | 0.45  | 4500  | 1.3433          |
| 0.9722        | 0.5   | 5000  | 1.3461          |
| 1.3836        | 0.55  | 5500  | 1.2955          |
| 1.3727        | 0.6   | 6000  | 1.2809          |
| 1.3332        | 0.65  | 6500  | 1.2665          |
| 1.2232        | 0.7   | 7000  | 1.2573          |
| 1.2373        | 0.75  | 7500  | 1.2463          |
| 1.3759        | 0.8   | 8000  | 1.2391          |
| 1.3021        | 0.85  | 8500  | 1.2325          |
| 1.369         | 0.9   | 9000  | 1.2292          |
| 1.4911        | 0.95  | 9500  | 1.2275          |
| 1.1677        | 1.0   | 10000 | 1.2272          |
4065a361e20b8e0eaf00b71e9db8b3c4
openrail
['generated_from_trainer', 'bash', 'shell', 'code', 'codegen']
false
Citation
```
@misc {manuel_romero_2023,
    author    = { {Manuel Romero} },
    title     = { santacoder-finetuned-the-stack-bash-shell (Revision d3e56a7) },
    year      = 2023,
    url       = { https://huggingface.co/mrm8488/santacoder-finetuned-the-stack-bash-shell },
    doi       = { 10.57967/hf/0320 },
    publisher = { Hugging Face }
}
```
1e4efe8ab7d71b1c11e4e62680ad620c
apache-2.0
['automatic-speech-recognition', 'timit_asr', 'generated_from_trainer']
false
sew-d-small-100k-timit This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 1.7541 - Wer: 0.8061
8db6730df1bc2358268230065a9f1a70
apache-2.0
['automatic-speech-recognition', 'timit_asr', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2068        | 0.69  | 100  | 4.0802          | 1.0    |
| 2.9805        | 1.38  | 200  | 2.9792          | 1.0    |
| 2.9781        | 2.07  | 300  | 2.9408          | 1.0    |
| 2.9655        | 2.76  | 400  | 2.9143          | 1.0    |
| 2.8953        | 3.45  | 500  | 2.8775          | 1.0    |
| 2.7718        | 4.14  | 600  | 2.7787          | 1.0    |
| 2.6711        | 4.83  | 700  | 2.6401          | 0.9786 |
| 2.6403        | 5.52  | 800  | 2.5435          | 1.0392 |
| 2.4052        | 6.21  | 900  | 2.4580          | 1.0706 |
| 2.1708        | 6.9   | 1000 | 2.2800          | 1.0090 |
| 2.2555        | 7.59  | 1100 | 2.1493          | 0.9579 |
| 2.3673        | 8.28  | 1200 | 2.0709          | 0.9051 |
| 2.091         | 8.97  | 1300 | 2.0258          | 0.8926 |
| 1.8433        | 9.66  | 1400 | 1.9645          | 0.8243 |
| 1.6824        | 10.34 | 1500 | 1.9211          | 0.8707 |
| 2.2282        | 11.03 | 1600 | 1.8914          | 0.8695 |
| 1.9027        | 11.72 | 1700 | 1.8718          | 0.8343 |
| 1.6303        | 12.41 | 1800 | 1.8646          | 0.8232 |
| 1.648         | 13.1  | 1900 | 1.8297          | 0.8177 |
| 2.0429        | 13.79 | 2000 | 1.8127          | 0.8642 |
| 1.8833        | 14.48 | 2100 | 1.8005          | 0.8307 |
| 1.5996        | 15.17 | 2200 | 1.7926          | 0.8467 |
| 1.4876        | 15.86 | 2300 | 1.7795          | 0.8341 |
| 1.8925        | 16.55 | 2400 | 1.7716          | 0.8199 |
| 1.814         | 17.24 | 2500 | 1.7846          | 0.8086 |
| 1.536         | 17.93 | 2600 | 1.7655          | 0.8019 |
| 1.4476        | 18.62 | 2700 | 1.7599          | 0.8070 |
| 1.7629        | 19.31 | 2800 | 1.7589          | 0.8119 |
| 1.7646        | 20.0  | 2900 | 1.7541          | 0.8061 |
0dc5e66eb95fee4ef75e80a8d8caab14
mit
['text', 'Twitter']
false
distilbert-depression-mixed
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased), trained on CLPsych 2015 and a scraped dataset, and evaluated on a scraped Twitter dataset, to detect Twitter users potentially at risk of depression. It achieves the following results on the evaluation set:
- Evaluation Loss: 0.71
- Accuracy: 0.63
- F1: 0.59
- Precision: 0.66
- Recall: 0.53
- AUC: 0.63
469806c6134d73bd3a51f65a351a9749
mit
['text', 'Twitter']
false
How to use
You can use this model directly with a pipeline for sentiment analysis:

```python
>>> from transformers import AutoTokenizer, DistilBertForSequenceClassification, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
>>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-depression-mixed")
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512}
>>> result = classifier('pain peko', **tokenizer_kwargs)
1e44d1a5805745c26918674cfbf3c180
mit
['text', 'Twitter']
false
# Note that the string passed as the input can be a corpus of tweets concatenated together into one document.
[{'label': 'LABEL_1', 'score': 0.5048992037773132}]
```
Otherwise, download the files and specify within the pipeline the path to the folder that contains `config.json`, `pytorch_model.bin`, and `training_args.bin`.
98428a64be1f73160c8abbc528a66283
mit
['text', 'Twitter']
false
Training results

| Epoch | Training Loss | Validation Loss | Accuracy | F1   | Precision | Recall | AUC  |
|:-----:|:-------------:|:---------------:|:--------:|:----:|:---------:|:------:|:----:|
| 1.0   | 0.68          | 0.66            | 0.61     | 0.54 | 0.60      | 0.50   | 0.60 |
| 2.0   | 0.65          | 0.65            | 0.63     | 0.49 | 0.70      | 0.37   | 0.62 |
| 3.0   | 0.53          | 0.63            | 0.66     | 0.58 | 0.69      | 0.50   | 0.65 |
| 4.0   | 0.39          | 0.66            | 0.67     | 0.61 | 0.69      | 0.54   | 0.67 |
| 5.0   | 0.27          | 0.72            | 0.65     | 0.61 | 0.63      | 0.60   | 0.64 |
fb5b069f9369ce136b5af646e0667d85
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Romansh-Sursilvan Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romansh Sursilvan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
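The 16 kHz requirement above means audio at other rates (e.g. Common Voice's 48 kHz) must be resampled first. A toy linear-interpolation resampler for intuition only; real pipelines use `torchaudio.transforms.Resample` or `librosa`, which apply proper low-pass filtering:

```python
# Naive resampling sketch: linear interpolation between neighboring
# samples (no anti-aliasing filter -- illustration only).

def resample(samples, src_rate, dst_rate):
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

audio_48k = [0.0] * 48_000                     # one second of silence at 48 kHz
audio_16k = resample(audio_48k, 48_000, 16_000)
```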
7f286761f95f9449a3e0e5e5e1a10a45
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage
The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "rm-sursilv", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
538885ea723d5024b4015b38897c1d6a
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation
The model can be evaluated as follows on the Romansh Sursilvan test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "rm-sursilv", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model.to("cuda")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\…\\«\\»\\–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
fd87971f003daf4a8f492c92703abd79
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 25.16 %
581983e7cc9a4660de815bdf14120354
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Training The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://colab.research.google.com/drive/1dpZr_GzRowCciUbzM3GnW04TNKnB7vrP?usp=sharing).
7ee348bf7ba8a3fd7b373fbeed823231
apache-2.0
[]
false
Example Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-small-smiles2caption", model_max_length=512) model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small-smiles2caption') input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O' input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids, num_beams=5, max_length=512) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
89262a11feb0c6589c60dfaef1c84b92
apache-2.0
['generated_from_keras_callback']
false
piyusharma/bert-base-uncased-finetuned-lex This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2112 - Epoch: 0
f5202b366271b732d94d8974c3b919a7
mit
['generated_from_keras_callback']
false
recklessrecursion/Heresy-clustered This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1793 - Train End Logits Accuracy: 0.9618 - Train Start Logits Accuracy: 0.9549 - Validation Loss: 0.7725 - Validation End Logits Accuracy: 0.6667 - Validation Start Logits Accuracy: 0.3333 - Epoch: 0
c3c95f275383f41743c329dec59da0ee
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.1793 | 0.9618 | 0.9549 | 0.7725 | 0.6667 | 0.3333 | 0 |
bbb1c18e2390cb0c1bed3fa01122c127
apache-2.0
['generated_from_trainer']
false
distilbert_add_GLUE_Experiment_qnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6648 - Accuracy: 0.6066
1167a8b701cd7289eed8daf892962389
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6886 | 1.0 | 410 | 0.6648 | 0.6066 | | 0.6569 | 2.0 | 820 | 0.6677 | 0.5999 | | 0.6419 | 3.0 | 1230 | 0.6672 | 0.5914 | | 0.6293 | 4.0 | 1640 | 0.6677 | 0.5977 | | 0.6118 | 5.0 | 2050 | 0.6691 | 0.6002 | | 0.5857 | 6.0 | 2460 | 0.6854 | 0.6077 |
4606c53e7560bfd512ebbf619876bd55
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'food']
false
DreamBooth model for the Berlinberger concept trained by veereshd on the veereshd/Dreambooth_food_dataset dataset. This is a Stable Diffusion model fine-tuned on the Berlinberger concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of Berlinberger berger** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
11f8b40d9b26cb942c6b13f718e2dc7d
mit
['generated_from_trainer']
false
confident_knuth This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
a7e3d40751e0563d2e0f89754026e27b
mit
['generated_from_trainer']
false
Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': 
{'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': {'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 0.5, 'beta': 0.1, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'confident_knuth', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
a76379a4c2c3a4d65a9625a50b7e1f21
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2240 - Accuracy: 0.9265 - F1: 0.9265
a11f0cb718e9eea19b4eb47ec36205c5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8488 | 1.0 | 250 | 0.3268 | 0.9055 | 0.9031 | | 0.2532 | 2.0 | 500 | 0.2240 | 0.9265 | 0.9265 |
405ffe4406a9c7dea348d7ffdc87f8a9
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2161 - Accuracy: 0.9225 - F1: 0.9226
59e3ffe218782dd17190a24a8865eced
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8009 | 1.0 | 250 | 0.3027 | 0.9045 | 0.9015 | | 0.2402 | 2.0 | 500 | 0.2161 | 0.9225 | 0.9226 |
15baacad642e01bfe4f2baeef46c293a
apache-2.0
['generated_from_trainer']
false
openai/whisper-base This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1929 - Wer: 4.3549
20308a522dabf04df57b87217e142ae0
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0326 | 10.0 | 500 | 0.1670 | 5.0398 | | 0.0019 | 20.0 | 1000 | 0.1728 | 4.5113 | | 0.0008 | 30.01 | 1500 | 0.1820 | 4.4071 | | 0.0005 | 40.01 | 2000 | 0.1847 | 4.3773 | | 0.0004 | 51.0 | 2500 | 0.1886 | 4.3549 | | 0.0003 | 61.0 | 3000 | 0.1910 | 4.3475 | | 0.0003 | 71.01 | 3500 | 0.1925 | 4.3549 | | 0.0002 | 81.01 | 4000 | 0.1929 | 4.3549 |
9bb1ffff5c98b4d71d086bac19089106
apache-2.0
['automatic-speech-recognition', 'de', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
false
Wav2Vec2-Large-XLSR-53-German-GPT2 This is an encoder-decoder model for automatic speech recognition trained on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. The encoder was initialized from [jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) and the decoder from [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2). It was trained using a two-step process: * fine-tuning only the cross-attention weights and the decoder using the pre-computed outputs of the Wav2Vec2 model * relatively fast training * also works on a small GPU (e.g. 8 GB) * but may take a lot of disk space * should already yield decent results * fine-tuning the model end-to-end * much slower * needs a bigger GPU There is also one trick which seemed to improve performance significantly: adding position embeddings to the encoder outputs and initializing them with the pre-trained position embeddings of the GPT2 model (see `eval.py`). The training notebooks are still early drafts. Also, results can probably be improved a lot by using, for example, a learning rate schedule.
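The position-embedding trick can be sketched in isolation. A minimal PyTorch illustration (the 1024×768 shape assumes GPT-2's dimensions; the names are ours, not taken from the training code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the decoder's pre-trained position embeddings
# (GPT-2's `wpe` table: 1024 positions, hidden size 768).
pretrained_wpe = nn.Embedding(1024, 768)

# New position embeddings for the encoder outputs, initialized by copying
# the pre-trained weights instead of starting from random values.
encoder_pos_emb = nn.Embedding(1024, 768)
with torch.no_grad():
    encoder_pos_emb.weight.copy_(pretrained_wpe.weight)

# In the forward pass, the embeddings for positions 0..T-1 would be added
# to the encoder hidden states before they reach the cross-attention.
hidden_states = torch.randn(2, 50, 768)          # (batch, frames, hidden)
positions = torch.arange(hidden_states.size(1))  # 0, 1, ..., 49
hidden_states = hidden_states + encoder_pos_emb(positions)
```

This gives the cross-attention a position signal that already matches what the GPT-2 decoder was pre-trained with.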
923cfae3aa6a594b76443080ad35b18a
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xlsr-53k-russian This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2660 - Wer: 0.2052
b3189c181a0d276cffa3107c796b9e7e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 96 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP
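The total train batch size above is the product of the per-device batch size and the gradient accumulation steps — a quick sanity check, assuming a single GPU:

```python
train_batch_size = 96            # per-device batch size from the list above
gradient_accumulation_steps = 2
num_devices = 1                  # assumption: a single GPU was used

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 192, matching the reported total
```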
3f5f2b2cfdedef652e8cf5bbc329e092
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.2873 | 1.09 | 400 | 0.8580 | 0.8982 | | 0.4728 | 2.19 | 800 | 0.3182 | 0.3892 | | 0.1639 | 9.83 | 1200 | 0.2374 | 0.2646 | | 0.1014 | 13.11 | 1600 | 0.2470 | 0.2467 | | 0.0754 | 16.39 | 2000 | 0.2516 | 0.2337 | | 0.0616 | 19.67 | 2400 | 0.2559 | 0.2237 | | 0.0505 | 22.95 | 2800 | 0.2557 | 0.2155 | | 0.0437 | 26.23 | 3200 | 0.2711 | 0.2099 | | 0.0377 | 29.51 | 3600 | 0.2660 | 0.2052 |
08a581ce0c00b291717ce63498aafa6e
apache-2.0
['translation']
false
opus-mt-fi-ZH * source languages: fi * target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh * OPUS readme: [fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.eval.txt)
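Since the model covers multiple target languages, every source sentence must start with a `>>id<<` token. A minimal sketch of preparing inputs (the helper function is ours; `zh_CN` is one of the valid IDs listed above):

```python
def add_target_token(sentence: str, lang_id: str) -> str:
    """Prefix a Finnish source sentence with the >>id<< target-language token."""
    return f">>{lang_id}<< {sentence}"

prepared = add_target_token("Hyvää huomenta", "zh_CN")
print(prepared)  # >>zh_CN<< Hyvää huomenta
```

The prepared string is what gets tokenized and fed to the translation model.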
530b037808df4b799844d7ba9aba5076
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-0']
false
MultiBERTs Seed 0 Checkpoint 1000k (uncased) Seed 0 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
9f08e31f7074d3e6c13355fd42c74d32
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-0']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1000k') model = BertModel.from_pretrained("multiberts-seed-0-1000k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
bda382aff8269cb2d947a733aee681dc
mit
[]
false
Uncertainty types label | type | description | example ---| ---| ---| --- E | Epistemic | The proposition is possible, but its truth-value cannot be decided at the moment. | She **may** be already asleep. I | Investigation | The proposition is in the process of having its truth-value determined. | She **examined** the role of NF-kappaB in protein activation. D | Doxastic | The proposition expresses beliefs and hypotheses, which may be known as true or false by others. | She **believes** that the Earth is flat. N | Condition | The proposition is true or false based on the truth-value of another proposition. | **If** she gets the job, she will move to Utrecht. C | *certain* | *n/a* | *n/a*
e7fbf98d2e8c9f9836b4689456470e6d
mit
[]
false
Intended uses and limitations - The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
1778b7856a3402a04c5f115458456b42
mit
[]
false
How to use To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library: ``` from simpletransformers.ner import NERModel model = NERModel( 'bert', 'jeniakim/hedgehog', use_cuda=False, labels=["C", "D", "E", "I", "N"], ) example = "As much as I definitely enjoy solitude, I wouldn't mind perhaps spending little time with you (Björk)" predictions, raw_outputs = model.predict([example]) ``` The predictions look like this: ``` [[{'As': 'C'}, {'much': 'C'}, {'as': 'C'}, {'I': 'C'}, {'definitely': 'C'}, {'enjoy': 'C'}, {'solitude,': 'C'}, {'I': 'C'}, {"wouldn't": 'C'}, {'mind': 'C'}, {'perhaps': 'E'}, {'spending': 'C'}, {'little': 'C'}, {'time': 'C'}, {'with': 'C'}, {'you': 'C'}, {'(Björk)': 'C'}]] ``` In other words, the token 'perhaps' is recognized as an **epistemic uncertainty cue** and all the other tokens are not uncertainty cues.
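Since every token except the cues is labeled 'C', the uncertainty cues can be pulled out of this output with a simple filter — a small sketch over the exact prediction shown above:

```python
# The prediction structure returned above: a list of sentences, each a list
# of single-item {token: label} dicts.
predictions = [[{'As': 'C'}, {'much': 'C'}, {'as': 'C'}, {'I': 'C'},
                {'definitely': 'C'}, {'enjoy': 'C'}, {'solitude,': 'C'},
                {'I': 'C'}, {"wouldn't": 'C'}, {'mind': 'C'},
                {'perhaps': 'E'}, {'spending': 'C'}, {'little': 'C'},
                {'time': 'C'}, {'with': 'C'}, {'you': 'C'}, {'(Björk)': 'C'}]]

cues = [(token, label)
        for sentence in predictions
        for entry in sentence
        for token, label in entry.items()
        if label != 'C']
print(cues)  # [('perhaps', 'E')]
```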
ae8f1af950fa750dcc20ad8025939e1f
mit
[]
false
Training Data HEDGEhog is trained and evaluated on the [Szeged Uncertainty Corpus](https://rgai.inf.u-szeged.hu/node/160) (Szarvas et al. 2012<sup>1</sup>). The original sentence-level XML version of this dataset is available [here](https://rgai.inf.u-szeged.hu/node/160). The token-level version that was used for the training can be downloaded from [here](https://1drv.ms/u/s!AvPkt_QxBozXk7BiazucDqZkVxLo6g?e=IisuM6) in a form of pickled pandas DataFrame's. You can download either the split sets (```train.pkl``` 137MB, ```test.pkl``` 17MB, ```dev.pkl``` 17MB) or the full dataset (```szeged_fixed.pkl``` 172MB). Each row in the df contains a token, its features (these are not relevant for HEDGEhog; they were used to train the baseline CRF model, see [here](https://github.com/vanboefer/uncertainty_crf)), its sentence ID, and its label.
b006a414ac09655590f6b68cb0b9a27c
mit
[]
false
Evaluation Results class | precision | recall | F1-score | support ---|---|---|---|--- Epistemic | 0.90 | 0.85 | 0.88 | 624 Doxastic | 0.88 | 0.92 | 0.90 | 142 Investigation | 0.83 | 0.86 | 0.84 | 111 Condition | 0.85 | 0.87 | 0.86 | 86 Certain | 1.00 | 1.00 | 1.00 | 104,751 **macro average** | **0.89** | **0.90** | **0.89** | 105,714
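The macro averages in the last row can be checked against the per-class scores. Recomputing from the rounded values (in the order listed above) lands within rounding distance of the reported numbers; the small difference in macro F1 presumably comes from averaging the unrounded per-class scores:

```python
precision = [0.90, 0.88, 0.83, 0.85, 1.00]  # Epistemic .. Certain
recall    = [0.85, 0.92, 0.86, 0.87, 1.00]
f1        = [0.88, 0.90, 0.84, 0.86, 1.00]

def macro(scores):
    return sum(scores) / len(scores)

# Roughly 0.892, 0.9, 0.896 -- vs. the reported 0.89 / 0.90 / 0.89.
print(round(macro(precision), 3), round(macro(recall), 3), round(macro(f1), 3))
```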
eeee2e668cdbc6103d315185ab147132
mit
[]
false
References <sup>1</sup> Szarvas, G., Vincze, V., Farkas, R., Móra, G., & Gurevych, I. (2012). Cross-genre and cross-domain detection of semantic uncertainty. *Computational Linguistics, 38*(2), 335-367.
3dca0ed6ce453006129868704f7bcc5f
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'nl', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset. This model is also available with a language model, which improves these results. That model can be found at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-nl-lm; its Common Voice 8 Dutch test WER is 9.73. It achieves the following results on the evaluation set: - Loss: 0.1479 - Wer: 0.1156
f81381209d4de1ecbb43f7291ea1e601
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'nl', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.2223 | 0.52 | 500 | 0.3866 | 0.3425 | | 1.0748 | 1.03 | 1000 | 0.2574 | 0.2169 | | 1.0416 | 1.55 | 1500 | 0.2177 | 0.1946 | | 0.9951 | 2.06 | 2000 | 0.2008 | 0.1760 | | 0.975 | 2.58 | 2500 | 0.1961 | 0.1751 | | 0.9461 | 3.1 | 3000 | 0.1989 | 0.1782 | | 0.9381 | 3.61 | 3500 | 0.1928 | 0.1699 | | 0.934 | 4.13 | 4000 | 0.1923 | 0.1633 | | 0.9322 | 4.64 | 4500 | 0.1871 | 0.1634 | | 0.9012 | 5.16 | 5000 | 0.1890 | 0.1702 | | 0.9045 | 5.68 | 5500 | 0.1882 | 0.1740 | | 0.8826 | 6.19 | 6000 | 0.1856 | 0.1575 | | 0.8848 | 6.71 | 6500 | 0.1861 | 0.1617 | | 0.8723 | 7.22 | 7000 | 0.1927 | 0.1646 | | 0.8725 | 7.74 | 7500 | 0.1798 | 0.1531 | | 0.8573 | 8.26 | 8000 | 0.1781 | 0.1587 | | 0.8633 | 8.77 | 8500 | 0.1852 | 0.1628 | | 0.8603 | 9.29 | 9000 | 0.1833 | 0.1601 | | 0.8421 | 9.8 | 9500 | 0.1788 | 0.1543 | | 0.8404 | 10.32 | 10000 | 0.1844 | 0.1556 | | 0.8342 | 10.84 | 10500 | 0.1770 | 0.1538 | | 0.8161 | 11.35 | 11000 | 0.1821 | 0.1567 | | 0.8371 | 11.87 | 11500 | 0.1909 | 0.1629 | | 0.8083 | 12.38 | 12000 | 0.1778 | 0.1498 | | 0.806 | 12.9 | 12500 | 0.1802 | 0.1547 | | 0.8013 | 13.42 | 13000 | 0.1859 | 0.1584 | | 0.7913 | 13.93 | 13500 | 0.1875 | 0.1517 | | 0.8063 | 14.45 | 14000 | 0.1799 | 0.1571 | | 0.7991 | 14.96 | 14500 | 0.1792 | 0.1538 | | 0.7843 | 15.48 | 15000 | 0.1753 | 0.1464 | | 0.7905 | 16.0 | 15500 | 0.1784 | 0.1508 | | 0.7808 | 16.51 | 16000 | 0.1771 | 0.1485 | | 0.7743 | 17.03 | 16500 | 0.1795 | 0.1491 | | 0.7833 | 17.54 | 17000 | 0.1722 | 0.1484 | | 0.7763 | 18.06 | 17500 | 0.1767 | 0.1518 | | 0.7698 | 18.58 | 18000 | 0.1720 | 0.1460 | | 0.7571 | 19.09 | 18500 | 0.1735 | 0.1478 | | 0.7673 | 19.61 | 19000 | 0.1817 | 0.1511 | | 0.7415 | 20.12 | 19500 | 0.1763 | 0.1481 | | 0.751 | 20.64 | 20000 | 0.1742 | 0.1484 | | 0.7563 | 21.16 | 20500 | 0.1810 | 0.1611 | | 0.7423 | 21.67 | 21000 | 0.1817 | 0.1557 | | 0.7242 | 22.19 | 
21500 | 0.1690 | 0.1446 | | 0.7251 | 22.7 | 22000 | 0.1684 | 0.1446 | | 0.7302 | 23.22 | 22500 | 0.1735 | 0.1430 | | 0.733 | 23.74 | 23000 | 0.1720 | 0.1454 | | 0.7128 | 24.25 | 23500 | 0.1668 | 0.1383 | | 0.7184 | 24.77 | 24000 | 0.1635 | 0.1377 | | 0.7015 | 25.28 | 24500 | 0.1646 | 0.1389 | | 0.7198 | 25.8 | 25000 | 0.1775 | 0.1462 | | 0.7178 | 26.32 | 25500 | 0.1705 | 0.1419 | | 0.7199 | 26.83 | 26000 | 0.1649 | 0.1416 | | 0.6981 | 27.35 | 26500 | 0.1724 | 0.1418 | | 0.6886 | 27.86 | 27000 | 0.1633 | 0.1382 | | 0.6922 | 28.38 | 27500 | 0.1698 | 0.1420 | | 0.6833 | 28.9 | 28000 | 0.1611 | 0.1351 | | 0.6798 | 29.41 | 28500 | 0.1639 | 0.1365 | | 0.6711 | 29.93 | 29000 | 0.1668 | 0.1358 | | 0.6762 | 30.44 | 29500 | 0.1682 | 0.1355 | | 0.6594 | 30.96 | 30000 | 0.1629 | 0.1345 | | 0.6664 | 31.48 | 30500 | 0.1625 | 0.1321 | | 0.6838 | 31.99 | 31000 | 0.1597 | 0.1372 | | 0.6603 | 32.51 | 31500 | 0.1583 | 0.1302 | | 0.6468 | 33.02 | 32000 | 0.1595 | 0.1322 | | 0.6464 | 33.54 | 32500 | 0.1609 | 0.1315 | | 0.6623 | 34.06 | 33000 | 0.1622 | 0.1366 | | 0.6414 | 34.57 | 33500 | 0.1587 | 0.1330 | | 0.6242 | 35.09 | 34000 | 0.1614 | 0.1337 | | 0.632 | 35.6 | 34500 | 0.1568 | 0.1272 | | 0.6346 | 36.12 | 35000 | 0.1583 | 0.1274 | | 0.6143 | 36.64 | 35500 | 0.1576 | 0.1264 | | 0.6208 | 37.15 | 36000 | 0.1621 | 0.1263 | | 0.6185 | 37.67 | 36500 | 0.1623 | 0.1270 | | 0.6128 | 38.18 | 37000 | 0.1604 | 0.1268 | | 0.6151 | 38.7 | 37500 | 0.1593 | 0.1246 | | 0.6082 | 39.22 | 38000 | 0.1532 | 0.1238 | | 0.6 | 39.73 | 38500 | 0.1524 | 0.1224 | | 0.6032 | 40.25 | 39000 | 0.1521 | 0.1212 | | 0.6016 | 40.76 | 39500 | 0.1551 | 0.1215 | | 0.6009 | 41.28 | 40000 | 0.1523 | 0.1215 | | 0.5875 | 41.8 | 40500 | 0.1541 | 0.1216 | | 0.608 | 42.31 | 41000 | 0.1536 | 0.1209 | | 0.5876 | 42.83 | 41500 | 0.1567 | 0.1211 | | 0.5714 | 43.34 | 42000 | 0.1532 | 0.1217 | | 0.5756 | 43.86 | 42500 | 0.1516 | 0.1196 | | 0.5719 | 44.38 | 43000 | 0.1491 | 0.1191 | | 0.5829 | 44.89 | 43500 | 0.1497 | 0.1193 | | 
0.5664 | 45.41 | 44000 | 0.1487 | 0.1173 | | 0.5707 | 45.92 | 44500 | 0.1470 | 0.1164 | | 0.5696 | 46.44 | 45000 | 0.1479 | 0.1161 | | 0.5767 | 46.96 | 45500 | 0.1492 | 0.1175 | | 0.5573 | 47.47 | 46000 | 0.1471 | 0.1165 | | 0.5625 | 47.99 | 46500 | 0.1484 | 0.1168 | | 0.5671 | 48.5 | 47000 | 0.1474 | 0.1162 | | 0.5484 | 49.02 | 47500 | 0.1479 | 0.1158 | | 0.555 | 49.54 | 48000 | 0.1477 | 0.1157 |
f136de17698ad43949e6c279f8c933a5
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_xls-r_accent_us-10_england-0_s253 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
b3ba3f574e1925e365d96bd430884962
mit
['summarization', 'translation', 'question-answering']
false
How to use For more details, do check out [our Github repo](https://github.com/vietai/mtet). [Fine-tuning examples can be found here](https://github.com/vietai/ViT5/tree/main/finetunning_huggingface). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("VietAI/envit5-base") model = AutoModelForSeq2SeqLM.from_pretrained("VietAI/envit5-base") model.cuda()
1eabf0987934b9abead11b566c06d0a8
mit
['summarization', 'translation', 'question-answering']
false
need prefix for en: and vi: sentences inputs = [ "vi: VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam.", "vi: Theo báo cáo mới nhất của Linkedin về danh sách việc làm triển vọng với mức lương hấp dẫn năm 2020, các chức danh công việc liên quan đến AI như Chuyên gia AI (Artificial Intelligence Specialist), Kỹ sư ML (Machine Learning Engineer) đều xếp thứ hạng cao.", "en: Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.", "en: We're on a journey to advance and democratize artificial intelligence through open source and open science." ] outputs = model.generate(tokenizer(inputs, return_tensors="pt", padding=True).input_ids.to('cuda'), max_length=512) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ```
3bb59fe6ef8e7f35be85143b369a17bc
mit
['summarization', 'translation', 'question-answering']
false
Citation ``` @misc{mtet, doi = {10.48550/ARXIV.2210.05610}, url = {https://arxiv.org/abs/2210.05610}, author = {Ngo, Chinh and Trinh, Trieu H. and Phan, Long and Tran, Hieu and Dang, Tai and Nguyen, Hieu and Nguyen, Minh and Luong, Minh-Thang}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {MTet: Multi-domain Translation for English and Vietnamese}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
260dd7a09368a8c272e58133418bec81
creativeml-openrail-m
['text-to-image', 'stable-diffusion', 'furry', 'anything-v3.0']
false
![images](https://cdn.discordapp.com/attachments/1050047774315532300/1057079481581445230/grid-0005.png) FurryDiffusion is a model made to generate furry art. This model is still very much in beta and will keep improving! To use this, make sure to include `furry` in your prompt; to generate a specific breed, add only the breed name. Example Prompts: ``` Positive: highres, furry, fox, orange fur, blue eyes Negative: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, blurry ``` Test the concept via the A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) **NOTE**: It's better to run it in Google Colab since you can use Google's powerful GPUs for free. Go ahead, try it now!
64ca91160f64ff363bd9bcc53c77b3c4
mit
['generated_from_trainer']
false
bart-large-cnn-pubmed1o3 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the scientific_papers dataset. It achieves the following results on the evaluation set: - Loss: 1.9359 - Rouge1: 36.7566 - Rouge2: 14.813 - Rougel: 22.4693 - Rougelsum: 33.4325 - Gen Len: 138.7332
1a32e7d379d2a1347b977869585820bb
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:| | 2.028 | 1.0 | 19988 | 1.9359 | 36.7566 | 14.813 | 22.4693 | 33.4325 | 138.7332 |
644d2fc9631ef2fd1b9541c98f5b96c8
apache-2.0
['generated_from_trainer']
false
NLP2122_FranciosoDonato This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8885
5c8c922ec5ff36038ebf965e0aaaa5b8
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 13 | 8.1476 | | No log | 2.0 | 26 | 7.4435 | | No log | 3.0 | 39 | 7.2082 |
542a007d5640c51921f36a2d68a9227e
apache-2.0
['automatic-speech-recognition', 'ja']
false
exp_w2v2t_ja_vp-100k_s219 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
4e3915ff19ab919bdb3bea3f3f33afb4
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-ta This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2183 - F1: 0.8145
cbec4adeb10e46abd6ac371a13556fc4
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5477 | 1.0 | 209 | 0.2732 | 0.7305 | | 0.2506 | 2.0 | 418 | 0.2425 | 0.7626 | | 0.168 | 3.0 | 627 | 0.2183 | 0.8145 |
d8be82d155328d493b27c2f11bb69fa5
apache-2.0
['text-classification', 'fact-or-opinion', 'transformers']
false
By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) This is an XLM-Roberta-base model with a binary classification head. Given a sentence, it can classify it either as a fact or an opinion based on its content. You can use this model in any of the XLM-R supported languages for the same task, taking advantage of its 0-shot learning capabilities. However, the model was trained only using English and Greek sentences. Legend of HuggingFace API labels: * Label 0: Opinion/Subjective sentence * Label 1: Fact/Objective sentence
64a9928d35d5fb2f11d03a4dcbc70f37
apache-2.0
['text-classification', 'fact-or-opinion', 'transformers']
false
Dataset training info The original dataset (available here: https://github.com/1024er/cbert_aug/tree/crayon/datasets/subj) contained approximately 9,000 annotated sentences (classified as subjective or objective). It was translated to Greek using Google Translate. The Greek version was then concatenated with the original English one to create the mixed EN-EL dataset. The model was trained for 5 epochs with batch size = 8. Detailed metrics and hyperparameters are available on the "Metrics" tab.
07b393957af8532b8d58e3e28749ca6d
mit
[]
false
schloss mosigkau on Stable Diffusion This is the `<ralph>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`:

![<ralph> 0](https://huggingface.co/sd-concepts-library/schloss-mosigkau/resolve/main/concept_images/0.jpeg)
![<ralph> 1](https://huggingface.co/sd-concepts-library/schloss-mosigkau/resolve/main/concept_images/3.jpeg)
![<ralph> 2](https://huggingface.co/sd-concepts-library/schloss-mosigkau/resolve/main/concept_images/4.jpeg)
![<ralph> 3](https://huggingface.co/sd-concepts-library/schloss-mosigkau/resolve/main/concept_images/1.jpeg)
![<ralph> 4](https://huggingface.co/sd-concepts-library/schloss-mosigkau/resolve/main/concept_images/2.jpeg)
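Conceptually, loading a textual-inversion concept registers the placeholder token (here `<ralph>`) in the tokenizer and appends its single learned vector to the text encoder's token-embedding matrix. A framework-free NumPy sketch of that bookkeeping (real loading goes through `diffusers`/`transformers`, which handle this for you):

```python
import numpy as np

def add_concept(embedding_matrix: np.ndarray, vocab: dict, token: str,
                learned_vector: np.ndarray):
    """Register `token` (e.g. "<ralph>") and append its learned embedding row."""
    if token in vocab:
        raise ValueError(f"{token!r} is already present in the vocabulary")
    if learned_vector.shape != (embedding_matrix.shape[1],):
        raise ValueError("embedding dimensionality mismatch")
    vocab = {**vocab, token: len(vocab)}          # new token gets the next free id
    matrix = np.vstack([embedding_matrix, learned_vector])
    return matrix, vocab

# Toy example: a 3-token vocabulary with 4-dimensional embeddings
emb = np.zeros((3, 4))
vocab = {"a": 0, "b": 1, "c": 2}
emb, vocab = add_concept(emb, vocab, "<ralph>", np.ones(4))
```

After this step, prompts containing the placeholder token are encoded using the learned vector like any other token.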
25e0294765b791af5f7a99ae25df1b75
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0637
- Precision: 0.9336
- Recall: 0.9488
- F1: 0.9412
- Accuracy: 0.9854
01cf7555e9f1101e505561a32b32ac6e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0897        | 1.0   | 1756 | 0.0648          | 0.9152    | 0.9408 | 0.9278 | 0.9837   |
| 0.0384        | 2.0   | 3512 | 0.0601          | 0.9277    | 0.9507 | 0.9391 | 0.9859   |
| 0.0201        | 3.0   | 5268 | 0.0637          | 0.9336    | 0.9488 | 0.9412 | 0.9854   |
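As a sanity check, the reported F1 is the harmonic mean of precision and recall, F1 = 2PR/(P+R), and the final-epoch numbers are internally consistent up to rounding:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Final-epoch values from the table above; ~0.9411, matching the reported 0.9412
f1 = f1_score(0.9336, 0.9488)
```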
70969295f60de8195f8fb99417c50a74