license stringlengths 2 30 | tags stringlengths 2 513 | is_nc bool 1 class | readme_section stringlengths 201 597k | hash stringlengths 32 32 |
|---|---|---|---|---|
apache-2.0 | ['hf-asr-leaderboard', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.198 | 0.59 | 1000 | 0.2338 | 16.2424 | | 0.0933 | 1.19 | 2000 | 0.2138 | 14.9756 | | 0.082 | 1.78 | 3000 | 0.2024 | 14.2111 | | 0.0452 | 2.38 | 4000 | 0.2065 | 14.3447 | | a428465ce553b7a46ba737861ad6c4d4 |
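The Wer column above is the word error rate: the word-level edit distance between hypothesis and reference transcripts, divided by the number of reference words (reported here as a percentage). A minimal pure-Python sketch of the metric (the example sentences are made up):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words, one row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / len(ref)

# One substitution out of six reference words -> 16.67 (as a percentage).
print(round(100 * wer("the cat sat on the mat", "the cat sat in the mat"), 2))
```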
apache-2.0 | ['tabular-classification', 'baseline-trainer'] | false | Baseline Model trained on tipsuhtxfu to apply classification on sex **Metrics of the best model:** accuracy 0.647364 average_precision 0.507660 roc_auc 0.625546 recall_macro 0.589832 f1_macro 0.585292 Name: MultinomialNB(), dtype: float64 **See model plot below:** <style> | 74b1c46ff94098e20fd58cbdb1a3c7c0 |
apache-2.0 | ['tabular-classification', 'baseline-trainer'] | false | sk-container-id-1 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;} | d20b161d0f9f0f1566d30684918b531f |
apache-2.0 | ['tabular-classification', 'baseline-trainer'] | false | x27;,EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless total_bill True False False ... False False False tip True False False ... False False False smoker False False False ... False False False day False False False ... False False False time False False False ... False False False size False False False ... False False False[6 rows x 7 columns])),(& | 0274cf8139ef029d4b1531ca2b38bc61 |
apache-2.0 | ['tabular-classification', 'baseline-trainer'] | false | x27;, MultinomialNB())]))])</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" ><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[(& | 94108732e8f5e956124e8cd1657a28f5 |
apache-2.0 | ['tabular-classification', 'baseline-trainer'] | false | x27;, MultinomialNB())]))])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" ><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">EasyPreprocessor</label><div class="sk-toggleable__content"><pre>EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless total_bill True False False ... False False False tip True False False ... False False False smoker False False False ... False False False day False False False ... False False False time False False False ... False False False size False False False ... False False False[6 rows x 7 columns])</pre></div></div></div><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-3" type="checkbox" ><label for="sk-estimator-id-3" class="sk-toggleable__label sk-toggleable__label-arrow">pipeline: Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[(& | cfdedc5cc0e4b4253cd8b897acf91799 |
apache-2.0 | ['tabular-classification', 'baseline-trainer'] | false | x27;, MultinomialNB())])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-4" type="checkbox" ><label for="sk-estimator-id-4" class="sk-toggleable__label sk-toggleable__label-arrow">MinMaxScaler</label><div class="sk-toggleable__content"><pre>MinMaxScaler()</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-5" type="checkbox" ><label for="sk-estimator-id-5" class="sk-toggleable__label sk-toggleable__label-arrow">MultinomialNB</label><div class="sk-toggleable__content"><pre>MultinomialNB()</pre></div></div></div></div></div></div></div></div></div> **Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain). **Logs of training** including the models tried in the process can be found in logs.txt | f3af9b34d28df8032fd2f40743d1e23e |
apache-2.0 | ['mobile', 'vison', 'image-classification'] | false | Model Details <!-- Give an overview of your model, the relevant research paper, who trained it, etc. --> EfficientFormer-L1, developed by [Snap Research](https://github.com/snap-research), is one of three EfficientFormer models. The EfficientFormer models were released as part of an effort to prove that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance. This checkpoint of EfficientFormer-L1 was trained for 1000 epochs. - Developed by: Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren - Language(s): English - License: This model is licensed under the apache-2.0 license - Resources for more information: - [Research Paper](https://arxiv.org/abs/2206.01191) - [GitHub Repo](https://github.com/snap-research/EfficientFormer/) </model_details> <how_to_start> | f2d64cb817f2fb16c9b3373cd51c60af |
apache-2.0 | ['mobile', 'vison', 'image-classification'] | false | How to Get Started with the Model Use the code below to get started with the model. ```python import requests import torch from PIL import Image from transformers import EfficientFormerImageProcessor, EfficientFormerForImageClassificationWithTeacher | e1313b272297bdd1d1517d931e35b737 |
apache-2.0 | ['mobile', 'vison', 'image-classification'] | false | Load preprocessor and pretrained model model_name = "huggingface/efficientformer-l1-300" processor = EfficientFormerImageProcessor.from_pretrained(model_name) model = EfficientFormerForImageClassificationWithTeacher.from_pretrained(model_name) | 04c8b210c5b56ca3d0fccc176f1b0cfc |
apache-2.0 | ['mobile', 'vison', 'image-classification'] | false | Print the top ImageNet1k class prediction logits = outputs.logits scores = torch.nn.functional.softmax(logits, dim=1) top_pred_class = torch.argmax(scores, dim=1) print(f"Predicted class: {top_pred_class}") ``` </how_to_start> <uses> | e458b8d27a06a8eeb46126e66fd65774 |
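The `outputs` variable in the snippet above comes from a forward pass that falls outside this chunk (roughly `outputs = model(**processor(images=image, return_tensors="pt"))`). The softmax-plus-argmax post-processing itself is framework-agnostic; a dependency-free sketch with made-up logits:

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.2, 3.1, 0.4]  # made-up class logits for three classes
scores = softmax(logits)
top_pred_class = max(range(len(scores)), key=scores.__getitem__)
print(top_pred_class)  # index of the largest logit
```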
apache-2.0 | ['generated_from_trainer', 'summarization'] | false | mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full This model is a fine-tuned version of [shamikbose89/mt5-small-finetuned-arxiv-cs](https://huggingface.co/shamikbose89/mt5-small-finetuned-arxiv-cs) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4037 - Rouge1: 39.8923 - Rouge2: 20.9831 - Rougel: 35.8642 - Rougelsum: 35.8511 | 3153b8fd361aa48d40951fbf8523d82d |
apache-2.0 | ['generated_from_trainer', 'summarization'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 | 97e550f511357db8ef32f419ebfeb288 |
apache-2.0 | ['generated_from_trainer', 'summarization'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 1.9675 | 1.0 | 500 | 1.5573 | 36.4989 | 18.4839 | 33.2984 | 33.2917 | | 1.7523 | 2.0 | 1000 | 1.4972 | 37.7911 | 19.0357 | 33.5725 | 33.6058 | | 1.6611 | 3.0 | 1500 | 1.4593 | 38.5822 | 19.4928 | 34.215 | 34.2531 | | 1.6187 | 4.0 | 2000 | 1.4492 | 39.1219 | 20.8705 | 35.1969 | 35.2255 | | 1.5864 | 5.0 | 2500 | 1.4289 | 39.7304 | 21.0654 | 35.6602 | 35.6667 | | 1.5553 | 6.0 | 3000 | 1.4184 | 40.0696 | 21.0883 | 35.9536 | 35.9132 | | 1.5215 | 7.0 | 3500 | 1.4163 | 39.1956 | 20.6757 | 35.5016 | 35.5196 | | 1.5038 | 8.0 | 4000 | 1.4148 | 39.2373 | 20.3114 | 35.1676 | 35.1532 | | 1.4929 | 9.0 | 4500 | 1.4064 | 39.9249 | 21.0155 | 35.8247 | 35.7937 | | 1.4791 | 10.0 | 5000 | 1.4037 | 39.8923 | 20.9831 | 35.8642 | 35.8511 | | 8a47f73eddc1b6647c9dec82d126fc2c |
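The Rouge1 scores above are unigram-overlap F-measures between generated and reference summaries (reported ×100). A minimal sketch of ROUGE-1 F1 with clipped counts (the example sentences are made up):

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: F-measure over clipped unigram overlap."""
    ref_counts = Counter(reference.split())
    cand_counts = Counter(candidate.split())
    overlap = sum((ref_counts & cand_counts).values())  # clipped match count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Three of four unigrams overlap in both directions -> 75.0 (as a percentage).
print(round(100 * rouge1_f("the model summarizes papers", "the model reads papers"), 2))
```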
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-urdu This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m). It achieves the following results on the evaluation set: - Loss: 0.5285 - Wer: 0.1702 | 362d8948d16f5218ca31fd2b5d9a8a88 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 35 - mixed_precision_training: Native AMP | fadf07cf7795e4f1b550fd60025cec83 |
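With `lr_scheduler_type: linear` and 500 warmup steps, the learning rate ramps linearly from 0 to the peak over the warmup phase and then decays linearly to 0 by the final step. A sketch of that schedule in pure Python (the step counts mirror the hyperparameters above; this is an approximation of, not a drop-in for, the Transformers scheduler):

```python
def linear_lr(step: int, peak_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Linear warmup to peak_lr, then linear decay to zero at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

# Peak 3e-4, 500 warmup steps, 1504 total steps (the last step in the table below).
print(linear_lr(250, 3e-4, 500, 1504))  # halfway through warmup -> half the peak rate
```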
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 16.9618 | 0.74 | 32 | 15.0745 | 1.0 | | 9.1928 | 1.49 | 64 | 5.9361 | 1.0 | | 4.9307 | 2.23 | 96 | 4.2924 | 1.0 | | 3.8917 | 2.98 | 128 | 3.5873 | 1.0 | | 3.3867 | 3.72 | 160 | 3.2594 | 1.0 | | 3.2107 | 4.47 | 192 | 3.1718 | 1.0 | | 3.1395 | 5.21 | 224 | 3.1281 | 1.0 | | 3.115 | 5.95 | 256 | 3.1238 | 1.0 | | 3.0801 | 6.7 | 288 | 3.0674 | 1.0 | | 2.9725 | 7.44 | 320 | 2.8277 | 1.0 | | 2.4159 | 8.19 | 352 | 1.7186 | 0.9036 | | 1.3377 | 8.93 | 384 | 1.0271 | 0.6433 | | 0.8591 | 9.67 | 416 | 0.8087 | 0.5441 | | 0.726 | 10.42 | 448 | 0.7263 | 0.4634 | | 0.6242 | 11.16 | 480 | 0.6783 | 0.4156 | | 0.5417 | 11.91 | 512 | 0.6611 | 0.4305 | | 0.4784 | 12.65 | 544 | 0.6300 | 0.3926 | | 0.4198 | 13.4 | 576 | 0.5646 | 0.3499 | | 0.3798 | 14.14 | 608 | 0.5919 | 0.3229 | | 0.3356 | 14.88 | 640 | 0.5715 | 0.3369 | | 0.2954 | 15.63 | 672 | 0.5325 | 0.2728 | | 0.264 | 16.37 | 704 | 0.5535 | 0.2689 | | 0.2535 | 17.12 | 736 | 0.5467 | 0.2366 | | 0.2277 | 17.86 | 768 | 0.5219 | 0.2345 | | 0.2141 | 18.6 | 800 | 0.5314 | 0.2487 | | 0.2036 | 19.35 | 832 | 0.5382 | 0.2236 | | 0.2021 | 20.09 | 864 | 0.5038 | 0.1922 | | 0.1676 | 20.84 | 896 | 0.5238 | 0.2033 | | 0.1544 | 21.58 | 928 | 0.5069 | 0.1866 | | 0.1512 | 22.33 | 960 | 0.5045 | 0.1965 | | 0.1512 | 23.07 | 992 | 0.5167 | 0.1862 | | 0.1399 | 23.81 | 1024 | 0.5236 | 0.1840 | | 0.1291 | 24.56 | 1056 | 0.5234 | 0.1957 | | 0.1274 | 25.3 | 1088 | 0.5348 | 0.1943 | | 0.127 | 26.05 | 1120 | 0.4978 | 0.1719 | | 0.1105 | 26.79 | 1152 | 0.5067 | 0.1767 | | 0.1069 | 27.53 | 1184 | 0.5150 | 0.1758 | | 0.1058 | 28.28 | 1216 | 0.5218 | 0.1844 | | 0.0999 | 29.02 | 1248 | 0.5375 | 0.1852 | | 0.0964 | 29.77 | 1280 | 0.5373 | 0.1843 | | 0.0971 | 30.51 | 1312 | 0.5190 | 0.1776 | | 0.0906 | 31.26 | 1344 | 0.5217 | 0.1747 | | 0.0909 | 32.0 | 1376 | 0.5204 | 0.1778 | | 0.0784 | 32.74 | 1408 | 0.5336 | 0.1756 | | 0.0823 | 33.49 | 1440 | 0.5281 | 0.1699 | | 0.0834 | 34.23 | 1472 | 0.5292 | 0.1700 | | 0.0827 | 34.98 | 1504 | 0.5285 | 0.1702 | | 0c35ba967ac5878c95808e5f6e1d466f |
apache-2.0 | ['national library of spain', 'spanish', 'bne', 'capitel', 'pos'] | false | Model description The **roberta-large-bne-capitel-pos** is a Part-of-speech-tagging (POS) model for the Spanish language fine-tuned from the [roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) large model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019. | 4b41816cf93287846c96c284212cc475 |
apache-2.0 | ['national library of spain', 'spanish', 'bne', 'capitel', 'pos'] | false | Intended uses and limitations The **roberta-large-bne-capitel-pos** model can be used for Part-of-Speech (POS) tagging of a text. The model is limited by its training dataset and may not generalize well for all use cases. | 3eafca233326198074d46cef9af6d345 |
apache-2.0 | ['national library of spain', 'spanish', 'bne', 'capitel', 'pos'] | false | How to use Here is how to use this model: ```python from transformers import pipeline from pprint import pprint nlp = pipeline("token-classification", model="PlanTL-GOB-ES/roberta-large-bne-capitel-pos") example = "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto." pos_results = nlp(example) pprint(pos_results) ``` | f8fa01773906242d081b1d95347160fe |
apache-2.0 | ['national library of spain', 'spanish', 'bne', 'capitel', 'pos'] | false | Training procedure The model was trained with a batch size of 16 and a learning rate of 3e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set. | 591b69e556cb0dd4d1ea4d39b3ff16c6 |
apache-2.0 | ['national library of spain', 'spanish', 'bne', 'capitel', 'pos'] | false | Evaluation results We evaluated the **roberta-large-bne-capitel-pos** on the CAPITEL-POS test set against standard multilingual and monolingual baselines: | Model | CAPITEL-POS (F1) | | ------------|:----| | roberta-large-bne-capitel-pos | **98.56** | | roberta-base-bne-capitel-pos | 98.46 | | BETO | 98.36 | | mBERT | 98.39 | | BERTIN | 98.47 | | ELECTRA | 98.16 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). | c36af50c99b28f2524d398178078e735 |
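The table above compares tagging models by F1 on the CAPITEL-POS test set. One common formulation for multi-class tagging is macro-averaged F1 over tag labels; a sketch in pure Python (the gold and predicted tags are made up, and this may differ from the exact averaging used in the shared task):

```python
from collections import defaultdict

def macro_f1(gold, pred):
    """Macro-averaged F1 over tag labels (gold/pred are equal-length tag lists)."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    labels = set(gold) | set(pred)
    f1s = []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

gold = ["DET", "NOUN", "VERB", "NOUN"]  # made-up reference tags
pred = ["DET", "NOUN", "VERB", "VERB"]  # made-up predictions
print(round(macro_f1(gold, pred), 4))
```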
mit | ['generated_from_trainer'] | false | nbme-xlnet-large-cased This model is a fine-tuned version of [xlnet-large-cased](https://huggingface.co/xlnet-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7151 | 322f6d57962d7ec3d3dc25a9d82d9967 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 | 3ffd659d81e7afbb748166f8040bc157 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2931 | 1.0 | 1850 | 1.9915 | | 1.9467 | 2.0 | 3700 | 1.7866 | | 1.7983 | 3.0 | 5550 | 1.6919 | | 1dea65a732af59eb72f77e42fe2fc386 |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2527 - F1: 0.8086 | c878d81f4b96ba9b47fcb42d0af462a5 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8319 | 1.0 | 70 | 0.3179 | 0.7474 | | 0.2959 | 2.0 | 140 | 0.2695 | 0.7916 | | 0.2036 | 3.0 | 210 | 0.2527 | 0.8086 | | 8606c457fc346bdcecb3ba03057ac40b |
['apache-2.0'] | [] | false | ```python import jieba_fast from transformers import BertTokenizer from transformers import BigBirdModel class JiebaTokenizer(BertTokenizer): def __init__( self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs ): super().__init__(*args, **kwargs) self.pre_tokenizer = pre_tokenizer def _tokenize(self, text, *arg, **kwargs): split_tokens = [] for text in self.pre_tokenizer(text): if text in self.vocab: split_tokens.append(text) else: split_tokens.extend(super()._tokenize(text)) return split_tokens model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-tiny-1024') tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-tiny-1024') ``` https://github.com/LowinLi/chinese-bigbird | 6eb4877bb08f8b1cfbcbdfd0d0b33479 |
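The `JiebaTokenizer` above pre-segments text with jieba and only falls back to the parent tokenizer's subword splitting when a segmented word is missing from the vocabulary. The fallback pattern itself is easy to isolate; a dependency-free sketch where `str.split` and `list` are toy stand-ins for `jieba_fast.cut` and WordPiece:

```python
def tokenize_with_fallback(text, vocab, pre_tokenizer, fallback):
    """Keep pre-tokenized words found in vocab; split the rest with fallback."""
    tokens = []
    for word in pre_tokenizer(text):
        if word in vocab:
            tokens.append(word)
        else:
            tokens.extend(fallback(word))
    return tokens

vocab = {"hello", "world"}  # toy vocabulary
result = tokenize_with_fallback(
    "hello unknown world",
    vocab,
    pre_tokenizer=str.split,  # stands in for jieba_fast.cut
    fallback=list,            # stands in for WordPiece subword splitting
)
print(result)  # ['hello', 'u', 'n', 'k', 'n', 'o', 'w', 'n', 'world']
```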
apache-2.0 | ['generated_from_trainer'] | false | flan-t5-base-squad-swe2 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the squad_v2_sv dataset. It achieves the following results on the evaluation set: - Loss: 1.4248 | 3852dceaf3c8968ea0efda574499a6bf |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 | 34ed3148047c36783fed46d918d66450 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0881 | 1.0 | 890 | 1.6422 | | 1.7772 | 2.0 | 1780 | 1.5586 | | 1.6763 | 3.0 | 2670 | 1.5153 | | 1.6215 | 4.0 | 3560 | 1.4852 | | 1.5912 | 5.0 | 4450 | 1.4629 | | 1.5651 | 6.0 | 5340 | 1.4481 | | 1.5407 | 7.0 | 6230 | 1.4374 | | 1.5278 | 8.0 | 7120 | 1.4308 | | 1.5137 | 9.0 | 8010 | 1.4269 | | 1.5116 | 10.0 | 8900 | 1.4248 | | 9726d74c2472e9bfde373dd510e67723 |
apache-2.0 | ['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event'] | false | Model description This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset. | 9e129157da7d60d59cdc5e801b1b983d |
apache-2.0 | ['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event'] | false | Benchmark WER result: | | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |---|---|---| |without LM| 15.74 | 25.10 | |with 4-grams LM| 15.37 | 16.09 | | adb402133236ea12939a7651be53ea9c |
apache-2.0 | ['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event'] | false | Benchmark CER result: | | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |---|---|---| |without LM| 9.51 | 9.95 | |with 4-grams LM| 6.91 | 7.15 | | cff9b2f9d92418f2112497c5cb5f7e49 |
apache-2.0 | ['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event'] | false | Evaluation Please use the eval.py file to run the evaluation: ```bash python eval.py --model_id vutankiet2901/wav2vec2-large-xlsr-53-ja --dataset mozilla-foundation/common_voice_7_0 --config ja --split test --log_outputs ``` | ed9b91f10d334ca874424287703ddb2d |
apache-2.0 | ['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP | 44f23f916004a60e23c8a8a9294092c7 |
apache-2.0 | ['automatic-speech-recognition', 'common-voice', 'hf-asr-leaderboard', 'ja', 'robust-speech-event'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 4.7776 | 4.73 | 1500 | 2.9540 | 0.9772 | 0.8489 | | 1.9076 | 9.46 | 3000 | 0.7146 | 0.5371 | 0.2484 | | 1.507 | 14.2 | 4500 | 0.5843 | 0.4689 | 0.2196 | | 1.3742 | 18.93 | 6000 | 0.5286 | 0.4321 | 0.1988 | | 1.2776 | 23.66 | 7500 | 0.5007 | 0.4056 | 0.1870 | | 1.2003 | 28.39 | 9000 | 0.4676 | 0.3848 | 0.1802 | | 1.1281 | 33.12 | 10500 | 0.4524 | 0.3694 | 0.1720 | | 1.0657 | 37.85 | 12000 | 0.4449 | 0.3590 | 0.1681 | | 1.0129 | 42.59 | 13500 | 0.4266 | 0.3423 | 0.1617 | | 0.9691 | 47.32 | 15000 | 0.4214 | 0.3375 | 0.1587 | | 390df9a93e84f5dad798431975aa514a |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | Versatile Diffusion V1.0 Model Card We built **Versatile Diffusion (VD), the first unified multi-flow multimodal diffusion framework**, as a step towards **Universal Generative AI**. Versatile Diffusion can natively support image-to-text, image-variation, text-to-image, and text-variation, and can be further extended to other applications such as semantic-style disentanglement, image-text dual-guided generation, latent image-to-text-to-image editing, and more. Future versions will support more modalities such as speech, music, video and 3D. Resources for more information: [GitHub](https://github.com/SHI-Labs/Versatile-Diffusion), [arXiv](https://arxiv.org/abs/2211.08332). | 1589ff856665ec788f1f15142d07d45a |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | Model Details One single flow of Versatile Diffusion contains a VAE, a diffuser, and a context encoder, and thus handles one task (e.g., text-to-image) under one data type (e.g., image) and one context type (e.g., text). The multi-flow structure of Versatile Diffusion is shown in the following diagram: <p align="center"> <img src="https://huggingface.co/shi-labs/versatile-diffusion-model/resolve/main/assets/figures/vd_combined.png" width="99%"> </p> - **Developed by:** Xingqian Xu, Atlas Wang, Eric Zhang, Kai Wang, and Humphrey Shi - **Model type:** Diffusion-based multimodal generation model - **Language(s):** English - **License:** MIT - **Resources for more information:** [GitHub Repository](https://github.com/SHI-Labs/Versatile-Diffusion), [Paper](https://arxiv.org/abs/2211.08332). - **Cite as:** ``` @article{xu2022versatile, title = {Versatile Diffusion: Text, Images and Variations All in One Diffusion Model}, author = {Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi}, year = 2022, url = {https://arxiv.org/abs/2211.08332}, eprint = {2211.08332}, archiveprefix = {arXiv}, primaryclass = {cs.CV} } ``` | 47f7a085f4c5dc6c1f54e96f6c0314ba |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | Usage You can use the model both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [SHI-Labs Versatile Diffusion codebase](https://github.com/SHI-Labs/Versatile-Diffusion). | a04104b1173f47d02ef176b2f595fcd5 |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | 🧨 Diffusers Diffusers lets you use either a unified pipeline or more memory-efficient, task-specific pipelines. **Make sure to install `transformers` from `"main"` in order to use this model:** ``` pip install git+https://github.com/huggingface/transformers ``` | 99b13344f4410ce3c159647784ec48d4 |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | VersatileDiffusionPipeline To use Versatile Diffusion for all tasks, it is recommended to use the [`VersatileDiffusionPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/versatile_diffusion | 3860bb2a0e1eee01539cfbbeefef89f3 |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | ! pip install git+https://github.com/huggingface/transformers diffusers torch from diffusers import VersatileDiffusionPipeline import torch import requests from io import BytesIO from PIL import Image pipe = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16) pipe = pipe.to("cuda") | be979ff1a990757a0c617fc4c578547f |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | Task Specific The task specific pipelines load only the weights that are needed onto GPU. You can find all task specific pipelines [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/versatile_diffusion | b3101e6a53508ddf6292212e96c13be3 |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | Text to Image ```py from diffusers import VersatileDiffusionTextToImagePipeline import torch pipe = VersatileDiffusionTextToImagePipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16) pipe.remove_unused_weights() pipe = pipe.to("cuda") generator = torch.Generator(device="cuda").manual_seed(0) image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0] image.save("./astronaut.png") ``` | 17293878abdf0ece14c08d58567fb4bc |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | download an initial image url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" response = requests.get(url) image = Image.open(BytesIO(response.content)).convert("RGB") pipe = VersatileDiffusionImageVariationPipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16) pipe = pipe.to("cuda") generator = torch.Generator(device="cuda").manual_seed(0) image = pipe(image, generator=generator).images[0] image.save("./car_variation.png") ``` | e18b6a89700d8d7b631de422c88c6630 |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | download an initial image url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" response = requests.get(url) image = Image.open(BytesIO(response.content)).convert("RGB") text = "a red car in the sun" pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16) pipe.remove_unused_weights() pipe = pipe.to("cuda") generator = torch.Generator(device="cuda").manual_seed(0) text_to_image_strength = 0.75 image = pipe(prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator).images[0] image.save("./red_car.png") ``` | 7a6c0c7a16168891d2c363b829a25cb8 |
mit | ['generation', 'text-to-image', 'image-variation', 'image-to-text', 'image-editing', 'vision'] | false | Cautions, Biases, and Content Acknowledgment We would like to raise users' awareness of this demo's potential issues and concerns. Like previous large foundation models, Versatile Diffusion could be problematic in some cases, partially due to the imperfect training data and pretrained network (VAEs / context encoders) with limited scope. In its future research phase, VD may do better on tasks such as text-to-image, image-to-text, etc., with the help of more powerful VAEs, more sophisticated network designs, and cleaner data. So far, we have kept all features available for research testing both to show the great potential of the VD framework and to collect important feedback to improve the model in the future. We welcome researchers and users to report issues with the HuggingFace community discussion feature or email the authors. Beware that VD may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography, and violence. VD was trained on the LAION-2B dataset, which scraped non-curated online images and text, and may contain unintended exceptions as we removed illegal content. VD in this demo is meant only for research purposes. | 300dab854020c593d04b02fa67935102 |
apache-2.0 | ['generated_from_trainer'] | false | finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0890 - Accuracy: 0.9750 - F1: 0.9873 | fc3e310fc7a8bf1cf7d52cb029dbef99 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 641b271d09069435813cc9f569c14609 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 104 | 0.0485 | 0.9885 | 0.9942 | | No log | 2.0 | 208 | 0.0558 | 0.9857 | 0.9927 | | No log | 3.0 | 312 | 0.0501 | 0.9828 | 0.9913 | | No log | 4.0 | 416 | 0.0593 | 0.9828 | 0.9913 | | 0.04 | 5.0 | 520 | 0.0653 | 0.9828 | 0.9913 | | 24da40438aa151975830caa73ffe0b4e |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-finetuned-effectiveFeedback This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 | 123305de10cc8ddd7513bc7b054ecea4 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 361 | 0.0003 | | 0.0139 | 2.0 | 722 | 0.0001 | | 0.0002 | 3.0 | 1083 | 0.0001 | | c28e63b8732d0949301bda13555bbe91 |
gpl-3.0 | ['bicleaner-ai'] | false | Bicleaner AI full model for en-sq Bicleaner AI is a tool that aims to detect noisy sentence pairs in a parallel corpus. It indicates the likelihood of a pair of sentences being mutual translations (with a value near 1) or not (with a value near 0). Sentence pairs considered very noisy are scored with 0. See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai | 3f00d5cdcac9a58d5706f5671d990e2b |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | MultiBERTs Seed 4 Checkpoint 300k (uncased) This is the seed-4 intermediate checkpoint (300k steps) of the MultiBERTs (pretrained BERT) model, trained on English text using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani). | d8e9dec6e4719533261ad43747c38f2c |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-300k') model = BertModel.from_pretrained("multiberts-seed-4-300k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | c8d225dd594fb5e833e42ac68741ba37 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Wav2Vec2-Large-XLSR-Indonesian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [Indonesian Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181). When using this model, make sure that your speech input is sampled at 16kHz. | 6c0ce5a68d176547fe32ae33e08f0b86 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "id", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian") model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian") resampler = torchaudio.transforms.Resample(48_000, 16_000) | c6f785918b3f80fec645fa5f2a732a96 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` | 8086de4c24d27acdf08d60ac6aeddc67 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Evaluation The model can be evaluated as follows on the Indonesian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "id", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian") model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) | 03a760ce224fbf54b0b9544551b9e356 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 51.69 % | 536bc54c01c990f4caa7f5c3f8004413 |
apache-2.0 | ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week'] | false | Training The Artificial Common Voice `train`, `validation`, and ... datasets were used for training. The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition) (will be available soon) | 8fd1d74e4f633d1b61ad4451d80f89c8 |
mit | [] | false | Swedish BERT models for sentiment analysis [Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and have been fine-tuned to solve a multi-label sentiment analysis task. The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes. The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums. The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data. The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified. | 7d7bce522bfbe12a6a5012d5567f953c |
mit | [] | false | Swedish-Sentiment-Fear The model can be imported from the transformers library by running from transformers import BertForSequenceClassification, BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear") classifier_fear= BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear") When the model and tokenizer are initialized the model can be used for inference. | 35aead6bde63e0606816343b2dde5d80 |
mit | [] | false | Verification metrics During training, the model had maximized validation metrics at the following classification breakpoint. | Classification Breakpoint | F-score | Precision | Recall | |:-------------------------:|:-------:|:---------:|:------:| | 0.45 | 0.8754 | 0.8618 | 0.8895 | | ca7c269abe7f1f1f30f3ced6949da987 |
mit | [] | false | Swedish-Sentiment-Violence The model can be imported from the transformers library by running from transformers import BertForSequenceClassification, BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence") classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence") When the model and tokenizer are initialized the model can be used for inference. | 5e481f9e7bac99501d8855208f4ad7f7 |
mit | [] | false | Verification metrics During training, the model had maximized validation metrics at the following classification breakpoint. | Classification Breakpoint | F-score | Precision | Recall | |:-------------------------:|:-------:|:---------:|:------:| | 0.35 | 0.7677 | 0.7456 | 0.791 | | 8136d1bb5d8747430cd34187987a6b99 |
apache-2.0 | [] | false | distilbert-base-en-ja-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). | 1ebb6de0c6e95f89ef9ba872f1626223 |
apache-2.0 | [] | false | How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-ja-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-ja-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). | cc1ff0091ff5cd16c4669fad13e3fd5f |
apache-2.0 | ['translation'] | false | ita-cat * source group: Italian * target group: Catalan * OPUS readme: [ita-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-cat/README.md) * model: transformer-align * source language(s): ita * target language(s): cat * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.eval.txt) | c460805b34446be4c1fc8208802f6183 |
apache-2.0 | ['translation'] | false | System Info: - hf_name: ita-cat - source_languages: ita - target_languages: cat - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-cat/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['it', 'ca'] - src_constituents: {'ita'} - tgt_constituents: {'cat'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.test.txt - src_alpha3: ita - tgt_alpha3: cat - short_pair: it-ca - chrF2_score: 0.706 - bleu: 52.5 - brevity_penalty: 0.993 - ref_len: 2074.0 - src_name: Italian - tgt_name: Catalan - train_date: 2020-06-16 - src_alpha2: it - tgt_alpha2: ca - prefer_old: False - long_pair: ita-cat - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41 | 8740cf585fd83e4013e5769c232bad67 |
apache-2.0 | ['generated_from_trainer'] | false | mobilebert_add_GLUE_Experiment_logit_kd_pretrain_stsb This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-complete](https://huggingface.co/gokuls/mobilebert_add_pre-training-complete) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: nan - Mse: nan | f36311be3a06f925ae24988841aff6c9 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Mse | |:-------------:|:-----:|:----:|:---------------:|:---:| | 0.0 | 1.0 | 45 | nan | nan | | 0.0 | 2.0 | 90 | nan | nan | | 0.0 | 3.0 | 135 | nan | nan | | 0.0 | 4.0 | 180 | nan | nan | | 0.0 | 5.0 | 225 | nan | nan | | 0.0 | 6.0 | 270 | nan | nan | | 3f84f5e6e785b12809ddd0985e99c837 |
cc-by-4.0 | ['answer extraction'] | false | Model Card of `lmqg/mt5-small-jaquad-ae` This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for answer extraction on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). | a4a1e7608095d096b3f68c4e836042de |
cc-by-4.0 | ['answer extraction'] | false | Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** ja - **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) | 38311e7e382bacf97d76621f6b6de686 |
cc-by-4.0 | ['answer extraction'] | false | model prediction answers = model.generate_a("フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-jaquad-ae") output = pipe("『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。<hl>前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる3万5000部が刷られた。<hl>他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。") ``` | 8c1253d3a7b2095cb4429f1419464dd7 |
cc-by-4.0 | ['answer extraction'] | false | Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_jaquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 23.99 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | AnswerF1Score | 24.01 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | BERTScore | 75.65 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_1 | 30.11 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_2 | 27.39 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_3 | 25.24 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | Bleu_4 | 23.53 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | METEOR | 25.23 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | MoverScore | 62.71 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | ROUGE_L | 31.89 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | | f7e0186023a117cf4c994f16de9e066f |
cc-by-4.0 | ['answer extraction'] | false | Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_jaquad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 32 - epoch: 6 - batch: 32 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-jaquad-ae/raw/main/trainer_config.json). | 9000dd432633f3c483c65be3c2006606 |
cc-by-sa-4.0 | ['generated_from_trainer'] | false | AkeyLegalBert6 This model is a fine-tuned version of [hatemestinbejaia/AkeyLegalBert](https://huggingface.co/hatemestinbejaia/AkeyLegalBert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3634 | 05eb9e71c1ba9a576ff209cc4da5919d |
cc-by-sa-4.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3875 | 1.0 | 18422 | 3.5239 | | 3.44 | 2.0 | 36844 | 3.4214 | | 3.4738 | 3.0 | 55266 | 3.3597 | | b76962e4ff320e6754d74f2c3900955e |
mit | ['audio', 'speech-translation', 'automatic-speech-recognition'] | false | S2T-SMALL-COVOST2-FR-EN-ST `s2t-small-covost2-fr-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text) | 525b5f66f77d4d328ea2e6246cde965c |
mit | ['audio', 'speech-translation', 'automatic-speech-recognition'] | false | Intended uses & limitations This model can be used for end-to-end French speech to English text translation. See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints. | 13c8d09b386648f8722088ced4a66b03 |
mit | ['audio', 'speech-translation', 'automatic-speech-recognition'] | false | How to use As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model. *Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the filter bank features. Make sure to install the `torchaudio` package before running this example.* You could either install those as extra speech dependencies with `pip install "transformers[speech,sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`. ```python import torch from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration from datasets import load_dataset import soundfile as sf model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-fr-en-st") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-fr-en-st") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset( "patrickvonplaten/librispeech_asr_dummy", "clean", split="validation" ) ds = ds.map(map_to_array) inputs = processor( ds["speech"][0], sampling_rate=48_000, return_tensors="pt" ) generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"]) translation = processor.batch_decode(generated_ids, skip_special_tokens=True) ``` | 90da0fc0e9e8b97eaa5f106a8ec0bd14 |
mit | ['audio', 'speech-translation', 'automatic-speech-recognition'] | false | Training data The s2t-small-covost2-fr-en-st is trained on the French-English subset of [CoVoST2](https://github.com/facebookresearch/covost). CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster ST research with the largest ever open dataset. | 7420e045854abaf193b0993837b31f1b |
apache-2.0 | ['generated_from_trainer'] | false | mobilebert_add_GLUE_Experiment_sst2_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.6814 - Accuracy: 0.5562 | c265156953f37da49cf2b7e316de69fe |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6662 | 1.0 | 527 | 0.6814 | 0.5562 | | 0.5954 | 2.0 | 1054 | 0.7090 | 0.5493 | | 0.5689 | 3.0 | 1581 | 0.7150 | 0.5596 | | 0.5546 | 4.0 | 2108 | 0.6893 | 0.5539 | | 0.5473 | 5.0 | 2635 | 0.7051 | 0.5872 | | 0.5421 | 6.0 | 3162 | 0.6983 | 0.5872 | | a783934adfce3d6ae6ed083b32d9f643 |
mit | ['generated_from_trainer'] | false | bert-base-historic-dutch-cased-squad-nl This model is a fine-tuned version of [dbmdz/bert-base-historic-dutch-cased](https://huggingface.co/dbmdz/bert-base-historic-dutch-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5392 | 010d4edeab1d2401bd098d27d69bc58f |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8534 | 1.0 | 4268 | 1.6793 | | 1.4998 | 2.0 | 8536 | 1.5392 | | dacc795d8b40c7e6814cf0d9faa846c4 |
apache-2.0 | ['generated_from_trainer'] | false | bert-large-uncased-finetuned-JD_CV This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.3896 | 5cd56fb467791945549bc0564ee21259 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 1 | 8.2520 | | No log | 2.0 | 2 | 7.5931 | | No log | 3.0 | 3 | 7.3896 | | 2c0fed0c6ac0d24a3e1e1fb34fff2bdc |
mit | ['generated_from_keras_callback'] | false | sachinsahu/Paper-clustered This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2563 - Train End Logits Accuracy: 0.9132 - Train Start Logits Accuracy: 0.9306 - Validation Loss: 1.4623 - Validation End Logits Accuracy: 0.5 - Validation Start Logits Accuracy: 0.75 - Epoch: 0 | 47d1d2123fe05aad657c37369f075820 |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 | ba69a596ccd86aad8ae47f1e3867b596 |
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.2563 | 0.9132 | 0.9306 | 1.4623 | 0.5 | 0.75 | 0 | | 5bd77349ccaef3fbd6060d2fed6d64df |
apache-2.0 | ['translation'] | false | eng-gem * source group: English * target group: Germanic languages * OPUS readme: [eng-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md) * model: transformer * source language(s): eng * target language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.eval.txt) | e89b7aae1c56803230444eebb46e7d22 |
apache-2.0 | ['translation'] | false | Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-engdeu.eng.deu | 20.9 | 0.521 | | news-test2008-engdeu.eng.deu | 21.1 | 0.511 | | newstest2009-engdeu.eng.deu | 20.5 | 0.516 | | newstest2010-engdeu.eng.deu | 22.5 | 0.526 | | newstest2011-engdeu.eng.deu | 20.5 | 0.508 | | newstest2012-engdeu.eng.deu | 20.8 | 0.507 | | newstest2013-engdeu.eng.deu | 24.6 | 0.534 | | newstest2015-ende-engdeu.eng.deu | 27.9 | 0.569 | | newstest2016-ende-engdeu.eng.deu | 33.2 | 0.607 | | newstest2017-ende-engdeu.eng.deu | 26.5 | 0.560 | | newstest2018-ende-engdeu.eng.deu | 39.4 | 0.648 | | newstest2019-ende-engdeu.eng.deu | 35.0 | 0.613 | | Tatoeba-test.eng-afr.eng.afr | 56.5 | 0.745 | | Tatoeba-test.eng-ang.eng.ang | 6.7 | 0.154 | | Tatoeba-test.eng-dan.eng.dan | 58.0 | 0.726 | | Tatoeba-test.eng-deu.eng.deu | 40.3 | 0.615 | | Tatoeba-test.eng-enm.eng.enm | 1.4 | 0.215 | | Tatoeba-test.eng-fao.eng.fao | 7.2 | 0.304 | | Tatoeba-test.eng-frr.eng.frr | 5.5 | 0.159 | | Tatoeba-test.eng-fry.eng.fry | 19.4 | 0.433 | | Tatoeba-test.eng-gos.eng.gos | 1.0 | 0.182 | | Tatoeba-test.eng-got.eng.got | 0.3 | 0.012 | | Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.130 | | Tatoeba-test.eng-isl.eng.isl | 23.4 | 0.505 | | Tatoeba-test.eng-ksh.eng.ksh | 1.1 | 0.141 | | Tatoeba-test.eng-ltz.eng.ltz | 20.3 | 0.379 | | Tatoeba-test.eng.multi | 46.5 | 0.641 | | Tatoeba-test.eng-nds.eng.nds | 20.6 | 0.458 | | Tatoeba-test.eng-nld.eng.nld | 53.4 | 0.702 | | Tatoeba-test.eng-non.eng.non | 0.6 | 0.166 | | Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.679 | | Tatoeba-test.eng-pdc.eng.pdc | 3.9 | 0.189 | | Tatoeba-test.eng-sco.eng.sco | 33.0 | 0.542 | | Tatoeba-test.eng-stq.eng.stq | 2.3 | 0.274 | | Tatoeba-test.eng-swe.eng.swe | 57.9 | 0.719 | | Tatoeba-test.eng-swg.eng.swg | 1.2 | 0.171 | | Tatoeba-test.eng-yid.eng.yid | 7.2 | 0.304 | | 73430726dd08bbf533f5734fe1c538e6 |
apache-2.0 | ['translation'] | false | System Info: - hf_name: eng-gem - source_languages: eng - target_languages: gem - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem'] - src_constituents: {'eng'} - tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: gem - short_pair: en-gem - chrF2_score: 0.6409999999999999 - bleu: 46.5 - brevity_penalty: 0.9790000000000001 - ref_len: 73328.0 - src_name: English - tgt_name: Germanic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: gem - prefer_old: False - long_pair: eng-gem - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41 | 4bc89499681871ed3f2b37b81606c472 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 | cccee7af987677bd0301e486664dd5f4 |
apache-2.0 | ['audio', 'speech', 'wav2vec2', 'Russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch'] | false | Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset in Russian [Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using a single-speaker dataset. | 3bb3456993c7dce7bd0ee0de54dbba78 |
apache-2.0 | ['audio', 'speech', 'wav2vec2', 'Russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch'] | false | Use this model ```python from transformers import AutoTokenizer, Wav2Vec2ForCTC tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-russian") model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-russian") ``` | b45ab8dff81388bd46d0c435ae8db60a |
apache-2.0 | ['audio', 'speech', 'wav2vec2', 'Russian-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch'] | false | Example test with Common Voice Dataset ```python import re import torchaudio from datasets import load_dataset dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("'", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` | 3ec03ade7be2a9c35d410f6396a02cf4 |
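Several of the ASR rows above report word error rate (WER), typically via `wer.compute(...)` from the `datasets` metric API. As a minimal, self-contained sketch of what that metric computes (independent of any model listed here), WER is the word-level Levenshtein distance between hypothesis and reference, normalized by the number of reference words:

```python
def wer(reference: str, prediction: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), prediction.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of three reference words:
print(round(100 * wer("halo apa kabar", "halo apa kabur"), 2))  # → 33.33
```

This hand-rolled version is only meant to make the definition concrete; the cards above rely on the library metric, which handles batching and edge cases.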