license: string (2–30 chars)
tags: string (2–513 chars)
is_nc: bool (1 class)
readme_section: string (201–597k chars)
hash: string (32 chars)
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-rte-target-glue-cola This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7986 - Matthews Correlation: 0.1168
4d5ca755b8bfb741503f9ae3cf9fc59f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6097 | 1.87 | 500 | 0.6209 | 0.0 | | 0.6011 | 3.73 | 1000 | 0.6173 | 0.0 | | 0.5827 | 5.6 | 1500 | 0.6197 | 0.0622 | | 0.5534 | 7.46 | 2000 | 0.6410 | 0.0939 | | 0.5244 | 9.33 | 2500 | 0.6664 | 0.1184 | | 0.5087 | 11.19 | 3000 | 0.6684 | 0.1327 | | 0.4867 | 13.06 | 3500 | 0.6789 | 0.0999 | | 0.4693 | 14.93 | 4000 | 0.7124 | 0.1109 | | 0.4483 | 16.79 | 4500 | 0.7333 | 0.1388 | | 0.4303 | 18.66 | 5000 | 0.7486 | 0.1287 | | 0.4105 | 20.52 | 5500 | 0.7961 | 0.1321 | | 0.4046 | 22.39 | 6000 | 0.7986 | 0.1168 |
4b05a25787011b5c1607d9c61855758f
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5090 - Wer: 0.3435
c17a37badb3df266dc8e1582ca23e71b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5501 | 1.0 | 500 | 1.9752 | 0.9950 | | 0.8608 | 2.01 | 1000 | 0.5051 | 0.5035 | | 0.43 | 3.01 | 1500 | 0.4485 | 0.4525 | | 0.2921 | 4.02 | 2000 | 0.4658 | 0.4332 | | 0.2248 | 5.02 | 2500 | 0.4262 | 0.4268 | | 0.1863 | 6.02 | 3000 | 0.4126 | 0.3977 | | 0.1542 | 7.03 | 3500 | 0.4795 | 0.3987 | | 0.1374 | 8.03 | 4000 | 0.4882 | 0.3982 | | 0.1231 | 9.04 | 4500 | 0.4312 | 0.3790 | | 0.1082 | 10.04 | 5000 | 0.4344 | 0.3679 | | 0.0949 | 11.04 | 5500 | 0.4720 | 0.3769 | | 0.0897 | 12.05 | 6000 | 0.5382 | 0.3706 | | 0.0816 | 13.05 | 6500 | 0.4946 | 0.3618 | | 0.0726 | 14.06 | 7000 | 0.5383 | 0.3630 | | 0.0656 | 15.06 | 7500 | 0.4944 | 0.3693 | | 0.059 | 16.06 | 8000 | 0.5096 | 0.3639 | | 0.0572 | 17.07 | 8500 | 0.5066 | 0.3572 | | 0.0559 | 18.07 | 9000 | 0.5366 | 0.3610 | | 0.0468 | 19.08 | 9500 | 0.5103 | 0.3604 | | 0.0413 | 20.08 | 10000 | 0.5126 | 0.3496 | | 0.044 | 21.08 | 10500 | 0.5055 | 0.3524 | | 0.0351 | 22.09 | 11000 | 0.5526 | 0.3515 | | 0.0328 | 23.09 | 11500 | 0.4884 | 0.3512 | | 0.032 | 24.1 | 12000 | 0.5167 | 0.3474 | | 0.0271 | 25.1 | 12500 | 0.5027 | 0.3495 | | 0.0229 | 26.1 | 13000 | 0.5076 | 0.3444 | | 0.0252 | 27.11 | 13500 | 0.5122 | 0.3464 | | 0.0224 | 28.11 | 14000 | 0.5133 | 0.3447 | | 0.0236 | 29.12 | 14500 | 0.5090 | 0.3435 |
28e9583b344c9f210122fd7236333936
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s50 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
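For quick transcription with `transformers`, a minimal sketch (the repository namespace below is an assumption inferred from the model name; use any 16 kHz German recording):

```python
from transformers import pipeline

# Assumed repository id; replace with the actual namespace of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_de_vp-100k_age_teens-0_sixties-10_s50",
)

# The pipeline decodes the file and feeds 16 kHz audio to the model.
print(asr("sample_de.wav")["text"])
```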
edc29df6e958eb80f5b37c4bc967a42b
mit
[]
false
Artist_Yukiko Kanagai on Stable Diffusion This is the `<Yukiko Kanagai >` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Yukiko Kanagai > 0](https://huggingface.co/sd-concepts-library/artist-yukiko-kanagai/resolve/main/concept_images/0.jpeg) ![<Yukiko Kanagai > 1](https://huggingface.co/sd-concepts-library/artist-yukiko-kanagai/resolve/main/concept_images/4.jpeg) ![<Yukiko Kanagai > 2](https://huggingface.co/sd-concepts-library/artist-yukiko-kanagai/resolve/main/concept_images/1.jpeg) ![<Yukiko Kanagai > 3](https://huggingface.co/sd-concepts-library/artist-yukiko-kanagai/resolve/main/concept_images/3.jpeg) ![<Yukiko Kanagai > 4](https://huggingface.co/sd-concepts-library/artist-yukiko-kanagai/resolve/main/concept_images/2.jpeg)
64e0da67c64364f694103f90f5dd6026
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4025 - F1: 0.6778
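A hedged usage sketch with the token-classification pipeline (the repository id is a placeholder, since only the model name is given here):

```python
from transformers import pipeline

# Placeholder repository id; substitute the actual namespace of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean works for Google in Mountain View, California."))
```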
4e538ba4c46fe58260b8ba31a4c710c2
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1069 | 1.0 | 50 | 0.5201 | 0.5010 | | 0.4975 | 2.0 | 100 | 0.4503 | 0.6198 | | 0.3705 | 3.0 | 150 | 0.4025 | 0.6778 |
14cd51481f582b31e3abf5e74ef65665
mit
[]
false
Description This model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of MedRoBERTa.nl can be found at https://github.com/cltl-students/verkijk_stella_rma_thesis_dutch_medical_language_model.
88fb3bed2e693f4766d79341bf39dd40
mit
[]
false
Privacy By anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task.
0496fefd2176337fee46aedd575bd1a9
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3150 - Accuracy: 0.8633 - F1: 0.8656
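A hedged inference sketch with the text-classification pipeline (the repository id is a placeholder for this checkpoint):

```python
from transformers import pipeline

# Placeholder repository id; substitute the actual namespace of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/finetuning-sentiment-model-3000-samples",
)

print(classifier(["I loved this movie!", "The plot made no sense at all."]))
```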
5f8dc79f725fed85d3b2bcf7eaf674be
cc-by-4.0
['questions and answers generation']
false
Model Card of `lmqg/mt5-small-koquad-qag` This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question & answer pair generation task on [lmqg/qag_koquad](https://huggingface.co/datasets/lmqg/qag_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
54068691efc3dda87095b4d4196e5ee8
cc-by-4.0
['questions and answers generation']
false
Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** ko - **Training data:** [lmqg/qag_koquad](https://huggingface.co/datasets/lmqg/qag_koquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
8c4d66666cde768f88109b4b913d733a
cc-by-4.0
['questions and answers generation']
false
model prediction question_answer_pairs = model.generate_qa("1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-koquad-qag") output = pipe("1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.") ```
683d1439b3ab9d9bff66e40e17727215
cc-by-4.0
['questions and answers generation']
false
Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-koquad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_koquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-------------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 74.23 | default | [lmqg/qag_koquad](https://huggingface.co/datasets/lmqg/qag_koquad) | | QAAlignedF1Score (MoverScore) | 75.06 | default | [lmqg/qag_koquad](https://huggingface.co/datasets/lmqg/qag_koquad) | | QAAlignedPrecision (BERTScore) | 74.29 | default | [lmqg/qag_koquad](https://huggingface.co/datasets/lmqg/qag_koquad) | | QAAlignedPrecision (MoverScore) | 75.14 | default | [lmqg/qag_koquad](https://huggingface.co/datasets/lmqg/qag_koquad) | | QAAlignedRecall (BERTScore) | 74.2 | default | [lmqg/qag_koquad](https://huggingface.co/datasets/lmqg/qag_koquad) | | QAAlignedRecall (MoverScore) | 75.04 | default | [lmqg/qag_koquad](https://huggingface.co/datasets/lmqg/qag_koquad) |
7c8dd4d889bce17d1f2ea61dae182d26
cc-by-4.0
['questions and answers generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_koquad - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: None - model: google/mt5-small - max_length: 512 - max_length_output: 256 - epoch: 13 - batch: 8 - lr: 0.0005 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 16 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-koquad-qag/raw/main/trainer_config.json).
c8e1f814c467f962b51e519595261e96
apache-2.0
['generated_from_trainer']
false
vit-base-patch16-224-in21k-finetuned-emotion-classification-balanced-data-fer2013-affecthq-v0.0 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0842 - Accuracy: 0.5958
aa8e9ea5a9d5e43de93d381d05b56201
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 17 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10
f4c259cd99114d1ed6a25917f10cb27c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7482 | 1.0 | 133 | 1.7009 | 0.4323 | | 1.4052 | 2.0 | 266 | 1.3518 | 0.4998 | | 1.2372 | 3.0 | 399 | 1.2425 | 0.5344 | | 1.1663 | 4.0 | 532 | 1.1871 | 0.5494 | | 1.1238 | 5.0 | 665 | 1.1443 | 0.5817 | | 1.0124 | 6.0 | 798 | 1.1228 | 0.5869 | | 1.0262 | 7.0 | 931 | 1.1035 | 0.5920 | | 0.9963 | 8.0 | 1064 | 1.0917 | 0.5934 | | 0.9739 | 9.0 | 1197 | 1.0870 | 0.5948 | | 0.986 | 10.0 | 1330 | 1.0842 | 0.5958 |
a6adcf30958faf091afc413dfad3141f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4969 - Matthews Correlation: 0.4354
b608cb5b99fad05ba883c21b703dd7ba
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5287 | 1.0 | 535 | 0.4969 | 0.4354 |
ee3a366b7dcad51935ae7e4fe183ed08
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'masked-lm']
false
Model Description This is a RoBERTa model pre-trained on Classical Chinese texts, derived from [GuwenBERT-base](https://huggingface.co/ethanyt/guwenbert-base). Its character embeddings have been enhanced to cover both traditional and simplified characters. You can fine-tune `roberta-classical-chinese-base-char` for downstream tasks such as [sentence-segmentation](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-sentence-segmentation), [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-ud-goeswith), and so on.
8ce0b22a6abc94d55f88eadd3923b40b
apache-2.0
['classical chinese', 'literary chinese', 'ancient chinese', 'masked-lm']
false
How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-char") ```
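The loaded checkpoint can then be queried through the fill-mask pipeline; a small illustrative sketch reusing the objects from the snippet above (the example sentence is an assumption):

```python
from transformers import pipeline

# Reuses the tokenizer and model loaded above; the sentence is illustrative.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
sentence = "孟子見梁惠王".replace("見", fill_mask.tokenizer.mask_token)
print(fill_mask(sentence))
```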
d30980c6a7cad4afbe694722b89fe771
apache-2.0
[]
false
Model Description This **DAMO-YOLO-S** model is a small-size object detection model with fast inference speed and high accuracy, trained with **DAMO-YOLO**. DAMO-YOLO is a fast and accurate object detection method developed by the TinyML Team at Alibaba DAMO Data Analytics and Intelligence Lab, and it achieves higher performance than state-of-the-art YOLO-series detectors. DAMO-YOLO extends YOLO with several new techniques, including Neural Architecture Search (NAS) backbones, an efficient Reparameterized Generalized-FPN (RepGFPN), a lightweight head with AlignedOTA label assignment, and distillation enhancement. For more details, please refer to our [Arxiv Report](https://arxiv.org/abs/2211.15444) and [Github Code](https://github.com/tinyvision/DAMO-YOLO). Moreover, here you can find not only powerful models, but also highly efficient training strategies and complete tools from training to deployment.
f5a29e1cf3fe15442ae615b3d2986133
cc-by-4.0
['question generation']
false
Model Card of `research-backup/bart-large-squadshifts-vanilla-reddit-qg` This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question generation task on [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: reddit) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
e23f1498c1b7c7bf83b40bc5ae19d95f
cc-by-4.0
['question generation']
false
Overview - **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large) - **Language:** en - **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (reddit) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
3d8d77f2a1d291b03990a45dc9c885b8
cc-by-4.0
['question generation']
false
model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "research-backup/bart-large-squadshifts-vanilla-reddit-qg") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
21aff96ef78ce5d051e28f0fa48fe920
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-large-squadshifts-vanilla-reddit-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:---------------------------------------------------------------------------| | BERTScore | 92.19 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_1 | 26.22 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_2 | 16.98 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_3 | 11.22 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | Bleu_4 | 7.74 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | METEOR | 20.72 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | MoverScore | 61.37 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) | | ROUGE_L | 24.81 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
f1ac52610585731f3cd69f6439ac8d2d
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squadshifts - dataset_name: reddit - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: facebook/bart-large - max_length: 512 - max_length_output: 32 - epoch: 2 - batch: 32 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-large-squadshifts-vanilla-reddit-qg/raw/main/trainer_config.json).
aa5d769995be7d5f5aec009a4c3a21d3
apache-2.0
['lexical normalization']
false
Fine-tuned ByT5-small for MultiLexNorm (Serbian version) ![model image](https://github.com/ufal/multilexnorm2021/raw/master/img/overall.png) This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages. Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
9a5b4a7807e3c47434fd5a522e1526f0
mit
['conversational']
false
Chat with the model: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
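# A hedged sketch of a single chat turn (not part of the original card):
# append the EOS token to the user message, then sample a reply.
user_input = "Hi, how are you doing today?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```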
c8254bd4854f2c087866becc5ea2cc69
mit
['bart', 'pytorch']
false
BART-IT - FanPage BART-IT is a sequence-to-sequence model, based on the BART architecture that is specifically tailored to the Italian language. The model is pre-trained on a [large corpus of Italian text](https://huggingface.co/datasets/gsarti/clean_mc4_it), and can be fine-tuned on a variety of tasks.
40c5ff0387015c64be0945aebee69dc7
mit
['bart', 'pytorch']
false
Model description The model is a `base-`sized BART model, with a vocabulary size of 52,000 tokens. It has 140M parameters and can be used for any task that requires a sequence-to-sequence model. It is trained from scratch on a large corpus of Italian text, and can be fine-tuned on a variety of tasks.
74a3f41227e349d57c733ba664bc88bc
mit
['bart', 'pytorch']
false
Fine-tuning The model has been fine-tuned for the abstractive summarization task on 3 different Italian datasets: - **This model** [FanPage](https://huggingface.co/datasets/ARTeLab/fanpage) - finetuned model [here](https://huggingface.co/morenolq/bart-it-fanpage) - [IlPost](https://huggingface.co/datasets/ARTeLab/ilpost) - finetuned model [here](https://huggingface.co/morenolq/bart-it-ilpost) - [WITS](https://huggingface.co/datasets/Silvia/WITS) - finetuned model [here](https://huggingface.co/morenolq/bart-it-WITS)
0432f3dea41fac4ec26cf4432158c518
mit
['bart', 'pytorch']
false
Usage In order to use the model, you can use the following code: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("morenolq/bart-it-fanpage") model = AutoModelForSeq2SeqLM.from_pretrained("morenolq/bart-it-fanpage") input_ids = tokenizer.encode("Il modello BART-IT è stato pre-addestrato su un corpus di testo italiano", return_tensors="pt") outputs = model.generate(input_ids, max_length=40, num_beams=4, early_stopping=True) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
b8fd9fcdc0c2d8779690034a6978c322
mit
['bart', 'pytorch']
false
Citation If you find this model useful for your research, please cite the following paper: ```bibtex @Article{BARTIT, AUTHOR = {La Quatra, Moreno and Cagliero, Luca}, TITLE = {BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization}, JOURNAL = {Future Internet}, VOLUME = {15}, YEAR = {2023}, NUMBER = {1}, ARTICLE-NUMBER = {15}, URL = {https://www.mdpi.com/1999-5903/15/1/15}, ISSN = {1999-5903}, DOI = {10.3390/fi15010015} } ```
e17c9259f9ff3884cf5b2748a291e419
mit
[]
false
Generates ad copy, currently for Amazon shopping ads (fine-tuned for electronics and wearables). **Usage Examples:** Enter the bolded text below to get the Amazon ad generated by the model. **Big savings on the new** Roku Streaming Device **Mothers Day discounts for** Apple Watch Wireless Charger USB Charging Cable **Big savings on the new Sony** **Last minute shopping for Samsung headphones for** You can try entering brand and product names like Samsung Galaxy to see the ad text generator in action. Currently fine-tuned from the EleutherAI/gpt-neo-125M model. **Model Performance:** The model does quite well on the Electronics and Wearables categories on which it has been fine-tuned. There are, however, occasional hallucinations, though the ad copy is mostly coherent. In other domains it doesn't do quite as well, e.g.: Tesla for Christmas today, Honda on sale
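A hedged generation sketch with `transformers` (the repository id is a placeholder; the prompt follows the usage examples above):

```python
from transformers import pipeline

# Placeholder repository id for this fine-tuned GPT-Neo 125M checkpoint.
generator = pipeline("text-generation", model="your-username/gpt-neo-125M-ad-copy")

prompt = "Big savings on the new"
print(generator(prompt, max_length=50, do_sample=True, top_p=0.95)[0]["generated_text"])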
6e7e14deee3e6aa51706dbbc05af4ce9
apache-2.0
['generated_from_trainer']
false
Tagged_One_100v9_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one100v9_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4255 - Precision: 0.3040 - Recall: 0.2132 - F1: 0.2506 - Accuracy: 0.8539
0eeccd9f0408bc57af4ce1b8e09152ed
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 40 | 0.5167 | 0.1936 | 0.0376 | 0.0630 | 0.8004 | | No log | 2.0 | 80 | 0.4406 | 0.2405 | 0.1441 | 0.1802 | 0.8385 | | No log | 3.0 | 120 | 0.4255 | 0.3040 | 0.2132 | 0.2506 | 0.8539 |
bdf7e6a3952062e70e36fc782a808854
apache-2.0
['automatic-speech-recognition', 'pl']
false
exp_w2v2t_pl_vp-sv_s571 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
302a350bc4c1b6d592731e8751e5a717
apache-2.0
[]
false
MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This checkpoint is the original MobileBert Optimized Uncased English: [uncased_L-24_H-128_B-512_A-4_F-4_OPT](https://storage.googleapis.com/cloud-tpu-checkpoints/mobilebert/uncased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz) checkpoint.
478a5c3991aed98ebd9a5d779064ffb1
apache-2.0
[]
false
How to use MobileBERT in `transformers` ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="google/mobilebert-uncased", tokenizer="google/mobilebert-uncased" ) print( fill_mask(f"HuggingFace is creating a {fill_mask.tokenizer.mask_token} that the community uses to solve NLP tasks.") ) ```
b6059ca5a8d63b204b0dcda12bdc1973
mit
[]
false
milady on Stable Diffusion This is the `<milady>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<milady> 0](https://huggingface.co/sd-concepts-library/milady/resolve/main/concept_images/0.jpeg) ![<milady> 1](https://huggingface.co/sd-concepts-library/milady/resolve/main/concept_images/2.jpeg) ![<milady> 2](https://huggingface.co/sd-concepts-library/milady/resolve/main/concept_images/1.jpeg) ![<milady> 3](https://huggingface.co/sd-concepts-library/milady/resolve/main/concept_images/3.jpeg)
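A hedged loading sketch with `diffusers` (the base checkpoint and prompt are assumptions; any SD v1-compatible base should work with this embedding):

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed v1-compatible base checkpoint; the concept repository supplies the embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/milady")

image = pipe("a photo of <milady> drinking coffee").images[0]
image.save("milady.png")
```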
235a759dafe9ff5b13a7bae5e1fea8cd
cc-by-sa-4.0
[]
false
Model description This is a Japanese RoBERTa base model pre-trained on Japanese Wikipedia and the Japanese portion of CC-100. This model is trained with character-level tokenization and whole word masking.
2d52303135ee86bac7f6939919783ef6
cc-by-sa-4.0
[]
false
How to use You can use this model for masked language modeling as follows: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('ku-nlp/roberta-base-japanese-char-wwm') model = AutoModelForMaskedLM.from_pretrained('ku-nlp/roberta-base-japanese-char-wwm') sentence = '京都大学で自然言語処理を[MASK]する。' encoding = tokenizer(sentence, return_tensors='pt') ... ``` You can fine-tune this model on downstream tasks.
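Continuing from the encoding above, a small sketch (not part of the original card) of reading out the top predictions for the masked position:

```python
import torch

with torch.no_grad():
    output = model(**encoding)

# Locate the [MASK] position and list the five most likely characters.
mask_index = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = output.logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```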
d195f8a3c3686f4e42fecd53ba6652a8
cc-by-sa-4.0
[]
false
Tokenization There is no need to tokenize texts in advance, and you can give raw texts to the tokenizer. The texts are tokenized into character-level tokens by [sentencepiece](https://github.com/google/sentencepiece).
210ecf93c427dddcc40ef141ddd3ba4c
cc-by-sa-4.0
[]
false
Training procedure This model was trained on Japanese Wikipedia (as of 20220220) and the Japanese portion of CC-100. It took two weeks using 8 NVIDIA A100 GPUs. The following hyperparameters were used during pre-training: - learning_rate: 1e-4 - per_device_train_batch_size: 62 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 3968 - max_seq_length: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear schedule with warmup - training_steps: 330000 - warmup_steps: 10000
e6fff935852f04d499107c211e477847
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2157 - Accuracy: 0.9265 - F1: 0.9267
298c9c76cb334707e807dbd98e29423f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8322 | 1.0 | 250 | 0.3176 | 0.905 | 0.9015 | | 0.2481 | 2.0 | 500 | 0.2157 | 0.9265 | 0.9267 |
4b6316e5e9ac9e5ff3d75c1bd6d47f83
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Please Note! This model is NOT the 19.2M images Characters Model on TrinArt, but an improved version of the original Trin-sama Twitter bot model. This model is intended to retain the original SD's aesthetics as much as possible while nudging the model to anime/manga style. Other TrinArt models can be found at: https://huggingface.co/naclbit/trinart_derrida_characters_v2_stable_diffusion https://huggingface.co/naclbit/trinart_characters_19.2m_stable_diffusion_v1
fa18afdec6aa69c475d133957663db2a
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Diffusers The model has been ported to `diffusers` by [ayan4m1](https://huggingface.co/ayan4m1) and can easily be run from one of the branches: - `revision="diffusers-60k"` for the checkpoint trained on 60,000 steps, - `revision="diffusers-95k"` for the checkpoint trained on 95,000 steps, - `revision="diffusers-115k"` for the checkpoint trained on 115,000 steps. For more information, please have a look at [the "Three flavors" section](
201ac76a703710597625c977acfba0e5
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Gradio We also support a [Gradio](https://github.com/gradio-app/gradio) web ui with diffusers to run inside a colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1RWvik_C7nViiR9bNsu3fvMR3STx6RvDx?usp=sharing)
6e8e5bdf5847fb85b76c69661dfff61e
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
using the 60,000 steps checkpoint pipe = StableDiffusionPipeline.from_pretrained("naclbit/trinart_stable_diffusion_v2", revision="diffusers-60k") pipe.to("cuda") image = pipe("A magical dragon flying in front of the Himalaya in manga style").images[0] image ``` ![dragon](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/a_magical_dragon_himalaya.png) If you want to run the pipeline faster or on different hardware, please have a look at the [optimization docs](https://huggingface.co/docs/diffusers/optimization/fp16).
cb8fab7e390310aac95cb3a80fb944f0
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
!pip install diffusers==0.3.0 from diffusers import StableDiffusionImg2ImgPipeline import requests from PIL import Image from io import BytesIO url = "https://scitechdaily.com/images/Dog-Park.jpg" response = requests.get(url) init_image = Image.open(BytesIO(response.content)).convert("RGB") init_image = init_image.resize((768, 512))
e9e1ce68cfd11633d2bbbdc40efff889
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
using the 115,000 steps checkpoint pipe = StableDiffusionImg2ImgPipeline.from_pretrained("naclbit/trinart_stable_diffusion_v2", revision="diffusers-115k") pipe.to("cuda") images = pipe(prompt="Manga drawing of Brad Pitt", init_image=init_image, strength=0.75, guidance_scale=7.5).images images[0] ``` If you want to run the pipeline faster or on different hardware, please have a look at the [optimization docs](https://huggingface.co/docs/diffusers/optimization/fp16).
d0dac82d8cc81613c154cff4a9f16397
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Stable Diffusion TrinArt/Trin-sama AI finetune v2 trinart_stable_diffusion is a SD model finetuned by about 40,000 assorted high resolution manga/anime-style pictures for 8 epochs. This is the same model running on Twitter bot @trinsama (https://twitter.com/trinsama) Twitterボット「とりんさまAI」@trinsama (https://twitter.com/trinsama) で使用しているSDのファインチューン済モデルです。一定のルールで選別された約4万枚のアニメ・マンガスタイルの高解像度画像を用いて約8エポックの訓練を行いました。
03bd093105c1f6aed844cdbf61087987
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Version 2 The V2 checkpoint uses dropouts, 10,000 more images, and a new tagging strategy, and was trained longer to improve results while retaining the original aesthetics. バージョン2は画像を1万枚追加したほか、ドロップアウトの適用、タグ付けの改善とより長いトレーニング時間により、SDのスタイルを保ったまま出力内容の改善を目指しています。
9abe8225cf934dd8a1fa6bfab59fc391
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Three flavors The step 115000/95000 checkpoints were trained further, but you may use the step 60000 checkpoint instead if the style nudging is too strong. ステップ115000/95000のチェックポイントでスタイルが変わりすぎると感じる場合は、ステップ60000のチェックポイントを使用してみてください。
28e9711c73580e63689a3f70d0706d70
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
img2img If you want to run **latent-diffusion**'s stock ddim img2img script with this model, **use_ema** must be set to False. **latent-diffusion** のscriptsフォルダに入っているddim img2imgをこのモデルで動かす場合、use_emaはFalseにする必要があります。
6c43df7bf8d38d5dd0a976477c3da903
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
Examples Each image was diffused for 50 steps using K. Crowson's k-lms method (from the k-diffusion repo). ![examples](https://pbs.twimg.com/media/FbPO12-VUAAf2CJ?format=jpg&name=900x900) ![examples](https://pbs.twimg.com/media/FbPO65cUIAAga8k?format=jpg&name=900x900) ![examples](https://pbs.twimg.com/media/FbPO_QuVsAAG6xE?format=png&name=900x900)
0ffc153bdd16ec42747161b2562eb3a7
mit
['text2text-generation']
false
Intro Trained on the IndicNLG Suite [IndicQuestionGeneration](https://huggingface.co/datasets/ai4bharat/IndicQuestionGeneration) data for Bengali, this model is fine-tuned from [IndicBART](https://huggingface.co/ai4bharat/IndicBART).
d8498765e942faa1f983500e5748195c
mit
['text2text-generation']
false
Finetuned Command python run_summarization.py --model_name_or_path bnQG_models/checkpoint-32000 --do_eval --train_file train_bn.json --validation_file valid_bn.json --output_dir bnQG_models --overwrite_output_dir --per_device_train_batch_size=2 --per_device_eval_batch_size=4 --predict_with_generate --text_column src --summary_column tgt --save_steps 4000 --evaluation_strategy steps --gradient_accumulation_steps 4 --eval_steps 1000 --learning_rate 0.001 --num_beams 4 --forced_bos_token "<2bn>" --num_train_epochs 10 --warmup_steps 10000
6832f44c508357c01a2e61619d47a723
mit
['text2text-generation']
false
Inference script = "সুভাষ ১৮৯৭ খ্রিষ্টাব্দের ২৩ জানুয়ারি ব্রিটিশ ভারতের অন্তর্গত বাংলা প্রদেশের উড়িষ্যা বিভাগের (অধুনা, ভারতের ওড়িশা রাজ্য) কটকে জন্মগ্রহণ করেন।" answer = "১৮৯৭ খ্রিষ্টাব্দের ২৩ জানুয়ারি" inp = answer +" [SEP] "+script + " </s> <2bn>" inp_tok = tokenizer(inp, add_special_tokens=False, return_tensors="pt", padding=True).input_ids model.eval()
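The snippet above assumes `tokenizer` and `model` are already in scope; a hedged loading sketch following the IndicBART conventions (the checkpoint id is a placeholder):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder id for the fine-tuned Bengali question-generation checkpoint.
tokenizer = AutoTokenizer.from_pretrained(
    "your-username/IndicBART-bnQG",
    do_lower_case=False,
    use_fast=False,
    keep_accents=True,
)
model = AutoModelForSeq2SeqLM.from_pretrained("your-username/IndicBART-bnQG")

# Special-token ids used by the generation call further below.
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
```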
a8cd5e249d068bab555dc8db90abd70a
mit
['text2text-generation']
false
Set dropouts to zero model_output=model.generate(inp_tok, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2bn>") ) decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
bfff15b33218a436a4f5c3227ea93b31
mit
['text2text-generation']
false
Citations @inproceedings{dabre2021indicbart, title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages}, author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, booktitle={Findings of the Association for Computational Linguistics}, } @misc{kumar2022indicnlg, title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages}, author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar}, year={2022}, eprint={2203.05437}, archivePrefix={arXiv}, primaryClass={cs.CL} }
f315d464abda21996ed1ce48843fd496
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
e851893c05ee54a2d4c76f566c7941b5
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ```
77dcdb9e1d64c686507c2de0a2d9573b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2) ------
5889da920c0679b2916cf4376779d241
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 384 word pieces is truncated.
d955afea4385d7608e979170fca25a8b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
8776c40ea4b90b75d9cb438b6d2a72ed
apache-2.0
['vision', 'image-classification']
false
Swin Transformer (large-sized model) Swin Transformer model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer). Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
8caa513d98e1a7d9f843d982be2bd282
apache-2.0
['vision', 'image-classification']
false
Model description The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png) [Source](https://paperswithcode.com/method/swin-transformer)
d720cac1b86ab5ecdb76384a0454619d
apache-2.0
['vision', 'image-classification']
false
Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for fine-tuned versions on a task that interests you.
715dea939e871ef3fa0e0c115fb016e4
apache-2.0
['vision', 'image-classification']
false
How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, SwinForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window7-224") model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window7-224") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits
1af69650d4f5feed600dba2a9dfa051b
apache-2.0
['vision', 'image-classification']
false
model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html).
5a2d04760cbc4a5a3b5e37d42d268c28
apache-2.0
['vision', 'image-classification']
false
BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-14030, author = {Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo}, title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, journal = {CoRR}, volume = {abs/2103.14030}, year = {2021}, url = {https://arxiv.org/abs/2103.14030}, eprinttype = {arXiv}, eprint = {2103.14030}, timestamp = {Thu, 08 Apr 2021 07:53:26 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
3e98d9d24aefa2cc0e16c712a6e73d92
apache-2.0
['image-classification', 'vision']
false
BEiT (large-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
b110c5cf8a1baed27596eab409bf3ba2
apache-2.0
['image-classification', 'vision']
false
How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-224') model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits
c901cecd063b174784df661660027132
apache-2.0
['translation']
false
opus-mt-tum-es * source languages: tum * target languages: es * OPUS readme: [tum-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.eval.txt)
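A short usage sketch with `transformers` (assuming the checkpoint is published as `Helsinki-NLP/opus-mt-tum-es`; the input sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tum-es"  # assumed repository id
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Tumbuka source sentence goes here."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```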
14c4d1ee71d222bd25bba159bb4a08fa
apache-2.0
['generated_from_keras_callback']
false
toanbui1991/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5101 - Train End Logits Accuracy: 0.6065 - Train Start Logits Accuracy: 0.5692 - Validation Loss: 1.1679 - Validation End Logits Accuracy: 0.6823 - Validation Start Logits Accuracy: 0.6523 - Epoch: 0
22014b7c1a21b568af7e4c1b47667be2
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5101 | 0.6065 | 0.5692 | 1.1679 | 0.6823 | 0.6523 | 0 |
490bd6d1d3b2d6ee63c5faca8e6e8dfa
apache-2.0
['generated_from_trainer']
false
results This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2057 - Rouge2 Precision: 0.3564 - Rouge2 Recall: 0.2124 - Rouge2 Fmeasure: 0.256
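A hedged summarization sketch with the `transformers` pipeline (the repository id is a placeholder, since only the training run name `results` is given):

```python
from transformers import pipeline

# Placeholder repository id for this fine-tuned t5-small checkpoint.
summarizer = pipeline("summarization", model="your-username/results")

article = "Replace this with the long input text you want to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```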
049b9f291351f2841d4bfb1ec9fd70b5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | No log | 1.0 | 240 | 0.3146 | 0.2121 | 0.1134 | 0.1424 | | No log | 2.0 | 480 | 0.2444 | 0.2855 | 0.1519 | 0.19 | | 0.6451 | 3.0 | 720 | 0.2195 | 0.3225 | 0.1821 | 0.223 | | 0.6451 | 4.0 | 960 | 0.2078 | 0.355 | 0.2113 | 0.2548 | | 0.2978 | 5.0 | 1200 | 0.2057 | 0.3564 | 0.2124 | 0.256 |
5012b9f5e9a3ee0d5bb5de533cc0b0d8
apache-2.0
[]
false
Cross-Encoder for Quora Duplicate Questions Detection This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class.
8a58642397ddba662c3faa430154977d
apache-2.0
[]
false
Training Data This model was trained on the [Quora Duplicate Questions](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) dataset. The model predicts a score between 0 and 1 indicating how likely the two given questions are to be duplicates. Note: The model is not suitable for estimating the similarity of questions; e.g., the two questions "How to learn Java" and "How to learn Python" will result in a rather low score, as they are not duplicates.
9d86f658acf9fdb44cbdc770404703cc
apache-2.0
[]
false
Usage and Performance Pre-trained models can be used like this: ``` from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Question 1', 'Question 2'), ('Question 3', 'Question 4')]) ``` You can also use this model without sentence_transformers, by using the Transformers ``AutoModel`` class directly.
be77f7e2a82bc91c414c6606aae07ab3
apache-2.0
['generated_from_trainer']
false
small-mlm-glue-mnli-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.6551
cbbf818a9a15b2d15817555d6975109d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 7.0308 | 0.4 | 500 | 6.6001 | | 6.346 | 0.8 | 1000 | 6.3998 | | 6.1061 | 1.2 | 1500 | 6.3170 | | 5.9586 | 1.6 | 2000 | 6.2799 | | 5.8773 | 2.0 | 2500 | 6.2034 | | 5.7403 | 2.4 | 3000 | 6.1609 | | 5.6602 | 2.8 | 3500 | 6.1113 | | 5.5809 | 3.2 | 4000 | 6.1267 | | 5.5663 | 3.6 | 4500 | 6.0647 | | 5.6266 | 4.0 | 5000 | 6.1090 | | 5.4756 | 4.4 | 5500 | 6.0302 | | 5.4905 | 4.8 | 6000 | 6.0292 | | 5.3179 | 5.2 | 6500 | 5.9758 | | 5.3375 | 5.6 | 7000 | 6.0125 | | 5.3035 | 6.0 | 7500 | 5.9495 | | 5.1918 | 6.4 | 8000 | 5.9537 | | 5.2499 | 6.8 | 8500 | 5.9100 | | 5.1905 | 7.2 | 9000 | 5.8620 | | 5.1787 | 7.6 | 9500 | 5.9296 | | 5.1534 | 8.0 | 10000 | 5.9442 | | 5.1396 | 8.4 | 10500 | 5.8609 | | 5.1272 | 8.8 | 11000 | 5.8358 | | 4.9615 | 9.2 | 11500 | 5.8617 | | 5.0062 | 9.6 | 12000 | 5.8043 | | 5.0131 | 10.0 | 12500 | 5.8119 | | 4.9326 | 10.4 | 13000 | 5.7851 | | 4.9655 | 10.8 | 13500 | 5.7792 | | 4.9256 | 11.2 | 14000 | 5.7843 | | 4.9195 | 11.6 | 14500 | 5.7652 | | 4.8299 | 12.0 | 15000 | 5.7606 | | 4.8748 | 12.4 | 15500 | 5.7577 | | 4.7588 | 12.8 | 16000 | 5.7048 | | 4.8185 | 13.2 | 16500 | 5.7245 | | 4.7679 | 13.6 | 17000 | 5.7402 | | 4.7377 | 14.0 | 17500 | 5.7034 | | 4.7403 | 14.4 | 18000 | 5.7054 | | 4.6628 | 14.8 | 18500 | 5.7203 | | 4.6801 | 15.2 | 19000 | 5.6798 | | 4.6014 | 15.6 | 19500 | 5.6931 | | 4.618 | 16.0 | 20000 | 5.6620 | | 4.6037 | 16.4 | 20500 | 5.6441 | | 4.6004 | 16.8 | 21000 | 5.6262 | | 4.5432 | 17.2 | 21500 | 5.6726 | | 4.576 | 17.6 | 22000 | 5.6322 | | 4.5568 | 18.0 | 22500 | 5.6551 |
3b497a39c1d1e10ce15034ed58bc1cfd
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1591 - Accuracy: 0.939 - F1: 0.9391
b041e7703678b6bd4545002222a25505
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2497 | 1.0 | 1000 | 0.2133 | 0.9255 | 0.9252 | | 0.1498 | 2.0 | 2000 | 0.1652 | 0.934 | 0.9339 | | 0.0965 | 3.0 | 3000 | 0.1591 | 0.939 | 0.9391 |
17a70039718ff39c806bbfd49722cbb1
apache-2.0
['generated_from_trainer']
false
opus-mt-ar-en-finetuned-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_infopankki dataset. It achieves the following results on the evaluation set: - Loss: 0.7269 - Bleu: 51.6508 - Gen Len: 15.0812
ca00082c15cbb79875b0104f54573d63
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP
f62226925ccabce1cec78704fada0035
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 1.4974 | 1.0 | 1587 | 1.3365 | 36.9061 | 15.3385 | | 1.3768 | 2.0 | 3174 | 1.2139 | 39.5476 | 15.2079 | | 1.2887 | 3.0 | 4761 | 1.1265 | 41.2771 | 15.2034 | | 1.2076 | 4.0 | 6348 | 1.0556 | 42.6907 | 15.2687 | | 1.1512 | 5.0 | 7935 | 0.9975 | 43.9498 | 15.2072 | | 1.0797 | 6.0 | 9522 | 0.9491 | 45.224 | 15.2034 | | 1.0499 | 7.0 | 11109 | 0.9101 | 46.1387 | 15.1651 | | 1.0095 | 8.0 | 12696 | 0.8778 | 47.0586 | 15.1788 | | 0.9833 | 9.0 | 14283 | 0.8501 | 47.8083 | 15.162 | | 0.9601 | 10.0 | 15870 | 0.8267 | 48.5236 | 15.1784 | | 0.9457 | 11.0 | 17457 | 0.8059 | 49.1717 | 15.095 | | 0.9233 | 12.0 | 19044 | 0.7883 | 49.7742 | 15.1126 | | 0.8964 | 13.0 | 20631 | 0.7736 | 50.2168 | 15.0917 | | 0.8849 | 14.0 | 22218 | 0.7606 | 50.5583 | 15.0913 | | 0.8751 | 15.0 | 23805 | 0.7504 | 50.8481 | 15.1108 | | 0.858 | 16.0 | 25392 | 0.7417 | 51.1841 | 15.0989 | | 0.8673 | 17.0 | 26979 | 0.7353 | 51.4271 | 15.0939 | | 0.8548 | 18.0 | 28566 | 0.7306 | 51.535 | 15.0911 | | 0.8483 | 19.0 | 30153 | 0.7279 | 51.6102 | 15.078 | | 0.8614 | 20.0 | 31740 | 0.7269 | 51.6508 | 15.0812 |
9bcc8af3269181c958d0456c09e2de9c
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
Demo: How to use in ESPnet2 ```bash cd espnet git checkout 49a284e69308d81c142b89795de255b4ce290c54 pip install -e . cd egs2/talromur/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_g_fastspeech2 ```
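For inference from Python, a hedged sketch with the `espnet2` API (requires `espnet` and `espnet_model_zoo`; the example text is illustrative):

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Depending on the packaged model, a separate vocoder_tag may be needed for a waveform.
text2speech = Text2Speech.from_pretrained("espnet/GunnarThor_talromur_g_fastspeech2")

out = text2speech("Halló heimur.")
sf.write("out.wav", out["wav"].numpy(), text2speech.fs)
```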
9ebe75fb6b720b0aae02d8c06e3aacd7
cc-by-4.0
['espnet', 'audio', 'text-to-speech']
false
TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_fastspeech2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/g/tts_train_fastspeech2_raw_phn_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 100 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 8 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 800 batch_size: 20 valid_batch_size: null batch_bins: 2500000 valid_batch_bins: null train_shape_file: - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape valid_shape_file: - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_g_phn/text - text - text - - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/train_g_phn/durations - durations - text_int - - dump/raw/train_g_phn/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/raw/dev_g_phn/text - text - text - - exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/dev_g_phn/durations - durations - text_int - - dump/raw/dev_g_phn/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 1.0 scheduler: noamlr scheduler_conf: model_size: 384 warmup_steps: 4000 token_list: - <blank> - <unk> - ',' - . 
- r - t - n - a0 - s - I0 - D - l - Y0 - m - v - h - E1 - k - a:1 - E:1 - f - G - j - T - a1 - p - c - au:1 - i:1 - O:1 - I:1 - E0 - I1 - r_0 - t_h - k_h - Y1 - ei1 - i0 - ou:1 - ei:1 - u:1 - O1 - N - l_0 - '91' - ai0 - au1 - ou0 - n_0 - ei0 - O0 - ou1 - ai:1 - '9:1' - ai1 - i1 - '90' - au0 - c_h - x - 9i:1 - C - p_h - u0 - Y:1 - J - 9i1 - u1 - 9i0 - N_0 - m_0 - J_0 - Oi1 - Yi0 - Yi1 - Oi0 - au:0 - '9:0' - E:0 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz tts: fastspeech2 tts_conf: adim: 384 aheads: 2 elayers: 4 eunits: 1536 dlayers: 4 dunits: 1536 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 postnet_layers: 5 postnet_filts: 5 postnet_chans: 256 use_masking: true use_scaled_pos_enc: true encoder_normalize_before: true decoder_normalize_before: true reduction_factor: 1 init_type: xavier_uniform init_enc_alpha: 1.0 init_dec_alpha: 1.0 transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false pitch_extract: dio pitch_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 reduction_factor: 1 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz energy_extract: energy energy_extract_conf: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null reduction_factor: 1 energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/g/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz required: - output_dir - token_list version: 0.10.7a1 distributed: false ``` </details>
9dd334c4910fb60b67366d9e8064507b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Sentence Transformers ```python from sentence_transformers import SentenceTransformer question = "<Q>How many models can I host on HuggingFace?" answer_1 = "<A>All plans come with unlimited private models and datasets." answer_2 = "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem." answer_3 = "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job." model = SentenceTransformer('clips/mfaq') embeddings = model.encode([question, answer_1, answer_2, answer_3]) print(embeddings) ```
20d9426ac5fee14fc2700e65622f8abf
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) question = "<Q>How many models can I host on HuggingFace?" answer_1 = "<A>All plans come with unlimited private models and datasets." answer_2 = "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem." answer_3 = "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job." tokenizer = AutoTokenizer.from_pretrained('clips/mfaq') model = AutoModel.from_pretrained('clips/mfaq')
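The fragment above comes from a longer mean-pooling example; a hedged completion of the same flow, reusing the question/answer strings and the tokenizer/model defined above and showing the forward pass:

```python
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average token embeddings, ignoring padding positions.
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("clips/mfaq")
model = AutoModel.from_pretrained("clips/mfaq")

texts = [question, answer_1, answer_2, answer_3]
encoded = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded)

embeddings = mean_pooling(model_output, encoded["attention_mask"])
print(embeddings.shape)  # one vector per input string
```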
4472d3bd61c842796c785a9c5d0a6592
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Citation information ``` @misc{debruyn2021mfaq, title={MFAQ: a Multilingual FAQ Dataset}, author={Maxime De Bruyn and Ehsan Lotfi and Jeska Buhmann and Walter Daelemans}, year={2021}, eprint={2109.12870}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
6e86a70b1664909af636ae33ee542229
apache-2.0
['generated_from_trainer']
false
albert-large-v2-finetuned-wnli This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6919 - Accuracy: 0.5352
36919be99a8dad7c6dc62ae38d19fd90
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 17 | 0.7292 | 0.4366 | | No log | 2.0 | 34 | 0.6919 | 0.5352 | | No log | 3.0 | 51 | 0.7084 | 0.4648 | | No log | 4.0 | 68 | 0.7152 | 0.5352 | | No log | 5.0 | 85 | 0.7343 | 0.5211 |
be330d669d7249b4887fae60715b7359