license: stringlengths (2–30)
tags: stringlengths (2–513)
is_nc: bool (1 class)
readme_section: stringlengths (201–597k)
hash: stringlengths (32–32)
apache-2.0
['translation']
false
tur-ukr * source group: Turkish * target group: Ukrainian * OPUS readme: [tur-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md) * model: transformer-align * source language(s): tur * target language(s): ukr * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.eval.txt)
13f08a56320ebed29499b5e922ec075a
apache-2.0
['translation']
false
System Info: - hf_name: tur-ukr - source_languages: tur - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'uk'] - src_constituents: {'tur'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.test.txt - src_alpha3: tur - tgt_alpha3: ukr - short_pair: tr-uk - chrF2_score: 0.624 - bleu: 42.5 - brevity_penalty: 0.983 - ref_len: 12988.0 - src_name: Turkish - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: tr - tgt_alpha2: uk - prefer_old: False - long_pair: tur-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
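The system info above reports bleu 42.5 with a brevity_penalty of 0.983 against a reference length of 12988 tokens. As an illustrative sketch (not part of the original card; the candidate length 12768 below is an assumed value chosen to roughly reproduce the reported penalty), the standard BLEU brevity penalty is:

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU brevity penalty: 1 if the candidate is at least as long
    as the reference, otherwise exp(1 - r/c)."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

# An assumed candidate length slightly shorter than the reference
# yields a penalty close to the card's 0.983.
print(brevity_penalty(12768, 12988))
```

A penalty below 1 shrinks BLEU multiplicatively, discouraging systems from producing overly short translations.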
52b7254db62b2ad1f7a3605207b1c7fb
cc-by-4.0
['questions and answers generation']
false
Model Card of `lmqg/bart-large-tweetqa-qag` This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question & answer pair generation task on [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
18b1fa9a7adcc5e36fa1e54803dfead8
cc-by-4.0
['questions and answers generation']
false
Overview - **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large) - **Language:** en - **Training data:** [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
35580d170d548cf9d5254b613913d9b9
cc-by-4.0
['questions and answers generation']
false
model prediction - With [`lmqg`](https://github.com/asahi417/lm-question-generation) ```python from lmqg import TransformersQG model = TransformersQG(language="en", model="lmqg/bart-large-tweetqa-qag") question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-large-tweetqa-qag") output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
e797b3e863d45e41fb04127031a768db
cc-by-4.0
['questions and answers generation']
false
Evaluation - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-tweetqa-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------------| | BERTScore | 91.27 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | Bleu_1 | 44.55 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | Bleu_2 | 31.15 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | Bleu_3 | 21.58 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | Bleu_4 | 15.18 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | METEOR | 27.91 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | MoverScore | 62.25 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedF1Score (BERTScore) | 92.47 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedF1Score (MoverScore) | 64.66 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedPrecision (BERTScore) | 92.74 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedPrecision (MoverScore) | 65.39 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedRecall (BERTScore) | 92.21 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | QAAlignedRecall (MoverScore) | 64.03 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | | ROUGE_L | 34.99 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
9d870d1e8b9dd8bc4563a4fda6c3ba0e
cc-by-4.0
['questions and answers generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qag_tweetqa - dataset_name: default - input_types: ['paragraph'] - output_types: ['questions_answers'] - prefix_types: None - model: facebook/bart-large - max_length: 256 - max_length_output: 128 - epoch: 14 - batch: 32 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-tweetqa-qag/raw/main/trainer_config.json).
c674505a44dda10dc09cfab2b5b8ec60
apache-2.0
['generated_from_trainer']
false
Sentiment140_ALBERT_5E This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the sentiment140 dataset. It achieves the following results on the evaluation set: - Loss: 0.6103 - Accuracy: 0.8533
19294a82439a37bacd66a3ed9de6ef82
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6713 | 0.08 | 50 | 0.5704 | 0.7333 | | 0.5742 | 0.16 | 100 | 0.4620 | 0.8 | | 0.5104 | 0.24 | 150 | 0.5536 | 0.74 | | 0.5313 | 0.32 | 200 | 0.5198 | 0.76 | | 0.5023 | 0.4 | 250 | 0.4286 | 0.8 | | 0.4871 | 0.48 | 300 | 0.4294 | 0.8267 | | 0.4513 | 0.56 | 350 | 0.4349 | 0.8133 | | 0.4647 | 0.64 | 400 | 0.4046 | 0.8333 | | 0.4827 | 0.72 | 450 | 0.4218 | 0.8333 | | 0.4517 | 0.8 | 500 | 0.4093 | 0.82 | | 0.4417 | 0.88 | 550 | 0.3999 | 0.84 | | 0.4701 | 0.96 | 600 | 0.3779 | 0.8867 | | 0.397 | 1.04 | 650 | 0.3730 | 0.8667 | | 0.3377 | 1.12 | 700 | 0.3833 | 0.8333 | | 0.411 | 1.2 | 750 | 0.3704 | 0.84 | | 0.3796 | 1.28 | 800 | 0.3472 | 0.86 | | 0.3523 | 1.36 | 850 | 0.3512 | 0.8733 | | 0.3992 | 1.44 | 900 | 0.3712 | 0.84 | | 0.3641 | 1.52 | 950 | 0.3718 | 0.82 | | 0.3973 | 1.6 | 1000 | 0.3508 | 0.84 | | 0.3576 | 1.68 | 1050 | 0.3600 | 0.86 | | 0.3701 | 1.76 | 1100 | 0.3287 | 0.8667 | | 0.3721 | 1.84 | 1150 | 0.3794 | 0.82 | | 0.3673 | 1.92 | 1200 | 0.3378 | 0.8733 | | 0.4223 | 2.0 | 1250 | 0.3508 | 0.86 | | 0.2745 | 2.08 | 1300 | 0.3835 | 0.86 | | 0.283 | 2.16 | 1350 | 0.3500 | 0.8533 | | 0.2769 | 2.24 | 1400 | 0.3334 | 0.8733 | | 0.2491 | 2.32 | 1450 | 0.3519 | 0.8867 | | 0.3237 | 2.4 | 1500 | 0.3438 | 0.86 | | 0.2662 | 2.48 | 1550 | 0.3513 | 0.8667 | | 0.2423 | 2.56 | 1600 | 0.3413 | 0.8867 | | 0.2655 | 2.64 | 1650 | 0.3126 | 0.8933 | | 0.2516 | 2.72 | 1700 | 0.3333 | 0.8733 | | 0.252 | 2.8 | 1750 | 0.3316 | 0.88 | | 0.2872 | 2.88 | 1800 | 0.3227 | 0.9 | | 0.306 | 2.96 | 1850 | 0.3383 | 0.8733 | | 0.248 | 3.04 | 1900 | 0.3474 | 0.8733 | | 0.1507 | 3.12 | 1950 | 0.4140 | 0.8667 | | 0.1994 | 3.2 | 2000 | 0.3729 | 0.8533 | | 0.167 | 3.28 | 2050 | 0.3782 | 0.8867 | | 0.1872 | 3.36 | 2100 | 0.4352 | 0.8867 | | 0.1611 | 3.44 | 2150 | 0.4511 | 0.8667 | | 0.2338 | 3.52 | 2200 | 0.4244 | 0.8533 | | 0.1538 | 3.6 | 2250 | 0.4226 | 0.8733 
| | 0.1561 | 3.68 | 2300 | 0.4126 | 0.88 | | 0.2156 | 3.76 | 2350 | 0.4382 | 0.86 | | 0.1684 | 3.84 | 2400 | 0.4969 | 0.86 | | 0.1917 | 3.92 | 2450 | 0.4439 | 0.8667 | | 0.1584 | 4.0 | 2500 | 0.4759 | 0.86 | | 0.1038 | 4.08 | 2550 | 0.5042 | 0.8667 | | 0.0983 | 4.16 | 2600 | 0.5527 | 0.8533 | | 0.1404 | 4.24 | 2650 | 0.5801 | 0.84 | | 0.0844 | 4.32 | 2700 | 0.5884 | 0.86 | | 0.1347 | 4.4 | 2750 | 0.5865 | 0.8467 | | 0.1373 | 4.48 | 2800 | 0.5915 | 0.8533 | | 0.1506 | 4.56 | 2850 | 0.5976 | 0.8467 | | 0.1007 | 4.64 | 2900 | 0.6678 | 0.82 | | 0.1311 | 4.72 | 2950 | 0.6082 | 0.8533 | | 0.1402 | 4.8 | 3000 | 0.6180 | 0.8467 | | 0.1363 | 4.88 | 3050 | 0.6107 | 0.8533 | | 0.0995 | 4.96 | 3100 | 0.6103 | 0.8533 |
f44d0b89ccee2ea3a91b22bf72e67e6a
cc-by-4.0
[]
false
Usage ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("sarnikowski/electra-small-discriminator-da-256-cased") model = AutoModel.from_pretrained("sarnikowski/electra-small-discriminator-da-256-cased") ```
cb5090b6a90ac936d7c1d2c41305de38
cc-by-4.0
[]
false
Questions? If you have any questions feel free to open an issue on the [danish_transformers](https://github.com/sarnikowski/danish_transformers) repository, or send an email to p.sarnikowski@gmail.com
494a2a94ba6d07d842c31ad02a963917
apache-2.0
['translation']
false
opus-mt-gil-fr * source languages: gil * target languages: fr * OPUS readme: [gil-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.eval.txt)
e1f662e5d01d678a6b5721513d95671d
apache-2.0
['generated_from_trainer']
false
t5-small-entailement-Writer-T5-small This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5628
ccdcaefa193a6bd061647fe6c4ddd51e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 83 | 1.2943 | | No log | 2.0 | 166 | 0.9323 | | No log | 3.0 | 249 | 0.8443 | | No log | 4.0 | 332 | 0.7884 | | No log | 5.0 | 415 | 0.7582 | | No log | 6.0 | 498 | 0.7355 | | 1.2761 | 7.0 | 581 | 0.7178 | | 1.2761 | 8.0 | 664 | 0.7105 | | 1.2761 | 9.0 | 747 | 0.6972 | | 1.2761 | 10.0 | 830 | 0.6847 | | 1.2761 | 11.0 | 913 | 0.6774 | | 1.2761 | 12.0 | 996 | 0.6708 | | 0.7765 | 13.0 | 1079 | 0.6609 | | 0.7765 | 14.0 | 1162 | 0.6566 | | 0.7765 | 15.0 | 1245 | 0.6507 | | 0.7765 | 16.0 | 1328 | 0.6454 | | 0.7765 | 17.0 | 1411 | 0.6438 | | 0.7765 | 18.0 | 1494 | 0.6384 | | 0.693 | 19.0 | 1577 | 0.6347 | | 0.693 | 20.0 | 1660 | 0.6321 | | 0.693 | 21.0 | 1743 | 0.6254 | | 0.693 | 22.0 | 1826 | 0.6237 | | 0.693 | 23.0 | 1909 | 0.6215 | | 0.693 | 24.0 | 1992 | 0.6167 | | 0.6504 | 25.0 | 2075 | 0.6167 | | 0.6504 | 26.0 | 2158 | 0.6131 | | 0.6504 | 27.0 | 2241 | 0.6120 | | 0.6504 | 28.0 | 2324 | 0.6091 | | 0.6504 | 29.0 | 2407 | 0.6076 | | 0.6504 | 30.0 | 2490 | 0.6058 | | 0.615 | 31.0 | 2573 | 0.6031 | | 0.615 | 32.0 | 2656 | 0.6015 | | 0.615 | 33.0 | 2739 | 0.6015 | | 0.615 | 34.0 | 2822 | 0.6000 | | 0.615 | 35.0 | 2905 | 0.5998 | | 0.615 | 36.0 | 2988 | 0.5969 | | 0.586 | 37.0 | 3071 | 0.5959 | | 0.586 | 38.0 | 3154 | 0.5941 | | 0.586 | 39.0 | 3237 | 0.5923 | | 0.586 | 40.0 | 3320 | 0.5936 | | 0.586 | 41.0 | 3403 | 0.5929 | | 0.586 | 42.0 | 3486 | 0.5922 | | 0.5618 | 43.0 | 3569 | 0.5910 | | 0.5618 | 44.0 | 3652 | 0.5885 | | 0.5618 | 45.0 | 3735 | 0.5879 | | 0.5618 | 46.0 | 3818 | 0.5873 | | 0.5618 | 47.0 | 3901 | 0.5877 | | 0.5618 | 48.0 | 3984 | 0.5878 | | 0.5418 | 49.0 | 4067 | 0.5881 | | 0.5418 | 50.0 | 4150 | 0.5858 | | 0.5418 | 51.0 | 4233 | 0.5847 | | 0.5418 | 52.0 | 4316 | 0.5839 | | 0.5418 | 53.0 | 4399 | 0.5843 | | 0.5418 | 54.0 | 4482 | 0.5826 | | 0.5283 | 55.0 | 4565 | 0.5843 | | 0.5283 | 56.0 | 4648 | 0.5833 | | 0.5283 | 57.0 
| 4731 | 0.5825 | | 0.5283 | 58.0 | 4814 | 0.5827 | | 0.5283 | 59.0 | 4897 | 0.5830 | | 0.5283 | 60.0 | 4980 | 0.5806 | | 0.5135 | 61.0 | 5063 | 0.5808 | | 0.5135 | 62.0 | 5146 | 0.5806 | | 0.5135 | 63.0 | 5229 | 0.5807 | | 0.5135 | 64.0 | 5312 | 0.5823 | | 0.5135 | 65.0 | 5395 | 0.5801 | | 0.5135 | 66.0 | 5478 | 0.5799 | | 0.5053 | 67.0 | 5561 | 0.5808 | | 0.5053 | 68.0 | 5644 | 0.5796 | | 0.5053 | 69.0 | 5727 | 0.5793 | | 0.5053 | 70.0 | 5810 | 0.5785 | | 0.5053 | 71.0 | 5893 | 0.5790 | | 0.5053 | 72.0 | 5976 | 0.5775 | | 0.4985 | 73.0 | 6059 | 0.5770 | | 0.4985 | 74.0 | 6142 | 0.5777 | | 0.4985 | 75.0 | 6225 | 0.5780 | | 0.4985 | 76.0 | 6308 | 0.5779 | | 0.4985 | 77.0 | 6391 | 0.5782 | | 0.4985 | 78.0 | 6474 | 0.5773 | | 0.4889 | 79.0 | 6557 | 0.5787 | | 0.4889 | 80.0 | 6640 | 0.5787 | | 0.4889 | 81.0 | 6723 | 0.5773 | | 0.4889 | 82.0 | 6806 | 0.5777 | | 0.4889 | 83.0 | 6889 | 0.5759 | | 0.4889 | 84.0 | 6972 | 0.5765 | | 0.4806 | 85.0 | 7055 | 0.5758 | | 0.4806 | 86.0 | 7138 | 0.5760 | | 0.4806 | 87.0 | 7221 | 0.5758 | | 0.4806 | 88.0 | 7304 | 0.5760 | | 0.4806 | 89.0 | 7387 | 0.5759 | | 0.4806 | 90.0 | 7470 | 0.5758 | | 0.4817 | 91.0 | 7553 | 0.5753 | | 0.4817 | 92.0 | 7636 | 0.5757 | | 0.4817 | 93.0 | 7719 | 0.5754 | | 0.4817 | 94.0 | 7802 | 0.5750 | | 0.4817 | 95.0 | 7885 | 0.5753 | | 0.4817 | 96.0 | 7968 | 0.5752 | | 0.4767 | 97.0 | 8051 | 0.5754 | | 0.4767 | 98.0 | 8134 | 0.5756 | | 0.4767 | 99.0 | 8217 | 0.5755 | | 0.4767 | 100.0 | 8300 | 0.5755 |
3ab786bbe90c5d7d2766289984c2e6bb
creativeml-openrail-m
[]
false
Use 'wewulzkz' as the keyword. A bit overcooked, but it gets the job done: I wanted a model that could create a good base for werewolves that I could then paint over, and this serves that purpose well. Based on SD 1.5. ![Image](https://i.imgur.com/QErdQTQ.jpg "Examples created with this model") It is also pretty good at applying the werewolf concept to other things. For example, below the prompts are just "A spooky cat" and "a spooky mouse". ![Image](https://i.imgur.com/CGvMl9e.png "Werewolfifying other things") This last example is not cherry-picked (first results) for just the prompt "An alligator". ![Image](https://i.imgur.com/zJWU004.png "Weregator")
045f36734b34de76e2d6a5d9628dd98d
mit
['summarization']
false
Pre-trained BART model fine-tuned on the WikiLingua dataset The repository for the fine-tuned BART model (by sshleifer) using the **wiki_lingua** dataset (English) **Purpose:** Examine the performance of a fine-tuned model for research purposes **Observations:** - The pre-trained model was trained on the XSum dataset, which summarizes not-too-long documents into a one-line summary - Fine-tuning this model using WikiLingua is appropriate since the summaries for that dataset are also short - In the end, however, the model cannot capture much clearer key points; instead it mostly extracts the opening sentence - Some data pre-processing and the model's hyperparameters also need to be tuned more carefully.
edf63d520d044708b74a9136ded75f8d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 24 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0
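The total batch sizes listed above follow directly from per-device batch size × number of devices (no gradient accumulation is listed for this run). A small illustrative sketch of that arithmetic, using the values from this section:

```python
def total_batch_size(per_device: int, num_devices: int, grad_accum: int = 1) -> int:
    """Effective batch size under data-parallel training, optionally
    with gradient accumulation."""
    return per_device * num_devices * grad_accum

# Values from this card: train 12 per device x 2 GPUs, eval 8 x 2 GPUs.
assert total_batch_size(12, 2) == 24   # total_train_batch_size
assert total_batch_size(8, 2) == 16    # total_eval_batch_size
```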
bd7c94e2b0acc80ba41428b481456ae4
apache-2.0
['translation']
false
opus-mt-fi-ts * source languages: fi * target languages: ts * OPUS readme: [fi-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ts/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ts/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ts/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ts/opus-2020-01-24.eval.txt)
9d63c9dacb513b75e4ab58739edb2ef8
cc-by-4.0
['answer extraction']
false
Model Card of `lmqg/mt5-base-koquad-ae` This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for answer extraction on [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
759bc5bf28a58f2b9da5245cd2d27a02
cc-by-4.0
['answer extraction']
false
model prediction - With [`lmqg`](https://github.com/asahi417/lm-question-generation) ```python from lmqg import TransformersQG model = TransformersQG(language="ko", model="lmqg/mt5-base-koquad-ae") answers = model.generate_a("1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-base-koquad-ae") output = pipe("또한 스피어스는 많은 새로운 여성 아티스트들에게 영향을 끼쳤는데, 대표적으로 데미 로바토, 케이티 페리, 크리스티니아 드바지, 레이디 가가, 리틀 부츠, 셀레나 고메즈 & 더씬, 픽시 로트 이 있다. 2007년 비욘세 놀스는 Total Request Live와의 인터뷰에서 '나는 브리트니를 사랑하고 팬이에요. 특히 새 앨범 Blackout을 좋아해요'라고 말했다. 린제이 로한은 '언제나 브리트니 스피어스에게 영감을 받는다. 학창시절 그녀처럼 타블로이드에 오르기를 꿈꿔왔다'고 말하며 롤 모델로 꼽았다. 스피어스는 현대 음악가들에게 음악적 영감으로 언급되기도 했다. <hl> 마일리 사이러스는 자신의 히트곡 Party in the U.S.A. 가 브리트니에게 영감과 영향을 받은 곡이라고 밝혔다. <hl> 베리 매닐로우의 앨범 15 Minutes 역시 브리트니에게 영감을 얻었다고 언급되었다.") ```
21506eb71cf9cef907fa50e670ae5fbf
cc-by-4.0
['answer extraction']
false
Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-koquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_koquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 69.49 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | AnswerF1Score | 77.32 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | BERTScore | 91.76 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_1 | 59.38 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_2 | 48.34 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_3 | 34.11 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | Bleu_4 | 20.6 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | METEOR | 51.78 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | MoverScore | 90.78 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | | ROUGE_L | 72.57 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
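AnswerExactMatch and AnswerF1Score above compare predicted answer spans against gold answers; the F1 variant is typically computed over shared tokens, SQuAD-style. An illustrative sketch assuming that convention (this is not the card's actual evaluation code):

```python
from collections import Counter

def answer_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer span."""
    pred, ref = prediction.split(), gold.split()
    common = Counter(pred) & Counter(ref)   # multiset intersection of tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(answer_f1("the red house", "red house"))  # 0.8
```

Exact match is the stricter variant: 1 only when the two spans are identical after normalization.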
b074cddb3783e49adb8d26e6fbe50bc9
cc-by-4.0
['answer extraction']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_koquad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: google/mt5-base - max_length: 512 - max_length_output: 32 - epoch: 5 - batch: 8 - lr: 0.0005 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-koquad-ae/raw/main/trainer_config.json).
39b193c76eb117719e20ee0559a4b6a3
apache-2.0
['deep-narrow']
false
T5-Efficient-LARGE-KV128 (Deep-Narrow version) T5-Efficient-LARGE-KV128 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the modelโ€™s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block.
7f8d1ad91be9f189d2ec884d5e2cb8ce
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-large-kv128** - is of model type **Large** with the following variations: - **kv** is **128** It has **1039.71** million parameters and thus requires *ca.* **4158.86 MB** of memory in full precision (*fp32*) or **2079.43 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
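The memory figures above are simply the parameter count times the bytes per parameter (4 for fp32, 2 for fp16/bf16), with 1 MB taken as 10^6 bytes. A sketch of that calculation, matching the card's numbers up to rounding of the parameter count:

```python
def model_memory_mb(params_millions: float, bytes_per_param: int) -> float:
    """Approximate checkpoint memory in MB (1 MB = 10**6 bytes)."""
    return params_millions * bytes_per_param

fp32 = model_memory_mb(1039.71, 4)  # ~4158.8 MB, close to the card's 4158.86
fp16 = model_memory_mb(1039.71, 2)  # half of the fp32 figure
print(fp32, fp16)
```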
1c1dd5262e731ab918ac3bcbf5168cfe
mit
['conversational']
false
personachat-arabic (conversational AI) This is personachat-arabic, trained on a subset of the persona-chat validation dataset machine-translated from English to Arabic, and fine-tuned from [akhooli/gpt2-small-arabic](https://huggingface.co/akhooli/gpt2-small-arabic), which is a limited text generation model. Usage: see the last section of this [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing) Note: the model has a limited, machine-translated training set (do not use it in production).
355a4d63a87db4472f2bd1d89d8671de
apache-2.0
['generated_from_trainer']
false
insertion-prop05-ls01 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2120 - Precision: 0.9800 - Recall: 0.9776 - F1: 0.9788 - Accuracy: 0.9924
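The F1 reported above is the harmonic mean of the listed precision and recall. A one-line illustrative check (not part of the original card):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9800, 0.9776), 4))  # 0.9788, matching the card
```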
fa069a87439443bba792e18db6f0b00e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - label_smoothing_factor: 0.1
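With `label_smoothing_factor: 0.1`, the one-hot training target is mixed with a uniform distribution over the classes. A minimal sketch of the smoothed target, assuming the usual formulation (illustrative only, not code from this repository):

```python
def smooth_targets(num_classes: int, true_class: int, smoothing: float) -> list:
    """Replace a one-hot target with smoothing/num_classes on every class,
    plus (1 - smoothing) extra mass on the true class."""
    uniform = smoothing / num_classes
    target = [uniform] * num_classes
    target[true_class] += 1.0 - smoothing
    return target

t = smooth_targets(4, 2, 0.1)
print(t)  # roughly [0.025, 0.025, 0.925, 0.025]; sums to 1 up to float rounding
```

This softens the loss so the model is not pushed toward fully confident (probability-1) predictions.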
5c055301ae95d3be80c68db0033865e0
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2462 | 0.32 | 500 | 0.2160 | 0.9754 | 0.9697 | 0.9725 | 0.9902 | | 0.2194 | 0.64 | 1000 | 0.2128 | 0.9784 | 0.9763 | 0.9773 | 0.9919 | | 0.2171 | 0.96 | 1500 | 0.2120 | 0.9800 | 0.9776 | 0.9788 | 0.9924 |
2635c79814db5dc51e4bd44d66114a26
apache-2.0
['deep-narrow']
false
T5-Efficient-BASE-NL24 (Deep-Narrow version) T5-Efficient-BASE-NL24 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the modelโ€™s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. 
A sequence of word embeddings is therefore processed sequentially by each transformer block.
bba1ba0e232b0c42d458505aeb2a013f
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-base-nl24** - is of model type **Base** with the following variations: - **nl** is **24** It has **421.19** million parameters and thus requires *ca.* **1684.75 MB** of memory in full precision (*fp32*) or **842.37 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
ba505c7ce97259864199ec4c12932980
cc-by-4.0
['hi', 'en', 'codemix']
false
HingRoBERTa-Mixed HingRoBERTa-Mixed is a Hindi-English code-mixed BERT model trained on Roman + Devanagari text. It is an XLM-RoBERTa model fine-tuned on the mixed-script L3Cube-HingCorpus. <br> [dataset link](https://github.com/l3cube-pune/code-mixed-nlp) More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398) ``` @inproceedings{nayak-joshi-2022-l3cube, title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models", author = "Nayak, Ravindra and Joshi, Raviraj", booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.wildre-1.2", pages = "7--12", } ```
6acab6cf9cd88d9ca338258bc25957b7
apache-2.0
['translation']
false
opus-mt-fr-srn * source languages: fr * target languages: srn * OPUS readme: [fr-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-srn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.eval.txt)
bf8e195d3f10482559986697d226a33f
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Medium Vi v1 - Shiv Kumar Ganesh This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 1.0641 - Wer: 34.0974
e313cbe619ffde57d0f2148edff17f3e
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0005 | 31.0 | 500 | 0.7179 | 33.7464 | | 0.0002 | 62.0 | 1000 | 0.7837 | 32.4742 | | 0.0001 | 93.0 | 1500 | 0.8267 | 34.2729 | | 0.0001 | 124.0 | 2000 | 0.8677 | 35.1722 | | 0.0 | 156.0 | 2500 | 0.9045 | 35.3257 | | 0.0 | 187.0 | 3000 | 0.9316 | 33.9877 | | 0.0 | 218.0 | 3500 | 0.9585 | 34.0097 | | 0.0 | 249.0 | 4000 | 0.9846 | 33.3626 | | 0.0 | 281.0 | 4500 | 1.0082 | 33.4832 | | 0.0 | 312.0 | 5000 | 1.0247 | 33.7026 | | 0.0 | 343.0 | 5500 | 1.0391 | 32.8691 | | 0.0 | 374.0 | 6000 | 1.0516 | 32.9020 | | 0.0 | 406.0 | 6500 | 1.0606 | 33.6477 | | 0.0 | 437.0 | 7000 | 1.0641 | 34.0974 |
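The Wer column above is word error rate: word-level edit distance divided by the number of reference words. A compact sketch of the standard definition (illustrative only; not the evaluation code used for this card):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```

A Wer of 34.1 in the table thus means roughly one in three reference words requires an edit.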
d3d7098c4bcdf6ceaafcb74fe33e9109
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0008 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP
4c8f13f32fd0014f5659cbb3321a7c88
apache-2.0
['generated_from_trainer']
false
Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.1, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0}, 'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'], 'is_split_by_sentences': True}, 'generation': {'batch_size': 64, 'metrics_configs': [{}, {'n': 1}, {}], 'scenario_configs': [{'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 128, 'prefix': '<|aligned|>', 'use_prompt_for_scoring': False}, {'display_as_html': True, 'generate_kwargs': {'do_sample': True, 'eos_token_id': 0, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'functions', 'num_samples': 128, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 'resources/functions_csnet.jsonl', 'use_prompt_for_scoring': True}], 'scorer_config': {}}, 'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'}, 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'codeparrot/codeparrot-small'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'debug-pt-conditional', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0008, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000.0, 'output_dir': 'training_output', 'per_device_train_batch_size': 8, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 10, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}}
84db4350c2c7930d866da99713e90eba
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.6380 - F1: 0.5542
e80f4f48c6e33850c2ca4ed6b93de741
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 13 | 1.0388 | 0.1801 | | No log | 2.0 | 26 | 0.7545 | 0.5053 | | No log | 3.0 | 39 | 0.6380 | 0.5542 |
f870f161a9d27bd08ce8c7af7a52337e
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3892 - F1: 0.6859
4648e6bb0f6df532ecd9b76de4686033
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1135 | 1.0 | 50 | 0.5347 | 0.5463 | | 0.4935 | 2.0 | 100 | 0.4424 | 0.6338 | | 0.3732 | 3.0 | 150 | 0.3892 | 0.6859 |
2b131da1864eaf7fbca6392b7dbec384
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-German Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
e1d7bd7ae1fab90f4546da07da6ba628
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "de", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
4c0d7fbd20f4a7764d87dd39f5a32739
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the German test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "de", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\โ€œ\%\โ€\๏ฟฝ\ใ‚ซ\รฆ\็„ก\เฝ“\ใ‚ซ\่‡ฃ\ัน\โ€ฆ\ยซ\ยป\รฐ\ฤฑ\โ€ž\ๅนบ\ื\ื‘\ๆฏ”\ัˆ\ืข\)\แปฉ\ะฒ\ล“\ั‡\+\โ€”\ัˆ\โ€š\ื \ะผ\ล„\ไนก\$\=\ืฉ\ั„\ๆ”ฏ\(\ยฐ\ะธ\ะบ\ฬ‡]'
substitutions = {
    'e' : '[\ษ™\รฉ\ฤ›\ฤ™\รช\แบฟ\แบฟ\รซ\ฤ—\ะต]',
    'o' : '[\ล\รด\รด\รณ\รฒ\รธ\แป\ล\รต\ล‘\ะพ]',
    'a' : '[\รก\ฤ\ฤ\ฤƒ\รฃ\รฅ\รข\ร \ฤ…\ะฐ]',
    'c' : '[\ฤ\ฤ‡\รง\ั]',
    'l' : '[\ล‚]',
    'u' : '[\รบ\ลซ\แปฉ\ลฏ]',
    'und' : '[\&]',
    'r' : '[\ล™]',
    'y' : '[\รฝ]',
    's' : '[\ล›\ลก\ศ™\ลŸ]',
    'i' : '[\ฤซ\ว\รญ\รฏ\รฎ\รฏ]',
    'z' : '[\ลบ\ลพ\ลบ\ลผ]',
    'n' : '[\รฑ\ล„\ล†]',
    'g' : '[\ฤŸ]',
    'ss' : '[\รŸ]',
    't' : '[\ศ›\ลฅ]',
    'd' : '[\ฤ\ฤ‘]',
    "'": '[\สฟ\เผ‹\โ€™\`\ยด\สป\`\โ€˜]',
    'p': '\ั€'
}
resampler = torchaudio.transforms.Resample(48_000, 16_000)
db1417632630bef467230b0b3616981a
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

The model can also be evaluated in 10% chunks, which needs fewer resources (to be tested).

```
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer

lang_id = "de"
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-53-german")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\โ€œ\%\โ€\๏ฟฝ\ใ‚ซ\รฆ\็„ก\เฝ“\ใ‚ซ\่‡ฃ\ัน\โ€ฆ\ยซ\ยป\รฐ\ฤฑ\โ€ž\ๅนบ\ื\ื‘\ๆฏ”\ัˆ\ืข\)\แปฉ\ะฒ\ล“\ั‡\+\โ€”\ัˆ\โ€š\ื \ะผ\ล„\ไนก\$\=\ืฉ\ั„\ๆ”ฏ\(\ยฐ\ะธ\ะบ\ฬ‡]'
substitutions = {
    'e' : '[\ษ™\รฉ\ฤ›\ฤ™\รช\แบฟ\แบฟ\รซ\ฤ—\ะต]',
    'o' : '[\ล\รด\รด\รณ\รฒ\รธ\แป\ล\รต\ล‘\ะพ]',
    'a' : '[\รก\ฤ\ฤ\ฤƒ\รฃ\รฅ\รข\ร \ฤ…\ะฐ]',
    'c' : '[\ฤ\ฤ‡\รง\ั]',
    'l' : '[\ล‚]',
    'u' : '[\รบ\ลซ\แปฉ\ลฏ]',
    'und' : '[\&]',
    'r' : '[\ล™]',
    'y' : '[\รฝ]',
    's' : '[\ล›\ลก\ศ™\ลŸ]',
    'i' : '[\ฤซ\ว\รญ\รฏ\รฎ\รฏ]',
    'z' : '[\ลบ\ลพ\ลบ\ลผ]',
    'n' : '[\รฑ\ล„\ล†]',
    'g' : '[\ฤŸ]',
    'ss' : '[\รŸ]',
    't' : '[\ศ›\ลฅ]',
    'd' : '[\ฤ\ฤ‘]',
    "'": '[\สฟ\เผ‹\โ€™\`\ยด\สป\`\โ€˜]',
    'p': '\ั€'
}
resampler = torchaudio.transforms.Resample(48_000, 16_000)
4986e8c944e48228a5b25b4b872423fd
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    for x in substitutions:
        batch["sentence"] = re.sub(substitutions[x], x, batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
06502ad49c22fdd9cb90dea3afe3d493
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

H, S, D, I = 0, 0, 0, 0
for i in range(10):
    print("test["+str(10*i)+"%:"+str(10*(i+1))+"%]")
    test_dataset = load_dataset("common_voice", "de", split="test["+str(10*i)+"%:"+str(10*(i+1))+"%]")
    test_dataset = test_dataset.map(speech_file_to_array_fn)
    result = test_dataset.map(evaluate, batched=True, batch_size=8)
    predictions = result["pred_strings"]
    targets = result["sentence"]
    chunk_metrics = jiwer.compute_measures(targets, predictions)
    H = H + chunk_metrics["hits"]
    S = S + chunk_metrics["substitutions"]
    D = D + chunk_metrics["deletions"]
    I = I + chunk_metrics["insertions"]

WER = float(S + D + I) / float(H + S + D)
print("WER: {:.2f}".format(WER * 100))
```

**Test Result**: 15.80 %
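The text-cleaning step applied in `speech_file_to_array_fn` above can be illustrated in isolation. The sets below are a small, hand-picked subset of the full `chars_to_ignore_regex` and `substitutions` used by this model, chosen only for the example:

```python
import re

# reduced versions of the full sets above, for illustration only
chars_to_ignore_regex = r'[,?.!\-;:"]'
substitutions = {"ss": "[ß]", "e": "[éê]"}

def normalize(sentence):
    # strip punctuation, lowercase, then map accented/special characters
    sentence = re.sub(chars_to_ignore_regex, "", sentence).lower()
    for replacement, pattern in substitutions.items():
        sentence = re.sub(pattern, replacement, sentence)
    return sentence

print(normalize("Straße, bitte!"))  # strasse bitte
```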
17f552d3c0a21f21c1d0915cd58c5e18
apache-2.0
['generated_from_keras_callback']
false
bert-finetuned-ner-per-v7 This model is a fine-tuned version of [BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2](https://huggingface.co/BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2) on an unknown dataset. It achieves the following results on the evaluation set:
2331e0ee6c5e4ece97fffbc7fd8cf0a5
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 313, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16
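The `PolynomialDecay` schedule in the optimizer config above anneals the learning rate from `initial_learning_rate` to `end_learning_rate` over `decay_steps` (linearly, since `power` is 1.0 and `cycle` is False). A minimal sketch of the formula it computes; the function name is illustrative:

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=313, power=1.0):
    """Learning rate at a given step, matching the config above (cycle=False)."""
    step = min(step, decay_steps)          # hold at end_lr after decay_steps
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0))    # 2e-05
print(polynomial_decay(313))  # 0.0
```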
3a4ccaf696d0ba6e367529b41ce67d3e
apache-2.0
['mobile', 'vison', 'image-classification']
false
Model Details EfficientFormer-L3, developed by [Snap Research](https://github.com/snap-research), is one of three EfficientFormer models. The EfficientFormer models were released as part of an effort to prove that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance. This checkpoint of EfficientFormer-L3 was trained for 300 epochs. - Developed by: Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren - Language(s): English - License: This model is licensed under the apache-2.0 license - Resources for more information: - [Research Paper](https://arxiv.org/abs/2206.01191) - [GitHub Repo](https://github.com/snap-research/EfficientFormer/)
188d23c753f854712a17c90160635430
apache-2.0
['generated_from_trainer']
false
t5-large-finetune-keyword-to-text-generation This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1471 - Rouge1: 2.175 - Rouge2: 0.3661 - Rougel: 1.7927 - Rougelsum: 1.7951 - Gen Len: 15.3252
4d0a905c5ba04aff73bef44983a20b24
apache-2.0
['generated_from_trainer']
false
Model description This model is designed to generate text from a single keyword. This project is intended to be used for generating vocabulary questions for ed-tech applications. NOTE!: Be sure to use the 'summarize: ' prefix before the word that you would like to un-summarize.
2e260999e88eee3d755241d90b7abc2e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 3.3083 | 1.0 | 3000 | 3.1706 | 2.1498 | 0.331 | 1.7579 | 1.761 | 16.6826 | | 3.2121 | 2.0 | 6000 | 3.1403 | 2.1555 | 0.3409 | 1.7659 | 1.769 | 16.208 | | 3.1286 | 3.0 | 9000 | 3.1300 | 2.1577 | 0.3511 | 1.7703 | 1.7733 | 15.9009 | | 3.0567 | 4.0 | 12000 | 3.1282 | 2.183 | 0.3584 | 1.7895 | 1.7909 | 15.7135 | | 2.9953 | 5.0 | 15000 | 3.1293 | 2.1589 | 0.3525 | 1.776 | 1.7781 | 15.678 | | 2.9483 | 6.0 | 18000 | 3.1308 | 2.1645 | 0.3556 | 1.7824 | 1.784 | 15.425 | | 2.9009 | 7.0 | 21000 | 3.1358 | 2.1622 | 0.3622 | 1.7848 | 1.7877 | 15.3348 | | 2.8752 | 8.0 | 24000 | 3.1387 | 2.1716 | 0.36 | 1.7936 | 1.7963 | 15.5296 | | 2.835 | 9.0 | 27000 | 3.1454 | 2.1806 | 0.3658 | 1.7941 | 1.7966 | 15.4625 | | 2.8352 | 10.0 | 30000 | 3.1471 | 2.175 | 0.3661 | 1.7927 | 1.7951 | 15.3252 |
0f71377a9d3f53d63c875bcdb7d42129
apache-2.0
['generated_from_keras_callback']
false
DamianCummins/distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0556 - Validation Loss: 0.0608 - Train Precision: 0.9196 - Train Recall: 0.9304 - Train F1: 0.9250 - Train Accuracy: 0.9820 - Epoch: 0
f9a71230bcbe19ce15830144a0ef8fcb
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.0556 | 0.0608 | 0.9196 | 0.9304 | 0.9250 | 0.9820 | 0 |
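The F1 value in the table is the harmonic mean of the precision and recall reported alongside it — a quick sanity check (helper name illustrative, not part of the training code):

```python
def f1_score(precision, recall):
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9196, 0.9304), 4))  # 0.925
```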
4e1fde11fe3e0abcd0fab01095d13b42
apache-2.0
['generated_from_trainer']
false
M6_cross This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0084 - Pearson: 0.9811 - Spearmanr: 0.9075
1fd26518dfa35334a87d747f4fd79791
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 20 - eval_batch_size: 20 - seed: 25 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 6.0 - num_epochs: 5 - mixed_precision_training: Native AMP
1b3d344de76796529b2b6a97f2594a13
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 0.0059 | 1.0 | 105 | 0.0158 | 0.9633 | 0.9054 | | 0.001 | 2.0 | 210 | 0.0102 | 0.9770 | 0.9103 | | 0.0008 | 3.0 | 315 | 0.0083 | 0.9805 | 0.9052 | | 0.0011 | 4.0 | 420 | 0.0075 | 0.9812 | 0.9082 | | 0.0017 | 5.0 | 525 | 0.0084 | 0.9811 | 0.9075 |
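The Pearson score reported above measures linear correlation between predicted and reference scores. A minimal, dependency-free reference implementation — a sketch for intuition, not the evaluation code actually used:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 (perfect linear correlation)
```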
83bdf850dad73cb49811332ddd716a7f
cc-by-4.0
['espnet', 'audio', 'automatic-speech-recognition']
false
`Shinji_Watanabe/laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.acc.ave` ♻️ Imported from https://zenodo.org/record/4304245/ This model was trained by Shinji Watanabe using the laborotv/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
72ec830019d037a92f4329874acce30b
apache-2.0
['thai', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a RoBERTa model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base-thai-syllable-upos](https://huggingface.co/KoichiYasuoka/roberta-base-thai-syllable-upos).
9083688c8a785b31c2442188b495c07a
apache-2.0
['thai', 'token-classification', 'pos', 'dependency-parsing']
false
u="# text = "+text+"\n"
v=[(s,e) for s,e in w["offset_mapping"] if s<e]
for i,(s,e) in enumerate(v,1):
  q=self.model.config.id2label[p[i,h[i]]].split("|")
  u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"

nlp=UDgoeswith("KoichiYasuoka/roberta-base-thai-syllable-ud-goeswith")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```

with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/). Or without ufal.chu-liu-edmonds:

```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-thai-syllable-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("หลายหัวดีกว่าหัวเดียว"))
```
4ed21be46d89a7e40b7254645cfab143
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2238 - Accuracy: 0.922 - F1: 0.9221
9c61ea197f9a9720c1cc1ce743ad7425
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.829 | 1.0 | 250 | 0.3173 | 0.9005 | 0.8980 | | 0.247 | 2.0 | 500 | 0.2238 | 0.922 | 0.9221 |
36012314e2698f4243a1a1f17ae417ad
apache-2.0
['generated_from_keras_callback']
false
amitjohn007/bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5685 - Epoch: 2
c6ab106f6cf2ce19c0dcc6f2db4d70b6
apache-2.0
['generated_from_keras_callback']
false
long-t5-local-base This model is a fine-tuned version of [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) on an unknown dataset. It achieves the following results on the evaluation set:
75b574f1278b48ec96c43e0aa75d1008
apache-2.0
['audio', 'TTS']
false
# load the model and tokenizer
from fastspeech2_hf.modeling_fastspeech2 import FastSpeech2ForPretraining, FastSpeech2Tokenizer

model = FastSpeech2ForPretraining.from_pretrained("ontocord/fastspeech2-en")
tokenizer = FastSpeech2Tokenizer.from_pretrained("ontocord/fastspeech2-en")
e5d4a10ff71e6a43f6a59fa6e8548196
apache-2.0
['audio', 'TTS']
false
# some helper routines
from IPython.display import Audio as IPAudio, display as IPdisplay
import torch
import torchaudio

def play_audio(waveform, sample_rate):
    waveform = waveform.numpy()
    if len(waveform.shape) == 1:
        IPdisplay(IPAudio(waveform, rate=sample_rate))
        return
    num_channels, num_frames = waveform.shape
    if num_channels <= 1:
        IPdisplay(IPAudio(waveform[0], rate=sample_rate))
    elif num_channels == 2:
        IPdisplay(IPAudio((waveform[0], waveform[1]), rate=sample_rate))
    else:
        raise ValueError("Waveforms with more than 2 channels are not supported.")
2b6d1ecd6939a08150d0146f21310313
apache-2.0
['audio', 'TTS']
false
# you can run in half mode on gpu
model = model.cuda().half()
sentences = [
    "Advanced text to speech models such as Fast Speech can synthesize speech significantly faster than previous auto regressive models with comparable quality. The training of Fast Speech model relies on an auto regressive teacher model for duration prediction and knowledge distillation, which can ease the one to many mapping problem in T T S. However, Fast Speech has several disadvantages, 1, the teacher student distillation pipeline is complicated, 2, the duration extracted from the teacher model is not accurate enough, and the target mel spectrograms distilled from teacher model suffer from information loss due to data simplification, both of which limit the voice quality. ",
    "Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition "
    "in being comparatively modern. ",
    "For although the Chinese took impressions from wood blocks engraved in relief for centuries before the woodcutters of the Netherlands, by a similar process "
    "produced the block books, which were the immediate predecessors of the true printed book, "
    "the invention of movable metal letters in the middle of the fifteenth century may justly be considered as the invention of the art of printing. ",
    "And it is worth mention in passing that, as an example of fine typography, "
    "the earliest book printed with movable types, the Gutenberg, or \"forty-two line Bible\" of about 1455, "
    "has never been surpassed. ",
    "Printing, then, for our purpose, may be considered as the art of making books by means of movable types. "
    "Now, as all books not primarily intended as picture-books consist principally of types composed to form letterpress,",
]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
model.eval()
with torch.no_grad():
    out = model(use_postnet=False, **batch)
wav = out[-2]
for line, phone, w in zip(sentences, tokenizer.batch_decode(batch['input_ids']), wav):
    print("txt:", line)
    print("phoneme:", phone)
    play_audio(w.type(torch.FloatTensor), model.config.sampling_rate)
```
d4a420df4a682555cc9dfac15146ea7a
apache-2.0
['audio', 'TTS']
false
Github Code Repo Current code for this model can be found [here](https://github.com/ontocord/fastspeech2_hf). This is a work in progress (WIP) port of the model and code from [this repo](https://github.com/ming024/FastSpeech2). The datasets on which this model was trained: - LJSpeech: a single-speaker English dataset consisting of 13,100 short audio clips of a female speaker reading passages from 7 non-fiction books, approximately 24 hours in total. - LibriTTS: a multi-speaker English dataset containing 585 hours of speech by 2,456 speakers.
1487ceec1a820cd45b7950c00cb45b16
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Model Dreambooth concept any-ely-wd-Noah_Titan-3500, trained by hr16 with the [Shinja Zero SoTA DreamBooth_Stable_Diffusion](https://colab.research.google.com/drive/1G7qx6M_S1PDDlsWIMdbZXwdZik6sUlEh) notebook <br> Test the concept with the [Shinja Zero no Notebook](https://colab.research.google.com/drive/1Hp1ZIjPbsZKlCtomJVmt2oX7733W44b0) <br> Or test it with `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample images of the concept: WIP
cea4730f7d849b588bd5e96a1c4bf3d3
apache-2.0
['sexism detector']
false
twitter_sexismo-finetuned-exist2021 This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) on the EXIST dataset. It achieves the following results on the evaluation set: - Loss: 0.47 - Accuracy: 0.80 - F1: 0.83 - F2: 0.89
060be7796453a7be86485f87ad92df78
apache-2.0
['sexism detector']
false
Training procedure The model has been trained to obtain the best F2 score. The F-measure is calculated as the harmonic mean of precision and recall, giving each the same weighting. It allows a model to be evaluated taking both precision and recall into account with a single score, which is helpful when describing the performance of the model and when comparing models. The F-beta measure generalizes the F-measure with a configuration parameter called beta. The default beta value is 1.0, which is the same as the F-measure. A smaller beta value, such as 0.5, gives more weight to precision and less to recall, whereas a larger beta value, such as 2.0, gives less weight to precision and more weight to recall. It is a useful metric when both precision and recall are important but slightly more attention is needed on one or the other, such as when false negatives are more important than false positives, or the reverse. The F2 measure puts more attention on minimizing false negatives, and we want to detect the sexist comments.
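The beta weighting described here follows the standard F-beta definition; a quick check against the epoch-1 precision and recall reported in the training results shows how F2 sits well above F1 when recall dominates. A sketch with an illustrative function name, not the evaluation code actually used:

```python
def fbeta(precision, recall, beta):
    """F-beta: weights recall beta times as much as precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.750689, 0.928450  # epoch-1 precision/recall from the training results
print(round(fbeta(p, r, 1), 4))  # 0.8302 (F1)
print(round(fbeta(p, r, 2), 4))  # 0.8865 (F2, closer to recall)
```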
610cc83b4e136efb78ca3d22000251b7
apache-2.0
['sexism detector']
false
Training hyperparameters The following hyperparameters were used during training: - my_learning_rate = 5E-5 - my_adam_epsilon = 1E-8 - my_number_of_epochs = 8 - my_warmup = 3 - my_mini_batch_size = 32 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8
d9ae1ae9d1d1469f064a523a4830d557
apache-2.0
['sexism detector']
false
Training results

| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | F2 |
|:-----:|:-------------:|:---------------:|:--------:|:--------:|:---------:|:--------:|:--------:|
| 1 | 0.478700 | 0.443148 | 0.804386 | 0.830160 | 0.750689 | 0.928450 | 0.886467 |
| 2 | 0.298000 | 0.460549 | 0.823684 | 0.841107 | 0.784661 | 0.906303 | 0.879048 |
| 3 | 0.063600 | 0.706177 | 0.817544 | 0.829508 | 0.799368 | 0.862010 | 0.848708 |
| 4 | 0.078700 | 1.060862 | 0.816667 | 0.836078 | 0.774709 | 0.908007 | 0.877800 |
| 5 | 0.005900 | 1.069239 | 0.808772 | 0.821604 | 0.790551 | 0.855196 | 0.841435 |
| 6 | 0.008300 | 1.184729 | 0.808772 | 0.821604 | 0.790551 | 0.855196 | 0.841435 |
| 7 | 0.001400 | 1.238865 | 0.816667 | 0.829388 | 0.796238 | 0.865417 | 0.850636 |
| 8 | 0.000100 | 1.267197 | 0.815789 | 0.827303 | 0.799682 | 0.856899 | 0.844810 |
| 9 | 0.000100 | 1.267815 | 0.808772 | 0.818937 | 0.799028 | 0.839864 | 0.831366 |
| 10 | 0.000300 | 1.275827 | 0.807895 | 0.818257 | 0.797735 | 0.839864 | 0.831086 |
e8689a542ec96253b782f970fc35eb2f
apache-2.0
['sexism detector']
false
Usage with pipelines

from transformers import pipeline

model_checkpoint = "robertou2/twitter_sexismo-finetuned-robertuito-exist2021"
pipeline_nlp = pipeline("text-classification", model=model_checkpoint)
pipeline_nlp("mujer al volante peligro!")
76f95e6ad0566175e1be7214804f30c2
apache-2.0
['sexism detector']
false
Challenges One of the main challenges in this process was finding a dataset in Spanish. We managed to obtain (upon request) the dataset used in [EXIST: sEXism Identification in Social neTworks](http://nlp.uned.es/exist2021/), which was a great starting point for the model. Unfortunately, this dataset has limitations due to licenses and policies that prevent it from being shared freely. It covers any kind of sexist expression or related phenomena, including descriptive or reported statements where the sexist message is a report or a description of sexist behavior. The 3,541 tweets labeled in Spanish were used. We were later able to obtain another Spanish dataset, [MeTwo: Machismo and Sexism Twitter Identification dataset](https://github.com/franciscorodriguez92/MeTwo). This dataset contains the id of each tweet with its respective label, which allowed us to retrieve the tweet text and enlarge the original dataset. Another challenge was getting the fine-tuning experiments started, since there are many variables to validate and test (from models such as BETO or RoBERTa to hyperparameters such as the learning rate), only a limited window of two weeks was available, and there was a learning curve. For this challenge, the first experiments were based on the parameters reported by de Paula et al. (2021), which provided both a starting point and a target to beat: the **_0.790 accuracy_** obtained by that prior work on identifying sexist tweets in Spanish. Several experiments were run in parallel to find the best model, and after a collaborative fine-tuning process an **accuracy of 83%** was achieved.
52a531da48d3b85b3fefc0442269592b
apache-2.0
['sexism detector']
false
Future Work We propose extending the developed dataset. It is possible to download larger quantities of Spanish tweets and apply active learning techniques to obtain a small subset of tweets to label via crowdsourcing, so that the labeled data can then be used to label the rest. Data augmentation techniques can also be used to duplicate and extend the dataset. Running more experiments with other models and improving the model further are additional challenges proposed as future work.
1bba0ffb93679072cd1eae080913804e
apache-2.0
['sexism detector']
false
Possible Applications First, it is extremely important to give greater visibility to the problem of _sexism on social media_, especially in Spanish. Transfer learning makes it possible to reuse and take advantage of previously trained models, and the hope is that new research groups, students, etc. will use the current model as a base to develop their own and build a better one. In this way, a tool could be built that identifies sexist tweets in real time and removes them before they spread.
a4157484f8ee5dda7f78325f07e7fab7
apache-2.0
['sexism detector']
false
References

1. de Paula, A. F. M., da Silva, R. F., & Schlicht, I. B. (2021). Sexism Prediction in Spanish and English Tweets Using Monolingual and Multilingual BERT and Ensemble Models. arXiv preprint arXiv:2111.04551.
2. Rodríguez-Sánchez, F., Carrillo-de-Albornoz, J., Plaza, L., Gonzalo, J., Rosso, P., Comet, M., & Donoso, T. (2021). Overview of EXIST 2021: Sexism identification in social networks. Procesamiento del Lenguaje Natural, 67, 195-207.
9b2ce060b8ba9bb9b3956e7f9178604b
creativeml-openrail-m
[]
false
isopixel-diffusion-v1 Stable Diffusion v2-768 model trained to generate isometric pixel art <div style="display: flex; flex-direction: row; flex-wrap: wrap"> <img src="https://s3.amazonaws.com/moonup/production/uploads/1669957996471-6303f37c3926de1f7ec42d3e.png" width="256"> <img src="https://s3.amazonaws.com/moonup/production/uploads/1669958023998-6303f37c3926de1f7ec42d3e.png" width="256"> <img src="https://s3.amazonaws.com/moonup/production/uploads/1669958037455-6303f37c3926de1f7ec42d3e.png" width="256"> <img src="https://s3.amazonaws.com/moonup/production/uploads/1669958067857-6303f37c3926de1f7ec42d3e.png" width="256"> <img src="https://s3.amazonaws.com/moonup/production/uploads/1669958100092-6303f37c3926de1f7ec42d3e.png" width="256"> </div>
5563657a24e8483791a5894c5e037b82
creativeml-openrail-m
[]
false
How to use - Download the model and use it in your desired UI (tested on AUTOMATIC1111's). Currently only the .ckpt version is supported. - Trigger the style in your prompt with the **isopixel** token; see the next section for more examples.
afa77374083d546210580eec57559042
creativeml-openrail-m
[]
false
Examples **isometric bedroom, isopixel style** Steps: 50, Sampler: Euler a, CFG scale: 7.5, Size: 768x768 <img src="https://s3.amazonaws.com/moonup/production/uploads/1669958684775-6303f37c3926de1f7ec42d3e.png" width="512"/> **isometric sushi store, isopixel style** Steps: 50, Sampler: Euler a, CFG scale: 7.5, Size: 768x768 <img src="https://s3.amazonaws.com/moonup/production/uploads/1669958822683-6303f37c3926de1f7ec42d3e.png" width="512"/> **isometric gas station, isopixel style** Steps: 50, Sampler: Euler a, CFG scale: 7.5, Size: 768x768 <img src="https://s3.amazonaws.com/moonup/production/uploads/1669958976478-6303f37c3926de1f7ec42d3e.png" width="512"/> **isometric magical forest, isopixel style** Steps: 50, Sampler: Euler a, CFG scale: 7.5, Size: 768x768 <img src="https://s3.amazonaws.com/moonup/production/uploads/1669959188129-6303f37c3926de1f7ec42d3e.png" width="512"/>
b2a2f1457527fe6845a908281ad34a3b
creativeml-openrail-m
[]
false
Tips - Always use 768x768 - A high step count on Euler_a gives the best results - A low CFG scale outputs great results - You can use a tool like Pixelator to achieve a better effect. This model **isn't pixel perfect** (yet 😉) Please consider supporting further research on my Patreon: <a href="https://www.patreon.com/user?u=29466374" target="_blank"> <img src="https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white" alt="Patreon"/> </a> If you have any questions, suggestions for new models, or need help in general with SD related stuff, don't hesitate to reach out on Twitter: <a href="https://twitter.com/nerijs" target="_blank"> <img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"/> </a>
5f446b2d5edca98c61dc6b2636f4b573
apache-2.0
['generated_from_trainer']
false
vit-base-patch16-224-in21k-finetuned-lora-food101 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.1448 - Accuracy: 0.96
2cbd31d000b72c4b3dec89997d3eff4b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP
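With gradient accumulation, the batch size the optimizer actually sees is the per-device batch multiplied by the accumulation steps, which is where the `total_train_batch_size` above comes from:

```python
import math

train_batch_size = 128
gradient_accumulation_steps = 4

# gradients from 4 micro-batches of 128 are summed before each optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 512

# the ~9 optimizer steps per epoch in the results table imply a training split of
# roughly 9 * 512 ≈ 4600 examples (an inference for illustration, not a stated number)
print(math.ceil(4608 / total_train_batch_size))  # 9
```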
02427c5c458bd7774e7d1c4fe8e1488d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 9 | 0.5069 | 0.896 | | 2.1627 | 2.0 | 18 | 0.1891 | 0.946 | | 0.3451 | 3.0 | 27 | 0.1448 | 0.96 | | 0.2116 | 4.0 | 36 | 0.1509 | 0.958 | | 0.1711 | 5.0 | 45 | 0.1498 | 0.958 |
4fc4937de2d3ea7ddeb18328c4de2f14
apache-2.0
['generated_from_trainer']
false
insertion-prop05-vocab This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0209 - Precision: 0.9815 - Recall: 0.9787 - F1: 0.9801 - Accuracy: 0.9929
6ce7bc6678465c52ef0808373e8f8c7a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0687 | 0.32 | 500 | 0.0275 | 0.9770 | 0.9694 | 0.9732 | 0.9904 | | 0.0327 | 0.64 | 1000 | 0.0221 | 0.9791 | 0.9783 | 0.9787 | 0.9924 | | 0.0289 | 0.96 | 1500 | 0.0209 | 0.9815 | 0.9787 | 0.9801 | 0.9929 |
d716aa5282409d325a311ad4936b62df
apache-2.0
['generated_from_trainer']
false
# distilbert_add_GLUE_Experiment_logit_kd_cola_192

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6839
- Matthews Correlation: 0.0
e3e7fdbf8af18227ae9f22117745657b
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.82 | 1.0 | 34 | 0.6841 | 0.0 |
| 0.7971 | 2.0 | 68 | 0.6840 | 0.0 |
| 0.7966 | 3.0 | 102 | 0.6841 | 0.0 |
| 0.7953 | 4.0 | 136 | 0.6840 | 0.0 |
| 0.7977 | 5.0 | 170 | 0.6839 | 0.0 |
| 0.7955 | 6.0 | 204 | 0.6839 | 0.0 |
| 0.7978 | 7.0 | 238 | 0.6841 | 0.0 |
| 0.7974 | 8.0 | 272 | 0.6840 | 0.0 |
| 0.7949 | 9.0 | 306 | 0.6847 | 0.0 |
| 0.7978 | 10.0 | 340 | 0.6840 | 0.0 |
| 0.7962 | 11.0 | 374 | 0.6841 | 0.0 |
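A Matthews correlation that sits at exactly 0.0 for every epoch is the signature of a degenerate classifier that predicts a single class for all inputs (MCC's denominator vanishes, and by convention the score is 0). A minimal illustration with a hand-rolled MCC:

```python
import math

# Hand-rolled Matthews correlation coefficient for binary labels,
# returning 0.0 when the denominator vanishes (the usual convention).
def mcc(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

y_true = [1, 0, 1, 1, 0, 1]   # mixed labels, as in a COLA-like binary task
y_pred = [1, 1, 1, 1, 1, 1]   # degenerate model: always the majority class
print(mcc(y_true, y_pred))    # 0.0 -- exactly the flat column in the table
```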
ba6ec6b39e2131446300b053f9cf4fde
apache-2.0
['whisper-event', 'generated_from_trainer']
false
# Whisper medium Serbian El Greco

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0,google/fleurs sr,sr_rs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4868
- Wer: 12.1408
335419246c0708623997c6cbd4e821b0
apache-2.0
['whisper-event', 'generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0222 | 2.72 | 1000 | 0.3442 | 14.0834 |
| 0.0032 | 5.43 | 2000 | 0.4106 | 14.5285 |
| 0.0011 | 8.15 | 3000 | 0.4331 | 12.8693 |
| 0.0029 | 10.87 | 4000 | 0.3948 | 12.6265 |
| 0.0012 | 13.59 | 5000 | 0.4512 | 12.6669 |
| 0.0009 | 16.3 | 6000 | 0.4890 | 12.7479 |
| 0.001 | 19.02 | 7000 | 0.4868 | 12.1408 |
| 0.0016 | 21.74 | 8000 | 0.4780 | 12.7074 |
| 0.0002 | 24.46 | 9000 | 0.4902 | 12.2218 |
| 0.0012 | 27.17 | 10000 | 0.5059 | 12.6669 |
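The WER column above is the word error rate: the word-level edit distance between reference and hypothesis transcripts, divided by the reference length (this card reports it as a percentage). A minimal sketch of the computation:

```python
# Word error rate: word-level Levenshtein distance between a reference and
# a hypothesis transcript, divided by the reference length. Returned as a
# fraction here; the table above reports the same quantity as a percentage.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word over six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```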
f8d80624927cb46e62a22ba94f7047fb
apache-2.0
['generated_from_trainer']
false
# distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_12_47

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1194
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
65cc386090d875c80f5355e4dcbad880
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0877 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 2.0 | 30 | 0.0806 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 3.0 | 45 | 0.0758 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 4.0 | 60 | 0.0741 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 5.0 | 75 | 0.0741 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
19f86528d7bc13081c652e3b1783eb69
apache-2.0
['generated_from_trainer']
false
# wav2vec2-common_voice-ur-demo-dist

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.5252
80b0019a9d9c0ba4722b5643f5e6ed8d
apache-2.0
['generated_from_trainer']
false
## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
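With distributed training and no gradient accumulation, the total batch sizes in the list above are simply the per-device sizes multiplied by the number of devices:

```python
# Multi-GPU run with 2 devices and no gradient accumulation: the total
# batch sizes are per-device size x device count. Checking the values above.
train_batch_size = 4
eval_batch_size = 8
num_devices = 2

total_train_batch_size = train_batch_size * num_devices
total_eval_batch_size = eval_batch_size * num_devices
print(total_train_batch_size, total_eval_batch_size)  # 8 16, as reported
```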
aa46ceb4c8be2acbff8236f5d648bc12
apache-2.0
['generated_from_trainer']
false
## Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.0956 | 0.11 | 100 | inf | 1.0 |
| 3.4569 | 0.22 | 200 | inf | 1.0 |
| 3.0492 | 0.32 | 300 | inf | 0.9973 |
| 3.0042 | 0.43 | 400 | inf | 0.9993 |
| 1.7725 | 0.54 | 500 | inf | 0.9112 |
| 1.875 | 0.65 | 600 | inf | 0.8314 |
| 1.2135 | 0.75 | 700 | inf | 0.8312 |
| 1.0577 | 0.86 | 800 | inf | 0.7337 |
| 1.4374 | 0.97 | 900 | inf | 0.7513 |
| 1.0388 | 1.08 | 1000 | inf | 0.7077 |
| 0.8839 | 1.18 | 1100 | inf | 0.6833 |
| 0.8233 | 1.29 | 1200 | inf | 0.6503 |
| 0.7636 | 1.4 | 1300 | inf | 0.6851 |
| 0.8722 | 1.51 | 1400 | inf | 0.6185 |
| 0.6055 | 1.61 | 1500 | inf | 0.6085 |
| 0.7535 | 1.72 | 1600 | inf | 0.6130 |
| 0.4796 | 1.83 | 1700 | inf | 0.5901 |
| 0.739 | 1.94 | 1800 | inf | 0.5643 |
| 0.4362 | 2.05 | 1900 | inf | 0.5895 |
| 0.66 | 2.15 | 2000 | inf | 0.5468 |
| 0.5365 | 2.26 | 2100 | inf | 0.5337 |
| 0.4798 | 2.37 | 2200 | inf | 0.5412 |
| 0.5259 | 2.48 | 2300 | inf | 0.5764 |
| 0.5697 | 2.58 | 2400 | inf | 0.5408 |
| 0.7113 | 2.69 | 2500 | inf | 0.6217 |
| 0.6562 | 2.8 | 2600 | inf | 0.5362 |
| 0.3337 | 2.91 | 2700 | inf | 0.5397 |
| 0.392 | 3.01 | 2800 | inf | 0.5299 |
| 0.4472 | 3.12 | 2900 | inf | 0.5332 |
| 0.3124 | 3.23 | 3000 | inf | 0.5202 |
| 0.6489 | 3.34 | 3100 | inf | 0.5360 |
| 0.2983 | 3.44 | 3200 | inf | 0.5146 |
| 0.287 | 3.55 | 3300 | inf | 0.5170 |
| 0.5538 | 3.66 | 3400 | inf | 0.5304 |
| 0.3668 | 3.77 | 3500 | inf | 0.4904 |
| 0.6103 | 3.88 | 3600 | inf | 0.5044 |
| 0.2878 | 3.98 | 3700 | inf | 0.5208 |
| 0.4004 | 4.09 | 3800 | inf | 0.5121 |
| 0.3397 | 4.2 | 3900 | inf | 0.5394 |
| 0.3226 | 4.31 | 4000 | inf | 0.4968 |
| 0.2259 | 4.41 | 4100 | inf | 0.4905 |
| 0.254 | 4.52 | 4200 | inf | 0.4879 |
| 0.3353 | 4.63 | 4300 | inf | 0.4836 |
| 0.3923 | 4.74 | 4400 | inf | 0.4746 |
| 0.3685 | 4.84 | 4500 | inf | 0.4933 |
| 0.2538 | 4.95 | 4600 | inf | 0.4721 |
| 0.2082 | 5.06 | 4700 | inf | 0.4727 |
| 0.3077 | 5.17 | 4800 | inf | 0.4689 |
| 0.2114 | 5.27 | 4900 | inf | 0.4725 |
| 0.2047 | 5.38 | 5000 | inf | 0.4756 |
| 0.1977 | 5.49 | 5100 | inf | 0.4716 |
| 0.2005 | 5.6 | 5200 | inf | 0.4675 |
| 0.1636 | 5.71 | 5300 | inf | 0.4673 |
| 0.3709 | 5.81 | 5400 | inf | 0.4767 |
| 0.2338 | 5.92 | 5500 | inf | 0.4543 |
| 0.172 | 6.03 | 5600 | inf | 0.4607 |
| 0.2413 | 6.14 | 5700 | inf | 0.4639 |
| 0.1997 | 6.24 | 5800 | inf | 0.4640 |
| 0.2536 | 6.35 | 5900 | inf | 0.4840 |
| 0.3206 | 6.46 | 6000 | inf | 0.4685 |
| 0.2491 | 6.57 | 6100 | inf | 0.4666 |
| 0.2215 | 6.67 | 6200 | inf | 0.4498 |
| 0.245 | 6.78 | 6300 | inf | 0.4534 |
| 0.2336 | 6.89 | 6400 | inf | 0.4520 |
| 0.2885 | 7.0 | 6500 | inf | 0.4550 |
| 0.5927 | 7.1 | 6600 | inf | 0.4602 |
| 0.124 | 7.21 | 6700 | inf | 0.4706 |
| 0.2169 | 7.32 | 6800 | inf | 0.4498 |
| 0.3245 | 7.43 | 6900 | inf | 0.4544 |
| 0.3848 | 7.53 | 7000 | inf | 0.4411 |
| 0.2226 | 7.64 | 7100 | inf | 0.4518 |
| 0.286 | 7.75 | 7200 | inf | 0.4503 |
| 0.2474 | 7.86 | 7300 | inf | 0.4433 |
| 0.1786 | 7.97 | 7400 | inf | 0.4507 |
| 0.1477 | 8.07 | 7500 | inf | 0.4494 |
| 0.1193 | 8.18 | 7600 | inf | 0.4501 |
| 0.1709 | 8.29 | 7700 | inf | 0.4656 |
| 0.1695 | 8.4 | 7800 | inf | 0.4525 |
| 0.2417 | 8.5 | 7900 | inf | 0.4437 |
| 0.2656 | 8.61 | 8000 | inf | 0.4434 |
| 0.1599 | 8.72 | 8100 | inf | 0.4418 |
| 0.1847 | 8.83 | 8200 | inf | 0.4451 |
| 0.2093 | 8.93 | 8300 | inf | 0.4441 |
| 0.0869 | 9.04 | 8400 | inf | 0.4410 |
| 0.2049 | 9.15 | 8500 | inf | 0.4402 |
| 0.1679 | 9.26 | 8600 | inf | 0.4320 |
| 0.0796 | 9.36 | 8700 | inf | 0.4427 |
| 0.1241 | 9.47 | 8800 | inf | 0.4372 |
| 0.1841 | 9.58 | 8900 | inf | 0.4408 |
| 0.0661 | 9.69 | 9000 | inf | 0.4362 |
| 0.1172 | 9.8 | 9100 | inf | 0.4370 |
| 0.0539 | 9.9 | 9200 | inf | 0.4369 |
| 0.1262 | 10.01 | 9300 | inf | 0.4313 |
| 0.1006 | 10.12 | 9400 | inf | 0.4379 |
| 0.0892 | 10.23 | 9500 | inf | 0.4434 |
| 0.1302 | 10.33 | 9600 | inf | 0.4431 |
| 0.2019 | 10.44 | 9700 | inf | 0.4403 |
| 0.0934 | 10.55 | 9800 | inf | 0.4392 |
| 0.1628 | 10.66 | 9900 | inf | 0.4407 |
| 0.1419 | 10.76 | 10000 | inf | 0.4379 |
| 0.1327 | 10.87 | 10100 | inf | 0.4458 |
| 0.1889 | 10.98 | 10200 | inf | 0.4682 |
| 0.1053 | 11.09 | 10300 | inf | 0.4532 |
| 0.0761 | 11.19 | 10400 | inf | 0.4572 |
| 0.1382 | 11.3 | 10500 | inf | 0.4374 |
| 0.1336 | 11.41 | 10600 | inf | 0.4326 |
| 0.1427 | 11.52 | 10700 | inf | 0.4340 |
| 0.1167 | 11.63 | 10800 | inf | 0.4336 |
| 0.1042 | 11.73 | 10900 | inf | 0.4379 |
| 0.1159 | 11.84 | 11000 | inf | 0.4766 |
| 0.1872 | 11.95 | 11100 | inf | 0.4931 |
| 0.2099 | 12.06 | 11200 | inf | 0.5170 |
| 0.2515 | 12.16 | 11300 | inf | 0.5017 |
| 0.1527 | 12.27 | 11400 | inf | 0.4959 |
| 0.2435 | 12.38 | 11500 | inf | 0.5174 |
| 0.2271 | 12.49 | 11600 | inf | 0.5045 |
| 0.3953 | 12.59 | 11700 | inf | 0.5567 |
| 0.2862 | 12.7 | 11800 | inf | 0.5608 |
| 0.3511 | 12.81 | 11900 | inf | 0.5612 |
| 0.2356 | 12.92 | 12000 | inf | 0.5421 |
| 0.1181 | 13.02 | 12100 | inf | 0.5096 |
| 0.3625 | 13.13 | 12200 | inf | 0.5252 |
| 0.3627 | 13.24 | 12300 | inf | 0.5340 |
| 0.2822 | 13.35 | 12400 | inf | 0.5579 |
| 0.3136 | 13.46 | 12500 | inf | 0.5314 |
| 0.3516 | 13.56 | 12600 | inf | 0.5411 |
| 0.4331 | 13.67 | 12700 | inf | 0.5514 |
| 0.5406 | 13.78 | 12800 | inf | 0.5441 |
| 0.5346 | 13.89 | 12900 | inf | 0.5311 |
| 0.3645 | 13.99 | 13000 | inf | 0.5354 |
| 0.3339 | 14.1 | 13100 | inf | 0.5292 |
| 0.3335 | 14.21 | 13200 | inf | 0.5577 |
| 0.3436 | 14.32 | 13300 | inf | 0.5475 |
| 0.1934 | 14.42 | 13400 | inf | 0.5255 |
| 0.3422 | 14.53 | 13500 | inf | 0.5302 |
| 0.4293 | 14.64 | 13600 | inf | 0.5368 |
| 0.363 | 14.75 | 13700 | inf | 0.5325 |
| 0.2851 | 14.85 | 13800 | inf | 0.5260 |
| 0.3106 | 14.96 | 13900 | inf | 0.5252 |
13801f3bbf624bf413d12efeb40095d6
apache-2.0
['generated_from_trainer']
false
# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3568
- Accuracy: 0.86
- F1: 0.8679
e4221e384b4b1740849e38fd5d1e95d1
apache-2.0
['translation']
false
# swe-epo

* source group: Swedish
* target group: Esperanto
* OPUS readme: [swe-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/swe-epo/README.md)
* model: transformer-align
* source language(s): swe
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.eval.txt)
4e9432bd4657991b31a54552148b478d
apache-2.0
['translation']
false
## System Info

- hf_name: swe-epo
- source_languages: swe
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/swe-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sv', 'eo']
- src_constituents: {'swe'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/swe-epo/opus-2020-06-16.test.txt
- src_alpha3: swe
- tgt_alpha3: epo
- short_pair: sv-eo
- chrF2_score: 0.498
- bleu: 29.7
- brevity_penalty: 0.958
- ref_len: 10987.0
- src_name: Swedish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: sv
- tgt_alpha2: eo
- prefer_old: False
- long_pair: swe-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
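The brevity_penalty of 0.958 above is BLEU's length penalty: 1.0 when the system output is at least as long as the reference, exp(1 - ref_len/hyp_len) when it is shorter. A minimal sketch (the hypothesis length of 10535 below is an illustrative value chosen to reproduce a penalty near 0.958 for ref_len 10987, not a figure taken from the eval file):

```python
import math

# BLEU brevity penalty: penalizes hypotheses shorter than the reference.
def brevity_penalty(ref_len: int, hyp_len: int) -> float:
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

print(brevity_penalty(10987, 10987))            # 1.0 -- no penalty
# Hypothetical hyp_len, roughly 4% shorter than the references:
print(round(brevity_penalty(10987, 10535), 3))  # ~0.958
```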
cca8d06f54f088a26214439aa24458b7
apache-2.0
['generated_from_trainer']
false
# distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2469
- Accuracy: 0.9458
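The "distilled" in the model name suggests knowledge distillation: the student is trained to match a teacher's temperature-softened output distribution. A minimal pure-Python sketch of the usual KL-divergence term (the temperature T=2.0 is an illustrative choice, not a value from this card):

```python
import math

# Temperature-scaled softmax over raw logits.
def softmax(logits, t=1.0):
    exps = [math.exp(x / t) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Distillation term: KL(teacher || student) over temperature-softened
# distributions, scaled by T^2 so gradients stay comparable across T.
def distillation_loss(student_logits, teacher_logits, t=2.0):
    p_teacher = softmax(teacher_logits, t)
    p_student = softmax(student_logits, t)
    return t * t * sum(p * math.log(p / q) for p, q in zip(p_teacher, p_student))

same = [2.0, 0.5, -1.0]
print(distillation_loss(same, same))                       # 0.0: student matches teacher
print(distillation_loss([0.1, 0.2, 0.3], [3.0, -1.0, 0.2]) > 0)  # True: KL is non-negative
```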
244ac8b9906a48816c36e39486b88f25