Columns:
- license — string (2–30 chars)
- tags — string (2–513 chars)
- is_nc — bool (1 class)
- readme_section — string (201–597k chars)
- hash — string (32 chars)
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
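For reference, a minimal sketch of `TrainingArguments` matching the values above (the output directory is a placeholder; the card does not include the actual training script):

```python
# Hedged sketch: TrainingArguments corresponding to the hyperparameters listed above.
# "out" is a placeholder output directory, not taken from the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1337,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```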
77ffd8903c06edf101416e3409b61537
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3717        | 1.0   | 6375  | 0.0522          | 0.9893   |
| 0.3453        | 2.0   | 12750 | 0.0370          | 0.9906   |
| 0.3736        | 3.0   | 19125 | 0.0308          | 0.9916   |
| 0.3224        | 4.0   | 25500 | 0.0269          | 0.9939   |
| 0.2846        | 5.0   | 31875 | 0.0236          | 0.9949   |
340d6efd3fb270b1af2fd5cb57e22a96
agpl-3.0
[]
false
FastText model trained on Icelandic

This model is trained on the lemmas of the Icelandic Gigaword Corpus, version 20.05. It is trained using the gensim package, version 4.1.0, with parameters set to their defaults (100 dimensions, window size 5).

This model cannot be loaded directly since it uses gensim; clone the repository and run the following to use it.

```python
import gensim

model = gensim.models.FastText.load("./rmh.w2v.model")
```
f18becb2e37b3aa4fc7c33473d11d45d
agpl-3.0
[]
false
Example output

```bash
In [1]: model.wv.most_similar("england")
Out[1]:
[('englands', 0.8778558969497681),
 ('southland', 0.8573296070098877),
 ('skotland', 0.846065878868103),
 ('englaland', 0.8320872187614441),
 ('hoogland', 0.8299505114555359),
 ('hoagland', 0.8277317881584167),
 ('totland', 0.8265103697776794),
 ('lackland', 0.8234561681747437),
 ('skarpengland', 0.8227219581604004),
 ('langland', 0.8222305774688721)]

In [2]: model.wv.most_similar("kanína")
Out[2]:
[('loðkanína', 0.9271067976951599),
 ('dvergkanína', 0.9106121063232422),
 ('angórakanína', 0.895512044429779),
 ('angórukanína', 0.8741581439971924),
 ('feldkanína', 0.8696010708808899),
 ('kanínubangsi', 0.8562541604042053),
 ('holdakanína', 0.8543838858604431),
 ('villikanína', 0.8525990843772888),
 ('silkikanína', 0.8515204191207886),
 ('kaníni', 0.8445548415184021)]
```
785094e32b815826280bcf549b6ad0ca
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 512
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3.0
- mixed_precision_training: Native AMP
fcbecc70f3b73696a4cb107fe0b9a2e8
apache-2.0
['generated_from_keras_callback']
false
abyaugustinek/distilbert-base-uncased-finetuned

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 1.3693
- Validation Loss: 1.2106
- Train Precision: 0.0
- Train Recall: 0.0
- Train F1: 0.0
- Train Accuracy: 0.6565
- Epoch: 2
57d907b9ec256ebd57673fe845b486a6
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
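For illustration, a minimal sketch of how an optimizer with this configuration is typically constructed for Keras training via the transformers `create_optimizer` utility (an assumption about how the card's values were produced, not code from the card itself):

```python
# Hedged sketch: AdamWeightDecay with a linear PolynomialDecay schedule,
# mirroring initial_learning_rate=2e-05, decay_steps=30, weight_decay_rate=0.01 above.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,          # initial_learning_rate
    num_train_steps=30,    # decay_steps
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```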
f38925f413b713607460e21a0de2632b
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.0691     | 1.5942          | 0.0             | 0.0          | 0.0      | 0.6565         | 0     |
| 1.4705     | 1.2376          | 0.0             | 0.0          | 0.0      | 0.6565         | 1     |
| 1.3693     | 1.2106          | 0.0             | 0.0          | 0.0      | 0.6565         | 2     |
b176c180ce92cf3ddb4b8b1613959a5d
apache-2.0
['generated_from_trainer']
false
canine-s-finetuned-sst2

This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.5259
- Accuracy: 0.8578
3e04e2af0b5017b6999fc4dc765c908d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3524        | 1.0   | 4210  | 0.4762          | 0.8257   |
| 0.2398        | 2.0   | 8420  | 0.4169          | 0.8567   |
| 0.1797        | 3.0   | 12630 | 0.5259          | 0.8578   |
| 0.152         | 4.0   | 16840 | 0.5996          | 0.8532   |
| 0.1026        | 5.0   | 21050 | 0.6676          | 0.8578   |
daa970f436b9cbb0f48678c5b2242c37
apache-2.0
['translation']
false
fin-eng

* source group: Finnish
* target group: English
* OPUS readme: [fin-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md)
* model: transformer-align
* source language(s): fin
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-05.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.zip)
* test set translations: [opus-2020-08-05.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.test.txt)
* test set scores: [opus-2020-08-05.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.eval.txt)
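A minimal usage sketch for this translation model, assuming it is published on the Hub as `Helsinki-NLP/opus-mt-fi-en` (the fi-en short pair listed in the system info below; the example sentence is a placeholder):

```python
# Hedged sketch: translate one Finnish sentence to English with the Marian model.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-en"  # assumed repo id for this fin-eng model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hyvää huomenta!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```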
3211e27092bef267f336a9b5462d6a22
apache-2.0
['translation']
false
Benchmarks

| testset | BLEU | chr-F |
|-----------------------------------|-------|-------|
| newsdev2015-enfi-fineng.fin.eng   | 25.3  | 0.536 |
| newstest2015-enfi-fineng.fin.eng  | 26.9  | 0.547 |
| newstest2016-enfi-fineng.fin.eng  | 29.0  | 0.571 |
| newstest2017-enfi-fineng.fin.eng  | 32.3  | 0.594 |
| newstest2018-enfi-fineng.fin.eng  | 23.8  | 0.517 |
| newstest2019-fien-fineng.fin.eng  | 29.0  | 0.565 |
| newstestB2016-enfi-fineng.fin.eng | 24.5  | 0.527 |
| newstestB2017-enfi-fineng.fin.eng | 27.4  | 0.557 |
| newstestB2017-fien-fineng.fin.eng | 27.4  | 0.557 |
| Tatoeba-test.fin.eng              | 53.4  | 0.697 |
87e6f792eab32798ba3b1819d604b851
apache-2.0
['translation']
false
System Info:

- hf_name: fin-eng
- source_languages: fin
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fi', 'en']
- src_constituents: {'fin'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus-2020-08-05.test.txt
- src_alpha3: fin
- tgt_alpha3: eng
- short_pair: fi-en
- chrF2_score: 0.6970000000000001
- bleu: 53.4
- brevity_penalty: 0.99
- ref_len: 74651.0
- src_name: Finnish
- tgt_name: English
- train_date: 2020-08-05
- src_alpha2: fi
- tgt_alpha2: en
- prefer_old: False
- long_pair: fin-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
e2732edc82d1aea60e27df7faa9c426a
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-wikitext2

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.6895
330dae0675777945786531d85ae88a7d
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s198 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
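A minimal transcription sketch using the HuggingSound tool mentioned above (the full Hub repo id under the `jonatasgrosman` namespace and the audio path are assumptions, not stated in the card):

```python
# Hedged sketch: transcribe a 16 kHz audio file with HuggingSound.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s198")  # assumed repo id
transcriptions = model.transcribe(["path/to/audio_16khz.wav"])  # placeholder path
print(transcriptions[0]["transcription"])
```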
f2e7464d33285680152fb1c5eb828635
apache-2.0
[]
false
Ernie-M

ERNIE-M, proposed by Baidu, is a new training method that encourages the model to align the representations of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on model performance. The insight is to integrate back-translation into the pre-training process by generating pseudo-parallel sentence pairs on a monolingual corpus, enabling the learning of semantic alignments between different languages and thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results on various cross-lingual downstream tasks.

Two novel methods are proposed to align the representations of multiple languages:

- Cross-Attention Masked Language Modeling (CAMLM): learns the multilingual semantic representation by restoring the MASK tokens in the input sentences.
- Back-Translation Masked Language Modeling (BTMLM): trains the model to generate pseudo-parallel sentences from monolingual sentences. The generated pairs are then used as model input to further align the cross-lingual semantics, thus enhancing the multilingual representation.

![ernie-m](ernie_m.png)
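A minimal loading sketch, assuming a converted ERNIE-M checkpoint is available on the Hub (transformers added ERNIE-M support in v4.27; the repo id below is a placeholder, not confirmed by this card):

```python
# Hedged sketch: load an ERNIE-M checkpoint and encode one sentence.
# "PaddlePaddle/ernie-m-base" is a placeholder repo id.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("PaddlePaddle/ernie-m-base")
model = AutoModel.from_pretrained("PaddlePaddle/ernie-m-base")

inputs = tokenizer("ERNIE-M aligns multilingual representations.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```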
e8267efb420aade9d7adb4bbe3d7652f
apache-2.0
[]
false
XNLI

XNLI is a subset of MNLI that has been translated into 14 different languages, including some low-resource ones. The goal of the task is to predict textual entailment (whether sentence A implies, contradicts, or is neutral toward sentence B).

| Model | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur | Avg |
| ---------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Cross-lingual Transfer | | | | | | | | | | | | | | | | |
| XLM | 85.0 | 78.7 | 78.9 | 77.8 | 76.6 | 77.4 | 75.3 | 72.5 | 73.1 | 76.1 | 73.2 | 76.5 | 69.6 | 68.4 | 67.3 | 75.1 |
| Unicoder | 85.1 | 79.0 | 79.4 | 77.8 | 77.2 | 77.2 | 76.3 | 72.8 | 73.5 | 76.4 | 73.6 | 76.2 | 69.4 | 69.7 | 66.7 | 75.4 |
| XLM-R | 85.8 | 79.7 | 80.7 | 78.7 | 77.5 | 79.6 | 78.1 | 74.2 | 73.8 | 76.5 | 74.6 | 76.7 | 72.4 | 66.5 | 68.3 | 76.2 |
| INFOXLM | **86.4** | **80.6** | 80.8 | 78.9 | 77.8 | 78.9 | 77.6 | 75.6 | 74.0 | 77.0 | 73.7 | 76.7 | 72.0 | 66.4 | 67.1 | 76.2 |
| **ERNIE-M** | 85.5 | 80.1 | **81.2** | **79.2** | **79.1** | **80.4** | **78.1** | **76.8** | **76.3** | **78.3** | **75.8** | **77.4** | **72.9** | **69.5** | **68.8** | **77.3** |
| XLM-R Large | 89.1 | 84.1 | 85.1 | 83.9 | 82.9 | 84.0 | 81.2 | 79.6 | 79.8 | 80.8 | 78.1 | 80.2 | 76.9 | 73.9 | 73.8 | 80.9 |
| INFOXLM Large | **89.7** | 84.5 | 85.5 | 84.1 | 83.4 | 84.2 | 81.3 | 80.9 | 80.4 | 80.8 | 78.9 | 80.9 | 77.9 | 74.8 | 73.7 | 81.4 |
| VECO Large | 88.2 | 79.2 | 83.1 | 82.9 | 81.2 | 84.2 | 82.8 | 76.2 | 80.3 | 74.3 | 77.0 | 78.4 | 71.3 | **80.4** | **79.1** | 79.9 |
| **ERNIE-M Large** | 89.3 | **85.1** | **85.7** | **84.4** | **83.7** | **84.5** | 82.0 | **81.2** | **81.2** | **81.9** | **79.2** | **81.0** | **78.6** | 76.2 | 75.4 | **82.0** |
| Translate-Train-All | | | | | | | | | | | | | | | | |
| XLM | 85.0 | 80.8 | 81.3 | 80.3 | 79.1 | 80.9 | 78.3 | 75.6 | 77.6 | 78.5 | 76.0 | 79.5 | 72.9 | 72.8 | 68.5 | 77.8 |
| Unicoder | 85.6 | 81.1 | 82.3 | 80.9 | 79.5 | 81.4 | 79.7 | 76.8 | 78.2 | 77.9 | 77.1 | 80.5 | 73.4 | 73.8 | 69.6 | 78.5 |
| XLM-R | 85.4 | 81.4 | 82.2 | 80.3 | 80.4 | 81.3 | 79.7 | 78.6 | 77.3 | 79.7 | 77.9 | 80.2 | 76.1 | 73.1 | 73.0 | 79.1 |
| INFOXLM | 86.1 | 82.0 | 82.8 | 81.8 | 80.9 | 82.0 | 80.2 | 79.0 | 78.8 | 80.5 | 78.3 | 80.5 | 77.4 | 73.0 | 71.6 | 79.7 |
| **ERNIE-M** | **86.2** | **82.5** | **83.8** | **82.6** | **82.4** | **83.4** | **80.2** | **80.6** | **80.5** | **81.1** | **79.2** | **80.5** | **77.7** | **75.0** | **73.3** | **80.6** |
| XLM-R Large | 89.1 | 85.1 | 86.6 | 85.7 | 85.3 | 85.9 | 83.5 | 83.2 | 83.1 | 83.7 | 81.5 | **83.7** | **81.6** | 78.0 | 78.1 | 83.6 |
| VECO Large | 88.9 | 82.4 | 86.0 | 84.7 | 85.3 | 86.2 | **85.8** | 80.1 | 83.0 | 77.2 | 80.9 | 82.8 | 75.3 | **83.1** | **83.0** | 83.0 |
| **ERNIE-M Large** | **89.5** | **86.5** | **86.9** | **86.1** | **86.0** | **86.8** | 84.1 | **83.8** | **84.1** | **84.5** | **82.1** | 83.5 | 81.1 | 79.4 | 77.9 | **84.2** |
596c850edbf583464785a9e3eb444d12
apache-2.0
[]
false
Cross-lingual Named Entity Recognition

* dataset: CoNLL

| Model | en | nl | es | de | Avg |
| ------------------------------ | --------- | --------- | --------- | --------- | --------- |
| *Fine-tune on English dataset* | | | | | |
| mBERT | 91.97 | 77.57 | 74.96 | 69.56 | 78.52 |
| XLM-R | 92.25 | **78.08** | 76.53 | **69.60** | 79.11 |
| **ERNIE-M** | **92.78** | 78.01 | **79.37** | 68.08 | **79.56** |
| XLM-R LARGE | 92.92 | 80.80 | 78.64 | 71.40 | 80.94 |
| **ERNIE-M LARGE** | **93.28** | **81.45** | **78.83** | **72.99** | **81.64** |
| *Fine-tune on all dataset* | | | | | |
| XLM-R | 91.08 | 89.09 | 87.28 | 83.17 | 87.66 |
| **ERNIE-M** | **93.04** | **91.73** | **88.33** | **84.20** | **89.32** |
| XLM-R LARGE | 92.00 | 91.60 | **89.52** | 84.60 | 89.43 |
| **ERNIE-M LARGE** | **94.01** | **93.81** | 89.23 | **86.20** | **90.81** |
302ac5d7c3d34cdc73c9d618a6dd72aa
apache-2.0
[]
false
Cross-lingual Question Answering

* dataset: MLQA

| Model | en | es | de | ar | hi | vi | zh | Avg |
| ----------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- |
| mBERT | 77.7 / 65.2 | 64.3 / 46.6 | 57.9 / 44.3 | 45.7 / 29.8 | 43.8 / 29.7 | 57.1 / 38.6 | 57.5 / 37.3 | 57.7 / 41.6 |
| XLM | 74.9 / 62.4 | 68.0 / 49.8 | 62.2 / 47.6 | 54.8 / 36.3 | 48.8 / 27.3 | 61.4 / 41.8 | 61.1 / 39.6 | 61.6 / 43.5 |
| XLM-R | 77.1 / 64.6 | 67.4 / 49.6 | 60.9 / 46.7 | 54.9 / 36.6 | 59.4 / 42.9 | 64.5 / 44.7 | 61.8 / 39.3 | 63.7 / 46.3 |
| INFOXLM | 81.3 / 68.2 | 69.9 / 51.9 | 64.2 / 49.6 | 60.1 / 40.9 | 65.0 / 47.5 | 70.0 / 48.6 | 64.7 / **41.2** | 67.9 / 49.7 |
| **ERNIE-M** | **81.6 / 68.5** | **70.9 / 52.6** | **65.8 / 50.7** | **61.8 / 41.9** | **65.4 / 47.5** | **70.0 / 49.2** | **65.6** / 41.0 | **68.7 / 50.2** |
| XLM-R LARGE | 80.6 / 67.8 | 74.1 / 56.0 | 68.5 / 53.6 | 63.1 / 43.5 | 62.9 / 51.6 | 71.3 / 50.9 | 68.0 / 45.4 | 70.7 / 52.7 |
| INFOXLM LARGE | **84.5 / 71.6** | **75.1 / 57.3** | **71.2 / 56.2** | **67.6 / 47.6** | 72.5 / 54.2 | **75.2 / 54.1** | 69.2 / 45.4 | 73.6 / 55.2 |
| **ERNIE-M LARGE** | 84.4 / 71.5 | 74.8 / 56.6 | 70.8 / 55.9 | 67.4 / 47.2 | **72.6 / 54.7** | 75.0 / 53.7 | **71.1 / 47.5** | **73.7 / 55.3** |
956a284a0fe5cde90ad23a2bdfb702e4
apache-2.0
[]
false
Cross-lingual Paraphrase Identification

* dataset: PAWS-X

| Model | en | de | es | fr | ja | ko | zh | Avg |
| ---------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Cross-lingual Transfer | | | | | | | | |
| mBERT | 94.0 | 85.7 | 87.4 | 87.0 | 73.0 | 69.6 | 77.0 | 81.9 |
| XLM | 94.0 | 85.9 | 88.3 | 87.4 | 69.3 | 64.8 | 76.5 | 80.9 |
| MMTE | 93.1 | 85.1 | 87.2 | 86.9 | 72.0 | 69.2 | 75.9 | 81.3 |
| XLM-R LARGE | 94.7 | 89.7 | 90.1 | 90.4 | 78.7 | 79.0 | 82.3 | 86.4 |
| VECO LARGE | **96.2** | 91.3 | 91.4 | 92.0 | 81.8 | 82.9 | 85.1 | 88.7 |
| **ERNIE-M LARGE** | 96.0 | **91.9** | **91.4** | **92.2** | **83.9** | **84.5** | **86.9** | **89.5** |
| Translate-Train-All | | | | | | | | |
| VECO LARGE | 96.4 | 93.0 | 93.0 | 93.5 | 87.2 | 86.8 | 87.9 | 91.1 |
| **ERNIE-M LARGE** | **96.5** | **93.5** | **93.3** | **93.8** | **87.9** | **88.4** | **89.2** | **91.8** |
4dd289d2903017e9c179d25ff4011ebc
apache-2.0
[]
false
Cross-lingual Sentence Retrieval

* dataset: Tatoeba

| Model | Avg |
| ------------------------------------- | -------- |
| XLM-R LARGE | 75.2 |
| VECO LARGE | 86.9 |
| **ERNIE-M LARGE** | **87.9** |
| **ERNIE-M LARGE (after fine-tuning)** | **93.3** |
18e68a3c9efdd87bfee8544f65f0dce5
apache-2.0
[]
false
Citation Info

```text
@article{Ouyang2021ERNIEMEM,
  title={ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora},
  author={Xuan Ouyang and Shuohuan Wang and Chao Pang and Yu Sun and Hao Tian and Hua Wu and Haifeng Wang},
  journal={ArXiv},
  year={2021},
  volume={abs/2012.15674}
}
```
20c3a33d8068969a6a70951ee595dedf
mit
['generated_from_trainer']
false
predict-perception-xlmr-focus-victim

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2546
- Rmse: 0.6301
- Rmse Focus::a Sulla vittima: 0.6301
- Mae: 0.5441
- Mae Focus::a Sulla vittima: 0.5441
- R2: 0.7205
- R2 Focus::a Sulla vittima: 0.7205
- Cos: 0.8261
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.7802
- Rsa: nan
13317a201b8aa07b2ec59e3fa17f2c99
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sulla vittima | Mae | Mae Focus::a Sulla vittima | R2 | R2 Focus::a Sulla vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0607 | 1.0 | 15 | 0.9261 | 1.2017 | 1.2017 | 0.9557 | 0.9557 | -0.0166 | -0.0166 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 1.0107 | 2.0 | 30 | 0.9481 | 1.2159 | 1.2159 | 0.9861 | 0.9861 | -0.0408 | -0.0408 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 0.9921 | 3.0 | 45 | 0.9068 | 1.1892 | 1.1892 | 0.9548 | 0.9548 | 0.0045 | 0.0045 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 0.7769 | 4.0 | 60 | 0.5014 | 0.8842 | 0.8842 | 0.7121 | 0.7121 | 0.4496 | 0.4496 | 0.7391 | 0.0 | 0.5 | 0.6232 | nan |
| 0.5763 | 5.0 | 75 | 0.4019 | 0.7917 | 0.7917 | 0.6737 | 0.6737 | 0.5588 | 0.5588 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.4378 | 6.0 | 90 | 0.3594 | 0.7486 | 0.7486 | 0.5957 | 0.5957 | 0.6055 | 0.6055 | 0.7391 | 0.0 | 0.5 | 0.4442 | nan |
| 0.3595 | 7.0 | 105 | 0.3452 | 0.7337 | 0.7337 | 0.6333 | 0.6333 | 0.6210 | 0.6210 | 0.5652 | 0.0 | 0.5 | 0.2649 | nan |
| 0.3192 | 8.0 | 120 | 0.3275 | 0.7147 | 0.7147 | 0.6205 | 0.6205 | 0.6405 | 0.6405 | 0.7391 | 0.0 | 0.5 | 0.6561 | nan |
| 0.2482 | 9.0 | 135 | 0.2978 | 0.6815 | 0.6815 | 0.5754 | 0.5754 | 0.6731 | 0.6731 | 0.7391 | 0.0 | 0.5 | 0.6715 | nan |
| 0.2416 | 10.0 | 150 | 0.3018 | 0.6860 | 0.6860 | 0.5954 | 0.5954 | 0.6687 | 0.6687 | 0.5652 | 0.0 | 0.5 | 0.2553 | nan |
| 0.2292 | 11.0 | 165 | 0.2764 | 0.6565 | 0.6565 | 0.5522 | 0.5522 | 0.6966 | 0.6966 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.1752 | 12.0 | 180 | 0.3070 | 0.6920 | 0.6920 | 0.5680 | 0.5680 | 0.6629 | 0.6629 | 0.7391 | 0.0 | 0.5 | 0.6715 | nan |
| 0.1956 | 13.0 | 195 | 0.2923 | 0.6752 | 0.6752 | 0.5499 | 0.5499 | 0.6791 | 0.6791 | 0.8261 | 0.0 | 0.5 | 0.7843 | nan |
| 0.1424 | 14.0 | 210 | 0.3163 | 0.7023 | 0.7023 | 0.6060 | 0.6060 | 0.6528 | 0.6528 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.152 | 15.0 | 225 | 0.2436 | 0.6164 | 0.6164 | 0.5127 | 0.5127 | 0.7326 | 0.7326 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.1277 | 16.0 | 240 | 0.2471 | 0.6208 | 0.6208 | 0.5367 | 0.5367 | 0.7287 | 0.7287 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1269 | 17.0 | 255 | 0.2573 | 0.6334 | 0.6334 | 0.5329 | 0.5329 | 0.7175 | 0.7175 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1058 | 18.0 | 270 | 0.2538 | 0.6291 | 0.6291 | 0.5530 | 0.5530 | 0.7214 | 0.7214 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.107 | 19.0 | 285 | 0.2568 | 0.6328 | 0.6328 | 0.5464 | 0.5464 | 0.7181 | 0.7181 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1185 | 20.0 | 300 | 0.2452 | 0.6183 | 0.6183 | 0.5317 | 0.5317 | 0.7309 | 0.7309 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.1029 | 21.0 | 315 | 0.2419 | 0.6142 | 0.6142 | 0.5415 | 0.5415 | 0.7344 | 0.7344 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.0908 | 22.0 | 330 | 0.2462 | 0.6196 | 0.6196 | 0.5261 | 0.5261 | 0.7297 | 0.7297 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0901 | 23.0 | 345 | 0.2528 | 0.6279 | 0.6279 | 0.5330 | 0.5330 | 0.7225 | 0.7225 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0979 | 24.0 | 360 | 0.2800 | 0.6607 | 0.6607 | 0.5682 | 0.5682 | 0.6927 | 0.6927 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.0992 | 25.0 | 375 | 0.2502 | 0.6246 | 0.6246 | 0.5517 | 0.5517 | 0.7254 | 0.7254 | 0.6522 | 0.0 | 0.5 | 0.2372 | nan |
| 0.0846 | 26.0 | 390 | 0.2570 | 0.6331 | 0.6331 | 0.5524 | 0.5524 | 0.7178 | 0.7178 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0717 | 27.0 | 405 | 0.2562 | 0.6321 | 0.6321 | 0.5456 | 0.5456 | 0.7187 | 0.7187 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0739 | 28.0 | 420 | 0.2570 | 0.6330 | 0.6330 | 0.5471 | 0.5471 | 0.7179 | 0.7179 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0828 | 29.0 | 435 | 0.2553 | 0.6309 | 0.6309 | 0.5446 | 0.5446 | 0.7198 | 0.7198 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.086 | 30.0 | 450 | 0.2546 | 0.6301 | 0.6301 | 0.5441 | 0.5441 | 0.7205 | 0.7205 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
027e669cb66b05826321db3035266c42
apache-2.0
[]
false
NT5, a T5 model trained to perform numerical reasoning

T5-small model pre-trained on 3 million (partly synthetic) texts and fine-tuned on [DROP](https://allennlp.org/drop.html). It was introduced in the paper [NT5?! Training T5 to Perform Numerical Reasoning](https://arxiv.org/abs/2104.07307) by Yang et al. and first released in [this repository](https://github.com/lesterpjy/numeric-t5). As the original implementation was in Tensorflow 2, I've converted the weights to PyTorch. This model corresponds to RC Experiment 1 (see the paper), their best performing model.

Disclaimer: The team releasing NT5 did not write a model card for this model, so this model card has been written by me.
104019d0d6dd056b7b641affdeb2909e
apache-2.0
[]
false
Model description The NT5 model is a T5 model, in other words, an encoder-decoder Transformer. In order to encourage numerical reasoning, the model was further pre-trained on three datasets designed to strengthen skills necessary for numerical reasoning over text (NRoT) and general reading comprehension before being fine-tuned on the Discrete Reasoning over Text (DROP) dataset.
2bdde5edf4fcd427e5685a159d9cd68f
apache-2.0
[]
false
How to use

Here is how to use this model:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

context = """Saint Jean de Brébeuf was a French Jesuit missionary who travelled to New France in 1625. There he worked primarily with the Huron for the rest of his life, except for a few years in France from 1629 to 1633. He learned their language and culture, writing extensively about each to aid other missionaries. In 1649, Brébeuf and another missionary were captured when an Iroquois raid took over a Huron village. Together with Huron captives, the missionaries were ritually tortured and killed on March 16, 1649. Brébeuf was beatified in 1925 and among eight Jesuit missionaries canonized as saints in the Roman Catholic Church in 1930."""

question = "How many years did Saint Jean de Brébeuf stay in New France before he went back to France for a few years?"

tokenizer = T5Tokenizer.from_pretrained("nielsr/nt5-small-rc1")
model = T5ForConditionalGeneration.from_pretrained("nielsr/nt5-small-rc1")
```
cbab1a466320f970aa6ec9f4bf7f85c8
apache-2.0
[]
false
encode context & question

```python
input_text = f"answer_me: {question} context: {context}"
encoded_query = tokenizer(
    input_text, return_tensors='pt', padding='max_length', truncation=True, max_length=512)
```
cc35e06b0795987fcf4bb81ec93d4164
apache-2.0
[]
false
generate answer

```python
generated_answer = model.generate(input_ids=encoded_query["input_ids"],
                                  attention_mask=encoded_query["attention_mask"],
                                  max_length=54)
decoded_answer = tokenizer.decode(generated_answer.numpy()[0])
print("T5 Answer: ", decoded_answer)
# T5 Answer: 4
```
0206b701bb8ce5425985d35b00611411
apache-2.0
[]
false
BibTeX entry and citation info

```bibtex
@misc{yang2021nt5,
  title={NT5?! Training T5 to Perform Numerical Reasoning},
  author={Peng-Jian Yang and Ying Ting Chen and Yuechan Chen and Daniel Cer},
  year={2021},
  eprint={2104.07307},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```bibtex
@article{DBLP:journals/corr/abs-1903-00161,
  author        = {Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner},
  title         = {{DROP:} {A} Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs},
  journal       = {CoRR},
  volume        = {abs/1903.00161},
  year          = {2019},
  url           = {http://arxiv.org/abs/1903.00161},
  archivePrefix = {arXiv},
  eprint        = {1903.00161},
  timestamp     = {Wed, 03 Jul 2019 07:17:04 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1903-00161.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
303f666e1be09c2f44281661aa6fcc19
apache-2.0
['generated_from_trainer']
false
xlsr-53-bemba-10hrs

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3190
- Wer: 0.4032
8114f74ea6d95958cc231b792bd0ec62
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
- mixed_precision_training: Native AMP
05d2e54a393b19fd9df39eb00243c964
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3207        | 1.07  | 400  | 0.3720          | 0.5923 |
| 0.5688        | 2.14  | 800  | 0.3073          | 0.5002 |
| 0.3927        | 3.22  | 1200 | 0.2678          | 0.4521 |
| 0.316         | 4.29  | 1600 | 0.2703          | 0.4261 |
| 0.2531        | 5.36  | 2000 | 0.2663          | 0.4198 |
| 0.2051        | 6.43  | 2400 | 0.2614          | 0.4037 |
| 0.1584        | 7.51  | 2800 | 0.2853          | 0.4046 |
| 0.1343        | 8.58  | 3200 | 0.3072          | 0.4121 |
| 0.1031        | 9.65  | 3600 | 0.3190          | 0.4032 |
6ac1f9ccbba2ee80f8b200c41bbcce7a
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2r_de_vp-100k_age_teens-2_sixties-8_s877 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
167e0e2804f9762ef988d8faa511072d
apache-2.0
['generated_from_trainer']
false
BERT_Mod_2

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the glue dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.5659
- eval_accuracy: 0.9037
- eval_runtime: 0.3838
- eval_samples_per_second: 2271.724
- eval_steps_per_second: 143.285
- epoch: 0.01
- step: 49
4ae7342428fd4314b4bf3ad993dde921
apache-2.0
['generated_from_trainer']
false
sarcasm-detection-Bert-base-uncased-POS

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.1904
- Accuracy: 0.591
d71bfa0d7df0a746c0de07ba5034e251
apache-2.0
['translation']
false
opus-mt-sv-xh

* source languages: sv
* target languages: xh
* OPUS readme: [sv-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.eval.txt)
c1562634ad6b5d0e3f3593be9f59a857
mit
['generated_from_trainer']
false
microsoft-deberta-v3-large_ner_conll2003

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0293
- Precision: 0.9667
- Recall: 0.9724
- F1: 0.9695
- Accuracy: 0.9945
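A hedged inference sketch with the pipeline API (the repo id below is a placeholder; the card does not state where the checkpoint is hosted):

```python
# Hedged sketch: named entity recognition with the fine-tuned tagger.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/microsoft-deberta-v3-large_ner_conll2003",  # placeholder repo id
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```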
a9716e0083ae09c3efc13735bd18cde5
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
01b921de3884b2169f9277974c53d448
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0986        | 1.0   | 878  | 0.0323          | 0.9453    | 0.9596 | 0.9524 | 0.9921   |
| 0.0212        | 2.0   | 1756 | 0.0270          | 0.9571    | 0.9675 | 0.9623 | 0.9932   |
| 0.009         | 3.0   | 2634 | 0.0280          | 0.9638    | 0.9714 | 0.9676 | 0.9940   |
| 0.0035        | 4.0   | 3512 | 0.0290          | 0.9657    | 0.9712 | 0.9685 | 0.9943   |
| 0.0022        | 5.0   | 4390 | 0.0293          | 0.9667    | 0.9724 | 0.9695 | 0.9945   |
c8a2dd46a829169d9713795455c7adab
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-imdb-lm

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.8512
069daa280ff32642a6d14bbb88eeec0b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.9577        | 1.0   | 7315  | 3.8818          |
| 3.8965        | 2.0   | 14630 | 3.8570          |
| 3.8561        | 3.0   | 21945 | 3.8512          |
39dcaa62e7d473d959860a81ec392c58
apache-2.0
['automatic-speech-recognition', 'sv-SE']
false
exp_w2v2t_sv-se_vp-it_s975 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ba6f01f474ed74027b87d4a4fd188777
mit
['generated_from_trainer']
false
roberta-large-finetuned-code-mixed-DS

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.1340
- Accuracy: 0.7203
- Precision: 0.6584
- Recall: 0.6548
- F1: 0.6558
6a59685800b019f17b25ba03a61e7256
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
1bd27eacc11ceb46219fb7174b2ee178
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9729        | 1.0   | 248  | 0.7491          | 0.6922   | 0.6434    | 0.6625 | 0.6358 |
| 0.7474        | 1.99  | 496  | 0.6947          | 0.7183   | 0.6712    | 0.6915 | 0.6760 |
| 0.5938        | 2.99  | 744  | 0.7370          | 0.7123   | 0.6624    | 0.6839 | 0.6642 |
| 0.4264        | 3.98  | 992  | 0.8820          | 0.7123   | 0.6540    | 0.6636 | 0.6492 |
| 0.2806        | 4.98  | 1240 | 1.2022          | 0.7404   | 0.6807    | 0.6694 | 0.6742 |
| 0.2239        | 5.98  | 1488 | 1.3933          | 0.7223   | 0.6593    | 0.6587 | 0.6568 |
| 0.1585        | 6.97  | 1736 | 1.8543          | 0.7304   | 0.6730    | 0.6763 | 0.6737 |
| 0.1302        | 7.97  | 1984 | 2.0783          | 0.7143   | 0.6495    | 0.6520 | 0.6504 |
| 0.1008        | 8.96  | 2232 | 2.3523          | 0.7183   | 0.6588    | 0.6561 | 0.6552 |
| 0.0793        | 9.96  | 2480 | 2.5260          | 0.7163   | 0.6516    | 0.6566 | 0.6538 |
| 0.0498        | 10.96 | 2728 | 2.6074          | 0.7425   | 0.6902    | 0.6817 | 0.6830 |
| 0.0484        | 11.95 | 2976 | 2.6758          | 0.7284   | 0.6687    | 0.6734 | 0.6709 |
| 0.0409        | 12.95 | 3224 | 2.8658          | 0.7425   | 0.6817    | 0.6756 | 0.6781 |
| 0.0239        | 13.94 | 3472 | 2.9484          | 0.7465   | 0.6980    | 0.6818 | 0.6870 |
| 0.025         | 14.94 | 3720 | 3.0827          | 0.7304   | 0.6778    | 0.6577 | 0.6641 |
| 0.0286        | 15.94 | 3968 | 3.0011          | 0.7183   | 0.6509    | 0.6475 | 0.6491 |
| 0.0264        | 16.93 | 4216 | 3.1581          | 0.7264   | 0.6645    | 0.6563 | 0.6595 |
| 0.009         | 17.93 | 4464 | 3.1200          | 0.7223   | 0.6589    | 0.6561 | 0.6569 |
| 0.012         | 18.92 | 4712 | 3.1364          | 0.7203   | 0.6573    | 0.6503 | 0.6525 |
| 0.017         | 19.92 | 4960 | 3.1340          | 0.7203   | 0.6584    | 0.6548 | 0.6558 |
2913e542d6be06f564d36df187ee783f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2214
- Accuracy: 0.9275
- F1: 0.9274
2a692206e4111e57aea849f0e69f1c74
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8568        | 1.0   | 250  | 0.3328          | 0.9      | 0.8947 |
| 0.2576        | 2.0   | 500  | 0.2214          | 0.9275   | 0.9274 |
117e967171d8761d5588d79cf73e10c4
bsd-3-clause
[]
false
Model description

CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).

The checkpoint included in this repository is denoted as **CodeGen-Mono 6B** in the paper, where "Mono" means the model was initialized with *CodeGen-Multi 6B* and further pre-trained on a Python programming language dataset, and "6B" refers to the number of trainable parameters.
14c3a073ee1120d5405fe75bd9716fab
bsd-3-clause
[]
false
Training data

This checkpoint (CodeGen-Mono 6B) was first initialized with *CodeGen-Multi 6B*, and then pre-trained on the BigPython dataset. The data consists of 71.7B tokens of Python code. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
541e6c244c008c8793f28ee70354d4e7
bsd-3-clause
[]
false
How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-mono")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids

generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
4271680068ffb19041edf87c678a81d5
mit
[]
false
Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens. For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models were trained with an initial sequence length of 512 subwords for ~2-3M steps.

For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch between the "real" vocab size of 31102 and the vocab size specified in `config.json`. However, the model works and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.

The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We closely followed the ELECTRA training procedure as used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
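For illustration, a minimal sketch of the kind of NLTK-based sentence splitting described above (assuming the standard Punkt tokenizer; the exact preprocessing script is not part of this card, and the example text is a placeholder):

```python
# Hedged sketch: split Italian text into sentences with NLTK's Punkt models.
import nltk
nltk.download("punkt")
from nltk.tokenize import sent_tokenize

text = "Questa è la prima frase. E questa è la seconda."
print(sent_tokenize(text, language="italian"))
# ['Questa è la prima frase.', 'E questa è la seconda.']
```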
a4c9ec6ec72c5c0a4c394de1282f454d
mit
[]
false
Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!

| Model | Downloads |
| ---------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt) |
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt) |
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt) |
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt) |
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt) |
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt) |
8bd5d0476e64f7c4781169404739bfeb
mit
[]
false
Usage

With Transformers >= 2.3 our Italian BERT models can be loaded like:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

To load the Italian XXL ELECTRA model (discriminator), just use:

```python
from transformers import AutoModelWithLMHead, AutoTokenizer

model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)
```
485fe184a45506443ea6aa0adf9082f5
mit
[]
false
Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
3e14c990444ad72e6c08e244ad6b36d0
mit
['vision']
false
GIT (GenerativeImage2Text), large-sized, fine-tuned on TextVQA GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on TextVQA. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team.
751110eb7303e9f077de1b01ec19800c
mit
['vision']
false
Intended uses & limitations You can use the raw model for visual question answering (VQA). See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you.
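A hedged sketch of visual question answering with this checkpoint, following the usual transformers GIT pattern (the repo id `microsoft/git-large-textvqa`, image URL, and question are assumptions, not taken from this card):

```python
# Hedged sketch: VQA with GIT. The question is prepended as [CLS] + question tokens,
# and the model generates the answer conditioned on the image.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-large-textvqa")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-large-textvqa")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values

question = "what is in the picture?"  # placeholder question
input_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = torch.tensor([[processor.tokenizer.cls_token_id] + input_ids])

generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```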
cc1683e448b6f54516b9b8dbb77876cb
mit
['vision']
false
Training data

From the paper:

> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a).

However, this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", a smaller variant of GIT trained on 20 million image-text pairs. Next, the model was fine-tuned on TextVQA. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.
f6323826fe4a2dffe88a87725626a773
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 2.4722
2da3f69ea2c8ecb2634ae7d32711c117
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7117        | 1.0   | 157  | 2.4977          |
| 2.5783        | 2.0   | 314  | 2.4241          |
| 2.5375        | 3.0   | 471  | 2.4358          |
fa935f5a81f0cf6a8bcb25e106bac180
mit
[]
false
Model Details

**Model Description:** RoBERTa base OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa base model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.

- **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Related Models:** [RoBERTa base](https://huggingface.co/roberta-base), [GPT-XL (1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
  - [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection)
  - [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
  - [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
  - [Explore the detector model here](https://huggingface.co/openai-detector)
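A minimal detection sketch, assuming the checkpoint is available on the Hub as `roberta-base-openai-detector` (an assumption; the input text is a placeholder):

```python
# Hedged sketch: score a passage with the GPT-2 output detector via the
# text-classification pipeline; the label indicates human- vs GPT-2-written text.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")
print(detector("This passage was written by a person, not a language model."))
```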
84612bbf7f542457fd19809ab16ac5ce
mit
[]
false
Downstream Use The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.
229065d4b416603af937b1dadc44e5b9
mit
[]
false
Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
34338f5bb0901cc0db4744f24d3c8865
mit
[]
false
Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
176761778958ef627e18fbec06c98c27
mit
[]
false
Risks and Limitations

In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.

In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:

> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.

The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
5550c9fa3c513164f9f95b02b3f9b0eb
mit
[]
false
Bias Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa base and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa base](https://huggingface.co/roberta-base) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
07d9a6ca6ef966493c28d41cf69a3c48
mit
[]
false
Training Data The model is a sequence classifier based on RoBERTa base (see the [RoBERTa base model card](https://huggingface.co/roberta-base) for more details on the RoBERTa base training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).
859e79409bc56e8b1b892fefb4704a87
mit
[]
false
Training Procedure

The model developers write that:

> We based a sequence classifier on RoBERTaBASE (125 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.

They later state:

> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.
1506de0f0c83571d5eed9740f797293b
mit
[]
false
Testing Data, Factors and Metrics

The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:

> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
4213d6a4bfbc86de4b591da39a83d7d1
mit
[]
false
Results

The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):

> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
22d59bd4d6ceb56463b2648af9236f1f
mit
[]
false
Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
92f85173b1d742c1cffa9aa1188f58d9
mit
[]
false
Technical Specifications

See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for details on the modeling architecture and training procedure.
649bdb209b415142cbe15af2d2da9f54
mit
[]
false
Citation Information

```bibtex
@article{solaiman2019release,
  title={Release strategies and the social impacts of language models},
  author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
  journal={arXiv preprint arXiv:1908.09203},
  year={2019}
}
```

APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
d4a12b6f633c2bb40c0e19a4301c0b7b
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0826
- Accuracy: 0.9761
- Precision: 0.9727
- Recall: 0.9654
- F1: 0.9691
d28c8f82ecdbbb62f7161f96333eed97
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Breton

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Breton using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
2cec800c0ec89fa1fdcc5a182018e238
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "br", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
4f2fbb4cea77a35233c2c61aaf366692
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Breton test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "br", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-breton")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)\/\«\»\½\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
d2a43cf9f4a3198338f3b3bd122ea22f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays

```python
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```
1c54da87bb35cbcc02764faf59a58c90
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays

```python
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 54.04%
d13b103d571d2554f93f98a51a648eb4
mit
['generated_from_trainer']
false
bart-large-cnn-finetuned-roundup-2-2

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.1521
- Rouge1: 52.6634
- Rouge2: 32.537
- Rougel: 33.3148
- Rougelsum: 50.148
- Gen Len: 142.0
66b40cd46a21d2a3d4e514f2b025db7e
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
0cfeb50b16369f84740a92d25bf14d29
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 167  | 1.2139          | 52.546  | 32.4912 | 32.9529 | 49.8241   | 142.0   |
| No log        | 2.0   | 334  | 1.1521          | 52.6634 | 32.537  | 33.3148 | 50.148    | 142.0   |
8b8e77eeb37fc4eb30bba3643e6e9875
mit
['generated_from_trainer']
false
camembert-base-finetuned-sans-symbole-dd

This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2642
- Precision: 0.8856
- Recall: 0.9176
- F1: 0.9013
- Accuracy: 0.9364
e1b126a0389fb24d3750c41707779063
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1961        | 1.0   | 4317  | 0.2216          | 0.8675    | 0.9039 | 0.8853 | 0.9319   |
| 0.161         | 2.0   | 8634  | 0.2243          | 0.8614    | 0.9158 | 0.8878 | 0.9237   |
| 0.1169        | 3.0   | 12951 | 0.2507          | 0.8752    | 0.9154 | 0.8949 | 0.9329   |
| 0.0875        | 4.0   | 17268 | 0.2642          | 0.8856    | 0.9176 | 0.9013 | 0.9364   |
0bb5b6726acac3b52c4acea0d41c63a6
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1699
- F1: 0.8725
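PAN-X (the NER subset of XTREME) implies a named-entity-recognition model. A minimal sketch; the full hub id is not given in the card, so the model path below is an assumption.
```python
from transformers import pipeline

# Model path is a placeholder; replace with the actual owner/model hub id.
ner = pipeline("token-classification",
               model="xlm-roberta-base-finetuned-panx-fr",
               aggregation_strategy="simple")
print(ner("Emmanuel Macron a visité Marseille en mai."))
```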
6b7f51781f12c1507acd66687748c2b7
mit
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
16fea87efb8666f6c725e38b2d447d2b
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5975        | 1.0   | 191  | 0.2612          | 0.8237 |
| 0.2798        | 2.0   | 382  | 0.1699          | 0.8725 |
c2ec34fca873a08845376d287060a735
cc-by-sa-4.0
['coptic', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description
This is a RoBERTa model trained on [UD_Coptic](https://universaldependencies.org/cop/) for POS-tagging and dependency-parsing, derived from [roberta-base-coptic](https://huggingface.co/KoichiYasuoka/roberta-base-coptic). Every word is tagged with its [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) tag.
71e1acdbdea777cf9b8a7f1702caedcf
cc-by-sa-4.0
['coptic', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-coptic-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-coptic-upos")
```
or
```py
import esupar
nlp = esupar.load("KoichiYasuoka/roberta-base-coptic-upos")
```
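With the plain `transformers` route, UPOS tags can be read off the model's `id2label` mapping. A minimal sketch, assuming that mapping holds the tag names; the input string is a placeholder for actual Coptic text.
```py
import torch

# Tag each subword token with its predicted UPOS label; input text is a placeholder.
text = "your Coptic text here"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
labels = [model.config.id2label[i] for i in logits.argmax(dim=-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))
```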
006ac87629364cb09d0b84b59c16ae9b
afl-3.0
[]
false
Example of usage:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kyryl0s/gpt2-uk-xxs")
model = AutoModelForCausalLM.from_pretrained("kyryl0s/gpt2-uk-xxs")

input_ids = tokenizer.encode("Путін — ", add_special_tokens=False, return_tensors='pt')
outputs = model.generate(
    input_ids,
    do_sample=True,
    num_return_sequences=3,
    max_length=50
)

for i, out in enumerate(outputs):
    print("{}: {}".format(i, tokenizer.decode(out)))
```
c41e617a8c51a8713ffdf33d22beca98
apache-2.0
['generated_from_trainer']
false
distilbart-cnn-6-6-finetuned-xsum-intro-test
This model is a fine-tuned version of [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9036
- Rouge1: 32.0474
- Rouge2: 12.3779
- Rougel: 23.5491
- Rougelsum: 24.251
- Gen Len: 60.8594
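A minimal sketch of direct seq2seq inference with this checkpoint. The full hub id is not given in the card, so the model path is an assumption; the generation settings are illustrative, not the card's.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Model path is a placeholder; replace with the actual owner/model hub id.
model_id = "distilbart-cnn-6-6-finetuned-xsum-intro-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Your long article text goes here...", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=61, num_beams=4)  # ~61 matches the Gen Len above
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```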
aa47502aa82a7f53b25e4fff85512e41
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9432        | 1.0   | 12753 | 1.9036          | 32.0474 | 12.3779 | 23.5491 | 24.251    | 60.8594 |
109ef3f646171fb24b81322d595dff10
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2209
- Accuracy: 0.9225
- F1: 0.9226
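A minimal sketch of classifying a sentence with this emotion model. The full hub id is not given in the card, so the model path below is an assumption.
```python
from transformers import pipeline

# Model path is a placeholder; replace with the actual owner/model hub id.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```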
67dc8224c7f26f158270c6403ab91d6c
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8477        | 1.0   | 250  | 0.3204          | 0.9025   | 0.9000 |
| 0.2559        | 2.0   | 500  | 0.2209          | 0.9225   | 0.9226 |
9e427b35faa86db72614e09302942a50
mit
['generated_from_trainer']
false
distilbert-base-turkish-cased-finetuned-emotion
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the turkish-multiclass-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4861
- F1: 0.8276613385259164
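The F1 values here were logged as the raw dict returned by the metric. A plausible `compute_metrics` sketch using the `evaluate` library; the `average="weighted"` choice is an assumption for this multiclass task.
```python
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")

# evaluate's F1 metric returns a dict like {'f1': 0.8277}, matching the logged values.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return f1_metric.compute(predictions=predictions, references=labels, average="weighted")
```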
124128da502ebca2770793fe867b1dca
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | F1                 |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|
| 0.2578        | 1.0   | 313  | 0.5459          | 0.8212239281513611 |
| 0.381         | 2.0   | 626  | 0.4861          | 0.8276613385259164 |
970d08625cd1e32e66fce11bdccf34cf
creativeml-openrail-m
['text-to-image']
false
noggles6000 on Stable Diffusion via Dreambooth
Trained with the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.
a18e8d2a7793ca60c6c942668ac598c5
creativeml-openrail-m
['text-to-image']
false
Model by alxdfy
This is the Stable Diffusion model fine-tuned on the noggles6000 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **nounsbud.jpg**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
nounsbud.jpg
![nounsbud.jpg 0](https://huggingface.co/alxdfy/noggles6000/resolve/main/concept_images/nounsbud.jpg)
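A minimal `diffusers` sketch for local inference. The repo id `alxdfy/noggles6000` matches the image URL above; the instance token "nounsbud" is an assumption inferred from the `instance_prompt(s)` listed in this card.
```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id taken from the card's image URL; "nounsbud" is the assumed instance token.
pipe = StableDiffusionPipeline.from_pretrained("alxdfy/noggles6000", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of nounsbud in a forest").images[0]
image.save("nounsbud_forest.png")
```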
2bae90acdfc29bff3268265fd4013f0b
apache-2.0
['translation']
false
opus-mt-ro-fi
* source languages: ro
* target languages: fi
* OPUS readme: [ro-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.eval.txt)
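A minimal translation sketch with `transformers`; `Helsinki-NLP/opus-mt-ro-fi` is the usual hub id for this OPUS-MT pair, so adjust the path if the model is hosted under a different name.
```python
from transformers import MarianMTModel, MarianTokenizer

# "Helsinki-NLP/opus-mt-ro-fi" is the expected hub id for this model pair.
model_id = "Helsinki-NLP/opus-mt-ro-fi"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["Bună dimineața!"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```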
c38bed1ae6875cef924c68e3a98c02f5
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-big-he-en
Neural machine translation model for translating from Hebrew (he) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}
```
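A minimal usage sketch; `Helsinki-NLP/opus-mt-tc-big-he-en` is the expected hub id for this model, so adjust the path if it is hosted under a different name. The Hebrew sample sentence is a placeholder.
```python
from transformers import pipeline

# "Helsinki-NLP/opus-mt-tc-big-he-en" is the expected hub id for this model.
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-he-en")
print(translate("שלום עולם")[0]["translation_text"])
```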
956855ade5e85c9b7b96d37e04dd7c44