Dataset schema (field, type, range):

| field | type | range |
|---|---|---|
| sha | null | |
| last_modified | null | |
| library_name | stringclasses | 154 values |
| text | stringlengths | 1 – 900k |
| metadata | stringlengths | 2 – 348k |
| pipeline_tag | stringclasses | 45 values |
| id | stringlengths | 5 – 122 |
| tags | listlengths | 1 – 1.84k |
| created_at | stringlengths | 25 – 25 |
| arxiv | listlengths | 0 – 201 |
| languages | listlengths | 0 – 1.83k |
| tags_str | stringlengths | 17 – 9.34k |
| text_str | stringlengths | 0 – 389k |
| text_lists | listlengths | 0 – 722 |
| processed_texts | listlengths | 1 – 723 |
| tokens_length | listlengths | 1 – 723 |
| input_texts | listlengths | 1 – 61 |
| embeddings | listlengths | 768 – 768 |
null
null
transformers
**Usage of Hugging Face Transformers for the question generation task**

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-QuestionGeneration")
tokenizer = AutoTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']
questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           num_beams=32,
                           no_repeat_ngram_size=2,
                           early_stopping=True,
                           num_return_sequences=10)
questions = tokenizer.batch_decode(questions, skip_special_tokens=True)
```

**Decoder configuration examples:** [**The input text used below can be seen here**](https://www.bbc.com/news/science-environment-59775105)

```
questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           num_beams=32,
                           no_repeat_ngram_size=2,
                           early_stopping=True,
                           num_return_sequences=10)
tokenizer.batch_decode(questions, skip_special_tokens=True)
```

Output:

1. *What is the impact of human induced climate change on tropical cyclones?*
2. *What is the impact of climate change on tropical cyclones?*
3. *What is the impact of human induced climate change on tropical cyclone formation?*
4. *How many tropical cyclones will occur in the mid-latitudes?*
5. *What is the impact of climate change on the formation of tropical cyclones?*
6. *Is it possible for a tropical cyclone to form in the middle latitudes?*
7. *How many tropical cyclones will be formed in the mid-latitudes?*
8. *How many tropical cyclones will there be in the mid-latitudes?*
9. *How many tropical cyclones will form in the mid-latitudes?*
10. *What is the impact of global warming on tropical cyclones?*
11. *How long does it take for a tropical cyclone to form?*
12. *What are the impacts of climate change on tropical cyclones?*
13. *What are the effects of climate change on tropical cyclones?*
14. *How many tropical cyclones will be able to form in the middle latitudes?*
15. *What is the impact of climate change on tropical cyclone formation?*
16. *What is the effect of climate change on tropical cyclones?*
17. *How long does it take for a tropical cyclone to form in the middle latitude?*
18. *How many tropical cyclones will occur in the middle latitudes?*
19. *How many tropical cyclones are likely to form in the midlatitudes?*
20. *How many tropical cyclones are likely to form in the middle latitudes?*
21. *How many tropical cyclones are expected to form in the midlatitudes?*
22. *How many tropical cyclones will be formed in the middle latitudes?*
23. *How many tropical cyclones will there be in the middle latitudes?*
24. *How long will it take for a tropical cyclone to form in the middle latitude?*
25. *What is the impact of global warming on tropical cyclone formation?*
26. *How many tropical cyclones will form in the middle latitudes?*
27. *How many tropical cyclones can we expect to form in the middle latitudes?*
28. *Is it possible for a tropical cyclone to form in the middle latitude?*
29. *What is the effect of climate change on tropical cyclone formation?*
30. *What are the effects of climate change on tropical cyclone formation?*

You can also experiment with the `top_k` and `top_p` parameters of the `generate` method. [**The meaning of these text-generation parameters is explained here.**](https://huggingface.co/blog/how-to-generate)
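To see what those sampling flags actually do, here is a toy, pure-Python sketch of top-k / top-p (nucleus) filtering at a single decoding step. This is an illustration of the idea only, not the `transformers` implementation:

```python
import math

def top_k_top_p_filter(logits, top_k=0, top_p=1.0):
    """Toy sketch (not the transformers code): return the token indices that
    survive top-k / top-p (nucleus) filtering at one decoding step."""
    # Rank token indices by logit, most likely first.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    if top_k > 0:
        order = order[:top_k]  # top_k: keep only the k most likely tokens
    # Softmax over the surviving logits.
    exps = [math.exp(logits[i]) for i in order]
    total = sum(exps)
    # top_p: keep the smallest prefix whose cumulative probability reaches p.
    kept, cum = [], 0.0
    for idx, e in zip(order, exps):
        kept.append(idx)
        cum += e / total
        if cum >= top_p:
            break
    return kept
```

`generate` then samples the next token only from the kept set, so smaller `top_k` / `top_p` values give more focused output and larger values give more diverse output.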
{}
text2text-generation
AlekseyKulnevich/Pegasus-QuestionGeneration
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
[]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 40 ]
null
null
transformers
**Usage of Hugging Face Transformers for the summarization task**

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-Summarization")
tokenizer = AutoTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text

input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']
summary = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         min_length=100,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=10)
summary = tokenizer.batch_decode(summary, skip_special_tokens=True)
```

**Decoder configuration examples:** [**The input text used below can be seen here**](https://www.bbc.com/news/science-environment-59775105)

```
summary = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         num_beams=32,
                         min_length=100,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         num_return_sequences=1)
tokenizer.batch_decode(summary, skip_special_tokens=True)
```

Output:

1. *global warming will expand the range of tropical cyclones in the mid-latitudes of the world, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) and the US National Oceanic and Atmospheric Administration (NOAA) The study shows that a warming climate will allow more of these types of storms to form over a wider range than they have been able to do over the past three million years. "As the climate warms, it's likely that these storms will become more frequent and more intense," said the authors of this study.*

```
summary = model.generate(input_ids=input_ids,
                         attention_mask=input_mask,
                         top_k=30,
                         no_repeat_ngram_size=2,
                         early_stopping=True,
                         min_length=100,
                         num_return_sequences=1)
tokenizer.batch_decode(summary, skip_special_tokens=True)
```

Output:

1. *tropical cyclones in the mid-latitudes of the world will likely form more of these types of storms, according to a new study published by the Intergovernmental Panel on Climate change (IPCC) on the impact of human induced climate change on these storms. The study shows that a warming climate will increase the likelihood of a subtropical cyclone forming over a wider range of latitudes, including the equator, than it has been for the past three million years, and that it will be more likely to form over the tropics.*

You can also experiment with the `top_k` and `top_p` parameters of the `generate` method. [**The meaning of these text-generation parameters is explained here.**](https://huggingface.co/blog/how-to-generate)
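As an aside, `no_repeat_ngram_size=2` in the calls above forbids any bigram from occurring twice in the output. A toy sketch of that banning rule (an illustration, not the `transformers` code):

```python
def banned_next_tokens(generated, n):
    """For no_repeat_ngram_size=n: return the tokens that may NOT be generated
    next, because appending them would repeat an n-gram already in `generated`."""
    if len(generated) < n - 1:
        return set()
    # The (n-1)-token prefix that the next token would extend into an n-gram.
    prefix = generated[len(generated) - n + 1:] if n > 1 else []
    banned = set()
    for i in range(len(generated) - n + 1):
        # If this prefix occurred before, the token that followed it is banned.
        if generated[i:i + n - 1] == prefix:
            banned.add(generated[i + n - 1])
    return banned
```

With n=2 no word pair is ever repeated, which helps suppress the repetitive loops that long beam-search outputs are prone to.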
{}
text2text-generation
AlekseyKulnevich/Pegasus-Summarization
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
[]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 40 ]
null
null
transformers
This is a fine-tuned version of GPT-2, trained on the entire corpus of Plato's works. By sampling text from it, you should be able to generate ancient Greek philosophy on the fly!
{"language": "en", "tags": ["text-generation"], "pipeline_tag": "text-generation", "widget": [{"text": "The Gods"}, {"text": "What is"}]}
text-generation
Alerosae/SocratesGPT-2
[ "transformers", "pytorch", "gpt2", "feature-extraction", "text-generation", "en", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt2 #feature-extraction #text-generation #en #endpoints_compatible #text-generation-inference #region-us
[]
[ "TAGS\n#transformers #pytorch #gpt2 #feature-extraction #text-generation #en #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 47 ]
null
null
transformers
# XLM-RoBERTa large model with whole word masking, fine-tuned on SQuAD

Pretrained with a masked language modeling (MLM) objective, then fine-tuned on English and Russian QA datasets.

## Used QA datasets

SQuAD + SberQuAD. The [SberQuAD original paper](https://arxiv.org/pdf/1912.09723.pdf) is recommended reading!

## Evaluation results

The results obtained on SberQuAD are the following:

```
f1 = 84.3
exact_match = 65.3
```
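For context, `exact_match` and `f1` are the standard SQuAD measures. A simplified sketch of how they are computed per question (the official SQuAD evaluation additionally lowercases and strips articles and punctuation before comparing):

```python
from collections import Counter

def exact_match(prediction, gold):
    """1.0 iff the predicted answer string equals the gold answer exactly."""
    return float(prediction.strip() == gold.strip())

def f1_score(prediction, gold):
    """Token-overlap F1 between the predicted and gold answer spans."""
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Both scores are averaged over all questions; a prediction that misses the exact span can still earn partial F1 credit, which is why the reported f1 (84.3) exceeds exact_match (65.3).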
{"language": ["en", "ru", "multilingual"], "license": "apache-2.0"}
question-answering
AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "en", "ru", "multilingual", "arxiv:1912.09723", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
2022-03-02T23:29:04+00:00
[ "1912.09723" ]
[ "en", "ru", "multilingual" ]
TAGS #transformers #pytorch #xlm-roberta #question-answering #en #ru #multilingual #arxiv-1912.09723 #license-apache-2.0 #endpoints_compatible #has_space #region-us
[ 62, 49, 31, 32 ]
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sentence-compression-roberta

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3465
- Accuracy: 0.8473
- F1: 0.6835
- Precision: 0.6835
- Recall: 0.6835

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5312        | 1.0   | 50   | 0.5251          | 0.7591   | 0.0040 | 0.75      | 0.0020 |
| 0.4           | 2.0   | 100  | 0.4003          | 0.8200   | 0.5341 | 0.7113    | 0.4275 |
| 0.3355        | 3.0   | 150  | 0.3465          | 0.8473   | 0.6835 | 0.6835    | 0.6835 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
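Since this is a token-classification model, sentence compression amounts to predicting a keep/delete label for every token and joining the kept tokens. A minimal sketch of that final step (the label convention here is hypothetical, assuming 1 means "keep"):

```python
def compress(tokens, labels, keep_label=1):
    """Build the compressed sentence from per-token keep/delete predictions.
    `keep_label=1` is an assumed convention, not the model's documented config."""
    return " ".join(tok for tok, lab in zip(tokens, labels) if lab == keep_label)
```

For example, predictions of `[1, 0, 0, 1, 1]` over five tokens would drop the second and third words.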
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "sentence-compression-roberta", "results": []}]}
token-classification
AlexMaclean/sentence-compression-roberta
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
[ 50, 116, 4, 33 ]
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sentence-compression

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2973
- Accuracy: 0.8912
- F1: 0.8367
- Precision: 0.8495
- Recall: 0.8243

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2686        | 1.0   | 10000 | 0.2667          | 0.8894   | 0.8283 | 0.8725    | 0.7884 |
| 0.2205        | 2.0   | 20000 | 0.2704          | 0.8925   | 0.8372 | 0.8579    | 0.8175 |
| 0.1476        | 3.0   | 30000 | 0.2973          | 0.8912   | 0.8367 | 0.8495    | 0.8243 |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "sentence-compression", "results": []}]}
token-classification
AlexMaclean/sentence-compression
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
[ 54, 116, 4, 33 ]
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xls-r-300m-fr

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. It achieves the following results on the evaluation set:
- Loss: 0.2388
- Wer: 0.3681

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 2.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3748        | 0.07  | 500   | 3.8784          | 1.0    |
| 2.8068        | 0.14  | 1000  | 2.8289          | 0.9826 |
| 1.6698        | 0.22  | 1500  | 0.8811          | 0.7127 |
| 1.3488        | 0.29  | 2000  | 0.5166          | 0.5369 |
| 1.2239        | 0.36  | 2500  | 0.4105          | 0.4741 |
| 1.1537        | 0.43  | 3000  | 0.3585          | 0.4448 |
| 1.1184        | 0.51  | 3500  | 0.3336          | 0.4292 |
| 1.0968        | 0.58  | 4000  | 0.3195          | 0.4180 |
| 1.0737        | 0.65  | 4500  | 0.3075          | 0.4141 |
| 1.0677        | 0.72  | 5000  | 0.3015          | 0.4089 |
| 1.0462        | 0.8   | 5500  | 0.2971          | 0.4077 |
| 1.0392        | 0.87  | 6000  | 0.2870          | 0.3997 |
| 1.0178        | 0.94  | 6500  | 0.2805          | 0.3963 |
| 0.992         | 1.01  | 7000  | 0.2748          | 0.3935 |
| 1.0197        | 1.09  | 7500  | 0.2691          | 0.3884 |
| 1.0056        | 1.16  | 8000  | 0.2682          | 0.3889 |
| 0.9826        | 1.23  | 8500  | 0.2647          | 0.3868 |
| 0.9815        | 1.3   | 9000  | 0.2603          | 0.3832 |
| 0.9717        | 1.37  | 9500  | 0.2561          | 0.3807 |
| 0.9605        | 1.45  | 10000 | 0.2523          | 0.3783 |
| 0.96          | 1.52  | 10500 | 0.2494          | 0.3788 |
| 0.9442        | 1.59  | 11000 | 0.2478          | 0.3760 |
| 0.9564        | 1.66  | 11500 | 0.2454          | 0.3733 |
| 0.9436        | 1.74  | 12000 | 0.2439          | 0.3747 |
| 0.938         | 1.81  | 12500 | 0.2411          | 0.3716 |
| 0.9353        | 1.88  | 13000 | 0.2397          | 0.3698 |
| 0.9271        | 1.95  | 13500 | 0.2388          | 0.3681 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
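For reference, the reported Wer is the word error rate: the word-level edit distance between the hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / #reference
    words, computed as a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.3681 therefore corresponds to roughly 37 word errors per 100 reference words.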
{"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-fr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 fr", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 36.81, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 35.55, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 39.94, "name": "Test WER"}]}]}]}
automatic-speech-recognition
AlexN/xls-r-300m-fr-0
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "fr", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible...
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #fr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
[ 111, 130, 4, 39 ]
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xls-r-300m-fr

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2700
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
{"language": ["fr"], "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-fr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 fr", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 21.58, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 36.03, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 38.86, "name": "Test WER"}]}]}]}
automatic-speech-recognition
AlexN/xls-r-300m-fr
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "fr", "dataset:mozilla-foundation/common_voice_8_0", "model-index", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #fr #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us
# This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2700 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
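The card above reports word error rate (WER) on Common Voice 8.0 fr. As a reference for how that metric is defined, here is a minimal, self-contained sketch of WER as a word-level edit distance divided by reference length — an illustrative implementation, not the evaluation script used to produce the card's numbers:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("le chat dort", "le chien dort"))  # one substitution out of three words
```

In practice, evaluation on the Hub typically goes through a metrics library rather than a hand-rolled function, but the definition is the same.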
[ "# \n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #fr #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n", "# \n\nThis model is a fine-tuned version ...
[ 103, 48, 6, 12, 8, 3, 117, 39 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #fr #dataset-mozilla-foundation/common_voice_8_0 #model-index #endpoints_compatible #region-us \n# \n\nThis model is a fine-tuned versi...
[ -0.10766394436359406, 0.13645069301128387, -0.0051962872967123985, 0.014750988222658634, 0.1119554340839386, 0.010088654235005379, 0.08672897517681122, 0.12784604728221893, -0.06601430475711823, 0.10629156231880188, 0.051224250346422195, -0.004118363838642836, 0.1103917583823204, 0.1113221...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PT dataset. It achieves the following results on the evaluation set: - Loss: 0.2290 - Wer: 0.2382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.0952 | 0.64 | 500 | 3.0982 | 1.0 | | 1.7975 | 1.29 | 1000 | 0.7887 | 0.5651 | | 1.4138 | 1.93 | 1500 | 0.5238 | 0.4389 | | 1.344 | 2.57 | 2000 | 0.4775 | 0.4318 | | 1.2737 | 3.21 | 2500 | 0.4648 | 0.4075 | | 1.2554 | 3.86 | 3000 | 0.4069 | 0.3678 | | 1.1996 | 4.5 | 3500 | 0.3914 | 0.3668 | | 1.1427 | 5.14 | 4000 | 0.3694 | 0.3572 | | 1.1372 | 5.78 | 4500 | 0.3568 | 0.3501 | | 1.0831 | 6.43 | 5000 | 0.3331 | 0.3253 | | 1.1074 | 7.07 | 5500 | 0.3332 | 0.3352 | | 1.0536 | 7.71 | 6000 | 0.3131 | 0.3152 | | 1.0248 | 8.35 | 6500 | 0.3024 | 0.3023 | | 1.0075 | 9.0 | 7000 | 0.2948 | 0.3028 | | 0.979 | 9.64 | 7500 | 0.2796 | 0.2853 | | 0.9594 | 10.28 | 8000 | 0.2719 | 0.2789 | | 0.9172 | 10.93 | 8500 | 0.2620 | 0.2695 | | 0.9047 | 11.57 | 9000 | 0.2537 | 0.2596 | | 0.8777 | 12.21 | 9500 | 0.2438 | 0.2525 | | 0.8629 | 12.85 | 10000 | 0.2409 | 0.2493 | | 0.8575 | 13.5 | 10500 | 0.2366 | 0.2440 | | 0.8361 | 14.14 | 11000 | 0.2317 | 0.2385 | | 0.8126 | 14.78 | 11500 | 0.2290 | 0.2382 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
{"language": ["pt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "xls-r-300m-pt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0 pt", "type": "mozilla-foundation/common_voice_8_0", "args": "pt"}, "metrics": [{"type": "wer", "value": 19.361, "name": "Test WER"}, {"type": "cer", "value": 5.533, "name": "Test CER"}, {"type": "wer", "value": 19.36, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 47.812, "name": "Validation WER"}, {"type": "cer", "value": 18.805, "name": "Validation CER"}, {"type": "wer", "value": 48.01, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "pt"}, "metrics": [{"type": "wer", "value": 49.21, "name": "Test WER"}]}]}]}
automatic-speech-recognition
AlexN/xls-r-300m-pt
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hf-asr-leaderboard", "pt", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible...
2022-03-02T23:29:04+00:00
[]
[ "pt" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hf-asr-leaderboard #pt #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - PT dataset. It achieves the following results on the evaluation set: * Loss: 0.2290 * Wer: 0.2382 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1500 * num\_epochs: 15.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0
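The PT card trains with `lr_scheduler_type: linear` and `lr_scheduler_warmup_steps: 1500`: the learning rate ramps linearly from 0 to the peak over the warmup steps, then decays linearly back to 0. A small sketch of that schedule, using the card's settings and an *assumed* total of ~11500 steps (the last step reported in the training-results table; the true total depends on dataset size and batch size):

```python
def linear_schedule_lr(step: int, base_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Linear warmup from 0 to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / max(total_steps - warmup_steps, 1)

# With the card's settings (learning_rate=2e-4, warmup_steps=1500):
print(linear_schedule_lr(750, 2e-4, 1500, 11500))   # halfway through warmup -> 1e-4
print(linear_schedule_lr(1500, 2e-4, 1500, 11500))  # end of warmup -> peak 2e-4
```

This mirrors the shape of the schedule the Trainer applies for `linear`; the actual implementation lives in the transformers library.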
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_step...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hf-asr-leaderboard #pt #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperpar...
[ 111, 131, 4, 39 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #robust-speech-event #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hf-asr-leaderboard #pt #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyper...
[ -0.13396859169006348, 0.14691133797168732, -0.005080720409750938, 0.03463460132479668, 0.11000129580497742, 0.011546251364052296, 0.10062769055366516, 0.14367224276065826, -0.08043120801448822, 0.1196020320057869, 0.07696160674095154, 0.08200006186962128, 0.08795248717069626, 0.12306997925...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cola This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.7552 - Matthews Correlation: 0.5495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.5494768667363472}}]}]}
text-classification
Alireza1044/albert-base-v2-cola
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# cola This model is a fine-tuned version of albert-base-v2 on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.7552 - Matthews Correlation: 0.5495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
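The CoLA card reports Matthews correlation (0.5495), the standard GLUE metric for this task. For reference, a minimal binary-label sketch of the formula, MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)) — illustrative only, not the card's evaluation code:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 1, 0, 0], [1, 1, 0, 0]))  # perfect predictions -> 1.0
```

Unlike accuracy, MCC stays informative on CoLA's imbalanced acceptable/unacceptable label distribution, which is why GLUE uses it for this task.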
[ "# cola\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE COLA dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7552\n- Matthews Correlation: 0.5495", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", ...
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# cola\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE COLA dataset.\nIt achieves the following res...
[ 64, 54, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# cola\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE COLA dataset.\nIt achieves the following ...
[ -0.09513246268033981, 0.19777244329452515, -0.0024429953191429377, 0.11103436350822449, 0.12592263519763947, 0.02398180216550827, 0.08647486567497253, 0.15393927693367004, -0.07258731871843338, 0.08527370542287827, 0.07683658599853516, 0.025949105620384216, 0.0733930692076683, 0.1437214761...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5383 - Accuracy: 0.8501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "mnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE MNLI", "type": "glue", "args": "mnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.8500813669650122}}]}]}
text-classification
Alireza1044/albert-base-v2-mnli
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# mnli This model is a fine-tuned version of albert-base-v2 on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5383 - Accuracy: 0.8501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# mnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5383\n- Accuracy: 0.8501", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training...
[ "TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# mnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MNLI dataset.\nIt achieves the following results on the e...
[ 60, 57, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# mnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MNLI dataset.\nIt achieves the following results on th...
[ -0.0916154608130455, 0.179710254073143, -0.002514180261641741, 0.11677908152341843, 0.1334308385848999, 0.0239237230271101, 0.07813864201307297, 0.16349239647388458, -0.06933952867984772, 0.06404046714305878, 0.07196847349405289, 0.04275976121425629, 0.06653963774442673, 0.1418124884366989...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mrpc This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4171 - Accuracy: 0.8627 - F1: 0.9011 - Combined Score: 0.8819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model_index": [{"name": "mrpc", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metric": {"name": "F1", "type": "f1", "value": 0.901060070671378}}]}]}
text-classification
Alireza1044/albert-base-v2-mrpc
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# mrpc This model is a fine-tuned version of albert-base-v2 on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.4171 - Accuracy: 0.8627 - F1: 0.9011 - Combined Score: 0.8819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
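The MRPC card's "Combined Score" appears to be the unweighted mean of the two reported metrics — the Trainer's convention when a task has more than one metric. A quick arithmetic check against the card's numbers:

```python
# Values from the card above; "combined" assumes the score is the plain average.
accuracy, f1 = 0.8627, 0.9011
combined = (accuracy + f1) / 2
print(round(combined, 4))  # 0.8819, matching the card's Combined Score
```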
[ "# mrpc\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4171\n- Accuracy: 0.8627\n- F1: 0.9011\n- Combined Score: 0.8819", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\n...
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# mrpc\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MRPC dataset.\nIt achieves the following res...
[ 64, 71, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# mrpc\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE MRPC dataset.\nIt achieves the following ...
[ -0.12874697148799896, 0.17154471576213837, -0.0008269163081422448, 0.1085033044219017, 0.1380055993795395, 0.029001468792557716, 0.06919711083173752, 0.1521012783050537, -0.07092459499835968, 0.08931997418403625, 0.11382538080215454, 0.054140474647283554, 0.059958625584840775, 0.1542724519...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3608 - Accuracy: 0.9138 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "qnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE QNLI", "type": "glue", "args": "qnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9137836353651839}}]}]}
text-classification
Alireza1044/albert-base-v2-qnli
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# qnli This model is a fine-tuned version of albert-base-v2 on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3608 - Accuracy: 0.9138 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# qnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3608\n- Accuracy: 0.9138", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training...
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# qnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QNLI dataset.\nIt achieves the following res...
[ 64, 56, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# qnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QNLI dataset.\nIt achieves the following ...
[ -0.09498390555381775, 0.20804716646671295, -0.0029824580997228622, 0.11239534616470337, 0.11723923683166504, 0.020433178171515465, 0.07301582396030426, 0.1743221879005432, -0.054998014122247696, 0.07411462068557739, 0.06556806713342667, 0.021855205297470093, 0.07447473704814911, 0.12361826...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qqp This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.3695 - Accuracy: 0.9050 - F1: 0.8723 - Combined Score: 0.8886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model_index": [{"name": "qqp", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE QQP", "type": "glue", "args": "qqp"}, "metric": {"name": "F1", "type": "f1", "value": 0.8722569490623753}}]}]}
text-classification
Alireza1044/albert-base-v2-qqp
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# qqp This model is a fine-tuned version of albert-base-v2 on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.3695 - Accuracy: 0.9050 - F1: 0.8723 - Combined Score: 0.8886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
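QQP reports both accuracy and F1 because the duplicate/non-duplicate labels are imbalanced. For reference, a minimal sketch of binary F1 as the harmonic mean of precision and recall — illustrative, not the card's evaluation code:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = 2 * precision * recall / (precision + recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score([1, 1, 0, 1], [1, 0, 0, 1]))  # precision 1.0, recall 2/3 -> 0.8
```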
[ "# qqp\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QQP dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3695\n- Accuracy: 0.9050\n- F1: 0.8723\n- Combined Score: 0.8886", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMo...
[ "TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# qqp\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QQP dataset.\nIt achieves the following results on the eva...
[ 60, 71, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# qqp\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE QQP dataset.\nIt achieves the following results on the ...
[ -0.1311829388141632, 0.2145756334066391, -0.0021640623454004526, 0.1031651720404625, 0.12284251302480698, 0.022289156913757324, 0.03582530841231346, 0.17136408388614655, -0.03508622199296951, 0.07061418145895004, 0.10373027622699738, 0.04077291861176491, 0.07875052094459534, 0.148151054978...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rte This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7994 - Accuracy: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "rte", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE RTE", "type": "glue", "args": "rte"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.6859205776173285}}]}]}
text-classification
Alireza1044/albert-base-v2-rte
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# rte This model is a fine-tuned version of albert-base-v2 on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7994 - Accuracy: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# rte\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE RTE dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7994\n- Accuracy: 0.6859", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training a...
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# rte\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE RTE dataset.\nIt achieves the following resul...
[ 64, 56, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# rte\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE RTE dataset.\nIt achieves the following re...
[ -0.08039207011461258, 0.18601466715335846, -0.0025320155546069145, 0.11675693839788437, 0.12055297940969467, 0.02688533440232277, 0.07955661416053772, 0.1726115196943283, -0.07213418930768967, 0.06303782016038895, 0.07522247731685638, 0.030456921085715294, 0.0638410672545433, 0.12054561823...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sst2 This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3808 - Accuracy: 0.9232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
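The optimizer line above (`Adam with betas=(0.9,0.999) and epsilon=1e-08`) can be unpacked with a minimal scalar Adam update in plain Python — a sketch of the textbook rule with this card's hyperparameters, not the actual PyTorch implementation:

```python
def adam_step(param, grad, m, v, t, lr=1e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a scalar parameter; returns (param, m, v)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
```

On the very first step the bias-corrected update moves the parameter by roughly `lr`, independent of the gradient's magnitude.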
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "sst2", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE SST2", "type": "glue", "args": "sst2"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9231651376146789}}]}]}
text-classification
Alireza1044/albert-base-v2-sst2
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# sst2 This model is a fine-tuned version of albert-base-v2 on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.3808 - Accuracy: 0.9232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# sst2\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE SST2 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3808\n- Accuracy: 0.9232", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training...
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# sst2\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE SST2 dataset.\nIt achieves the following res...
[ 64, 56, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# sst2\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE SST2 dataset.\nIt achieves the following ...
[ -0.08821505308151245, 0.20620565116405487, -0.0029546243604272604, 0.10553278028964996, 0.11946570873260498, 0.02374749816954136, 0.09588044881820679, 0.1638031154870987, -0.06755287945270538, 0.076508067548275, 0.07668092846870422, 0.021375300362706184, 0.07456497102975845, 0.133883625268...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # stsb This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.3978 - Pearson: 0.9090 - Spearmanr: 0.9051 - Combined Score: 0.9071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
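The `Combined Score: 0.9071` above is simply the mean of the Pearson (0.9090) and Spearman (0.9051) correlations. Spearman's correlation is Pearson applied to ranks; a dependency-free sketch (without tie handling, which real implementations such as `scipy.stats.spearmanr` do include):

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r  # no tie handling: ties should really get averaged ranks

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

def combined_score(x, y):
    return (pearson(x, y) + spearman(x, y)) / 2
```

A nonlinear but monotone relationship gives Spearman 1.0 while Pearson stays below 1, which is why STSB reports both.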
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["spearmanr"], "model_index": [{"name": "stsb", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "args": "stsb"}, "metric": {"name": "Spearmanr", "type": "spearmanr", "value": 0.9050744778895732}}]}]}
text-classification
Alireza1044/albert-base-v2-stsb
[ "transformers", "pytorch", "tensorboard", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# stsb This model is a fine-tuned version of albert-base-v2 on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.3978 - Pearson: 0.9090 - Spearmanr: 0.9051 - Combined Score: 0.9071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# stsb\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE STSB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3978\n- Pearson: 0.9090\n- Spearmanr: 0.9051\n- Combined Score: 0.9071", "## Model description\n\nMore information needed", "## Intended uses & limitatio...
[ "TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# stsb\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE STSB dataset.\nIt achieves the following res...
[ 64, 75, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# stsb\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE STSB dataset.\nIt achieves the following ...
[ -0.1128661185503006, 0.17603805661201477, -0.002005733083933592, 0.09696338325738907, 0.13785767555236816, 0.021405089646577835, 0.0908341333270073, 0.1489601731300354, -0.07734060287475586, 0.09056562185287476, 0.11096359044313431, 0.06925028562545776, 0.06762708723545074, 0.1645802557468...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6898 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model_index": [{"name": "wnli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.5633802816901409}}]}]}
text-classification
Alireza1044/albert-base-v2-wnli
[ "transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# wnli This model is a fine-tuned version of albert-base-v2 on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6898 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
[ "# wnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE WNLI dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6898\n- Accuracy: 0.5634", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training...
[ "TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# wnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE WNLI dataset.\nIt achieves the following results on the e...
[ 60, 56, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #albert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# wnli\n\nThis model is a fine-tuned version of albert-base-v2 on the GLUE WNLI dataset.\nIt achieves the following results on th...
[ -0.10206145793199539, 0.17233018577098846, -0.0026526805013418198, 0.11983185261487961, 0.13451358675956726, 0.027681270614266396, 0.0735999271273613, 0.16511216759681702, -0.06237892061471939, 0.05866992101073265, 0.06944018602371216, 0.03470563143491745, 0.06448089331388474, 0.1275691986...
null
null
transformers
A simple model trained on dialogues of characters in the NBC series `The Office`. The model performs binary classification between `Michael Scott`'s and `Dwight Schrute`'s dialogues. <style type="text/css"> .tg {border-collapse:collapse;border-spacing:0;} .tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; overflow:hidden;padding:10px 5px;word-break:normal;} .tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;} .tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top} </style> <table class="tg"> <thead> <tr> <th class="tg-c3ow" colspan="2">Label Definitions</th> </tr> </thead> <tbody> <tr> <td class="tg-c3ow">Label 0</td> <td class="tg-c3ow">Michael</td> </tr> <tr> <td class="tg-c3ow">Label 1</td> <td class="tg-c3ow">Dwight</td> </tr> </tbody> </table>
{}
text-classification
Alireza1044/bert_classification_lm
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
A simple model trained on dialogues of characters in the NBC series 'The Office'. The model performs binary classification between 'Michael Scott''s and 'Dwight Schrute''s dialogues. <style type="text/css"> .tg {border-collapse:collapse;border-spacing:0;} .tg td{border-collapse:collapse;border-spacing:0;} .tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; overflow:hidden;padding:10px 5px;word-break:normal;} .tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;} .tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top} </style> <table class="tg"> <thead> <tr> <th class="tg-c3ow" colspan="2">Label Definitions</th> </tr> </thead> <tbody> <tr> <td class="tg-c3ow">Label 0</td> <td class="tg-c3ow">Michael</td> </tr> <tr> <td class="tg-c3ow">Label 1</td> <td class="tg-c3ow">Dwight</td> </tr> </tbody> </table>
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 36 ]
[ "passage: TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.026536712422966957, 0.04976736754179001, -0.007731540594249964, 0.02341027930378914, 0.20494870841503143, 0.04218224436044693, 0.07166644185781479, 0.1081078052520752, 0.06540437042713165, -0.032089509069919586, 0.10898502916097641, 0.22890737652778625, -0.03745893016457558, 0.115667238...
null
null
transformers
#HarryBoy
{"tags": ["conversational"]}
text-generation
AllwynJ/HarryBoy
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#HarryBoy
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 51 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.009697278961539268, 0.03208012506365776, -0.007204889785498381, 0.004809224978089333, 0.16726240515708923, 0.014898733235895634, 0.09765533357858658, 0.13672804832458496, -0.007841327227652073, -0.031050153076648712, 0.14490588009357452, 0.20411323010921478, -0.006439372431486845, 0.066...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart50-ft-si-en This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.98 | 30 | 5.6367 | | No log | 1.98 | 60 | 4.1221 | | No log | 2.98 | 90 | 3.1880 | | No log | 3.98 | 120 | 3.1175 | | No log | 4.98 | 150 | 3.3575 | | No log | 5.98 | 180 | 3.7855 | | No log | 6.98 | 210 | 4.3530 | | No log | 7.98 | 240 | 4.7216 | | No log | 8.98 | 270 | 4.9202 | | No log | 9.98 | 300 | 5.0476 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.6.0 - Datasets 1.11.0 - Tokenizers 0.10.3
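The card's `total_train_batch_size: 64` follows from `train_batch_size × gradient_accumulation_steps = 16 × 4`. The mechanic can be sketched as a plain training-loop skeleton (the gradient arithmetic below is a stand-in for `loss.backward()` and `optimizer.step()`, not the Trainer's actual code):

```python
def train_with_accumulation(batches, accumulation_steps=4):
    """Accumulate gradients over several micro-batches before each optimizer step."""
    grad_accum = 0.0
    optimizer_steps = 0
    for i, batch in enumerate(batches, start=1):
        # stand-in for backward(): each micro-batch contributes its mean,
        # scaled by 1/accumulation_steps like loss / accum_steps
        grad_accum += sum(batch) / len(batch) / accumulation_steps
        if i % accumulation_steps == 0:
            optimizer_steps += 1   # optimizer.step() would run here
            grad_accum = 0.0       # optimizer.zero_grad()
    return optimizer_steps

# 8 micro-batches of size 16 with accumulation 4 -> 2 optimizer steps,
# each effectively seeing 64 examples
steps = train_with_accumulation([[1.0] * 16 for _ in range(8)], accumulation_steps=4)
```
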
{"tags": ["generated_from_trainer"], "model_index": [{"name": "mbart50-ft-si-en", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}}]}]}
text2text-generation
Aloka/mbart50-ft-si-en
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #mbart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
mbart50-ft-si-en ================ This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 5.0476 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.9.2 * Pytorch 1.6.0 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilo...
[ "TAGS\n#transformers #pytorch #tensorboard #mbart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 16\n* eval...
[ 50, 140, 4, 31 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #mbart #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 16\n* e...
[ -0.0995524674654007, 0.07197006046772003, -0.0036191577091813087, 0.08292358368635178, 0.14912199974060059, -0.005378391593694687, 0.13180148601531982, 0.14728084206581116, -0.1470889449119568, 0.054091695696115494, 0.13212016224861145, 0.15098732709884644, 0.027994683012366295, 0.15568953...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7272 - Matthews Correlation: 0.5343 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5219 | 1.0 | 535 | 0.5340 | 0.4215 | | 0.3467 | 2.0 | 1070 | 0.5131 | 0.5181 | | 0.2331 | 3.0 | 1605 | 0.6406 | 0.5040 | | 0.1695 | 4.0 | 2140 | 0.7272 | 0.5343 | | 0.1212 | 5.0 | 2675 | 0.8399 | 0.5230 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
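Matthews correlation (the CoLA metric reported above) is computed from the four confusion-matrix counts; a dependency-free sketch for binary labels (real code would typically call `sklearn.metrics.matthews_corrcoef`):

```python
def matthews_corrcoef(y_true, y_pred):
    """MCC for binary labels in {0, 1}; returns 0.0 when the denominator is 0."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC ranges from -1 (perfectly inverted) through 0 (chance-level) to 1 (perfect), which makes it more informative than accuracy on the class-imbalanced CoLA set.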
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5343023846000738, "name": "Matthews Correlation"}]}]}]}
text-classification
Alstractor/distilbert-base-uncased-finetuned-cola
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.7272 * Matthews Correlation: 0.5343 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.9.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning...
[ 67, 98, 4, 34 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn...
[ -0.10332313179969788, 0.1015724390745163, -0.002312843920662999, 0.12276768684387207, 0.16639408469200134, 0.033813461661338806, 0.12579520046710968, 0.125900536775589, -0.08453615009784698, 0.023038864135742188, 0.12107968330383301, 0.15842050313949585, 0.022394772619009018, 0.11741199344...
null
null
transformers
# Wav2vec2-base for Danish This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we were allowed to distribute the pretrained model. This model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz. The pre-training was done using the fairseq library in January 2021. It needs to be fine-tuned to perform speech recognition. # Finetuning In order to finetune the model for speech recognition, you can draw inspiration from this [notebook tutorial](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F) or [this blog post tutorial](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2).
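Since the model expects 16 kHz input, audio at other rates must be resampled first. A crude illustration of integer-factor decimation in plain Python (a real pipeline should use a proper resampler such as `torchaudio.functional.resample` or librosa, which apply an anti-aliasing filter before decimating):

```python
def decimate(samples, src_rate=48_000, dst_rate=16_000):
    """Naive downsampling by an integer factor; no anti-aliasing filter."""
    if src_rate % dst_rate != 0:
        raise ValueError("this sketch only handles integer decimation factors")
    factor = src_rate // dst_rate
    return samples[::factor]

one_second_48k = [0.0] * 48_000
one_second_16k = decimate(one_second_48k)  # one second of audio at 16 kHz
```
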
{"language": "da", "license": "apache-2.0", "tags": ["speech"]}
null
Alvenir/wav2vec2-base-da
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech", "da", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "da" ]
TAGS #transformers #pytorch #wav2vec2 #pretraining #speech #da #license-apache-2.0 #endpoints_compatible #region-us
# Wav2vec2-base for Danish This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we were allowed to distribute the pretrained model. This model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz. The pre-training was done using the fairseq library in January 2021. It needs to be fine-tuned to perform speech recognition. # Finetuning In order to finetune the model for speech recognition, you can draw inspiration from this notebook tutorial or this blog post tutorial.
[ "# Wav2vec2-base for Danish\nThis wav2vec2-base model has been pretrained on ~1300 hours of danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not public available. However, we were allowed to distribute the pretrained model.\n\nThis model was pretrained on 16kHz sampl...
[ "TAGS\n#transformers #pytorch #wav2vec2 #pretraining #speech #da #license-apache-2.0 #endpoints_compatible #region-us \n", "# Wav2vec2-base for Danish\nThis wav2vec2-base model has been pretrained on ~1300 hours of danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately n...
[ 43, 130, 29 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #pretraining #speech #da #license-apache-2.0 #endpoints_compatible #region-us \n# Wav2vec2-base for Danish\nThis wav2vec2-base model has been pretrained on ~1300 hours of danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunatel...
[ -0.10071468353271484, -0.013230538927018642, 0.0010635626967996359, -0.036669325083494186, 0.012606720440089703, -0.086862713098526, 0.09636928886175156, 0.03393274545669556, -0.10973719507455826, 0.03005971573293209, 0.1106831431388855, -0.028992226347327232, 0.05578625202178955, 0.027531...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-schizophreniaReddit2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7785 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 490 | 1.8093 | | 1.9343 | 2.0 | 980 | 1.7996 | | 1.8856 | 3.0 | 1470 | 1.7966 | | 1.8552 | 4.0 | 1960 | 1.7844 | | 1.8267 | 5.0 | 2450 | 1.7839 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "roberta-base-finetuned-schizophreniaReddit2", "results": []}]}
fill-mask
Amalq/roberta-base-finetuned-schizophreniaReddit2
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-schizophreniaReddit2 =========================================== This model is a fine-tuned version of roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.7785 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training...
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* ev...
[ 53, 98, 4, 33 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n*...
[ -0.11048822104930878, 0.03647564724087715, -0.0020721186883747578, 0.12634307146072388, 0.17713390290737152, 0.031141702085733414, 0.12979549169540405, 0.101325124502182, -0.08795546740293503, 0.02785218134522438, 0.13905705511569977, 0.17301462590694427, 0.006365228444337845, 0.1206375211...
null
null
transformers
# Question Answering NLU

Question Answering NLU (QANLU) is an approach that maps the NLU task into question answering, 
leveraging pre-trained question-answering models to perform well in few-shot settings. Instead of 
training an intent classifier or a slot tagger, for example, we can ask the model intent- and 
slot-related questions in natural language: 

```
Context : Yes. No. I'm looking for a cheap flight to Boston.

Question: Is the user looking to book a flight?
Answer  : Yes

Question: Is the user asking about departure time?
Answer  : No

Question: What price is the user looking for?
Answer  : cheap

Question: Where is the user flying from?
Answer  : (empty)
```

Note the "Yes. No. " prepended to the context. Those two tokens allow the model to answer intent-related 
questions (e.g. "Is the user looking for a restaurant?") by extracting "Yes" or "No" as a span. Thus, by asking questions for each 
intent and slot in natural language, we can effectively construct an NLU hypothesis. For more 
details, please read the paper: 
[Language model is all you need: Natural language understanding as question answering](https://assets.amazon.science/33/ea/800419b24a09876601d8ab99bfb9/language-model-is-all-you-need-natural-language-understanding-as-question-answering.pdf).

## Model training

Instructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS 
are in the [Amazon Science repository](https://github.com/amazon-research/question-answering-nlu).

## Intended use and limitations

This model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this 
approach. For other domains or tasks, it should be further fine-tuned on relevant data. 
## Use in transformers: ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline tokenizer = AutoTokenizer.from_pretrained("AmazonScience/qanlu", use_auth_token=True) model = AutoModelForQuestionAnswering.from_pretrained("AmazonScience/qanlu", use_auth_token=True) qa_pipeline = pipeline('question-answering', model=model, tokenizer=tokenizer) qa_input = { 'context': 'Yes. No. I want a cheap flight to Boston.', 'question': 'What is the destination?' } answer = qa_pipeline(qa_input) ``` ## Citation If you use this work, please cite: ``` @inproceedings{namazifar2021language, title={Language model is all you need: Natural language understanding as question answering}, author={Namazifar, Mahdi and Papangelis, Alexandros and Tur, Gokhan and Hakkani-T{\"u}r, Dilek}, booktitle={ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7803--7807}, year={2021}, organization={IEEE} } ``` ## License This library is licensed under the CC BY NC License.
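The yes/no trick described above can be sketched without loading the model at all: one QA input is built per intent or slot question over the prepended context, and the resulting list can be fed to the pipeline. A minimal illustration (`build_qanlu_inputs` is a hypothetical helper, not part of the QANLU codebase):

```python
def build_qanlu_inputs(utterance, intent_questions, slot_questions):
    # QANLU prepends "Yes. No. " so the extractive QA model can answer
    # yes/no intent questions by selecting one of those spans.
    context = "Yes. No. " + utterance
    return [{"context": context, "question": q}
            for q in intent_questions + slot_questions]

inputs = build_qanlu_inputs(
    "I'm looking for a cheap flight to Boston.",
    intent_questions=["Is the user looking to book a flight?"],
    slot_questions=["What price is the user looking for?"],
)
```

Each element of `inputs` has the same shape as the `qa_input` dict in the usage example above, so the list can be passed to `qa_pipeline` one item at a time.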
{"language": "en", "license": "cc-by-4.0", "datasets": ["atis"], "widget": [{"context": "Yes. No. I'm looking for a cheap flight to Boston."}]}
question-answering
AmazonScience/qanlu
[ "transformers", "pytorch", "roberta", "question-answering", "en", "dataset:atis", "license:cc-by-4.0", "endpoints_compatible", "has_space", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #question-answering #en #dataset-atis #license-cc-by-4.0 #endpoints_compatible #has_space #region-us
# Question Answering NLU

Question Answering NLU (QANLU) is an approach that maps the NLU task into question answering, 
leveraging pre-trained question-answering models to perform well in few-shot settings. Instead of 
training an intent classifier or a slot tagger, for example, we can ask the model intent- and 
slot-related questions in natural language: 

Note the "Yes. No. " prepended to the context. Those two tokens allow the model to answer intent-related 
questions (e.g. "Is the user looking for a restaurant?"). Thus, by asking questions for each 
intent and slot in natural language, we can effectively construct an NLU hypothesis. For more 
details, please read the paper: 
Language model is all you need: Natural language understanding as question answering.

## Model training

Instructions for how to train and evaluate a QANLU model, as well as the necessary code for ATIS 
are in the Amazon Science repository.

## Intended use and limitations

This model has been fine-tuned on ATIS (English) and is intended to demonstrate the power of this 
approach. For other domains or tasks, it should be further fine-tuned on relevant data. 

## Use in transformers:

If you use this work, please cite:

## License

This library is licensed under the CC BY NC License.
[ "# Question Answering NLU\n\nQuestion Answering NLU (QANLU) is an approach that maps the NLU task into question answering, \nleveraging pre-trained question-answering models to perform well on few-shot settings. Instead of \ntraining an intent classifier or a slot tagger, for example, we can ask the model intent- a...
[ "TAGS\n#transformers #pytorch #roberta #question-answering #en #dataset-atis #license-cc-by-4.0 #endpoints_compatible #has_space #region-us \n", "# Question Answering NLU\n\nQuestion Answering NLU (QANLU) is an approach that maps the NLU task into question answering, \nleveraging pre-trained question-answering mo...
[ 50, 175, 37, 54, 15, 15 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #question-answering #en #dataset-atis #license-cc-by-4.0 #endpoints_compatible #has_space #region-us \n# Question Answering NLU\n\nQuestion Answering NLU (QANLU) is an approach that maps the NLU task into question answering, \nleveraging pre-trained question-answering...
[ -0.015085995197296143, 0.08361505717039108, -0.003181988839060068, 0.03289590775966644, 0.08432452380657196, 0.009148509241640568, 0.1015951931476593, 0.08219926059246063, 0.03434319794178009, 0.05287091061472893, -0.001177681377157569, -0.028286512941122055, 0.054025303572416306, -0.10815...
null
null
transformers
# indian-foods Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### idli ![idli](images/idli.jpg) #### kachori ![kachori](images/kachori.jpg) #### pani puri ![pani puri](images/pani_puri.jpg) #### samosa ![samosa](images/samosa.jpg) #### vada pav ![vada pav](images/vada_pav.jpg)
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
image-classification
Amrrs/indian-foods
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
# indian-foods Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo. ## Example Images #### idli !idli #### kachori !kachori #### pani puri !pani puri #### samosa !samosa #### vada pav !vada pav
[ "# indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### idli\n\n!idli", "#### kachori\n\n!kachori", "#### pani puri\n\n!pani puri", "#### sam...
[ "TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport ...
[ 53, 43, 4, 7, 8, 9, 7, 7 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n# indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nRepo...
[ -0.06414994597434998, 0.14793093502521515, -0.00047112914035096765, 0.07455983757972717, 0.19373425841331482, 0.031689442694187164, 0.008278673514723778, 0.14859828352928162, 0.041509829461574554, 0.029740627855062485, 0.12354275584220886, 0.17431139945983887, 0.037033308297395706, 0.21971...
null
null
transformers
# south-indian-foods Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### dosai ![dosai](images/dosai.jpg) #### idiyappam ![idiyappam](images/idiyappam.jpg) #### idli ![idli](images/idli.jpg) #### puttu ![puttu](images/puttu.jpg) #### vadai ![vadai](images/vadai.jpg)
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
image-classification
Amrrs/south-indian-foods
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
# south-indian-foods Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo. ## Example Images #### dosai !dosai #### idiyappam !idiyappam #### idli !idli #### puttu !puttu #### vadai !vadai
[ "# south-indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### dosai\n\n!dosai", "#### idiyappam\n\n!idiyappam", "#### idli\n\n!idli", "#### p...
[ "TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# south-indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any i...
[ 49, 46, 4, 7, 11, 7, 7, 7 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n# south-indian-foods\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport an...
[ -0.11193650215864182, 0.1870627999305725, -0.0007494018063880503, 0.0954122245311737, 0.2046598643064499, 0.04216891527175903, 0.09554238617420197, 0.15689313411712646, 0.1686960756778717, 0.0031046466901898384, 0.11089686304330826, 0.20598538219928741, 0.05520227923989296, 0.1809096932411...
null
null
transformers
# Wav2Vec2-Large-XLSR-53-Tamil

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Tamil test data of Common Voice. 
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference batch by batch and collect predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 82.94 %

## Training

The Common Voice `train` and `validation` datasets were used for training.

The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing).
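The WER figure reported above is word error rate: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of the metric (an illustration only, not the `datasets` implementation):

```python
def wer(reference, hypothesis):
    # Word error rate: word-level Levenshtein distance / reference length.
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(r)][len(h)] / len(r)

print("WER: {:.2f}".format(100 * wer("the cat sat", "the cat sit")))
```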
{"language": "ta", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Tamil by Amrrs", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 82.94, "name": "Test WER"}]}]}]}
automatic-speech-recognition
Amrrs/wav2vec2-large-xlsr-53-tamil
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ta", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "ta" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# Wav2Vec2-Large-XLSR-53-Tamil

Fine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

## Evaluation

The model can be evaluated as follows on the Tamil test data of Common Voice. 

Test Result: 82.94 %

## Training

The Common Voice 'train' and 'validation' datasets were used for training.

The script used for training can be found here
[ "# Wav2Vec2-Large-XLSR-53-Tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Tamil using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be ev...
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "# Wav2Vec2-Large-XLSR-53-Tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Tamil using ...
[ 84, 60, 20, 30, 32 ]
[ "passage: TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n# Wav2Vec2-Large-XLSR-53-Tamil\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Tamil usi...
[ -0.14625538885593414, -0.025803357362747192, -0.0006738790543749928, -0.005033901892602444, 0.11739180982112885, -0.05176210775971413, 0.17070166766643524, 0.13181763887405396, -0.0018764821579679847, -0.010653254576027393, 0.017079900950193405, 0.015753189101815224, 0.06341883540153503, 0...
null
null
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 479512837 - CO2 Emissions (in grams): 123.88023112815048 ## Validation Metrics - Loss: 0.6220805048942566 - Accuracy: 0.7961119332705503 - Macro F1: 0.7616345204219084 - Micro F1: 0.7961119332705503 - Weighted F1: 0.795387503907883 - Macro Precision: 0.782839455262034 - Micro Precision: 0.7961119332705503 - Weighted Precision: 0.7992606754484262 - Macro Recall: 0.7451485972167191 - Micro Recall: 0.7961119332705503 - Weighted Recall: 0.7961119332705503 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-Feedback1-479512837 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-Feedback1-479512837", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
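The metrics above are related in a way worth noting: micro F1 pools true/false positives over all classes and, when every example receives exactly one prediction, equals accuracy (which is why Accuracy and Micro F1 coincide above), while macro F1 is the unweighted mean of per-class F1. A minimal, illustrative implementation (not the AutoNLP evaluation code):

```python
def f1_scores(y_true, y_pred):
    # Returns (macro F1, micro F1) for single-label multi-class predictions.
    labels = sorted(set(y_true) | set(y_pred))
    per_class = []
    tp_total = fp_total = fn_total = 0
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        per_class.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0)
        tp_total, fp_total, fn_total = tp_total + tp, fp_total + fp, fn_total + fn
    macro = sum(per_class) / len(per_class)      # unweighted mean over classes
    micro = 2 * tp_total / (2 * tp_total + fp_total + fn_total)  # global pool
    return macro, micro

macro, micro = f1_scores(["a", "a", "b", "c"], ["a", "b", "b", "b"])
```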
{"language": "unk", "tags": "autonlp", "datasets": ["Anamika/autonlp-data-Feedback1"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 123.88023112815048}
text-classification
Anamika/autonlp-Feedback1-479512837
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "autonlp", "unk", "dataset:Anamika/autonlp-data-Feedback1", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "unk" ]
TAGS #transformers #pytorch #xlm-roberta #text-classification #autonlp #unk #dataset-Anamika/autonlp-data-Feedback1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 479512837 - CO2 Emissions (in grams): 123.88023112815048 ## Validation Metrics - Loss: 0.6220805048942566 - Accuracy: 0.7961119332705503 - Macro F1: 0.7616345204219084 - Micro F1: 0.7961119332705503 - Weighted F1: 0.795387503907883 - Macro Precision: 0.782839455262034 - Micro Precision: 0.7961119332705503 - Weighted Precision: 0.7992606754484262 - Macro Recall: 0.7451485972167191 - Micro Recall: 0.7961119332705503 - Weighted Recall: 0.7961119332705503 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 479512837\n- CO2 Emissions (in grams): 123.88023112815048", "## Validation Metrics\n\n- Loss: 0.6220805048942566\n- Accuracy: 0.7961119332705503\n- Macro F1: 0.7616345204219084\n- Micro F1: 0.7961119332705503\n- Weighted F1:...
[ "TAGS\n#transformers #pytorch #xlm-roberta #text-classification #autonlp #unk #dataset-Anamika/autonlp-data-Feedback1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 479512837\n- CO2 Emissions (...
[ 72, 44, 153, 17 ]
[ "passage: TAGS\n#transformers #pytorch #xlm-roberta #text-classification #autonlp #unk #dataset-Anamika/autonlp-data-Feedback1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 479512837\n- CO2 Emission...
[ -0.12092582136392593, 0.22555725276470184, -0.0023491138126701117, 0.07799478620290756, 0.1279972493648529, 0.039161939173936844, 0.0434514544904232, 0.12688925862312317, 0.0006545293144881725, 0.16451212763786316, 0.09219343960285187, 0.1701546013355255, 0.06920769810676575, 0.14138543605...
null
null
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 473312409 - CO2 Emissions (in grams): 25.128735714898614 ## Validation Metrics - Loss: 0.6010786890983582 - Accuracy: 0.7990650945370823 - Macro F1: 0.7429662929144928 - Micro F1: 0.7990650945370823 - Weighted F1: 0.7977660363770382 - Macro Precision: 0.7744390888231261 - Micro Precision: 0.7990650945370823 - Weighted Precision: 0.800444194278352 - Macro Recall: 0.7198278524814119 - Micro Recall: 0.7990650945370823 - Weighted Recall: 0.7990650945370823 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-fa-473312409 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["Anamika/autonlp-data-fa"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 25.128735714898614}
text-classification
Anamika/autonlp-fa-473312409
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "en", "dataset:Anamika/autonlp-data-fa", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #roberta #text-classification #autonlp #en #dataset-Anamika/autonlp-data-fa #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 473312409 - CO2 Emissions (in grams): 25.128735714898614 ## Validation Metrics - Loss: 0.6010786890983582 - Accuracy: 0.7990650945370823 - Macro F1: 0.7429662929144928 - Micro F1: 0.7990650945370823 - Weighted F1: 0.7977660363770382 - Macro Precision: 0.7744390888231261 - Micro Precision: 0.7990650945370823 - Weighted Precision: 0.800444194278352 - Macro Recall: 0.7198278524814119 - Micro Recall: 0.7990650945370823 - Weighted Recall: 0.7990650945370823 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 473312409\n- CO2 Emissions (in grams): 25.128735714898614", "## Validation Metrics\n\n- Loss: 0.6010786890983582\n- Accuracy: 0.7990650945370823\n- Macro F1: 0.7429662929144928\n- Micro F1: 0.7990650945370823\n- Weighted F1:...
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-Anamika/autonlp-data-fa #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 473312409\n- CO2 Emissions (in grams): 2...
[ 66, 43, 151, 17 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-Anamika/autonlp-data-fa #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 473312409\n- CO2 Emissions (in grams)...
[ -0.09413326531648636, 0.1769489049911499, -0.00299017783254385, 0.06963860243558884, 0.10115846246480942, 0.043640803545713425, 0.062168169766664505, 0.12283053249120712, 0.010188386775553226, 0.12656262516975403, 0.09690967947244644, 0.17336419224739075, 0.06794267892837524, 0.14914137125...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra_large_discriminator_squad2_512 This model is a fine-tuned version of [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
{"tags": ["generated_from_trainer"], "model-index": [{"name": "electra_large_discriminator_squad2_512", "results": []}]}
question-answering
Andranik/TestQA2
[ "transformers", "pytorch", "electra", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #electra #question-answering #generated_from_trainer #endpoints_compatible #region-us
# electra_large_discriminator_squad2_512 This model is a fine-tuned version of ahotrod/electra_large_discriminator_squad2_512 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
[ "# electra_large_discriminator_squad2_512\n\nThis model is a fine-tuned version of ahotrod/electra_large_discriminator_squad2_512 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore in...
[ "TAGS\n#transformers #pytorch #electra #question-answering #generated_from_trainer #endpoints_compatible #region-us \n", "# electra_large_discriminator_squad2_512\n\nThis model is a fine-tuned version of ahotrod/electra_large_discriminator_squad2_512 on an unknown dataset.", "## Model description\n\nMore inform...
[ 37, 55, 6, 12, 8, 3, 90, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #electra #question-answering #generated_from_trainer #endpoints_compatible #region-us \n# electra_large_discriminator_squad2_512\n\nThis model is a fine-tuned version of ahotrod/electra_large_discriminator_squad2_512 on an unknown dataset.## Model description\n\nMore informati...
[ -0.09914527833461761, 0.07870863378047943, -0.0021741194650530815, 0.08219228684902191, 0.18595188856124878, 0.028339460492134094, 0.10602907836437225, 0.1038270816206932, -0.112612284719944, 0.0624292753636837, 0.07965591549873352, 0.07667040079832077, 0.029544295743107796, 0.089128404855...
null
null
transformers
This is a pretrained model loaded from t5-base. It has been adapted by changing the max_length and summary_length parameters.
{}
text2text-generation
AndreLiu1225/t5-news
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is a pretrained model loaded from t5-base. It has been adapted by changing the max_length and summary_length parameters.
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 48 ]
[ "passage: TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.01584368571639061, 0.001455417019315064, -0.00658801756799221, 0.0177968367934227, 0.18000324070453644, 0.01899094320833683, 0.1102970764040947, 0.13923293352127075, -0.029492201283574104, -0.031411342322826385, 0.1258108913898468, 0.215000182390213, -0.002026807749643922, 0.09281328320...
null
null
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You
should probably proofread and complete it, then remove this comment. -->

# model-QA-5-epoch-RU

This model is a fine-tuned version of [AndrewChar/diplom-prod-epoch-4-datast-sber-QA](https://huggingface.co/AndrewChar/diplom-prod-epoch-4-datast-sber-QA) on the sberquad dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1991
- Validation Loss: 0.0
- Epoch: 5

## Model description

A model that answers questions from a given context; this is a diploma (thesis) project.

## Intended uses & limitations

The context must contain no more than 512 tokens.

## Training and evaluation data

DataSet SberSQuAD
{'exact_match': 54.586, 'f1': 73.644}

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 2986, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1991     |                 | 5     |

### Framework versions

- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
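The `PolynomialDecay` schedule in the optimizer config above can be sketched in plain Python. This follows the standard Keras formula; with `power` = 1.0 it is simply a linear decay from 2e-06 to 0 over 2986 steps:

```python
def polynomial_decay(step, initial_lr=2e-06, decay_steps=2986,
                     end_lr=0.0, power=1.0):
    # Keras-style PolynomialDecay (cycle=False): interpolate from
    # initial_lr down to end_lr over decay_steps, then hold end_lr.
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

polynomial_decay(0)     # starts at the configured initial rate
polynomial_decay(2986)  # reaches end_lr by the final step
```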
{"language": "ru", "tags": ["generated_from_keras_callback"], "datasets": ["sberquad"], "model-index": [{"name": "model-QA-5-epoch-RU", "results": []}]}
question-answering
AndrewChar/model-QA-5-epoch-RU
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "ru", "dataset:sberquad", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #tf #distilbert #question-answering #generated_from_keras_callback #ru #dataset-sberquad #endpoints_compatible #region-us
model-QA-5-epoch-RU =================== This model is a fine-tuned version of AndrewChar/diplom-prod-epoch-4-datast-sber-QA on the sberquad dataset. It achieves the following results on the evaluation set: * Train Loss: 1.1991 * Validation Loss: 0.0 * Epoch: 5 Model description ----------------- A model that answers questions based on a given context; this is a diploma thesis project. Intended uses & limitations --------------------------- The context must contain no more than 512 tokens. Training and evaluation data ---------------------------- Dataset: SberSQuAD {'exact\_match': 54.586, 'f1': 73.644} Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-06, 'decay\_steps': 2986, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.15.0 * TensorFlow 2.7.0 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_re': 2e-06 'decay\\_steps': 2986, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name':...
[ "TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #ru #dataset-sberquad #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'Po...
[ 50, 177, 4, 31 ]
[ "passage: TAGS\n#transformers #tf #distilbert #question-answering #generated_from_keras_callback #ru #dataset-sberquad #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': ...
[ -0.0619235560297966, 0.014837060123682022, -0.005463727749884129, 0.0682763084769249, 0.15865950286388397, 0.05109139531850815, 0.13201691210269928, 0.11504366248846054, -0.06728874146938324, 0.11687682569026947, 0.15098436176776886, 0.12610431015491486, 0.06211525574326515, 0.078574255108...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - DE dataset. It achieves the following results on the evaluation set: - Loss: 0.1355 - Wer: 0.1532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 2.5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0826 | 0.07 | 1000 | 0.4637 | 0.4654 | | 1.118 | 0.15 | 2000 | 0.2595 | 0.2687 | | 1.1268 | 0.22 | 3000 | 0.2635 | 0.2661 | | 1.0919 | 0.29 | 4000 | 0.2417 | 0.2566 | | 1.1013 | 0.37 | 5000 | 0.2414 | 0.2567 | | 1.0898 | 0.44 | 6000 | 0.2546 | 0.2731 | | 1.0808 | 0.51 | 7000 | 0.2399 | 0.2535 | | 1.0719 | 0.59 | 8000 | 0.2353 | 0.2528 | | 1.0446 | 0.66 | 9000 | 0.2427 | 0.2545 | | 1.0347 | 0.73 | 10000 | 0.2266 | 0.2402 | | 1.0457 | 0.81 | 11000 | 0.2290 | 0.2448 | | 1.0124 | 0.88 | 12000 | 0.2295 | 0.2448 | | 1.025 | 0.95 | 13000 | 0.2138 | 0.2345 | | 1.0107 | 1.03 | 14000 | 0.2108 | 0.2294 | | 0.9758 | 1.1 | 15000 | 0.2019 | 0.2204 | | 0.9547 | 1.17 | 16000 | 0.2000 | 0.2178 | | 0.986 | 1.25 | 17000 | 0.2018 | 0.2200 | | 0.9588 | 1.32 | 18000 | 0.1992 | 0.2138 | | 0.9413 | 1.39 | 19000 | 0.1898 | 0.2049 | | 
0.9339 | 1.47 | 20000 | 0.1874 | 0.2056 | | 0.9268 | 1.54 | 21000 | 0.1797 | 0.1976 | | 0.9194 | 1.61 | 22000 | 0.1743 | 0.1905 | | 0.8987 | 1.69 | 23000 | 0.1738 | 0.1932 | | 0.8884 | 1.76 | 24000 | 0.1703 | 0.1873 | | 0.8939 | 1.83 | 25000 | 0.1633 | 0.1831 | | 0.8629 | 1.91 | 26000 | 0.1549 | 0.1750 | | 0.8607 | 1.98 | 27000 | 0.1550 | 0.1738 | | 0.8316 | 2.05 | 28000 | 0.1512 | 0.1709 | | 0.8321 | 2.13 | 29000 | 0.1481 | 0.1657 | | 0.825 | 2.2 | 30000 | 0.1446 | 0.1627 | | 0.8115 | 2.27 | 31000 | 0.1396 | 0.1583 | | 0.7959 | 2.35 | 32000 | 0.1389 | 0.1569 | | 0.7835 | 2.42 | 33000 | 0.1362 | 0.1545 | | 0.7959 | 2.49 | 34000 | 0.1355 | 0.1531 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset mozilla-foundation/common_voice_8_0 --config de --split test --log_outputs ``` 2. To evaluate on test dev data ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1B-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
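The Wer column reported above is the word error rate: word-level Levenshtein distance divided by the number of reference words. A minimal sketch of the metric (illustrative only, not the implementation `eval.py` actually uses):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("das ist ein test", "das ist test"))  # 0.25: one deleted word out of four
```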
{"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "de", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - German", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "de"}, "metrics": [{"type": "wer", "value": 15.25, "name": "Test WER"}, {"type": "cer", "value": 3.78, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 35.29, "name": "Test WER"}, {"type": "cer", "value": 13.83, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "de"}, "metrics": [{"type": "wer", "value": 36.2, "name": "Test WER"}]}]}]}
automatic-speech-recognition
AndrewMcDowell/wav2vec2-xls-r-1B-german
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "de", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible...
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #de #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - DE dataset. It achieves the following results on the evaluation set: * Loss: 0.1355 * Wer: 0.1532 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 2.5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test' 2. To evaluate on test dev data
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #de #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperpar...
[ 111, 159, 4, 39, 44 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #de #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyper...
[ -0.13207627832889557, 0.12429849058389664, -0.006239145994186401, 0.042449865490198135, 0.09927774220705032, 0.0337371751666069, 0.10909031331539154, 0.16356198489665985, -0.05507772043347359, 0.11216598749160767, 0.0719810351729393, 0.07711575925350189, 0.07944095134735107, 0.112149290740...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset. It achieves the following results on the evaluation set: - Loss: 1.1373 - Wer: 0.8607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.2416 | 0.84 | 500 | 1.2867 | 0.8875 | | 2.3089 | 1.67 | 1000 | 1.8336 | 0.9548 | | 2.3614 | 2.51 | 1500 | 1.5937 | 0.9469 | | 2.5234 | 3.35 | 2000 | 1.9765 | 0.9867 | | 2.5373 | 4.19 | 2500 | 1.9062 | 0.9916 | | 2.5703 | 5.03 | 3000 | 1.9772 | 0.9915 | | 2.4656 | 5.86 | 3500 | 1.8083 | 0.9829 | | 2.4339 | 6.7 | 4000 | 1.7548 | 0.9752 | | 2.344 | 7.54 | 4500 | 1.6146 | 0.9638 | | 2.2677 | 8.38 | 5000 | 1.5105 | 0.9499 | | 2.2074 | 9.21 | 5500 | 1.4191 | 0.9357 | | 2.3768 | 10.05 | 6000 | 1.6663 | 0.9665 | | 2.3804 | 10.89 | 6500 | 1.6571 | 0.9720 | | 2.3237 | 11.72 | 7000 | 1.6049 | 0.9637 | | 2.317 | 12.56 | 7500 | 1.5875 | 0.9655 | | 2.2988 | 13.4 | 8000 | 1.5357 | 0.9603 | | 2.2906 | 14.24 | 8500 | 1.5637 | 0.9592 | | 2.2848 | 15.08 | 9000 | 1.5326 | 0.9537 | | 2.2381 | 15.91 | 9500 | 1.5631 | 0.9508 | | 
2.2072 | 16.75 | 10000 | 1.4565 | 0.9395 | | 2.197 | 17.59 | 10500 | 1.4304 | 0.9406 | | 2.198 | 18.43 | 11000 | 1.4230 | 0.9382 | | 2.1668 | 19.26 | 11500 | 1.3998 | 0.9315 | | 2.1498 | 20.1 | 12000 | 1.3920 | 0.9258 | | 2.1244 | 20.94 | 12500 | 1.3584 | 0.9153 | | 2.0953 | 21.78 | 13000 | 1.3274 | 0.9054 | | 2.0762 | 22.61 | 13500 | 1.2933 | 0.9073 | | 2.0587 | 23.45 | 14000 | 1.2516 | 0.8944 | | 2.0363 | 24.29 | 14500 | 1.2214 | 0.8902 | | 2.0302 | 25.13 | 15000 | 1.2087 | 0.8871 | | 2.0071 | 25.96 | 15500 | 1.1953 | 0.8786 | | 1.9882 | 26.8 | 16000 | 1.1738 | 0.8712 | | 1.9772 | 27.64 | 16500 | 1.1647 | 0.8672 | | 1.9585 | 28.48 | 17000 | 1.1459 | 0.8635 | | 1.944 | 29.31 | 17500 | 1.1414 | 0.8616 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
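The `total_train_batch_size` of 64 above follows from gradient accumulation: gradients from 4 micro-batches of 16 are accumulated before each optimizer step. A minimal sketch of the bookkeeping (illustrative, not the Trainer's actual loop):

```python
# Effective batch size: each optimizer step sees gradients accumulated
# over several smaller forward/backward passes.
train_batch_size = 16
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps

optimizer_steps = 0
for micro_step in range(1, 9):  # e.g. 8 micro-batches processed
    # ...forward pass and backward pass would accumulate gradients here...
    if micro_step % gradient_accumulation_steps == 0:
        optimizer_steps += 1    # weights update once per 4 micro-batches
print(total_train_batch_size, optimizer_steps)  # 64 2
```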
{"language": ["ar"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
automatic-speech-recognition
AndrewMcDowell/wav2vec2-xls-r-1b-arabic
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ar", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ar #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - AR dataset. It achieves the following results on the evaluation set: * Loss: 1.1373 * Wer: 0.8607 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 6.5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 30.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilo...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ar #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* ...
[ 79, 160, 4, 39 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ar #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\...
[ -0.122341588139534, 0.15802060067653656, -0.0039328536950051785, 0.026099558919668198, 0.10646619647741318, 0.007733722683042288, 0.09482292085886002, 0.1482316553592682, -0.07874351739883423, 0.12224624305963516, 0.0991123616695404, 0.09445363283157349, 0.09529288113117218, 0.138748064637...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset. It achieves the following results on the evaluation set: - Loss: 0.5500 - Wer: 1.0132 - Cer: 0.1609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 1.7019 | 12.65 | 1000 | 1.0510 | 0.9832 | 0.2589 | | 1.6385 | 25.31 | 2000 | 0.6670 | 0.9915 | 0.1851 | | 1.4344 | 37.97 | 3000 | 0.6183 | 1.0213 | 0.1797 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs ``` 2. 
To evaluate on test dev data ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
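A Wer above 1.0, as reported for this model, is possible because insertions can outnumber reference words, and word-level WER is largely uninformative for unsegmented Japanese text; CER, the character-level analogue, is the more meaningful metric here. A minimal sketch (illustrative, not the exact implementation behind the reported scores):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level Levenshtein distance / reference length."""
    ref, hyp = list(reference), list(hypothesis)
    d = list(range(len(hyp) + 1))          # rolling row of the edit-distance table
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i               # prev holds the diagonal cell
        for j, h in enumerate(hyp, 1):
            # deletion, insertion, substitution (cost 0 if characters match)
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)] / len(ref)

print(round(cer("abc", "axc"), 3))  # one substitution in three characters -> 0.333
```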
{"language": ["ja"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "ja", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ja"}, "metrics": [{"type": "wer", "value": 95.33, "name": "Test WER"}, {"type": "cer", "value": 22.27, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Test WER"}, {"type": "cer", "value": 30.33, "name": "Test CER"}, {"type": "cer", "value": 29.63, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ja"}, "metrics": [{"type": "cer", "value": 32.69, "name": "Test CER"}]}]}]}
automatic-speech-recognition
AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "ja", "hf-asr-leaderboard", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #ja #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - JA dataset. It achieves the following results on the evaluation set: * Loss: 0.5500 * Wer: 1.0132 * Cer: 0.1609 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1500 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test' 2. To evaluate on test dev data
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsil...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #ja #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperpara...
[ 97, 160, 4, 39, 66 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #ja #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperp...
[ -0.11913269758224487, 0.1495753973722458, -0.0071907369419932365, 0.044078655540943146, 0.08584751188755035, 0.02597866952419281, 0.09281317889690399, 0.1652391403913498, -0.055759645998477936, 0.12671718001365662, 0.06768246740102768, 0.08861856907606125, 0.0864587351679802, 0.11072776466...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset. It achieves the following results on the evaluation set: - Loss: 0.4502 - Wer: 0.4783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.7972 | 0.43 | 500 | 5.1401 | 1.0 | | 3.3241 | 0.86 | 1000 | 3.3220 | 1.0 | | 3.1432 | 1.29 | 1500 | 3.0806 | 0.9999 | | 2.9297 | 1.72 | 2000 | 2.5678 | 1.0057 | | 2.2593 | 2.14 | 2500 | 1.1068 | 0.8218 | | 2.0504 | 2.57 | 3000 | 0.7878 | 0.7114 | | 1.937 | 3.0 | 3500 | 0.6955 | 0.6450 | | 1.8491 | 3.43 | 4000 | 0.6452 | 0.6304 | | 1.803 | 3.86 | 4500 | 0.5961 | 0.6042 | | 1.7545 | 4.29 | 5000 | 0.5550 | 0.5748 | | 1.7045 | 4.72 | 5500 | 0.5374 | 0.5743 | | 1.6733 | 5.15 | 6000 | 0.5337 | 0.5404 | | 1.6761 | 5.57 | 6500 | 0.5054 | 0.5266 | | 1.655 | 6.0 | 7000 | 0.4926 | 0.5243 | | 1.6252 | 6.43 | 7500 | 0.4946 | 0.5183 | | 1.6209 | 6.86 | 8000 | 0.4915 | 0.5194 | | 1.5772 | 7.29 | 8500 | 0.4725 | 0.5104 | | 1.5602 | 7.72 | 9000 | 0.4726 | 0.5097 | | 1.5783 | 8.15 | 9500 | 0.4667 | 0.4956 | | 1.5442 | 8.58 | 
10000 | 0.4685 | 0.4937 | | 1.5597 | 9.01 | 10500 | 0.4708 | 0.4957 | | 1.5406 | 9.43 | 11000 | 0.4539 | 0.4810 | | 1.5274 | 9.86 | 11500 | 0.4502 | 0.4783 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
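The optimizer line above fixes Adam's betas=(0.9,0.999) and epsilon=1e-08; a single parameter update can be sketched as follows (a sketch of the update rule, not `torch.optim.Adam` itself):

```python
def adam_step(param, grad, m, v, t, lr=7.5e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam parameter update (sketch; not torch.optim.Adam itself)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (v_hat ** 0.5 + eps), m, v

# The first step moves the parameter by roughly lr in the gradient's direction.
p, m, v = adam_step(param=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```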
{"language": ["ar"], "license": "apache-2.0", "tags": ["ar", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - Arabic", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "ar"}, "metrics": [{"type": "wer", "value": 47.54, "name": "Test WER"}, {"type": "cer", "value": 17.64, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 93.72, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 92.49, "name": "Test WER"}]}]}]}
automatic-speech-recognition
AndrewMcDowell/wav2vec2-xls-r-300m-arabic
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ar", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible...
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #ar #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - AR dataset. It achieves the following results on the evaluation set: * Loss: 0.4502 * Wer: 0.4783 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 5.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #ar #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperpar...
[ 111, 159, 4, 41 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #ar #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyper...
[ -0.13983333110809326, 0.11411168426275253, -0.005387204233556986, 0.0484548956155777, 0.1220448762178421, 0.007736077532172203, 0.08694206178188324, 0.151136115193367, -0.09050387889146805, 0.09333959221839905, 0.07449441403150558, 0.0982985720038414, 0.07953846454620361, 0.092545419931411...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. eval results: WER: 0.20161578657865786 CER: 0.05062357805269733 --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. It achieves the following results on the evaluation set: - Loss: 0.1768 - Wer: 0.2016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.7531 | 0.04 | 500 | 5.4564 | 1.0 | | 2.9882 | 0.08 | 1000 | 3.0041 | 1.0 | | 2.1953 | 0.13 | 1500 | 1.1723 | 0.7121 | | 1.2406 | 0.17 | 2000 | 0.3656 | 0.3623 | | 1.1294 | 0.21 | 2500 | 0.2843 | 0.2926 | | 1.0731 | 0.25 | 3000 | 0.2554 | 0.2664 | | 1.051 | 0.3 | 3500 | 0.2387 | 0.2535 | | 1.0479 | 0.34 | 4000 | 0.2345 | 0.2512 | | 1.0026 | 0.38 | 4500 | 0.2270 | 0.2452 | | 0.9921 | 0.42 | 5000 | 0.2212 | 0.2353 | | 0.9839 | 0.47 | 5500 | 0.2141 | 0.2330 | | 0.9907 | 0.51 | 6000 | 0.2122 | 0.2334 | | 0.9788 | 0.55 | 6500 | 0.2114 | 0.2270 | | 0.9687 | 0.59 | 7000 | 0.2066 | 0.2323 | | 0.9777 | 0.64 | 7500 | 0.2033 | 0.2237 | | 0.9476 | 0.68 | 8000 | 0.2020 | 0.2194 | | 0.9625 | 0.72 | 8500 | 0.1977 | 0.2191 | | 0.9497 | 0.76 | 9000 | 0.1976 | 
0.2175 | | 0.9781 | 0.81 | 9500 | 0.1956 | 0.2159 | | 0.9552 | 0.85 | 10000 | 0.1958 | 0.2191 | | 0.9345 | 0.89 | 10500 | 0.1964 | 0.2158 | | 0.9528 | 0.93 | 11000 | 0.1926 | 0.2154 | | 0.9502 | 0.98 | 11500 | 0.1953 | 0.2149 | | 0.9358 | 1.02 | 12000 | 0.1927 | 0.2167 | | 0.941 | 1.06 | 12500 | 0.1901 | 0.2115 | | 0.9287 | 1.1 | 13000 | 0.1936 | 0.2090 | | 0.9491 | 1.15 | 13500 | 0.1900 | 0.2104 | | 0.9478 | 1.19 | 14000 | 0.1931 | 0.2120 | | 0.946 | 1.23 | 14500 | 0.1914 | 0.2134 | | 0.9499 | 1.27 | 15000 | 0.1931 | 0.2173 | | 0.9346 | 1.32 | 15500 | 0.1913 | 0.2105 | | 0.9509 | 1.36 | 16000 | 0.1902 | 0.2137 | | 0.9294 | 1.4 | 16500 | 0.1895 | 0.2086 | | 0.9418 | 1.44 | 17000 | 0.1913 | 0.2183 | | 0.9302 | 1.49 | 17500 | 0.1884 | 0.2114 | | 0.9418 | 1.53 | 18000 | 0.1894 | 0.2108 | | 0.9363 | 1.57 | 18500 | 0.1886 | 0.2132 | | 0.9338 | 1.61 | 19000 | 0.1856 | 0.2078 | | 0.9185 | 1.66 | 19500 | 0.1852 | 0.2056 | | 0.9216 | 1.7 | 20000 | 0.1874 | 0.2095 | | 0.9176 | 1.74 | 20500 | 0.1873 | 0.2078 | | 0.9288 | 1.78 | 21000 | 0.1865 | 0.2097 | | 0.9278 | 1.83 | 21500 | 0.1869 | 0.2100 | | 0.9295 | 1.87 | 22000 | 0.1878 | 0.2095 | | 0.9221 | 1.91 | 22500 | 0.1852 | 0.2121 | | 0.924 | 1.95 | 23000 | 0.1855 | 0.2042 | | 0.9104 | 2.0 | 23500 | 0.1858 | 0.2105 | | 0.9284 | 2.04 | 24000 | 0.1850 | 0.2080 | | 0.9162 | 2.08 | 24500 | 0.1839 | 0.2045 | | 0.9111 | 2.12 | 25000 | 0.1838 | 0.2080 | | 0.91 | 2.17 | 25500 | 0.1889 | 0.2106 | | 0.9152 | 2.21 | 26000 | 0.1856 | 0.2026 | | 0.9209 | 2.25 | 26500 | 0.1891 | 0.2133 | | 0.9094 | 2.29 | 27000 | 0.1857 | 0.2089 | | 0.9065 | 2.34 | 27500 | 0.1840 | 0.2052 | | 0.9156 | 2.38 | 28000 | 0.1833 | 0.2062 | | 0.8986 | 2.42 | 28500 | 0.1789 | 0.2001 | | 0.9045 | 2.46 | 29000 | 0.1769 | 0.2022 | | 0.9039 | 2.51 | 29500 | 0.1819 | 0.2073 | | 0.9145 | 2.55 | 30000 | 0.1828 | 0.2063 | | 0.9081 | 2.59 | 30500 | 0.1811 | 0.2049 | | 0.9252 | 2.63 | 31000 | 0.1833 | 0.2086 | | 0.8957 | 2.68 | 31500 | 0.1795 | 0.2083 | | 0.891 | 2.72 | 
32000 | 0.1809 | 0.2058 | | 0.9023 | 2.76 | 32500 | 0.1812 | 0.2061 | | 0.8918 | 2.8 | 33000 | 0.1775 | 0.1997 | | 0.8852 | 2.85 | 33500 | 0.1790 | 0.1997 | | 0.8928 | 2.89 | 34000 | 0.1767 | 0.2013 | | 0.9079 | 2.93 | 34500 | 0.1735 | 0.1986 | | 0.9032 | 2.97 | 35000 | 0.1793 | 0.2024 | | 0.9018 | 3.02 | 35500 | 0.1778 | 0.2027 | | 0.8846 | 3.06 | 36000 | 0.1776 | 0.2046 | | 0.8848 | 3.1 | 36500 | 0.1812 | 0.2064 | | 0.9062 | 3.14 | 37000 | 0.1800 | 0.2018 | | 0.9011 | 3.19 | 37500 | 0.1783 | 0.2049 | | 0.8996 | 3.23 | 38000 | 0.1810 | 0.2036 | | 0.893 | 3.27 | 38500 | 0.1805 | 0.2056 | | 0.897 | 3.31 | 39000 | 0.1773 | 0.2035 | | 0.8992 | 3.36 | 39500 | 0.1804 | 0.2054 | | 0.8987 | 3.4 | 40000 | 0.1768 | 0.2016 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset mozilla-foundation/common_voice_7_0 --config de --split test --log_outputs ``` 2. To evaluate on test dev data ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-german-de --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
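The WER (0.2016) and CER (0.0506) figures above are edit-distance-based metrics; a minimal, dependency-free sketch of how they are computed (the leaderboard numbers themselves come from the `eval.py` script above, not this sketch):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over two sequences,
    # using a single rolling row instead of the full matrix.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)]

def wer(reference, hypothesis):
    # Word error rate: word-level edit distance / number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference, hypothesis):
    # Character error rate: the same idea at character level.
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```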
{"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "de", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300M - German", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "de"}, "metrics": [{"type": "wer", "value": 20.16, "name": "Test WER"}, {"type": "cer", "value": 5.06, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 39.79, "name": "Test WER"}, {"type": "cer", "value": 15.02, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "de"}, "metrics": [{"type": "wer", "value": 47.95, "name": "Test WER"}]}]}]}
automatic-speech-recognition
AndrewMcDowell/wav2vec2-xls-r-300m-german-de
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible...
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #de #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - DE dataset. It achieves the following results on the evaluation set: * Loss: 0.1768 * Wer: 0.2016 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 3.4 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.0.dev0 * Pytorch 1.10.1+cu102 * Datasets 1.17.1.dev0 * Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test' 2. To evaluate on test dev data
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #de #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperpar...
[ 111, 159, 4, 41, 44 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #de #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyper...
[ -0.12741737067699432, 0.11049387603998184, -0.006292941980063915, 0.045448631048202515, 0.1109750047326088, 0.027023954316973686, 0.09660723805427551, 0.1629907190799713, -0.066348597407341, 0.10548499971628189, 0.06490739434957504, 0.09815691411495209, 0.08025956898927689, 0.1042460799217...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset. Kanji are converted into Hiragana using the [pykakasi](https://pykakasi.readthedocs.io/en/latest/index.html) library during training and evaluation. The model can output both Hiragana and Katakana characters. Since there is no spacing, WER is not a suitable metric for evaluating performance and CER is more suitable. On mozilla-foundation/common_voice_8_0 it achieved: - cer: 23.64% On speech-recognition-community-v2/dev_data it achieved: - cer: 30.99% It achieves the following results on the evaluation set: - Loss: 0.5212 - Wer: 1.3068 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 48 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 4.0974 | 4.72 | 1000 | 4.0178 | 1.9535 | | 2.1276 | 9.43 | 2000 | 0.9301 | 1.2128 | | 1.7622 | 14.15 | 3000 | 0.7103 | 1.5527 | | 1.6397 | 18.87 | 4000 | 0.6729 | 1.4269 | | 1.5468 | 23.58 | 5000 | 0.6087 | 1.2497 | | 1.4885 | 28.3 | 6000 | 0.5786 | 1.3222 | | 1.451 | 33.02 | 7000 | 0.5726 | 1.3768 | | 1.3912 | 37.74 | 8000 | 0.5518 | 1.2497 | | 1.3617 | 42.45 | 9000 | 0.5352 | 1.2694 | | 1.3113 | 47.17 | 10000 | 0.5228 | 
1.2781 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-japanese --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` with split `validation` ```bash python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-300m-japanese --dataset speech-recognition-community-v2/dev_data --config ja --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ```
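As the card notes, WER is not meaningful for unspaced Japanese text: the whole sentence counts as a single "word", so any error drives WER to 1.0, while CER still reflects the actual error rate. A small illustration with made-up hiragana sentences (not taken from the evaluation data):

```python
def levenshtein(a, b):
    # Rolling-row Levenshtein edit distance over two sequences.
    d = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, y in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (x != y))
    return d[len(b)]

ref = "きょうはいいてんきです"  # reference transcript (hiragana, no spaces)
hyp = "きょうはいてんきです"    # hypothesis with one character deleted

# WER treats the whole unspaced sentence as a single "word":
wer = levenshtein(ref.split(), hyp.split()) / len(ref.split())
# CER scores at the character level:
cer = levenshtein(list(ref), list(hyp)) / len(ref)
print(wer, cer)  # wer is 1.0 for any mismatch; cer reflects the single error
```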
{"language": ["ja"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "ja", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300-m", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ja"}, "metrics": [{"type": "wer", "value": 95.82, "name": "Test WER"}, {"type": "cer", "value": 23.64, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Test WER"}, {"type": "cer", "value": 30.99, "name": "Test CER"}, {"type": "cer", "value": 30.37, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ja"}, "metrics": [{"type": "cer", "value": 34.42, "name": "Test CER"}]}]}]}
automatic-speech-recognition
AndrewMcDowell/wav2vec2-xls-r-300m-japanese
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "ja", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible...
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #ja #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - JA dataset. Kanji are converted into Hiragana using the pykakasi library during training and evaluation. The model can output both Hiragana and Katakana characters. Since there is no spacing, WER is not a suitable metric for evaluating performance and CER is more suitable. On mozilla-foundation/common\_voice\_8\_0 it achieved: * cer: 23.64% On speech-recognition-community-v2/dev\_data it achieved: * cer: 30.99% It achieves the following results on the evaluation set: * Loss: 0.5212 * Wer: 1.3068 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 7.5e-05 * train\_batch\_size: 48 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 50.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.2.dev0 * Tokenizers 0.11.0 #### Evaluation Commands 1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test' 2. To evaluate on 'speech-recognition-community-v2/dev\_data' with split 'validation'
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_step...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #ja #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperpar...
[ 111, 132, 4, 39, 66 ]
[ "passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #ja #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyper...
[ -0.1087740883231163, 0.1477416604757309, -0.005788300186395645, 0.013155115768313408, 0.10168784856796265, 0.03503540903329849, 0.11511356383562088, 0.15956926345825195, -0.05871691554784775, 0.13984927535057068, 0.06451325118541718, 0.10002699494361877, 0.09042244404554367, 0.122649326920...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-finetuned-ner This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.1264 - Precision: 0.9305 - Recall: 0.9375 - F1: 0.9340 - Accuracy: 0.9700 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.301 | 1.0 | 625 | 0.1756 | 0.8843 | 0.9067 | 0.8953 | 0.9500 | | 0.1259 | 2.0 | 1250 | 0.1248 | 0.9285 | 0.9335 | 0.9310 | 0.9688 | | 0.0895 | 3.0 | 1875 | 0.1264 | 0.9305 | 0.9375 | 0.9340 | 0.9700 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
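The reported F1 (0.9340) is the harmonic mean of the precision (0.9305) and recall (0.9375) in the table above; a minimal sketch of how these metrics follow from entity-level counts (the Trainer typically computes them with `seqeval` for token classification):

```python
def prf(tp, fp, fn):
    # Precision, recall and F1 from true-positive, false-positive and
    # false-negative entity counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# F1 is always the harmonic mean of precision and recall:
p, r, f1 = prf(tp=3, fp=1, fn=1)
print(p, r, f1)  # 0.75 0.75 0.75
```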
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "mbert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "lv"}, "metrics": [{"type": "precision", "value": 0.9304986338797814, "name": "Precision"}, {"type": "recall", "value": 0.9375430144528561, "name": "Recall"}, {"type": "f1", "value": 0.9340075419952005, "name": "F1"}, {"type": "accuracy", "value": 0.9699674740348558, "name": "Accuracy"}]}]}]}
token-classification
Andrey1989/mbert-finetuned-ner
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
mbert-finetuned-ner =================== This model is a fine-tuned version of bert-base-multilingual-cased on the wikiann dataset. It achieves the following results on the evaluation set: * Loss: 0.1264 * Precision: 0.9305 * Recall: 0.9375 * F1: 0.9340 * Accuracy: 0.9700 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.19.4 * Pytorch 1.11.0+cu113 * Datasets 2.2.2 * Tokenizers 0.12.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\...
[ 66, 98, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learnin...
[ -0.10305804759263992, 0.10961544513702393, -0.0018086218042299151, 0.1267901211977005, 0.1558895707130432, 0.03104560635983944, 0.11313606053590775, 0.12545624375343323, -0.08956429362297058, 0.018652750179171562, 0.1335294246673584, 0.1636468470096588, 0.013856463134288788, 0.110811509191...
null
null
transformers
This model is a finetuning of bert-base-greek-uncased as a Token Classifier which predicts at each token which punctuation mark it is followed by. The model preprocesses everything to lowercase and removes all Greek diacritics. For information on pretraining of the Greek Bert model, please refer to [Greek Bert](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) # Finetuning Parameters Epochs: 5 Maximum Sequence Length: 512 Learning Rate: 4e−5 Batch Size: 16 Finetuning Data: Greek Europarl data available at: https://opus.nlpl.eu/Europarl.php Tokens: 44.1M Sentences: 1.6M Punctuation Points Recognised: '.' (0) : Full stop ',' (1) : Comma ';' (2) : Greek question mark '-' (3) : Dash ':' (4) : Semicolon '0' (5) : No punctuation point is following # Load Finetuned Model ~~~ from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned") model = AutoModelForTokenClassification.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned") ~~~ # Using the Model If you are interested in trying out examples and finding the limitations of the model, the starter Python code to use the model is available at [Github Repo](https://github.com/Andrian0s/Greek-Transformer-Model-Punctuation-Prediction) # Examples of the Model Using the demo script, we tried out a few brief examples and show the results below Input | Input with Predictions ------------- | ------------- "προσεκτικά στον δρομο θα σε περιμενω" | "προσεκτικα στον δρομο, θα σε περιμενω" "τι θα φας για βραδινο" | "τι θα φας για βραδινο;" "κυριε μαυροκέφαλε εσπασε η κεραια του διαδικτυου θα παρω τηλεφωνο την cyta" | "κυριε μαυροκεφαλε, εσπασε η κεραια του διαδικτυου. θα παρω τηλεφωνο την cyta." "κυριε μαυροκεφαλε εσπασεν η αντεννα του ιντερνετ εννα πιαω τηλεφωνον την cyta" | "κυριε μαυροκεφαλε, εσπασεν η αντεννα του ιντερνετ. εννα πιαω τηλεφωνον την cyta." 
The last two examples have identical meanings: the first is written in plain Modern Greek and the latter in the Cypriot dialect. It is interesting to see that the model performs similarly, even if some words and suffixes are out of vocabulary. # Further Performance Improvements We would be happy to hear that people have finetuned this model with additional and more diverse datasets, as we expect this to increase robustness. Within our research, improvements to consistency in punctuation prediction have been shown to be possible with techniques such as sliding windows (during inference) for larger documents, weighted loss and ensembling of different models. Make sure to cite our work when you further our models with the aforementioned techniques. # Author This model is further work based on the winning submission at Shared Task 2 Sentence End and Punctuation Prediction in NLG Text at SwissText2021. The winning submission is entitled "UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers" in the Proceedings of the 6th SwissText Held Online. It is publicly available at http://ceur-ws.org/Vol-2957/sepp_paper2.pdf If you use the model, please cite the following: @inproceedings{ST2021-OnPoint, title={UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers}, author={Michail, Andrianos and Wehrli, Silvan and Bucková, Terézia}, booktitle={Proceedings of the 1st Shared Task on Sentence End and Punctuation Prediction in NLG Text (SEPPNLG 2021) at SwissText 2021}, year={2021} } Model Finetuned and released by Andrianos Michail with resources provided by [Department of Computational Linguistics, University of Zurich](https://www.cl.uzh.ch/en.html) | Github: [@Andrian0s](https://github.com/Andrian0s) | LinkedIn: [amichail2](https://www.linkedin.com/in/amichail2/)
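Given the label mapping in the card, turning per-token predictions back into punctuated text is a simple post-processing step. A hypothetical sketch (the token/label pairs below are illustrative, not actual model output):

```python
# id -> punctuation mark, mirroring the mapping in the card (id 5 = none).
ID2MARK = {0: ".", 1: ",", 2: ";", 3: "-", 4: ":", 5: ""}

def attach_punctuation(tokens, label_ids):
    # Append the predicted mark (if any) to each token and re-join with spaces.
    return " ".join(tok + ID2MARK[lab] for tok, lab in zip(tokens, label_ids))

print(attach_punctuation(["τι", "θα", "φας", "για", "βραδινο"], [5, 5, 5, 5, 2]))
# τι θα φας για βραδινο;
```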
{}
token-classification
Andrianos/bert-base-greek-punctuation-prediction-finetuned
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
This model is a finetuning of bert-base-greek-uncased as a Token Classifier which predicts at each token which punctuation mark it is followed by. The model preprocesses everything to lowercase and removes all Greek diacritics. For information on pretraining of the Greek Bert model, please refer to Greek Bert Finetuning Parameters ===================== Epochs: 5 Maximum Sequence Length: 512 Learning Rate: 4e−5 Batch Size: 16 Finetuning Data: Greek Europarl data available at: URL Tokens: 44.1M Sentences: 1.6M Punctuation Points Recognised: '.' (0) : Full stop ',' (1) : Comma ';' (2) : Greek question mark '-' (3) : Dash ':' (4) : Semicolon '0' (5) : No punctuation point is following Load Finetuned Model ==================== ``` from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned") model = AutoModelForTokenClassification.from_pretrained("Andrianos/bert-base-greek-punctuation-prediction-finetuned") ``` Using the Model =============== If you are interested in trying out examples and finding the limitations of the model, the starter Python code to use the model is available at Github Repo Examples of the Model ===================== Using the demo script, we tried out a few brief examples and show the results below The last two examples have identical meanings, the first is written in plain Modern Greek and the latter in the Cypriot Dialect. It is interesting to see the model performs similarly, even if some words and suffixes are out of vocabulary. Further Performance Improvements ================================ We would be happy to hear people have finetuned this model with more and diverse datasets, as we expect this to increase robustness. 
Within our research, improvements to consistency in punctuation prediction have shown to be possible with techniques such as sliding windows (during inference) for larger documents, weighted loss and ensembling of different models. Make sure to cite our work when you further our models with the aforementioned techniques. Author ====== This model is further work based on the winning submission at Shared Task 2 Sentence End and Punctuation Prediction in NLG Text at SwissText2021. The winning submission is entitled "UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers" in the Proceedings of the 6th SwissText Held Online. It is publicly available at URL If you use the model, please cite the following: @inproceedings{ST2021-OnPoint, title={UZH OnPoint at Swisstext-2021: Sentence End and Punctuation Prediction in NLG Text Through Ensembling of Different Transformers}, author={Michail, Andrianos and Wehrli, Silvan and Bucková, Terézia}, booktitle={Proceedings of the 1st Shared Task on Sentence End and Punctuation Prediction in NLG Text (SEPPNLG 2021) at SwissText 2021}, year={2021} } Model Finetuned and released by Andrianos Michail with resources provided by Department of Computational Linguistics, University of Zurich | Github: @Andrian0s | LinkedIn: amichail2
[]
[ "TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 37 ]
[ "passage: TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.04952388256788254, 0.052763525396585464, -0.008742042817175388, 0.033980391919612885, 0.16650345921516418, 0.031232766807079315, 0.056794650852680206, 0.08634597808122635, 0.05724777653813362, -0.022096728906035423, 0.12041265517473221, 0.25661665201187134, -0.04172574356198311, 0.09726...
null
null
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages. Abbreviation|Description -|- O|Outside of a named entity B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity I-MIS |Miscellaneous entity B-PER |Beginning of a person's name right after another person's name B-DERIV-PER |Beginning of a derivative that describes a relation to a person I-PER |Person's name B-ORG |Beginning of an organization right after another organization I-ORG |Organization B-LOC |Beginning of a location right after another location I-LOC |Location
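The tag set above follows an IOB1-style scheme: `I-X` tags open and continue entities, while `B-X` only appears when an entity of the same type directly follows another one. A sketch of decoding such tags into entity spans (the token/tag pairs are illustrative, not actual model output):

```python
def tags_to_spans(tokens, tags):
    # Decode IOB1-style tags: I-X opens/continues an entity, B-X starts a new
    # entity immediately after another entity of the same type, O closes it.
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag == "O":
            if current:
                spans.append(current)
            current = None
            continue
        prefix, etype = tag.split("-", 1)
        if current and current[0] == etype and prefix == "I":
            current = (etype, current[1] + " " + tok)
        else:
            if current:
                spans.append(current)
            current = (etype, tok)
    if current:
        spans.append(current)
    return spans
```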
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
token-classification
Andrija/M-bert-NER
[ "transformers", "pytorch", "bert", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #bert #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #bert #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 61 ]
[ "passage: TAGS\n#transformers #pytorch #bert #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.08848606795072556, 0.10466885566711426, -0.005547394044697285, 0.05923883244395256, 0.11184278875589371, 0.043874822556972504, 0.1225629448890686, 0.12288068979978561, 0.031205151230096817, -0.06936771422624588, 0.1197349950671196, 0.20268696546554565, 0.007789667695760727, 0.0566231049...
null
null
null
from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained('Andrija/RobertaFastBPE', bos_token="<s>", eos_token="</s>") encoded = tokenizer('Stručnjaci te bolnice, predvođeni dr Alisom Lim') # {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} tokenizer.decode(encoded['input_ids']) # <s>Stručnjaci te bolnice, predvođeni dr Alisom Lim</s>
{}
null
Andrija/RobertaFastBPE
[ "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained('Andrija/RobertaFastBPE', bos_token="&lt;s&gt;", eos_token="&lt;/s&gt;") encoded = tokenizer('Stručnjaci te bolnice, predvođeni dr Alisom Lim') # {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} URL(encoded['input_ids']) # &lt;s&gt;Stručnjaci te bolnice, predvođeni dr Alisom Lim&lt;/s&gt;
[ "# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\nURL(encoded['input_ids'])", "# &lt;s&gt;Stručnjaci te bolnice, predvođeni dr Alisom Lim&lt;/s&gt;" ]
[ "TAGS\n#region-us \n", "# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\nURL(encoded['input_ids'])", "# &lt;s&gt;Stručnjaci te bolnice, predvođeni dr Alisom Lim&lt;/s&gt;" ]
[ 6, 101, 31 ]
[ "passage: TAGS\n#region-us \n# {'input_ids': [0, 47541, 34632, 603, 24817, 16, 27540, 6768, 2350, 2803, 3991, 2733, 81, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\nURL(encoded['input_ids'])# &lt;s&gt;Stručnjaci te bolnice, predvođeni dr Alisom Lim&lt;/s&gt;" ]
[ -0.01273820735514164, -0.09970703721046448, -0.00964659545570612, 0.010426274500787258, 0.07812892645597458, 0.08530633896589279, 0.17530567944049835, 0.1273442953824997, 0.23327644169330597, 0.13800156116485596, 0.1499151885509491, 0.0023704604245722294, 0.06820507347583771, 0.03503307327...
null
null
transformers
# Transformer language model for Croatian and Serbian Trained for three epochs (9.6 mil. steps) on 43GB of Croatian and Serbian text from the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets. Validation examples used for perplexity: 1,620,487 sentences. Perplexity: 6.02 Start loss: 8.6 Final loss: 2.0 Thoughts: the model could have been trained further; training had not stagnated. | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `Andrija/SRoBERTa-F` | 80M | Fifth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (43 GB of text) |
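Perplexity is the exponential of the cross-entropy loss, so the reported numbers are mutually consistent: a final training loss of 2.0 corresponds to a perplexity of about 7.4, and the reported validation perplexity of 6.02 implies a validation loss of about 1.80. A quick sketch of the relation:

```python
import math

def perplexity(cross_entropy_loss_nats):
    # Perplexity of a language model is exp(mean cross-entropy loss in nats).
    return math.exp(cross_entropy_loss_nats)

print(perplexity(8.6))  # start-of-training loss -> ~5432
print(perplexity(2.0))  # final training loss -> ~7.39
print(math.log(6.02))   # loss implied by the reported perplexity -> ~1.80
```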
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "srwac", "leipzig", "cc100", "hrwac"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]}
fill-mask
Andrija/SRoBERTa-F
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:oscar", "dataset:srwac", "dataset:leipzig", "dataset:cc100", "dataset:hrwac", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #tensorboard #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian =================================================== Trained for three epochs (9.6 mil. steps) on 43GB of Croatian and Serbian text from the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets. Validation examples used for perplexity: 1,620,487 sentences. Perplexity: 6.02 Start loss: 8.6 Final loss: 2.0 Thoughts: the model could have been trained further; training had not stagnated.
[]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 96 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.13166163861751556, 0.1622481346130371, -0.005019253585487604, 0.10035975277423859, 0.08499771356582642, 0.052852701395750046, 0.1701194941997528, 0.1072889044880867, 0.06705498695373535, -0.04436139017343521, 0.1487106829881668, 0.1501384973526001, 0.03246442973613739, 0.067416377365589...
null
null
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages. Abbreviation|Description -|- O|Outside of a named entity B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity I-MIS | Miscellaneous entity B-PER |Beginning of a person’s name right after another person’s name B-DERIV-PER| Beginning of a derivative that describes a relation to a person I-PER |Person’s name B-ORG |Beginning of an organization right after another organization I-ORG |Organization B-LOC |Beginning of a location right after another location I-LOC |Location
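As a hedged sketch of how the tag scheme in the table decodes into entity spans (the tokens and labels below are made-up illustration data, not model output): note that under this scheme a standalone entity carries an I- tag, and B- only marks an entity that starts immediately after another entity of the same type.

```python
# Decode the IOB tag scheme described above into (type, text) spans.
# I- opens or continues an entity; B- splits two adjacent entities.
def decode_iob(tokens, labels):
    spans, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab == "O":
            if current:
                spans.append(current)
            current = None
            continue
        prefix, ent = lab.split("-", 1)
        if prefix == "I" and current and current[0] == ent:
            current[1].append(tok)      # continue the running entity
        else:
            if current:
                spans.append(current)
            current = [ent, [tok]]      # start a new entity
    if current:
        spans.append(current)
    return [(ent, " ".join(toks)) for ent, toks in spans]

# Illustrative labels for the widget sentence from the model card.
tokens = ["Moje", "ime", "je", "Aleksandar", "i", "zivim", "u", "Beogradu"]
labels = ["O", "O", "O", "I-PER", "O", "O", "O", "I-LOC"]
print(decode_iob(tokens, labels))  # [('PER', 'Aleksandar'), ('LOC', 'Beogradu')]
```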
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
token-classification
Andrija/SRoBERTa-L-NER
[ "transformers", "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 62 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.08366859704256058, 0.10869227349758148, -0.005516380071640015, 0.061136044561862946, 0.12477051466703415, 0.03691288083791733, 0.12175776809453964, 0.12674230337142944, 0.003049373161047697, -0.07109397649765015, 0.12124678492546082, 0.2130488008260727, 0.005292007699608803, 0.052055455...
null
null
transformers
# Transformer language model for Croatian and Serbian Trained on 6GB of data containing Croatian and Serbian text for two epochs (500k steps). Leipzig, OSCAR and srWac datasets | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `Andrija/SRoBERTa-L` | 80M | Third | Leipzig Corpus, OSCAR and srWac (6 GB of text) |
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "srwac", "leipzig"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]}
fill-mask
Andrija/SRoBERTa-L
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:oscar", "dataset:srwac", "dataset:leipzig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian =================================================== Trained on 6GB of data containing Croatian and Serbian text for two epochs (500k steps). Leipzig, OSCAR and srWac datasets
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 79 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.09753619134426117, 0.14740930497646332, -0.005321509670466185, 0.09504637867212296, 0.1003584936261177, 0.03659561276435852, 0.15424290299415588, 0.10243850946426392, 0.024756282567977905, -0.07659869641065598, 0.1606483906507492, 0.1550208479166031, 0.004285811446607113, 0.093564242124...
null
null
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages. Abbreviation|Description -|- O|Outside of a named entity B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity I-MIS | Miscellaneous entity B-PER |Beginning of a person’s name right after another person’s name B-DERIV-PER| Beginning of a derivative that describes a relation to a person I-PER |Person’s name B-ORG |Beginning of an organization right after another organization I-ORG |Organization B-LOC |Beginning of a location right after another location I-LOC |Location
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
token-classification
Andrija/SRoBERTa-NER
[ "transformers", "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 62 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.08366859704256058, 0.10869227349758148, -0.005516380071640015, 0.061136044561862946, 0.12477051466703415, 0.03691288083791733, 0.12175776809453964, 0.12674230337142944, 0.003049373161047697, -0.07109397649765015, 0.12124678492546082, 0.2130488008260727, 0.005292007699608803, 0.052055455...
null
null
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages. Abbreviation|Description -|- O|Outside of a named entity B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity I-MIS | Miscellaneous entity B-PER |Beginning of a person's name right after another person's name B-DERIV-PER| Beginning of a derivative that describes a relation to a person I-PER |Person's name B-ORG |Beginning of an organization right after another organization I-ORG |Organization B-LOC |Beginning of a location right after another location I-LOC |Location
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
token-classification
Andrija/SRoBERTa-XL-NER
[ "transformers", "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 62 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.08366859704256058, 0.10869227349758148, -0.005516380071640015, 0.061136044561862946, 0.12477051466703415, 0.03691288083791733, 0.12175776809453964, 0.12674230337142944, 0.003049373161047697, -0.07109397649765015, 0.12124678492546082, 0.2130488008260727, 0.005292007699608803, 0.052055455...
null
null
transformers
# Transformer language model for Croatian and Serbian Trained on 28GB of data containing Croatian and Serbian text for one epoch (3 mil. steps). Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `Andrija/SRoBERTa-XL` | 80M | Fourth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (28 GB of text) |
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "srwac", "leipzig", "cc100", "hrwac"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]}
fill-mask
Andrija/SRoBERTa-XL
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:oscar", "dataset:srwac", "dataset:leipzig", "dataset:cc100", "dataset:hrwac", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian =================================================== Trained on 28GB of data containing Croatian and Serbian text for one epoch (3 mil. steps). Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 92 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-srwac #dataset-leipzig #dataset-cc100 #dataset-hrwac #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.12945911288261414, 0.1681143045425415, -0.004363401792943478, 0.10039868205785751, 0.09462038427591324, 0.05851498246192932, 0.13595275580883026, 0.1007656678557396, 0.039781536906957626, -0.06299886107444763, 0.1612124890089035, 0.13867789506912231, 0.020428461953997612, 0.104291953146...
null
null
transformers
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages. Abbreviation|Description -|- O|Outside of a named entity B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity I-MIS | Miscellaneous entity B-PER |Beginning of a person’s name right after another person’s name B-DERIV-PER| Beginning of a derivative that describes a relation to a person I-PER |Person’s name B-ORG |Beginning of an organization right after another organization I-ORG |Organization B-LOC |Beginning of a location right after another location I-LOC |Location
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "datasets": ["hr500k"], "widget": [{"text": "Moje ime je Aleksandar i zivim u Beogradu pored Vlade Republike Srbije"}]}
token-classification
Andrija/SRoBERTa-base-NER
[ "transformers", "pytorch", "roberta", "token-classification", "hr", "sr", "multilingual", "dataset:hr500k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Named Entity Recognition (Token Classification Head) for Serbian / Croatian languages.
[]
[ "TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 62 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #token-classification #hr #sr #multilingual #dataset-hr500k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.08366859704256058, 0.10869227349758148, -0.005516380071640015, 0.061136044561862946, 0.12477051466703415, 0.03691288083791733, 0.12175776809453964, 0.12674230337142944, 0.003049373161047697, -0.07109397649765015, 0.12124678492546082, 0.2130488008260727, 0.005292007699608803, 0.052055455...
null
null
transformers
# Transformer language model for Croatian and Serbian Trained on 3GB of data containing Croatian and Serbian text for two epochs. Leipzig and OSCAR datasets # Dataset information | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `Andrija/SRoBERTa-base` | 80M | Second | Leipzig Corpus and OSCAR (3 GB of text) |
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["oscar", "leipzig"], "widget": [{"text": "Ovo je po\u010detak <mask>."}]}
fill-mask
Andrija/SRoBERTa-base
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:oscar", "dataset:leipzig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian =================================================== Trained on 3GB of data containing Croatian and Serbian text for two epochs. Leipzig and OSCAR datasets Dataset information ===================
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 72 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-oscar #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.0984482541680336, 0.14024381339550018, -0.005903754383325577, 0.09843268990516663, 0.09558193385601044, 0.0321347676217556, 0.1552160680294037, 0.10856997221708298, 0.013496332801878452, -0.07397647947072983, 0.1563597470521927, 0.16225600242614746, -0.0034573266748338938, 0.09971416741...
null
null
transformers
# Transformer language model for Croatian and Serbian Trained on a 0.7GB dataset of Croatian and Serbian text for one epoch. Dataset from Leipzig Corpora. # Dataset information | Model | #params | Arch. | Training data | |--------------------------------|--------------------------------|-------|-----------------------------------| | `Andrija/SRoBERTa` | 120M | First | Leipzig Corpus (0.7 GB of text) | # How to use in code ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("Andrija/SRoBERTa") model = AutoModelForMaskedLM.from_pretrained("Andrija/SRoBERTa") ```
{"language": ["hr", "sr", "multilingual"], "license": "apache-2.0", "tags": ["masked-lm"], "datasets": ["leipzig"], "widget": [{"text": "Gde je <mask>."}]}
fill-mask
Andrija/SRoBERTa
[ "transformers", "pytorch", "roberta", "fill-mask", "masked-lm", "hr", "sr", "multilingual", "dataset:leipzig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "hr", "sr", "multilingual" ]
TAGS #transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Transformer language model for Croatian and Serbian =================================================== Trained on a 0.7GB dataset of Croatian and Serbian text for one epoch. Dataset from Leipzig Corpora. Dataset information =================== How to use in code ==================
[]
[ "TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 66 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #fill-mask #masked-lm #hr #sr #multilingual #dataset-leipzig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.09399977326393127, 0.0790163055062294, -0.006095091812312603, 0.07302559167146683, 0.09586314857006073, 0.027912776917219162, 0.16787590086460114, 0.10766689479351044, 0.026470953598618507, -0.05548274144530296, 0.15206119418144226, 0.17275547981262207, -0.0018524016486480832, 0.0916277...
null
null
null
C:\Users\andry\Desktop\Выжигание 24-12-2021.jpg
{}
null
Andry/1111
[ "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
C:\Users\andry\Desktop\Выжигание URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
[ 0.024608636274933815, -0.026205500587821007, -0.009666500613093376, -0.10395516455173492, 0.08638657629489899, 0.059816278517246246, 0.01882290467619896, 0.020661840215325356, 0.23975107073783875, -0.005599027033895254, 0.1219947561621666, 0.0015615287702530622, -0.037353623658418655, 0.03...
null
null
null
For now, we have only uploaded two models for image and video classification demos. More models and code can be found in our GitHub repo: [UniFormer](https://github.com/Sense-X/UniFormer).
{"license": "mit"}
null
Andy1621/uniformer
[ "license:mit", "has_space", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #license-mit #has_space #region-us
For now, we have only uploaded two models for image and video classification demos. More models and code can be found in our GitHub repo: UniFormer.
[]
[ "TAGS\n#license-mit #has_space #region-us \n" ]
[ 15 ]
[ "passage: TAGS\n#license-mit #has_space #region-us \n" ]
[ 0.08530651777982712, -0.023697305470705032, -0.004968962166458368, -0.050554338842630386, -0.01735161431133747, 0.03713279217481613, 0.13230444490909576, 0.07879144698381424, 0.1818152368068695, 0.008017006330192089, 0.16221337020397186, 0.011939001269638538, -0.057828139513731, 0.02442608...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0609 - Precision: 0.9275 - Recall: 0.9365 - F1: 0.9320 - Accuracy: 0.9840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2527 | 1.0 | 878 | 0.0706 | 0.9120 | 0.9181 | 0.9150 | 0.9803 | | 0.0517 | 2.0 | 1756 | 0.0603 | 0.9174 | 0.9349 | 0.9261 | 0.9830 | | 0.031 | 3.0 | 2634 | 0.0609 | 0.9275 | 0.9365 | 0.9320 | 0.9840 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
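The reported metrics are internally consistent: F1 is the harmonic mean of precision and recall, which can be checked in a line (the helper below is an illustration, not part of the training script).

```python
# F1 is the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Final-epoch precision/recall from the results table above.
print(round(f1(0.9275, 0.9365), 4))  # 0.932, matching the reported F1
```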
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.984018301110458}}]}]}
token-classification
Ann2020/distilbert-base-uncased-finetuned-ner
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0609 * Precision: 0.9275 * Recall: 0.9365 * F1: 0.9320 * Accuracy: 0.9840 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.9.2 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate...
[ 65, 98, 4, 34 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_r...
[ -0.10257266461849213, 0.10689127445220947, -0.002135234186425805, 0.11783239245414734, 0.16193117201328278, 0.03428632393479347, 0.11159374564886093, 0.1199946179986, -0.11583255976438522, 0.02691405825316906, 0.1255667805671692, 0.1717144101858139, 0.011411176063120365, 0.1192567497491836...
null
null
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx This is based on the bert-base-uncased model and pre-trained for text input.
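A feature-extraction checkpoint returns one vector per token; to get a single sentence embedding for retrieval-style QA, one common choice (an illustrative sketch, not necessarily the authors' pooling) is masked mean pooling over the non-padding positions:

```python
# Masked mean pooling in plain Python: average the token vectors,
# skipping positions where the attention mask is 0 (padding).
def mean_pool(token_embeddings, attention_mask):
    kept = [vec for vec, m in zip(token_embeddings, attention_mask) if m]
    hidden = len(kept[0])
    return [sum(vec[i] for vec in kept) / len(kept) for i in range(hidden)]

# Toy example: sequence length 3, hidden size 4; last position is padding.
embeddings = [[1.0] * 4, [3.0] * 4, [99.0] * 4]
mask = [1, 1, 0]
print(mean_pool(embeddings, mask))  # [2.0, 2.0, 2.0, 2.0] -- padding ignored
```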
{}
feature-extraction
Anonymous/ReasonBERT-BERT
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see URL This is based on the bert-base-uncased model and pre-trained for text input.
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n" ]
[ 29 ]
[ "passage: TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n" ]
[ -0.0680389553308487, -0.01353863999247551, -0.009260591119527817, 0.003671469632536173, 0.13468711078166962, 0.03987877443432808, -0.0037161505315452814, 0.08307137340307236, 0.06908576935529709, -0.009869525209069252, 0.10839105397462845, 0.22950756549835205, -0.03434249758720398, 0.02783...
null
null
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx This is based on the roberta-base model and pre-trained for text input.
{}
feature-extraction
Anonymous/ReasonBERT-RoBERTa
[ "transformers", "pytorch", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see URL This is based on the roberta-base model and pre-trained for text input.
[]
[ "TAGS\n#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us \n" ]
[ 30 ]
[ "passage: TAGS\n#transformers #pytorch #roberta #feature-extraction #endpoints_compatible #region-us \n" ]
[ -0.06385086476802826, -0.015064781531691551, -0.009285440668463707, 0.005133595783263445, 0.14468014240264893, 0.028980428352952003, -0.0024224664084613323, 0.09294742345809937, 0.015450677834451199, -0.005549874156713486, 0.10132493823766708, 0.2598955035209656, -0.03146638348698616, 0.03...
null
null
transformers
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see https://openreview.net/forum?id=cGB7CMFtrSx This is based on the tapas-base (no_reset) model and pre-trained for table input.
{}
feature-extraction
Anonymous/ReasonBERT-TAPAS
[ "transformers", "pytorch", "tapas", "feature-extraction", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tapas #feature-extraction #endpoints_compatible #region-us
Pre-trained to have better reasoning ability; try this if you are working with tasks like QA. For more details please see URL This is based on the tapas-base (no_reset) model and pre-trained for table input.
[]
[ "TAGS\n#transformers #pytorch #tapas #feature-extraction #endpoints_compatible #region-us \n" ]
[ 30 ]
[ "passage: TAGS\n#transformers #pytorch #tapas #feature-extraction #endpoints_compatible #region-us \n" ]
[ -0.0691797137260437, -0.07712923735380173, -0.008569680154323578, 0.003274564864113927, 0.15867257118225098, 0.01808781921863556, -0.017889654263854027, 0.10221903026103973, 0.05290442332625389, 0.001763633918017149, 0.06873477250337601, 0.2616657614707947, -0.009622948244214058, 0.1203528...
null
null
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20384195 - CO2 Emissions (in grams): 4.214012748213151 ## Validation Metrics - Loss: 1.0120062828063965 - Rouge1: 41.1808 - Rouge2: 26.2564 - RougeL: 31.3106 - RougeLsum: 38.9991 - Gen Len: 58.45 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Anorak/autonlp-Niravana-test2-20384195 ```
{"language": "unk", "tags": "autonlp", "datasets": ["Anorak/autonlp-data-Niravana-test2"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 4.214012748213151}
text2text-generation
Anorak/nirvana
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "unk", "dataset:Anorak/autonlp-data-Niravana-test2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "unk" ]
TAGS #transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Anorak/autonlp-data-Niravana-test2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20384195 - CO2 Emissions (in grams): 4.214012748213151 ## Validation Metrics - Loss: 1.0120062828063965 - Rouge1: 41.1808 - Rouge2: 26.2564 - RougeL: 31.3106 - RougeLsum: 38.9991 - Gen Len: 58.45 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20384195\n- CO2 Emissions (in grams): 4.214012748213151", "## Validation Metrics\n\n- Loss: 1.0120062828063965\n- Rouge1: 41.1808\n- Rouge2: 26.2564\n- RougeL: 31.3106\n- RougeLsum: 38.9991\n- Gen Len: 58.45", "## Usage\n\nYou can use ...
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Anorak/autonlp-data-Niravana-test2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20384195\n- CO2 Emissions (in grams): 4....
[ 75, 40, 54, 13 ]
[ "passage: TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-Anorak/autonlp-data-Niravana-test2 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 20384195\n- CO2 Emissions (in grams):...
[ -0.19583062827587128, 0.13883903622627258, -0.0020195988472551107, 0.08170326054096222, 0.048390500247478485, 0.021366560831665993, 0.10532490909099579, 0.09497849643230438, 0.04523787274956703, 0.04174358397722244, 0.16968198120594025, 0.12949182093143463, -0.013042830862104893, 0.1638124...
null
null
transformers
# Rick Sanchez DialoGPT Model
{"tags": ["conversational"]}
text-generation
AnthonyNelson/DialoGPT-small-ricksanchez
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick Sanchez DialoGPT Model
[ "# Rick Sanchez DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick Sanchez DialoGPT Model" ]
[ 51, 8 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick Sanchez DialoGPT Model" ]
[ -0.05704520270228386, 0.1080707237124443, -0.005703833419829607, 0.024355918169021606, 0.1347416192293167, -0.009864812716841698, 0.13915762305259705, 0.13641619682312012, -0.014821183867752552, -0.025234131142497063, 0.13788719475269318, 0.23441068828105927, -0.0040086545050144196, 0.0579...
null
null
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Anthos23/distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0662 - Validation Loss: 0.2623 - Train Accuracy: 0.9083 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 21045, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2101 | 0.2373 | 0.9083 | 0 | | 0.1065 | 0.2645 | 0.9060 | 1 | | 0.0662 | 0.2623 | 0.9083 | 2 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.5.0 - Datasets 1.18.3 - Tokenizers 0.11.0
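The optimizer config above uses Keras's PolynomialDecay with power=1.0, i.e. a plain linear decay from 2e-05 to 0 over 21045 steps. A minimal sketch of that schedule (an illustrative re-implementation, not the Keras class itself):

```python
# PolynomialDecay as configured above: power=1.0 makes it linear decay.
def polynomial_decay(step, initial=2e-05, end=0.0, decay_steps=21045, power=1.0):
    step = min(step, decay_steps)  # no cycling, since cycle=False
    return (initial - end) * (1 - step / decay_steps) ** power + end

print(polynomial_decay(0))           # 2e-05 at step 0
print(polynomial_decay(21045 // 2))  # ~1e-05 halfway through
print(polynomial_decay(21045))       # 0.0 at the final step
```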
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "Anthos23/distilbert-base-uncased-finetuned-sst2", "results": []}]}
text-classification
Anthos23/distilbert-base-uncased-finetuned-sst2
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Anthos23/distilbert-base-uncased-finetuned-sst2 =============================================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0662 * Validation Loss: 0.2623 * Train Accuracy: 0.9083 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 21045, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.17.0.dev0 * TensorFlow 2.5.0 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 21045, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'na...
[ "TAGS\n#transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'lear...
[ 60, 178, 4, 36 ]
[ "passage: TAGS\n#transformers #tf #tensorboard #distilbert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'l...
[ -0.071751669049263, 0.07273967564105988, -0.00647136801853776, 0.053370434790849686, 0.10889609158039093, 0.03760404884815216, 0.15083461999893188, 0.1288233995437622, -0.09319285303354263, 0.1022256538271904, 0.14458638429641724, 0.1356535404920578, 0.055345822125673294, 0.142493456602096...
null
null
transformers
# Jordan DialoGPT Model
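DialoGPT-style checkpoints expect past dialogue turns concatenated with GPT-2's end-of-text token. A minimal sketch of that input construction (the helper is an illustration; the commented inference calls assume the standard `transformers` API and require downloading the checkpoint):

```python
def build_chat_input(history, eos_token: str = "<|endoftext|>") -> str:
    """Concatenate past dialogue turns with the EOS token, as DialoGPT expects."""
    return "".join(turn + eos_token for turn in history)

# Hypothetical inference (needs network access to fetch the model):
# from transformers import AutoTokenizer, AutoModelForCausalLM
# tokenizer = AutoTokenizer.from_pretrained("Apisate/DialoGPT-small-jordan")
# model = AutoModelForCausalLM.from_pretrained("Apisate/DialoGPT-small-jordan")
# ids = tokenizer.encode(build_chat_input(["Hi there!"]), return_tensors="pt")
# reply_ids = model.generate(ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
# print(tokenizer.decode(reply_ids[0][ids.shape[-1]:], skip_special_tokens=True))

print(build_chat_input(["Hi there!", "Hello!"]))
```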
{"tags": ["conversational"]}
text-generation
Apisate/DialoGPT-small-jordan
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Jordan DialoGPT Model
[ "# Jordan DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Jordan DialoGPT Model" ]
[ 51, 7 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Jordan DialoGPT Model" ]
[ -0.014234757982194424, 0.0020122500136494637, -0.00592494523152709, 0.01757795549929142, 0.12891024351119995, -0.0012038614368066192, 0.18380974233150482, 0.14440234005451202, 0.040929123759269714, -0.07664123922586441, 0.14973331987857819, 0.2089197337627411, 0.006873446516692638, 0.09041...
null
null
transformers
The idea is to build a model that takes keywords as inputs and generates sentences as outputs. Potential use cases include: - Marketing - Search Engine Optimization - Topic generation, etc. - Fine-tuning of topic modeling models
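A minimal usage sketch for a keywords-to-sentences model. The pipe separator in the helper is an assumption for illustration (check the keytotext project for the exact input format this checkpoint was trained with), and the commented inference calls assume the standard `transformers` API:

```python
from typing import List

def keywords_to_prompt(keywords: List[str], sep: str = " | ") -> str:
    """Join keywords into a single input string for a keywords-to-text model.

    The pipe separator is an assumption for illustration; check the keytotext
    project for the exact input format this checkpoint expects.
    """
    return sep.join(k.strip() for k in keywords)

# Hypothetical inference (requires downloading the checkpoint):
# from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# tokenizer = AutoTokenizer.from_pretrained("Apoorva/k2t-test")
# model = AutoModelForSeq2SeqLM.from_pretrained("Apoorva/k2t-test")
# ids = tokenizer(keywords_to_prompt(["marketing", "seo"]), return_tensors="pt").input_ids
# print(tokenizer.decode(model.generate(ids)[0], skip_special_tokens=True))

print(keywords_to_prompt(["marketing", "seo", "topic generation"]))
```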
{"language": "en", "tags": ["keytotext", "k2t", "Keywords to Sentences"], "thumbnail": "Keywords to Sentences"}
text2text-generation
Apoorva/k2t-test
[ "transformers", "pytorch", "t5", "text2text-generation", "keytotext", "k2t", "Keywords to Sentences", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
The idea is to build a model that takes keywords as inputs and generates sentences as outputs. Potential use cases include: - Marketing - Search Engine Optimization - Topic generation, etc. - Fine-tuning of topic modeling models
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 65 ]
[ "passage: TAGS\n#transformers #pytorch #t5 #text2text-generation #keytotext #k2t #Keywords to Sentences #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 0.048417139798402786, 0.0009807678870856762, -0.007233020383864641, 0.0036215882282704115, 0.13428790867328644, 0.017067421227693558, 0.10295163840055466, 0.1634819209575653, -0.01886797696352005, -0.03132914751768112, 0.13148918747901917, 0.2336915284395218, 0.004890760872513056, 0.083379...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-ner This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0700 - Precision: 0.9301 - Recall: 0.9376 - F1: 0.9338 - Accuracy: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.096 | 1.0 | 1756 | 0.0752 | 0.9163 | 0.9201 | 0.9182 | 0.9811 | | 0.0481 | 2.0 | 3512 | 0.0761 | 0.9169 | 0.9293 | 0.9231 | 0.9830 | | 0.0251 | 3.0 | 5268 | 0.0700 | 0.9301 | 0.9376 | 0.9338 | 0.9852 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
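The card above describes a token-classification (NER) model evaluated on CoNLL-2003; its per-token predictions are BIO tags that still need to be merged into entity spans before precision/recall/F1 can be computed. A plain-Python sketch of that merging step (an illustration with CoNLL-style labels, not the model's own post-processing):

```python
from typing import List, Optional, Tuple

def merge_bio(tokens: List[str], tags: List[str]) -> List[Tuple[str, str]]:
    """Collapse per-token BIO tags (e.g. B-PER, I-PER, O) into (text, label) spans.

    A stray I- tag whose label doesn't continue the current entity simply
    closes the open span; real pipelines may handle this case differently.
    """
    spans: List[Tuple[str, str]] = []
    cur_toks: List[str] = []
    cur_label: Optional[str] = None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_toks:
                spans.append((" ".join(cur_toks), cur_label))
            cur_toks, cur_label = [tok], tag[2:]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)
        else:  # "O", or an I- tag that doesn't match the open entity
            if cur_toks:
                spans.append((" ".join(cur_toks), cur_label))
            cur_toks, cur_label = [], None
    if cur_toks:
        spans.append((" ".join(cur_toks), cur_label))
    return spans

print(merge_bio(["John", "Smith", "lives", "in", "Paris"],
                ["B-PER", "I-PER", "O", "O", "B-LOC"]))
```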
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "albert-base-v2-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9301181102362205, "name": "Precision"}, {"type": "recall", "value": 0.9376033513394334, "name": "Recall"}, {"type": "f1", "value": 0.9338457315399397, "name": "F1"}, {"type": "accuracy", "value": 0.9851613086447802, "name": "Accuracy"}]}]}]}
token-classification
ArBert/albert-base-v2-finetuned-ner
[ "transformers", "pytorch", "tensorboard", "albert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
albert-base-v2-finetuned-ner ============================ This model is a fine-tuned version of albert-base-v2 on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0700 * Precision: 0.9301 * Recall: 0.9376 * F1: 0.9338 * Accuracy: 0.9852 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.14.1 * Pytorch 1.10.1 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training...
[ "TAGS\n#transformers #pytorch #tensorboard #albert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learni...
[ 68, 98, 4, 30 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #albert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* lea...
[ -0.09514399617910385, 0.09935272485017776, -0.0023709754459559917, 0.12687388062477112, 0.15049147605895996, 0.03435618430376053, 0.13070222735404968, 0.12690560519695282, -0.08627597242593765, 0.014853893779218197, 0.1257493942975998, 0.16191349923610687, 0.013222930021584034, 0.107548505...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner-kmeans This model is a fine-tuned version of [ArBert/bert-base-uncased-finetuned-ner](https://huggingface.co/ArBert/bert-base-uncased-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1169 - Precision: 0.9084 - Recall: 0.9245 - F1: 0.9164 - Accuracy: 0.9792 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.036 | 1.0 | 1123 | 0.1010 | 0.9086 | 0.9117 | 0.9101 | 0.9779 | | 0.0214 | 2.0 | 2246 | 0.1094 | 0.9033 | 0.9199 | 0.9115 | 0.9784 | | 0.014 | 3.0 | 3369 | 0.1169 | 0.9084 | 0.9245 | 0.9164 | 0.9792 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-ner-kmeans", "results": []}]}
token-classification
ArBert/bert-base-uncased-finetuned-ner-kmeans
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-ner-kmeans ====================================== This model is a fine-tuned version of ArBert/bert-base-uncased-finetuned-ner on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1169 * Precision: 0.9084 * Recall: 0.9245 * F1: 0.9164 * Accuracy: 0.9792 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\...
[ 56, 98, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_bat...
[ -0.10436924546957016, 0.07880327850580215, -0.0021224042866379023, 0.12252530455589294, 0.18087118864059448, 0.02005484700202942, 0.09969830513000488, 0.12038746476173401, -0.10890772193670273, 0.015925442799925804, 0.12480532377958298, 0.189627543091774, 0.002304124180227518, 0.1098845824...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0905 - Precision: 0.9068 - Recall: 0.9200 - F1: 0.9133 - Accuracy: 0.9787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1266 | 1.0 | 1123 | 0.0952 | 0.8939 | 0.8869 | 0.8904 | 0.9742 | | 0.0741 | 2.0 | 2246 | 0.0866 | 0.8936 | 0.9247 | 0.9089 | 0.9774 | | 0.0496 | 3.0 | 3369 | 0.0905 | 0.9068 | 0.9200 | 0.9133 | 0.9787 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-ner", "results": []}]}
token-classification
ArBert/bert-base-uncased-finetuned-ner
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-finetuned-ner =============================== This model is a fine-tuned version of bert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0905 * Precision: 0.9068 * Recall: 0.9200 * F1: 0.9133 * Accuracy: 0.9787 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\...
[ 56, 98, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_bat...
[ -0.10436924546957016, 0.07880327850580215, -0.0021224042866379023, 0.12252530455589294, 0.18087118864059448, 0.02005484700202942, 0.09969830513000488, 0.12038746476173401, -0.10890772193670273, 0.015925442799925804, 0.12480532377958298, 0.189627543091774, 0.002304124180227518, 0.1098845824...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner-agglo-twitter This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6645 - Precision: 0.6885 - Recall: 0.7665 - F1: 0.7254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 | | No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 | | 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 | | 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 | | 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 | | 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 | | 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 | | 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 | | 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 | | 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 | | 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 | | 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 | | 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 | | 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 | | 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 | | 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 | | 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 | | 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 | | 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 | | 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "roberta-base-finetuned-ner-agglo-twitter", "results": []}]}
token-classification
ArBert/roberta-base-finetuned-ner-agglo-twitter
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner-agglo-twitter ======================================== This model is a fine-tuned version of ArBert/roberta-base-finetuned-ner on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6645 * Precision: 0.6885 * Recall: 0.7665 * F1: 0.7254 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Train...
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_si...
[ 54, 98, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\...
[ -0.10202597826719284, 0.05369270220398903, -0.0020057777874171734, 0.1180717870593071, 0.19825606048107147, 0.027745096012949944, 0.10283763706684113, 0.11286996304988861, -0.10502272844314575, 0.01920207403600216, 0.1295049637556076, 0.2063470184803009, -0.0038995074573904276, 0.084482476...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner-kmeans-twitter This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6645 - Precision: 0.6885 - Recall: 0.7665 - F1: 0.7254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 | | No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 | | 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 | | 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 | | 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 | | 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 | | 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 | | 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 | | 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 | | 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 | | 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 | | 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 | | 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 | | 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 | | 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 | | 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 | | 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 | | 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 | | 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 | | 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "roberta-base-finetuned-ner-kmeans-twitter", "results": []}]}
token-classification
ArBert/roberta-base-finetuned-ner-kmeans-twitter
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner-kmeans-twitter ========================================= This model is a fine-tuned version of ArBert/roberta-base-finetuned-ner on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6645 * Precision: 0.6885 * Recall: 0.7665 * F1: 0.7254 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Train...
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_si...
[ 54, 98, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\...
[ -0.10202597826719284, 0.05369270220398903, -0.0020057777874171734, 0.1180717870593071, 0.19825606048107147, 0.027745096012949944, 0.10283763706684113, 0.11286996304988861, -0.10502272844314575, 0.01920207403600216, 0.1295049637556076, 0.2063470184803009, -0.0038995074573904276, 0.084482476...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner-kmeans This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0592 - Precision: 0.9559 - Recall: 0.9615 - F1: 0.9587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:| | 0.0248 | 1.0 | 878 | 0.0609 | 0.9507 | 0.9561 | 0.9534 | | 0.0163 | 2.0 | 1756 | 0.0640 | 0.9515 | 0.9578 | 0.9546 | | 0.0089 | 3.0 | 2634 | 0.0592 | 0.9559 | 0.9615 | 0.9587 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1"], "model-index": [{"name": "roberta-base-finetuned-ner-kmeans", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.955868544600939, "name": "Precision"}, {"type": "recall", "value": 0.9614658103513412, "name": "Recall"}, {"type": "f1", "value": 0.9586590074394953, "name": "F1"}]}]}]}
token-classification
ArBert/roberta-base-finetuned-ner-kmeans
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner-kmeans ================================= This model is a fine-tuned version of ArBert/roberta-base-finetuned-ner on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0592 * Precision: 0.9559 * Recall: 0.9615 * F1: 0.9587 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_r...
[ 65, 98, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\...
[ -0.10859613120555878, 0.08214722573757172, -0.002109553199261427, 0.1175299733877182, 0.16949933767318726, 0.036313414573669434, 0.11522858589887619, 0.11863342672586441, -0.10043205320835114, 0.03152560070157051, 0.130593940615654, 0.17752136290073395, 0.00706458929926157, 0.1193361729383...
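The Precision/Recall/F1 triple reported in the card above can be sanity-checked, since F1 is the harmonic mean of precision and recall. A minimal, self-contained sketch using the rounded values from the card:

```python
# Values as reported in the roberta-base-finetuned-ner-kmeans card.
precision = 0.9559
recall = 0.9615

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # matches the reported F1 of 0.9587
```

The same check passes for the full-precision values in the row's metadata field.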
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0738 - Precision: 0.9232 - Recall: 0.9437 - F1: 0.9333 - Accuracy: 0.9825 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1397 | 1.0 | 1368 | 0.0957 | 0.9141 | 0.9048 | 0.9094 | 0.9753 | | 0.0793 | 2.0 | 2736 | 0.0728 | 0.9274 | 0.9324 | 0.9299 | 0.9811 | | 0.0499 | 3.0 | 4104 | 0.0738 | 0.9232 | 0.9437 | 0.9333 | 0.9825 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "roberta-base-finetuned-ner", "results": []}]}
token-classification
ArBert/roberta-base-finetuned-ner
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-base-finetuned-ner ========================== This model is a fine-tuned version of roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0738 * Precision: 0.9232 * Recall: 0.9437 * F1: 0.9333 * Accuracy: 0.9825 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.17.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_si...
[ 54, 98, 4, 33 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\...
[ -0.10050548613071442, 0.05813928321003914, -0.0021708954591304064, 0.11912856996059418, 0.19747111201286316, 0.030515002086758614, 0.11030358821153641, 0.10890746116638184, -0.10393298417329788, 0.015407970175147057, 0.12744972109794617, 0.20654518902301788, -0.0013956386828795075, 0.07388...
null
null
transformers
# Stark DialoGPT Model
{"tags": ["conversational"]}
text-generation
ArJakusz/DialoGPT-small-stark
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Stark DialoGPT Model
[ "# Stark DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Stark DialoGPT Model" ]
[ 51, 8 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Stark DialoGPT Model" ]
[ -0.04369669407606125, 0.031437650322914124, -0.004829308949410915, 0.019079672172665596, 0.13480065762996674, -0.010020311921834946, 0.14397646486759186, 0.1311003863811493, 0.04679921269416809, -0.04175829514861107, 0.1255069524049759, 0.17630527913570404, -0.009091203100979328, 0.0547598...
null
null
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
text-generation
Aran/DialoGPT-medium-harrypotter
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
[ 55, 8 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# Harry Potter DialoGPT Model" ]
[ 0.008814724162220955, 0.0823512151837349, -0.005110622849315405, 0.0724329799413681, 0.09259387105703354, 0.04050839692354202, 0.16254787147045135, 0.13402126729488373, -0.006549961864948273, -0.02039809338748455, 0.12690581381320953, 0.17780061066150665, -0.014201340265572071, 0.039732776...
null
null
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
text-generation
Aran/DialoGPT-small-harrypotter
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
[ 51, 8 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model" ]
[ -0.0009023238671943545, 0.07815738022327423, -0.006546166725456715, 0.07792752981185913, 0.10655936598777771, 0.048972971737384796, 0.17639793455600739, 0.12185695022344589, 0.016568755730986595, -0.04774167761206627, 0.11647630482912064, 0.2130284160375595, -0.002118367003276944, 0.024608...
null
null
transformers
# Rick DialoGPT Model
{"tags": ["conversational"]}
text-generation
Arcktosh/DialoGPT-small-rick
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick DialoGPT Model
[ "# Rick DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick DialoGPT Model" ]
[ 51, 7 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick DialoGPT Model" ]
[ -0.027243174612522125, 0.09208611398935318, -0.005486058536916971, 0.01197603065520525, 0.13312271237373352, -0.0006643096567131579, 0.14875547587871552, 0.13561291992664337, -0.012389403767883778, -0.048079900443553925, 0.13848258554935455, 0.20838283002376556, -0.007769247982650995, 0.06...
null
null
transformers
# Cultured Kumiko DialoGPT Model
{"tags": ["conversational"]}
text-generation
AriakimTaiyo/DialoGPT-cultured-Kumiko
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Cultured Kumiko DialoGPT Model
[ "# Cultured Kumiko DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Cultured Kumiko DialoGPT Model" ]
[ 51, 10 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Cultured Kumiko DialoGPT Model" ]
[ 0.01232860703021288, 0.026171725243330002, -0.005277747288346291, 0.007207741029560566, 0.1733902245759964, 0.016173211857676506, 0.18071918189525604, 0.11523045599460602, 0.04729032143950462, -0.032802581787109375, 0.07443515211343765, 0.12680093944072723, 0.036064449697732925, 0.11775501...
null
null
null
# Medium Kumiko DialoGPT Model
{"tags": ["conversational"]}
text-generation
AriakimTaiyo/DialoGPT-medium-Kumiko
[ "conversational", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #conversational #region-us
# Medium Kumiko DialoGPT Model
[ "# Medium Kumiko DialoGPT Model" ]
[ "TAGS\n#conversational #region-us \n", "# Medium Kumiko DialoGPT Model" ]
[ 10, 9 ]
[ "passage: TAGS\n#conversational #region-us \n# Medium Kumiko DialoGPT Model" ]
[ 0.05314796790480614, -0.030321577563881874, -0.005320665426552296, -0.059618886560201645, 0.14629648625850677, 0.04491886869072914, 0.223875030875206, 0.03963087871670723, 0.13206566870212555, -0.031639955937862396, 0.013066701591014862, -0.0409272238612175, 0.045119792222976685, 0.0619645...
null
null
transformers
# Revised Kumiko DialoGPT Model
{"tags": ["conversational"]}
text-generation
AriakimTaiyo/DialoGPT-revised-Kumiko
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Revised Kumiko DialoGPT Model
[ "# Revised Kumiko DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Revised Kumiko DialoGPT Model" ]
[ 51, 10 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Revised Kumiko DialoGPT Model" ]
[ 0.019446326419711113, 0.024271301925182343, -0.0042835380882024765, -0.01951071247458458, 0.1861540526151657, 0.004492292180657387, 0.17774029076099396, 0.12364965677261353, -0.0015971491811797023, -0.03398094326257706, 0.07750175893306732, 0.13506485521793365, 0.036891162395477295, 0.0889...
null
null
transformers
# Kumiko DialoGPT Model
{"tags": ["conversational"]}
text-generation
AriakimTaiyo/DialoGPT-small-Kumiko
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Kumiko DialoGPT Model
[ "# Kumiko DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Kumiko DialoGPT Model" ]
[ 51, 8 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Kumiko DialoGPT Model" ]
[ 0.016887694597244263, 0.01733422838151455, -0.004717272240668535, -0.002768282312899828, 0.1822224259376526, 0.0037295250222086906, 0.18650959432125092, 0.12511099874973297, 0.017591670155525208, -0.027060631662607193, 0.09617900103330612, 0.14549854397773743, 0.03595368564128876, 0.085292...
null
null
transformers
# Rikka DialoGPT Model
{"tags": ["conversational"]}
text-generation
AriakimTaiyo/DialoGPT-small-Rikka
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rikka DialoGPT Model
[ "# Rikka DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rikka DialoGPT Model" ]
[ 51, 8 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rikka DialoGPT Model" ]
[ 0.006930135190486908, -0.006712172646075487, -0.006278835702687502, 0.014029049314558506, 0.1455453634262085, 0.006294438615441322, 0.14182882010936737, 0.12902706861495972, 0.005081805866211653, -0.035810038447380066, 0.11732704192399979, 0.18529188632965088, 0.020269285887479782, 0.10991...
null
null
null
a
{}
null
AriakimTaiyo/kumiko
[ "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
a
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
[ 0.024608636274933815, -0.026205500587821007, -0.009666500613093376, -0.10395516455173492, 0.08638657629489899, 0.059816278517246246, 0.01882290467619896, 0.020661840215325356, 0.23975107073783875, -0.005599027033895254, 0.1219947561621666, 0.0015615287702530622, -0.037353623658418655, 0.03...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-hausa2-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 1.2032 - Wer: 0.7237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1683 | 12.49 | 400 | 1.0279 | 0.7211 | | 0.0995 | 24.98 | 800 | 1.2032 | 0.7237 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-hausa2-demo-colab", "results": []}]}
automatic-speech-recognition
Arnold/wav2vec2-hausa2-demo-colab
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-hausa2-demo-colab ========================== This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 1.2032 * Wer: 0.7237 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon...
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* t...
[ 65, 158, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n...
[ -0.11954687535762787, 0.07906337082386017, -0.0029005450196564198, 0.05556979402899742, 0.12228918075561523, 0.007954642176628113, 0.1095215380191803, 0.15148285031318665, -0.09218437969684601, 0.0806608572602272, 0.09477271139621735, 0.09238191694021225, 0.06251394003629684, 0.10250744223...
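The `total_train_batch_size: 32` in the hyperparameter list above is not an independent setting: it is the per-device batch size multiplied by the gradient accumulation steps. A minimal sketch with the card's numbers:

```python
# Hyperparameters as listed in the wav2vec2-hausa2-demo-colab card.
train_batch_size = 16
gradient_accumulation_steps = 2

# Gradients are accumulated over several forward passes before each
# optimizer step, so the effective batch size is the product.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```

The second wav2vec2 card below follows the same rule (12 × 3 = 36).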
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-hausa2-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2993 - Wer: 0.4826 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.6e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 13 - gradient_accumulation_steps: 3 - total_train_batch_size: 36 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.1549 | 12.5 | 400 | 2.7289 | 1.0 | | 2.0566 | 25.0 | 800 | 0.4582 | 0.6768 | | 0.4423 | 37.5 | 1200 | 0.3037 | 0.5138 | | 0.2991 | 50.0 | 1600 | 0.2993 | 0.4826 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-hausa2-demo-colab", "results": []}]}
automatic-speech-recognition
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-large-xlsr-hausa2-demo-colab ===================================== This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 0.2993 * Wer: 0.4826 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 9.6e-05 * train\_batch\_size: 12 * eval\_batch\_size: 8 * seed: 13 * gradient\_accumulation\_steps: 3 * total\_train\_batch\_size: 36 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 400 * num\_epochs: 50 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.6e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 36\n* optimizer: Adam with betas=(0.9,0.999) and epsilo...
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.6e-05\n* ...
[ 65, 160, 4, 35 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.6e-05\...
[ -0.11712789535522461, 0.07729578018188477, -0.0032404859084635973, 0.04936771094799042, 0.1269652396440506, 0.00879079569131136, 0.09846438467502594, 0.14120250940322876, -0.08599630743265152, 0.08323643356561661, 0.0916893482208252, 0.09924250841140747, 0.06441248208284378, 0.113765537738...
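The Wer column in the tables above is word error rate: word-level edit distance between hypothesis and reference, divided by the reference length. A self-contained pure-Python sketch (the example strings are illustrative, not from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table for edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of three reference words -> WER of 1/3.
print(wer("one two three", "one too three"))
```

Libraries such as `jiwer` implement the same metric with normalization options.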
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2295 - Accuracy: 0.92 - F1: 0.9202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8187 | 1.0 | 250 | 0.3137 | 0.902 | 0.8983 | | 0.2514 | 2.0 | 500 | 0.2295 | 0.92 | 0.9202 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.92, "name": "Accuracy"}, {"type": "f1", "value": 0.9201604193183255, "name": "F1"}]}]}]}
text-classification
Aron/distilbert-base-uncased-finetuned-emotion
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-emotion ========================================= This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.2295 * Accuracy: 0.92 * F1: 0.9202 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn...
[ 67, 98, 4, 33 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* le...
[ -0.10365526378154755, 0.11108539253473282, -0.0026109113823622465, 0.1317654550075531, 0.16546793282032013, 0.045472968369722366, 0.1148209348320961, 0.12493137270212173, -0.08185860514640808, 0.032128069549798965, 0.10837704688310623, 0.1617085337638855, 0.02285127155482769, 0.09674810618...
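The 250 steps per epoch in the training table above are consistent with a train split of 16,000 examples at batch size 64 (the split size is an assumption from the standard `emotion` dataset, not stated in the card):

```python
import math

train_examples = 16000  # assumed size of the emotion train split
batch_size = 64         # train_batch_size from the card

# One optimizer step per batch, so steps per epoch is the batch count.
steps_per_epoch = math.ceil(train_examples / batch_size)
print(steps_per_epoch)  # 250, matching the card's step column
```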
null
null
transformers
# Okarin Bot
{"tags": ["conversational"]}
text-generation
ArtemisZealot/DialoGTP-small-Qkarin
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Okarin Bot
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 51 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.009697278961539268, 0.03208012506365776, -0.007204889785498381, 0.004809224978089333, 0.16726240515708923, 0.014898733235895634, 0.09765533357858658, 0.13672804832458496, -0.007841327227652073, -0.031050153076648712, 0.14490588009357452, 0.20411323010921478, -0.006439372431486845, 0.066...
null
null
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
text-generation
Aruden/DialoGPT-medium-harrypotterall
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
[ 51, 8 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model" ]
[ -0.0009023238671943545, 0.07815738022327423, -0.006546166725456715, 0.07792752981185913, 0.10655936598777771, 0.048972971737384796, 0.17639793455600739, 0.12185695022344589, 0.016568755730986595, -0.04774167761206627, 0.11647630482912064, 0.2130284160375595, -0.002118367003276944, 0.024608...
null
null
transformers
``` from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("ArvinZhuang/BiTAG-t5-large") tokenizer = AutoTokenizer.from_pretrained("ArvinZhuang/BiTAG-t5-large") text = "abstract: [your abstract]" # use 'title:' as the prefix for title_to_abs task. input_ids = tokenizer.encode(text, return_tensors='pt') outputs = model.generate( input_ids, do_sample=True, max_length=500, top_p=0.9, top_k=20, temperature=1, num_return_sequences=10, ) print("Output:\n" + 100 * '-') for i, output in enumerate(outputs): print("{}: {}".format(i+1, tokenizer.decode(output, skip_special_tokens=True))) ``` GitHub: https://github.com/ArvinZhuang/BiTAG
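The `generate` call above combines `top_k=20` and `top_p=0.9` sampling. As a toy sketch of what those two filters do (pure Python, not the library's actual implementation): keep only the `top_k` most likely tokens, then keep the smallest high-probability prefix whose cumulative mass reaches `top_p`.

```python
# Toy top-k / top-p (nucleus) filtering over a probability distribution,
# mirroring the top_k=20, top_p=0.9 arguments passed to generate() above.
def filter_top_k_top_p(probs, top_k, top_p):
    # Keep the top_k most likely token indices, highest probability first...
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)[:top_k]
    # ...then keep the smallest prefix whose cumulative mass reaches top_p.
    kept, total = [], 0.0
    for idx, p in ranked:
        kept.append(idx)
        total += p
        if total >= top_p:
            break
    return kept

probs = [0.5, 0.3, 0.1, 0.05, 0.05]
print(filter_top_k_top_p(probs, top_k=20, top_p=0.9))  # [0, 1, 2]
```

Sampling then draws the next token only from the surviving indices, renormalized.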
{"inference": {"parameters": {"do_sample": true, "max_length": 500, "top_p": 0.9, "top_k": 20, "temperature": 1, "num_return_sequences": 10}}, "widget": [{"text": "abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).", "example_title": "BERT abstract"}]}
text2text-generation
ielabgroup/BiTAG-t5-large
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GitHub: URL
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 48 ]
[ "passage: TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.01584368571639061, 0.001455417019315064, -0.00658801756799221, 0.0177968367934227, 0.18000324070453644, 0.01899094320833683, 0.1102970764040947, 0.13923293352127075, -0.029492201283574104, -0.031411342322826385, 0.1258108913898468, 0.215000182390213, -0.002026807749643922, 0.09281328320...
null
null
transformers
# Model Trained Using AutoNLP - Model: Google's Pegasus (https://huggingface.co/google/pegasus-xsum) - Problem type: Summarization - Model ID: 34558227 - CO2 Emissions (in grams): 137.60574081887984 - Spaces: https://huggingface.co/spaces/TitleGenerators/ArxivTitleGenerator - Dataset: arXiv Dataset (https://www.kaggle.com/Cornell-University/arxiv) - Data subset used: https://huggingface.co/datasets/AryanLala/autonlp-data-Scientific_Title_Generator ## Validation Metrics - Loss: 2.578599214553833 - Rouge1: 44.8482 - Rouge2: 24.4052 - RougeL: 40.1716 - RougeLsum: 40.1396 - Gen Len: 11.4675 ## Social - LinkedIn: https://www.linkedin.com/in/aryanlala/ - Twitter: https://twitter.com/AryanLala20 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/AryanLala/autonlp-Scientific_Title_Generator-34558227 ```
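A stdlib-only Python equivalent of the cURL command above can be sketched as follows (the URL is taken from the card; you still need a real API key before sending):

```python
import json
from urllib import request

# Same endpoint and payload as the cURL example in the card.
API_URL = "https://api-inference.huggingface.co/AryanLala/autonlp-Scientific_Title_Generator-34558227"
payload = json.dumps({"inputs": "I love AutoNLP"}).encode("utf-8")
req = request.Request(
    API_URL,
    data=payload,
    headers={
        "Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY",
        "Content-Type": "application/json",
    },
)
# response = request.urlopen(req)  # uncomment once a real key is set
print(req.get_method())  # urllib infers POST because data is attached
```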
{"language": "en", "tags": "autonlp", "datasets": ["AryanLala/autonlp-data-Scientific_Title_Generator"], "widget": [{"text": "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets."}], "co2_eq_emissions": 137.60574081887984}
text2text-generation
AryanLala/autonlp-Scientific_Title_Generator-34558227
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "en", "dataset:AryanLala/autonlp-data-Scientific_Title_Generator", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #pegasus #text2text-generation #autonlp #en #dataset-AryanLala/autonlp-data-Scientific_Title_Generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Trained Using AutoNLP - Model: Google's Pegasus (URL - Problem type: Summarization - Model ID: 34558227 - CO2 Emissions (in grams): 137.60574081887984 - Spaces: URL - Dataset: arXiv Dataset (URL - Data subset used: URL ## Validation Metrics - Loss: 2.578599214553833 - Rouge1: 44.8482 - Rouge2: 24.4052 - RougeL: 40.1716 - RougeLsum: 40.1396 - Gen Len: 11.4675 ## Social - LinkedIn: URL - Twitter: URL ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n- Model: Google's Pegasus (URL\n- Problem type: Summarization\n- Model ID: 34558227\n- CO2 Emissions (in grams): 137.60574081887984\n- Spaces: URL\n- Dataset: arXiv Dataset (URL\n- Data subset used: URL", "## Validation Metrics\n\n- Loss: 2.578599214553833\n- Rouge1: 44.8482\n- Rou...
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #en #dataset-AryanLala/autonlp-data-Scientific_Title_Generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Trained Using AutoNLP\n- Model: Google's Pegasus (URL\n- Problem type: Summarizatio...
[ 84, 74, 56, 10, 13 ]
[ "passage: TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #en #dataset-AryanLala/autonlp-data-Scientific_Title_Generator #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n# Model Trained Using AutoNLP\n- Model: Google's Pegasus (URL\n- Problem type: Summariza...
[ -0.11913996934890747, 0.2511729300022125, -0.003606410464271903, 0.07585025578737259, 0.08711470663547516, 0.03716585785150528, 0.13760927319526672, 0.08487968891859055, 0.11456042528152466, 0.06928487122058868, 0.08573422580957413, 0.060885220766067505, 0.04786363244056702, 0.220717042684...
null
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-parsbert-uncased-finetuned This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5596 | 1.0 | 515 | 3.2097 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
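The card above names Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08`. Purely as an illustration of those hyperparameters (a scalar sketch, not the Trainer's actual optimizer code), one Adam update looks like:

```python
# One Adam update step with the hyperparameters from the card:
# betas=(0.9, 0.999), epsilon=1e-08, learning_rate=2e-05.
def adam_step(param, grad, m, v, t, lr=2e-05, b1=0.9, b2=0.999, eps=1e-08):
    m = b1 * m + (1 - b1) * grad        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)           # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
print(p)  # ~1.0 - 2e-05: the first step moves by roughly lr * sign(grad)
```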
{"tags": ["generated_from_trainer"]}
fill-mask
Ashkanmh/bert-base-parsbert-uncased-finetuned
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bert-base-parsbert-uncased-finetuned ==================================== This model is a fine-tuned version of HooshvareLab/bert-base-parsbert-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 3.2045 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.10.0 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Trainin...
[ "TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_si...
[ 47, 98, 4, 34 ]
[ "passage: TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\...
[ -0.1051318570971489, 0.02749267965555191, -0.0016664782306179404, 0.1169574186205864, 0.1990557760000229, 0.03724491223692894, 0.11885788291692734, 0.09170103073120117, -0.11024725437164307, 0.03654031082987785, 0.1300671249628067, 0.14786748588085175, 0.0016968128038570285, 0.104617536067...
null
null
transformers
A Discord chatbot trained on the whole LiS script to simulate character speech
{"tags": ["conversational"]}
text-generation
Aspect11/DialoGPT-Medium-LiSBot
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
A Discord chatbot trained on the whole LiS script to simulate character speech
[]
[ "TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 56 ]
[ "passage: TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.028994612395763397, 0.03717942163348198, -0.007205516565591097, 0.004361928440630436, 0.14950066804885864, -0.013941794633865356, 0.11986828595399857, 0.1182805597782135, -0.03048190474510193, -0.010174466297030449, 0.14877668023109436, 0.1851094663143158, -0.013957205228507519, 0.09307...
null
null
transformers
# RinTohsaka bot
{"tags": ["conversational"]}
text-generation
Asuramaru/DialoGPT-small-rintohsaka
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# RinTohsaka bot
[ "# RinTohsaka bot" ]
[ "TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# RinTohsaka bot" ]
[ 56, 6 ]
[ "passage: TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# RinTohsaka bot" ]
[ -0.0005186268244870007, -0.0315963514149189, -0.0064608259126544, 0.01719084568321705, 0.14199034869670868, -0.014243904501199722, 0.1365937441587448, 0.10981365293264389, 0.014637871645390987, 0.011231489479541779, 0.12535503506660461, 0.1931908279657364, 0.008447866886854172, 0.156876266...
null
null
transformers
GPT-Glacier, a GPT-Neo 125M model fine-tuned on the Glacier2 Modding Discord server.
{}
text-generation
Atampy26/GPT-Glacier
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
GPT-Glacier, a GPT-Neo 125M model fine-tuned on the Glacier2 Modding Discord server.
[]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ 39 ]
[ "passage: TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
[ -0.031129082664847374, 0.010089954361319542, -0.005786326713860035, 0.002382909180596471, 0.17449840903282166, 0.03556443750858307, 0.05251007154583931, 0.13062667846679688, -0.03914913907647133, -0.02130643092095852, 0.14617420732975006, 0.1955074667930603, -0.02011914923787117, 0.1474472...
null
null
transformers
# Michael Scott DialoGPT Model
{"tags": ["conversational"]}
text-generation
Atchuth/DialoGPT-small-MichaelBot
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Michael Scott DialoGPT Model
[ "# Michael Scott DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Michael Scott DialoGPT Model" ]
[ 51, 8 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Michael Scott DialoGPT Model" ]
[ -0.05219185724854469, 0.09866782277822495, -0.005691746715456247, 0.014186694286763668, 0.1394561529159546, -0.001829843153245747, 0.16353429853916168, 0.11410007625818253, 0.0003006179176736623, -0.04741425812244415, 0.1353054791688919, 0.15719813108444214, -0.014070987701416016, 0.088142...
null
null
null
Placeholder
{}
null
Atlasky/Turkish-Negator
[ "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
Placeholder
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
[ 0.024608636274933815, -0.026205500587821007, -0.009666500613093376, -0.10395516455173492, 0.08638657629489899, 0.059816278517246246, 0.01882290467619896, 0.020661840215325356, 0.23975107073783875, -0.005599027033895254, 0.1219947561621666, 0.0015615287702530622, -0.037353623658418655, 0.03...
null
null
transformers
# MyAwesomeModel
{"tags": ["conversational"]}
text-generation
Augustvember/WOKKAWOKKA
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# MyAwesomeModel
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 51 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.009697278961539268, 0.03208012506365776, -0.007204889785498381, 0.004809224978089333, 0.16726240515708923, 0.014898733235895634, 0.09765533357858658, 0.13672804832458496, -0.007841327227652073, -0.031050153076648712, 0.14490588009357452, 0.20411323010921478, -0.006439372431486845, 0.066...
null
null
transformers
# MyAwesomeModel
{"tags": ["conversational"]}
text-generation
Augustvember/test
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# MyAwesomeModel
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 51 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.009697278961539268, 0.03208012506365776, -0.007204889785498381, 0.004809224978089333, 0.16726240515708923, 0.014898733235895634, 0.09765533357858658, 0.13672804832458496, -0.007841327227652073, -0.031050153076648712, 0.14490588009357452, 0.20411323010921478, -0.006439372431486845, 0.066...
null
null
transformers
# MyAwesomeModel
{"tags": ["conversational"]}
text-generation
Augustvember/wokka5
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# MyAwesomeModel
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 51 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.009697278961539268, 0.03208012506365776, -0.007204889785498381, 0.004809224978089333, 0.16726240515708923, 0.014898733235895634, 0.09765533357858658, 0.13672804832458496, -0.007841327227652073, -0.031050153076648712, 0.14490588009357452, 0.20411323010921478, -0.006439372431486845, 0.066...
null
null
transformers
# MyAwesomeModel
{"tags": ["conversational"]}
text-generation
Augustvember/wokkabottest2
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# MyAwesomeModel
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 51 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ -0.009697278961539268, 0.03208012506365776, -0.007204889785498381, 0.004809224978089333, 0.16726240515708923, 0.014898733235895634, 0.09765533357858658, 0.13672804832458496, -0.007841327227652073, -0.031050153076648712, 0.14490588009357452, 0.20411323010921478, -0.006439372431486845, 0.066...
null
null
null
https://www.geogebra.org/m/bbuczchu https://www.geogebra.org/m/xwyasqje https://www.geogebra.org/m/mx2cqkwr https://www.geogebra.org/m/tkqqqthm https://www.geogebra.org/m/asdaf9mj https://www.geogebra.org/m/ywuaj7p5 https://www.geogebra.org/m/jkfkayj3 https://www.geogebra.org/m/hptnn7ar https://www.geogebra.org/m/de9cwmrf https://www.geogebra.org/m/yjc5hdep https://www.geogebra.org/m/nm8r56w5 https://www.geogebra.org/m/j7wfcpxj
{}
null
Aurora/asdawd
[ "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
URL URL URL URL URL URL URL URL URL URL URL URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
[ 0.024608636274933815, -0.026205500587821007, -0.009666500613093376, -0.10395516455173492, 0.08638657629489899, 0.059816278517246246, 0.01882290467619896, 0.020661840215325356, 0.23975107073783875, -0.005599027033895254, 0.1219947561621666, 0.0015615287702530622, -0.037353623658418655, 0.03...
null
null
null
https://community.afpglobal.org/network/members/profile?UserKey=b0b38adc-86c7-4d30-85c6-ac7d15c5eeb0 https://community.afpglobal.org/network/members/profile?UserKey=f4ddef89-b508-4695-9d1e-3d4d1a583279 https://community.afpglobal.org/network/members/profile?UserKey=36081479-5e7b-41ba-8370-ecf72989107a https://community.afpglobal.org/network/members/profile?UserKey=e1a88332-be7f-4997-af4e-9fcb7bb366da https://community.afpglobal.org/network/members/profile?UserKey=4738b405-2017-4025-9e5f-eadbf7674840 https://community.afpglobal.org/network/members/profile?UserKey=eb96d91c-31ae-46e1-8297-a3c8551f2e6a https://u.mpi.org/network/members/profile?UserKey=9867e2d9-d22a-4dab-8bcf-3da5c2f30745 https://u.mpi.org/network/members/profile?UserKey=5af232f2-a66e-438f-a5ab-9768321f791d https://community.afpglobal.org/network/members/profile?UserKey=481305df-48ea-4c50-bca4-a82008efb427 https://u.mpi.org/network/members/profile?UserKey=039fbb91-52c6-40aa-b58d-432fb4081e32 https://www.geogebra.org/m/jkfkayj3 https://www.geogebra.org/m/hptnn7ar https://www.geogebra.org/m/de9cwmrf https://www.geogebra.org/m/yjc5hdep https://www.geogebra.org/m/nm8r56w5 https://www.geogebra.org/m/j7wfcpxj https://www.geogebra.org/m/bbuczchu https://www.geogebra.org/m/xwyasqje https://www.geogebra.org/m/mx2cqkwr https://www.geogebra.org/m/tkqqqthm https://www.geogebra.org/m/asdaf9mj https://www.geogebra.org/m/ywuaj7p5
{}
null
Aurora/community.afpglobal
[ "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
URL URL URL URL URL URL https://u.URL https://u.URL URL https://u.URL URL URL URL URL URL URL URL URL URL URL URL URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
[ 0.024608636274933815, -0.026205500587821007, -0.009666500613093376, -0.10395516455173492, 0.08638657629489899, 0.059816278517246246, 0.01882290467619896, 0.020661840215325356, 0.23975107073783875, -0.005599027033895254, 0.1219947561621666, 0.0015615287702530622, -0.037353623658418655, 0.03...
null
null
transformers
# Blitzo DialoGPT Model
{"tags": ["conversational"]}
text-generation
AvatarXD/DialoGPT-medium-Blitzo
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Blitzo DialoGPT Model
[ "# Blitzo DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Blitzo DialoGPT Model" ]
[ 51, 9 ]
[ "passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Blitzo DialoGPT Model" ]
[ -0.04078742861747742, 0.050970129668712616, -0.006859891582280397, 0.015004501678049564, 0.12450626492500305, -0.010971833020448685, 0.15107408165931702, 0.11505024880170822, 0.004037887789309025, -0.022774508222937584, 0.11022014170885086, 0.196659654378891, -0.00424970593303442, 0.147848...