pipeline_tag string | library_name string | text string | metadata string | id string | last_modified null | tags list | sha null | created_at string | arxiv list | languages list | tags_str string | text_str string | text_lists list | processed_texts list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-classification | transformers |
## hate-roberta-hasoc-hindi
hate-roberta-hasoc-hindi is a multi-class hate speech model fine-tuned on the Hindi HASOC Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200).
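A minimal usage sketch (not part of the original card) is shown below. It assumes the repository id `l3cube-pune/hate-multi-roberta-hasoc-hindi` from this record and that the checkpoint exposes generic `LABEL_0`..`LABEL_3` names, which are remapped with the table above; the Hindi input is only an illustration.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="l3cube-pune/hate-multi-roberta-hasoc-hindi",
)

# Remap generic label names to the classes documented above.
label_map = {"LABEL_0": "None", "LABEL_1": "Offensive", "LABEL_2": "Hate", "LABEL_3": "Profane"}

prediction = classifier("यह एक उदाहरण वाक्य है")[0]
print(label_map.get(prediction["label"], prediction["label"]), round(prediction["score"], 3))
```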
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
``` | {"language": "hi", "license": "cc-by-4.0", "tags": ["roberta"], "datasets": ["HASOC 2021"], "widget": [{"text": "I like you. </s></s> I love you."}]} | l3cube-pune/hate-multi-roberta-hasoc-hindi | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"text-classification",
"hi",
"arxiv:2110.12200",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2110.12200"
] | [
"hi"
] | TAGS
#transformers #pytorch #tf #safetensors #roberta #text-classification #hi #arxiv-2110.12200 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
## hate-roberta-hasoc-hindi
hate-roberta-hasoc-hindi is a multi-class hate speech model fine-tuned on Hindi Hasoc Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane.
More details on the dataset, models, and baseline results can be found in our [paper] (URL
| [
"## hate-roberta-hasoc-hindi\n\nhate-roberta-hasoc-hindi is a multi-class hate speech model fine-tuned on Hindi Hasoc Hate Speech Dataset 2021.\nThe label mappings are 0 -> None, 1 -> Offensive, 2 -> Hate, 3 -> Profane.\n\nMore details on the dataset, models, and baseline results can be found in our [paper] (URL"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #roberta #text-classification #hi #arxiv-2110.12200 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## hate-roberta-hasoc-hindi\n\nhate-roberta-hasoc-hindi is a multi-class hate speech model fine-tuned on Hindi Hasoc Hate Speech Datase... |
text-classification | transformers |
## hate-roberta-hasoc-hindi
hate-roberta-hasoc-hindi is a binary hate speech model fine-tuned on the Hindi HASOC Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Hate.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200).
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
``` | {"language": "hi", "license": "cc-by-4.0", "tags": ["roberta"], "datasets": ["HASOC 2021"], "widget": [{"text": "I like you. </s></s> I love you."}]} | l3cube-pune/hate-roberta-hasoc-hindi | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"text-classification",
"hi",
"arxiv:2110.12200",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2110.12200"
] | [
"hi"
] | TAGS
#transformers #pytorch #tf #safetensors #roberta #text-classification #hi #arxiv-2110.12200 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
## hate-roberta-hasoc-hindi
hate-roberta-hasoc-hindi is a binary hate speech model fine-tuned on Hindi Hasoc Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Hate.
More details on the dataset, models, and baseline results can be found in our [paper] (URL
| [
"## hate-roberta-hasoc-hindi\n\nhate-roberta-hasoc-hindi is a binary hate speech model fine-tuned on Hindi Hasoc Hate Speech Dataset 2021.\nThe label mappings are 0 -> None, 1 -> Hate.\n\nMore details on the dataset, models, and baseline results can be found in our [paper] (URL"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #roberta #text-classification #hi #arxiv-2110.12200 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## hate-roberta-hasoc-hindi\n\nhate-roberta-hasoc-hindi is a binary hate speech model fine-tuned on Hindi Hasoc Hate Speech Dataset 202... |
fill-mask | transformers |
## MahaAlBERT
MahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).
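The card does not include a usage snippet; a minimal fill-mask sketch is given below. It assumes the repository id `l3cube-pune/marathi-albert` from this record, and the Marathi prompt ("I ___ in Pune.") is only an illustration.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/marathi-albert")
mask = fill_mask.tokenizer.mask_token  # query the mask token rather than hard-coding it

for prediction in fill_mask(f"मी पुण्यात {mask}."):
    print(prediction["token_str"], round(prediction["score"], 3))
```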
```
@InProceedings{joshi:2022:WILDRE6,
author = {Joshi, Raviraj},
title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {97--101}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br> | {"language": "mr", "license": "cc-by-4.0", "datasets": ["L3Cube-MahaCorpus"]} | l3cube-pune/marathi-albert | null | [
"transformers",
"pytorch",
"albert",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2202.01159"
] | [
"mr"
] | TAGS
#transformers #pytorch #albert #fill-mask #mr #dataset-L3Cube-MahaCorpus #arxiv-2202.01159 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
## MahaAlBERT
MahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link] (URL
More details on the dataset, models, and baseline results can be found in our [paper] (URL
Other Monolingual Indic BERT models are listed below: <br>
<a href='URL Marathi BERT </a> <br>
<a href='URL Marathi RoBERTa </a> <br>
<a href='URL Marathi AlBERT </a> <br>
<a href='URL Hindi BERT </a> <br>
<a href='URL Hindi RoBERTa </a> <br>
<a href='URL Hindi AlBERT </a> <br>
<a href='URL Dev BERT </a> <br>
<a href='URL Dev RoBERTa </a> <br>
<a href='URL Dev AlBERT </a> <br>
<a href='URL Kannada BERT </a> <br>
<a href='URL Telugu BERT </a> <br>
<a href='URL Malayalam BERT </a> <br>
<a href='URL Tamil BERT </a> <br>
<a href='URL Gujarati BERT </a> <br>
<a href='URL Oriya BERT </a> <br>
<a href='URL Bengali BERT </a> <br>
<a href='URL Punjabi BERT </a> <br>
<a href='URL Assamese BERT </a> <br> | [
"## MahaAlBERT\nMahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets. \n[dataset link] (URL\n\nMore details on the dataset, models, and baseline results can be found in our [paper] (URL\n\n\n\nOther Monolingual Indic BERT models are listed below... | [
"TAGS\n#transformers #pytorch #albert #fill-mask #mr #dataset-L3Cube-MahaCorpus #arxiv-2202.01159 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## MahaAlBERT\nMahaAlBERT is a Marathi AlBERT model trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datase... |
fill-mask | transformers |
## MahaBERT
A new version of this model is available here: https://huggingface.co/l3cube-pune/marathi-bert-v2
MahaBERT is a Marathi BERT model. It is a multilingual BERT (bert-base-multilingual-cased) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).
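No usage snippet ships with this card either; the sketch below shows direct masked-token prediction with the `transformers` Auto classes. It assumes the repository id `l3cube-pune/marathi-bert` from this record, and the prompt ("Pune is a city in the state of ___.") is only an illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("l3cube-pune/marathi-bert")
model = AutoModelForMaskedLM.from_pretrained("l3cube-pune/marathi-bert")

text = f"पुणे हे {tokenizer.mask_token} राज्यातील एक शहर आहे."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Top-5 candidates for the masked position.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```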
```
@InProceedings{joshi:2022:WILDRE6,
author = {Joshi, Raviraj},
title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {97--101}
}
``` | {"language": "mr", "license": "cc-by-4.0", "datasets": ["L3Cube-MahaCorpus"]} | l3cube-pune/marathi-bert | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2202.01159"
] | [
"mr"
] | TAGS
#transformers #pytorch #safetensors #bert #fill-mask #mr #dataset-L3Cube-MahaCorpus #arxiv-2202.01159 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
## MahaBERT
New version of this model is available here: URL
MahaBERT is a Marathi BERT model. It is a multilingual BERT (bert-base-multilingual-cased) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link] (URL
More details on the dataset, models, and baseline results can be found in our [paper] (URL
| [
"## MahaBERT\n\nNew version of this model is available here: URL\n\nMahaBERT is a Marathi BERT model. It is a multilingual BERT (bert-base-multilingual-cased) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets. \n[dataset link] (URL\n\nMore details on the dataset, models... | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #mr #dataset-L3Cube-MahaCorpus #arxiv-2202.01159 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## MahaBERT\n\nNew version of this model is available here: URL\n\nMahaBERT is a Marathi BERT model. It is a multilingual BER... |
fill-mask | transformers |
## MahaRoBERTa
MahaRoBERTa is a Marathi RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).
```
@InProceedings{joshi:2022:WILDRE6,
author = {Joshi, Raviraj},
title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {97--101}
}
```
Other Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/assamese-bert'> Assamese BERT </a> <br> | {"language": "mr", "license": "cc-by-4.0", "datasets": ["L3Cube-MahaCorpus"]} | l3cube-pune/marathi-roberta | null | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2202.01159"
] | [
"mr"
] | TAGS
#transformers #pytorch #safetensors #xlm-roberta #fill-mask #mr #dataset-L3Cube-MahaCorpus #arxiv-2202.01159 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
## MahaRoBERTa
MahaRoBERTa is a Marathi RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[dataset link] (URL
More details on the dataset, models, and baseline results can be found in our [paper] (URL
Other Monolingual Indic BERT models are listed below: <br>
<a href='URL Marathi BERT </a> <br>
<a href='URL Marathi RoBERTa </a> <br>
<a href='URL Marathi AlBERT </a> <br>
<a href='URL Hindi BERT </a> <br>
<a href='URL Hindi RoBERTa </a> <br>
<a href='URL Hindi AlBERT </a> <br>
<a href='URL Dev BERT </a> <br>
<a href='URL Dev RoBERTa </a> <br>
<a href='URL Dev AlBERT </a> <br>
<a href='URL Kannada BERT </a> <br>
<a href='URL Telugu BERT </a> <br>
<a href='URL Malayalam BERT </a> <br>
<a href='URL Tamil BERT </a> <br>
<a href='URL Gujarati BERT </a> <br>
<a href='URL Oriya BERT </a> <br>
<a href='URL Bengali BERT </a> <br>
<a href='URL Punjabi BERT </a> <br>
<a href='URL Assamese BERT </a> <br> | [
"## MahaRoBERTa\nMahaRoBERTa is a Marathi RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tuned on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets. \n[dataset link] (URL\n\nMore details on the dataset, models, and baseline results can be found in our [paper] (UR... | [
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #fill-mask #mr #dataset-L3Cube-MahaCorpus #arxiv-2202.01159 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## MahaRoBERTa\nMahaRoBERTa is a Marathi RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tun... |
text-generation | transformers |
# <3 | {"tags": ["conversational"]} | l41n/c3rbs | null | [
"transformers",
"pytorch",
"conversational",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #conversational #endpoints_compatible #region-us
|
# <3 | [
"# <3"
] | [
"TAGS\n#transformers #pytorch #conversational #endpoints_compatible #region-us \n",
"# <3"
] |
text-generation | transformers | Base model: [microsoft/DialoGPT-large](https://huggingface.co/microsoft/DialoGPT-large)
Fine-tuned for dialogue response generation on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019)
Three additional special tokens were added during the fine-tuning process (see the usage sketch after this list):
- `<|pad|>` padding token
- `<|user|>` speaker control token to prompt user responses
- `<|system|>` speaker control token to prompt system responses
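A hedged generation sketch is shown below. The exact turn layout used during fine-tuning is not documented on this card, so the prompt format (alternating `<|user|>`/`<|system|>` markers, with generation prompted by `<|system|>`) and the assumption that `<|pad|>` is registered as the tokenizer's pad token are both assumptions, not the authors' reference usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LACAI/DialoGPT-large-PFG")
model = AutoModelForCausalLM.from_pretrained("LACAI/DialoGPT-large-PFG")

# Assumed layout: one user turn, then prompt the system's (persuader's) reply.
prompt = "<|user|>Hi! What is this charity about?<|system|>"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.pad_token_id,
)
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```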
The following Dialogues were excluded:
- Those with donation amounts outside of the task range of [$0, $2].
- Those where a donation of 0 was made at the end of the task but a non-zero amount was pledged in the dialogue.
- Those with more than 800 words.
Stats:
- Training set: 519 dialogues
- Validation set: 58 dialogues
- ~20 utterances per dialogue | {} | LACAI/DialoGPT-large-PFG | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Base model: microsoft/DialoGPT-large
Fine tuned for dialogue response generation on the Persuasion For Good Dataset (Wang et al., 2019)
Three additional special tokens were added during the fine-tuning process:
- <|pad|> padding token
- <|user|> speaker control token to prompt user responses
- <|system|> speaker control token to prompt system responses
The following Dialogues were excluded:
- Those with donation amounts outside of the task range of [$0, $2].
- Those where a donation of 0 was made at the end of the task but a non-zero amount was pledged in the dialogue.
- Those with more than 800 words.
Stats:
- Training set: 519 dialogues
- Validation set: 58 dialogues
- ~20 utterances per dialogue | [] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Base model: [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small)
Fine-tuned for dialogue response generation on the [Persuasion For Good Dataset](https://gitlab.com/ucdavisnlp/persuasionforgood) (Wang et al., 2019)
Three additional special tokens were added during the fine-tuning process:
- `<|pad|>` padding token
- `<|user|>` speaker control token to prompt user responses
- `<|system|>` speaker control token to prompt system responses
The following Dialogues were excluded:
- Those with donation amounts outside of the task range of [$0, $2].
- Those where a donation of 0 was made at the end of the task but a non-zero amount was pledged in the dialogue.
- Those with more than 800 words.
Stats:
- Training set: 519 dialogues
- Validation set: 58 dialogues
- ~20 utterances per dialogue | {} | LACAI/DialoGPT-small-PFG | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Base model: microsoft/DialoGPT-small
Fine tuned for dialogue response generation on the Persuasion For Good Dataset (Wang et al., 2019)
Three additional special tokens were added during the fine-tuning process:
- <|pad|> padding token
- <|user|> speaker control token to prompt user responses
- <|system|> speaker control token to prompt system responses
The following Dialogues were excluded:
- Those with donation amounts outside of the task range of [$0, $2].
- Those where a donation of 0 was made at the end of the task but a non-zero amount was pledged in the dialogue.
- Those with more than 800 words.
Stats:
- Training set: 519 dialogues
- Validation set: 58 dialogues
- ~20 utterances per dialogue | [] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Base model: [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small)
Fine-tuned for dialogue response generation on the [Schema Guided Dialogue Dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue) (Rastogi et al., 2019)
Three additional special tokens were added during the fine-tuning process:
- `<|pad|>` padding token
- `<|user|>` speaker control token to prompt user responses
- `<|system|>` speaker control token to prompt system responses
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Base model: microsoft/DialoGPT-small
Fine tuned for dialogue response generation on the Schema Guided Dialogue Dataset (Rastogi et al., 2019)
Three additional special tokens were added during the fine-tuning process:
- <|pad|> padding token
- <|user|> speaker control token to prompt user responses
- <|system|> speaker control token to prompt system responses | [] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Base model: [gpt2-xl](https://huggingface.co/gpt2-xl)
Domain-adapted for dialogue response and narrative generation on a [narrative-aligned variant](https://github.com/AbrahamSanders/gutenberg-dialog#download-narrative-aligned-datasets) of the [Gutenberg Dialogue Dataset (Csaky & Recski, 2021)](https://aclanthology.org/2021.eacl-main.11.pdf)
Fine-tuned for dialogue response generation on [Persuasion For Good (Wang et al., 2019)](https://aclanthology.org/P19-1566.pdf) ([dataset](https://gitlab.com/ucdavisnlp/persuasionforgood)) | {} | LACAI/gpt2-xl-dialog-narrative-persuasion | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Base model: gpt2-xl
Domain-adapted for dialogue response and narrative generation on a narrative-aligned variant of the Gutenberg Dialogue Dataset (Csaky & Recski, 2021)
Fine-tuned for dialogue response generation on Persuasion For Good (Wang et al., 2019) (dataset) | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_mlm
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
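The training script itself is not linked from this card; the snippet below is only a hedged reconstruction of `TrainingArguments` equivalent to the list above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output_mlm",        # name taken from this card's title
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # yields the total train batch size of 32
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```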
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.5832 | 0.19 | 15000 | 1.4992 |
| 1.5325 | 0.39 | 30000 | 1.4653 |
| 1.4979 | 0.58 | 45000 | 1.4359 |
| 1.4715 | 0.77 | 60000 | 1.4039 |
| 1.4448 | 0.97 | 75000 | 1.3877 |
| 1.4191 | 1.16 | 90000 | 1.3603 |
| 1.3988 | 1.35 | 105000 | 1.3425 |
| 1.3699 | 1.54 | 120000 | 1.3230 |
| 1.3493 | 1.74 | 135000 | 1.3012 |
| 1.3201 | 1.93 | 150000 | 1.2773 |
| 1.2993 | 2.12 | 165000 | 1.2617 |
| 1.2745 | 2.32 | 180000 | 1.2490 |
| 1.2614 | 2.51 | 195000 | 1.2283 |
| 1.2424 | 2.7 | 210000 | 1.2152 |
| 1.2296 | 2.9 | 225000 | 1.2052 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "output_mlm", "results": []}]} | LACAI/roberta-large-dialog-narrative | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
| output\_mlm
===========
This model is a fine-tuned version of roberta-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2024
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=... | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* e... |
text-generation | transformers | # Peter from Your Boyfriend Game. | {"tags": ["conversational"]} | lain2/Peterbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Peter from Your Boyfriend Game. | [
"# Peter from Your Boyfriend Game."
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Peter from Your Boyfriend Game."
] |
text-to-speech | espnet |
## ESPnet2 TTS model
### `lakahaga/novel_reading_tts`
This model was trained by lakahaga using novelspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 9827dfe37f69e8e55f902dc4e340de5108596311
pip install -e .
cd egs2/novelspeech/tts1
./run.sh --skip_data_prep false --skip_train true --download_model lakahaga/novel_reading_tts
```
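For inference outside the recipe, a hedged Python sketch follows. It assumes `espnet_model_zoo` is installed so the Hugging Face tag resolves; note that this model was trained on phoneme inputs (`token_type: phn`, `g2p: null`) over a multi-speaker corpus, so real use likely requires the recipe's own text preprocessing and a speaker id, and the plain-text call below only illustrates the API shape.

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("lakahaga/novel_reading_tts")

# Plain text is shown for illustration only; see the preprocessing caveat above.
output = text2speech("안녕하세요")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```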
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_tacotron_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 34177
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 25600000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/text_shape.phn
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/speech_shape
valid_shape_file:
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/text_shape.phn
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/energy.scp
- energy
- npy
- - dump/raw/tr_no_dev/utt2sid
- sids
- text_int
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/energy.scp
- energy
- npy
- - dump/raw/dev/utt2sid
- sids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 1.0
scheduler: noamlr
scheduler_conf:
    model_size: 384
    warmup_steps: 4000
token_list:
- <blank>
- <unk>
- '='
- _
- A
- Y
- N
- O
- E
- U
- L
- G
- S
- D
- M
- J
- H
- B
- ZERO
- TWO
- C
- .
- Q
- ','
- P
- T
- SEVEN
- X
- W
- THREE
- ONE
- NINE
- K
- EIGHT
- '@'
- '!'
- Z
- '?'
- F
- SIX
- FOUR
- '#'
- $
- +
- '%'
- FIVE
- '~'
- AND
- '*'
- '...'
- ''
- ^
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: null
feats_extract: fbank
feats_extract_conf:
    n_fft: 1024
    hop_length: 256
    win_length: null
    fs: 22050
    fmin: 80
    fmax: 7600
    n_mels: 80
normalize: global_mvn
normalize_conf:
    stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/feats_stats.npz
tts: fastspeech2
tts_conf:
    adim: 384
    aheads: 2
    elayers: 4
    eunits: 1536
    dlayers: 4
    dunits: 1536
    positionwise_layer_type: conv1d
    positionwise_conv_kernel_size: 3
    duration_predictor_layers: 2
    duration_predictor_chans: 256
    duration_predictor_kernel_size: 3
    postnet_layers: 5
    postnet_filts: 5
    postnet_chans: 256
    use_masking: true
    encoder_normalize_before: true
    decoder_normalize_before: true
    reduction_factor: 1
    encoder_type: conformer
    decoder_type: conformer
    conformer_pos_enc_layer_type: rel_pos
    conformer_self_attn_layer_type: rel_selfattn
    conformer_activation_type: swish
    use_macaron_style_in_conformer: true
    use_cnn_in_conformer: true
    conformer_enc_kernel_size: 7
    conformer_dec_kernel_size: 31
    init_type: xavier_uniform
    transformer_enc_dropout_rate: 0.2
    transformer_enc_positional_dropout_rate: 0.2
    transformer_enc_attn_dropout_rate: 0.2
    transformer_dec_dropout_rate: 0.2
    transformer_dec_positional_dropout_rate: 0.2
    transformer_dec_attn_dropout_rate: 0.2
    pitch_predictor_layers: 5
    pitch_predictor_chans: 256
    pitch_predictor_kernel_size: 5
    pitch_predictor_dropout: 0.5
    pitch_embed_kernel_size: 1
    pitch_embed_dropout: 0.0
    stop_gradient_from_pitch_predictor: true
    energy_predictor_layers: 2
    energy_predictor_chans: 256
    energy_predictor_kernel_size: 3
    energy_predictor_dropout: 0.5
    energy_embed_kernel_size: 1
    energy_embed_dropout: 0.0
    stop_gradient_from_energy_predictor: false
pitch_extract: dio
pitch_extract_conf:
    fs: 22050
    n_fft: 1024
    hop_length: 256
    f0max: 400
    f0min: 80
    reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
    stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
    fs: 22050
    n_fft: 1024
    hop_length: 256
    win_length: null
    reduction_factor: 1
energy_normalize: global_mvn
energy_normalize_conf:
    stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/energy_stats.npz
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "ko", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["novelspeech"]} | lakahaga/novel_reading_tts | null | [
"espnet",
"audio",
"text-to-speech",
"ko",
"dataset:novelspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"1804.00015"
] | [
"ko"
] | TAGS
#espnet #audio #text-to-speech #ko #dataset-novelspeech #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us
|
## ESPnet2 TTS model
### 'lakahaga/novel_reading_tts'
This model was trained by lakahaga using novelspeech recipe in espnet.
### Demo: How to use in ESPnet2
## TTS config
<details><summary>expand</summary>
</details>
### Citing ESPnet
or arXiv:
| [
"## ESPnet2 TTS model",
"### 'lakahaga/novel_reading_tts'\n\nThis model was trained by lakahaga using novelspeech recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## TTS config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] | [
"TAGS\n#espnet #audio #text-to-speech #ko #dataset-novelspeech #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n",
"## ESPnet2 TTS model",
"### 'lakahaga/novel_reading_tts'\n\nThis model was trained by lakahaga using novelspeech recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## TTS conf... |
text-classification | transformers |
# distilbert-base-multilingual-cased-vietnamese-topicifier
## About
Fine-tuned from `distilbert-base-multilingual-cased` on a tiny dataset of Vietnamese topics.
## Usage
Try entering a message to predict what topic is being discussed. For example:
```
# Photography ("My passion is photography")
Đam mê của tôi là nhiếp ảnh
# World War I ("Have you ever heard of the great world war?")
Bạn đã từng nghe về cuộc đại thế chiến ?
```
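A minimal pipeline sketch (not part of the original card) using the first example above; the returned label names are whatever `id2label` mapping ships with the checkpoint.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier",
)

# "My passion is photography" -> expected to surface a photography-style topic label.
print(classifier("Đam mê của tôi là nhiếp ảnh"))
```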
## Other
The model was fine-tuned on a tiny dataset; don't use it in a production setting. | {"language": ["vi"], "license": ["mit"], "tags": ["vietnamese", "topicifier", "multilingual", "tiny"], "pipeline_tag": "text-classification", "widget": [{"text": "\u0110am m\u00ea c\u1ee7a t\u00f4i l\u00e0 nhi\u1ebfp \u1ea3nh"}]} | lamhieu/distilbert-base-multilingual-cased-vietnamese-topicifier | null | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"vietnamese",
"topicifier",
"multilingual",
"tiny",
"vi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"vi"
] | TAGS
#transformers #pytorch #safetensors #distilbert #text-classification #vietnamese #topicifier #multilingual #tiny #vi #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-base-multilingual-cased-vietnamese-topicifier
## About
Fine-tuning from 'distilbert-base-multilingual-cased' with a tiny dataset about Vietnamese topics.
## Usage
Try entering a message to predict what topic is being discussed. For example:
## Other
The model was fine-tuning with a tiny dataset, don't use it for a product. | [
"# distilbert-base-multilingual-cased-vietnamese-topicifier",
"## About\n\nFine-tuning from 'distilbert-base-multilingual-cased' with a tiny dataset about Vietnamese topics.",
"## Usage\n\nTry entering a message to predict what topic is being discussed. For example:",
"## Other\n\nThe model was fine-tuning wi... | [
"TAGS\n#transformers #pytorch #safetensors #distilbert #text-classification #vietnamese #topicifier #multilingual #tiny #vi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-base-multilingual-cased-vietnamese-topicifier",
"## About\n\nFine-tuning from 'distilbert-base-multil... |
text2text-generation | transformers |
# Gemini
For in-depth understanding of our model and methods, please see our blog [here](https://www.describe-ai.com/gemini)
## Model description
Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarizing/explaining short to medium code snippets in:
- Python
- Javascript (mostly vanilla JS, however, it can handle frameworks like React as well)
- Java
- Ruby
- Go
And outputs a description in English.
## Intended uses
Gemini without any additional fine-tuning is capable of explaining code in a sentence or two and typically performs best in Python and Javascript. We recommend using Gemini for either simple code explanation, documentation or producing more synthetic data to improve its explanations.
### How to use
You can use this model directly with a pipeline for Text2Text generation, as shown below:
```python
from transformers import pipeline, set_seed
summarizer = pipeline('text2text-generation', model='describeai/gemini')
code = "print('hello world!')"
response = summarizer(code, max_length=100, num_beams=3)
print("Summarized code: " + response[0]['generated_text'])
```
Which should yield something along the lines of:
```
Summarized code: The following code is greeting the world.
```
### Model sizes
- Gemini (this repo): 770 Million Parameters
- Gemini-Small - 220 Million Parameters
### Limitations
Typically, Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect with more training data, this could be circumvented and will produce better results.
### About Us
At Describe.ai, we are focused on building Artificial Intelligence systems that can understand language as well as humans. While it is a long path, we plan to contribute our findings and our API to the Open Source community. | {"language": "en", "license": "mit", "tags": ["Explain code", "Code Summarization", "Summarization"]} | describeai/gemini | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Explain code",
"Code Summarization",
"Summarization",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #Explain code #Code Summarization #Summarization #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Gemini
For in-depth understanding of our model and methods, please see our blog here
## Model description
Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarization/explaining short to medium code snippets in:
- Python
- Javascript (mostly vanilla JS, however, it can handle frameworks like React as well)
- Java
- Ruby
- Go
And outputs a description in English.
## Intended uses
Gemini without any additional fine-tuning is capable of explaining code in a sentence or two and typically performs best in Python and Javascript. We recommend using Gemini for either simple code explanation, documentation or producing more synthetic data to improve its explanations.
### How to use
You can use this model directly with a pipeline for Text2Text generation, as shown below:
Which should yield something along the lines of:
### Model sizes
- Gemini (this repo): 770 Million Parameters
- Gemini-Small - 220 Million Parameters
### Limitations
Typically, Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect with more training data, this could be circumvented and will produce better results.
### About Us
A URL, we are focused on building Artificial Intelligence systems that can understand language as well as humans. While a long path, we plan to contribute our findings to our API to the Open Source community. | [
"# Gemini\n\nFor in-depth understanding of our model and methods, please see our blog here",
"## Model description\n\nGemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetical... | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #Explain code #Code Summarization #Summarization #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Gemini\n\nFor in-depth understanding of our model and methods, please see our blog here",
... |
text2text-generation | transformers |
# Gemini
For in-depth understanding of our model and methods, please see our blog [here](https://www.describe-ai.com/gemini)
## Model description
Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarizing/explaining short to medium code snippets in:
- Python
- Javascript (mostly vanilla JS, however, it can handle frameworks like React as well)
- Java
- Ruby
- Go
And outputs a description in English.
## Intended uses & limitations
Gemini without any additional fine-tuning is capable of explaining code in a sentence or two and typically performs best in Python and Javascript. We recommend using Gemini for either simple code explanation, documentation or producing more synthetic data to improve its explanations.
### How to use
You can use this model directly with a pipeline for Text2Text generation, as shown below:
```python
from transformers import pipeline, set_seed
summarizer = pipeline('text2text-generation', model='describeai/gemini-small')
code = "print('hello world!')"
response = summarizer(code, max_length=100, num_beams=3)
print("Summarized code: " + response[0]['generated_text'])
```
Which should yield something along the lines of:
```
Summarized code: The following code is greeting the world.
```
### Model sizes
- Gemini: 770 Million Parameters
- Gemini-Small (this repo): 220 Million Parameters
### Limitations
Typically, Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect with more training data, this could be circumvented and will produce better results.
### About Us
At Describe.ai, we are focused on building Artificial Intelligence systems that can understand language as well as humans. While it is a long path, we plan to contribute our findings and our API to the Open Source community. | {"language": "en", "license": "mit", "tags": ["Explain code", "Code Summarization", "Summarization"]} | describeai/gemini-small | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Explain code",
"Code Summarization",
"Summarization",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #Explain code #Code Summarization #Summarization #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Gemini
For in-depth understanding of our model and methods, please see our blog here
## Model description
Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarization/explaining short to medium code snippets in:
- Python
- Javascript (mostly vanilla JS, however, it can handle frameworks like React as well)
- Java
- Ruby
- Go
And outputs a description in English.
## Intended uses & limitations
Gemini without any additional fine-tuning is capable of explaining code in a sentence or two and typically performs best in Python and Javascript. We recommend using Gemini for either simple code explanation, documentation or producing more synthetic data to improve its explanations.
### How to use
You can use this model directly with a pipeline for Text2Text generation, as shown below:
Which should yield something along the lines of:
### Model sizes
- Gemini: 770 Million Parameters
- Gemini-Small (this repo): 220 Million Parameters
### Limitations
Typically, Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect with more training data, this could be circumvented and will produce better results.
### About Us
A URL, we are focused on building Artificial Intelligence systems that can understand language as well as humans. While a long path, we plan to contribute our findings to our API to the Open Source community. | [
"# Gemini\n\nFor in-depth understanding of our model and methods, please see our blog here",
"## Model description\n\nGemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetical... | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #Explain code #Code Summarization #Summarization #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Gemini\n\nFor in-depth understanding of our model and methods, please see our blog here",
... |
text-generation | transformers |
# Hagrid DialoGPT Model | {"tags": ["conversational"]} | lanejm/DialoGPT-small-hagrid | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Hagrid DialoGPT Model
"# Hagrid DailoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Hagrid DailoGPT Model"
] |
text-classification | transformers |
# bert-imdb-1hidden
## Model description
A `bert-base-uncased` model was restricted to 1 hidden layer and
fine-tuned for sequence classification on the
imdb dataset loaded using the `datasets` library.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
pretrained = "lannelin/bert-imdb-1hidden"
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModelForSequenceClassification.from_pretrained(pretrained)
LABELS = ["negative", "positive"]
def get_sentiment(text: str):
    inputs = tokenizer.encode_plus(text, return_tensors='pt')
    output = model(**inputs)[0].squeeze()
    return LABELS[(output.argmax())]
print(get_sentiment("What a terrible film!"))
```
#### Limitations and bias
No special consideration given to limitations and bias.
Any bias held by the imdb dataset may be reflected in the model's output.
## Training data
Initialised with [bert-base-uncased](https://huggingface.co/bert-base-uncased)
Fine tuned on [imdb](https://huggingface.co/datasets/imdb)
## Training procedure
The model was fine-tuned for 1 epoch with a batch size of 64,
a learning rate of 5e-5, and a maximum sequence length of 512.
## Eval results
Accuracy on imdb test set: 0.87132 | {"language": ["en"], "datasets": ["imdb"], "metrics": ["accuracy"]} | lannelin/bert-imdb-1hidden | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #safetensors #bert #text-classification #en #dataset-imdb #autotrain_compatible #endpoints_compatible #region-us
|
# bert-imdb-1hidden
## Model description
A 'bert-base-uncased' model was restricted to 1 hidden layer and
fine-tuned for sequence classification on the
imdb dataset loaded using the 'datasets' library.
## Intended uses & limitations
#### How to use
#### Limitations and bias
No special consideration given to limitations and bias.
Any bias held by the imdb dataset may be reflected in the model's output.
## Training data
Initialised with bert-base-uncased
Fine tuned on imdb
## Training procedure
The model was fine-tuned for 1 epoch with a batch size of 64,
a learning rate of 5e-5, and a maximum sequence length of 512.
## Eval results
Accuracy on imdb test set: 0.87132 | [
"# bert-imdb-1hidden",
"## Model description\n\nA 'bert-base-uncased' model was restricted to 1 hidden layer and\nfine-tuned for sequence classification on the \nimdb dataset loaded using the 'datasets' library.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\nNo special ... | [
"TAGS\n#transformers #pytorch #jax #safetensors #bert #text-classification #en #dataset-imdb #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-imdb-1hidden",
"## Model description\n\nA 'bert-base-uncased' model was restricted to 1 hidden layer and\nfine-tuned for sequence classification on the... |
feature-extraction | transformers |
# BERTOverflow
## Model description
We pre-trained BERT-base model on 152 million sentences from the StackOverflow's 10 year archive. More details of this model can be found in our ACL 2020 paper: [Code and Named Entity Recognition in StackOverflow](https://www.aclweb.org/anthology/2020.acl-main.443/).
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("lanwuwei/BERTOverflow_stackoverflow_github")
model = AutoModelForTokenClassification.from_pretrained("lanwuwei/BERTOverflow_stackoverflow_github")
```
### BibTeX entry and citation info
```bibtex
@inproceedings{tabassum2020code,
title={Code and Named Entity Recognition in StackOverflow},
author={Tabassum, Jeniya and Maddela, Mounica and Xu, Wei and Ritter, Alan },
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
url={https://www.aclweb.org/anthology/2020.acl-main.443/},
year = {2020},
}
```
| {} | lanwuwei/BERTOverflow_stackoverflow_github | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #safetensors #bert #feature-extraction #endpoints_compatible #region-us
|
# BERTOverflow
## Model description
We pre-trained BERT-base model on 152 million sentences from the StackOverflow's 10 year archive. More details of this model can be found in our ACL 2020 paper: Code and Named Entity Recognition in StackOverflow.
#### How to use
### BibTeX entry and citation info
| [
"# BERTOverflow",
"## Model description\n\nWe pre-trained BERT-base model on 152 million sentences from the StackOverflow's 10 year archive. More details of this model can be found in our ACL 2020 paper: Code and Named Entity Recognition in StackOverflow.",
"#### How to use",
"### BibTeX entry and citation in... | [
"TAGS\n#transformers #pytorch #jax #safetensors #bert #feature-extraction #endpoints_compatible #region-us \n",
"# BERTOverflow",
"## Model description\n\nWe pre-trained BERT-base model on 152 million sentences from the StackOverflow's 10 year archive. More details of this model can be found in our ACL 2020 pap... |
feature-extraction | transformers |
## GigaBERT-v3
GigaBERT-v3 is a customized bilingual BERT for English and Arabic. It was pre-trained in a large-scale corpus (Gigaword+Oscar+Wikipedia) with ~10B tokens, showing state-of-the-art zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper:
    @inproceedings{lan2020gigabert,
      author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
      title = {An Empirical Study of Pre-trained Transformers for Arabic Information Extraction},
      booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
      year = {2020}
    }
## Usage
```
from transformers import BertTokenizer, BertForTokenClassification
tokenizer = BertTokenizer.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English", do_lower_case=True)
model = BertForTokenClassification.from_pretrained("lanwuwei/GigaBERT-v3-Arabic-and-English")
```
More code examples can be found [here](https://github.com/lanwuwei/GigaBERT).
| {"language": ["en", "ar", "multilingual"], "datasets": ["gigaword", "oscar", "wikipedia"]} | lanwuwei/GigaBERT-v3-Arabic-and-English | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"en",
"ar",
"multilingual",
"dataset:gigaword",
"dataset:oscar",
"dataset:wikipedia",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en",
"ar",
"multilingual"
] | TAGS
#transformers #pytorch #jax #bert #feature-extraction #en #ar #multilingual #dataset-gigaword #dataset-oscar #dataset-wikipedia #endpoints_compatible #region-us
|
## GigaBERT-v3
GigaBERT-v3 is a customized bilingual BERT for English and Arabic. It was pre-trained in a large-scale corpus (Gigaword+Oscar+Wikipedia) with ~10B tokens, showing state-of-the-art zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper:
@inproceedings{lan2020gigabert,
author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
title = {An Empirical Study of Pre-trained Transformers for Arabic Information Extraction},
booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
year = {2020}
}
## Usage
More code examples can be found here.
| [
"## GigaBERT-v3\nGigaBERT-v3 is a customized bilingual BERT for English and Arabic. It was pre-trained in a large-scale corpus (Gigaword+Oscar+Wikipedia) with ~10B tokens, showing state-of-the-art zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found i... | [
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #en #ar #multilingual #dataset-gigaword #dataset-oscar #dataset-wikipedia #endpoints_compatible #region-us \n",
"## GigaBERT-v3\nGigaBERT-v3 is a customized bilingual BERT for English and Arabic. It was pre-trained in a large-scale corpus (Gigaword+Osca... |
feature-extraction | transformers | ## GigaBERT-v4
GigaBERT-v4 is a continued pre-training of [GigaBERT-v3](https://huggingface.co/lanwuwei/GigaBERT-v3-Arabic-and-English) on code-switched data, showing improved zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper:
@inproceedings{lan2020gigabert,
author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
title = {GigaBERT: Zero-shot Transfer Learning from English to Arabic},
booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
year = {2020}
}
## Download
```python
from transformers import BertTokenizer, BertForTokenClassification

tokenizer = BertTokenizer.from_pretrained("lanwuwei/GigaBERT-v4-Arabic-and-English", do_lower_case=True)
model = BertForTokenClassification.from_pretrained("lanwuwei/GigaBERT-v4-Arabic-and-English")
```
Here is the downloadable link: [GigaBERT-v4](https://drive.google.com/drive/u/1/folders/1uFGzMuTOD7iNsmKQYp_zVuvsJwOaIdar).
| {} | lanwuwei/GigaBERT-v4-Arabic-and-English | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us
| ## GigaBERT-v4
GigaBERT-v4 is a continued pre-training of GigaBERT-v3 on code-switched data, showing improved zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper:
@inproceedings{lan2020gigabert,
author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
title = {GigaBERT: Zero-shot Transfer Learning from English to Arabic},
booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
year = {2020}
}
## Download
Here is downloadable link GigaBERT-v4.
| [
"## GigaBERT-v4\nGigaBERT-v4 is a continued pre-training of GigaBERT-v3 on code-switched data, showing improved zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More details can be found in the following paper:\n\n\t@inproceedings{lan2020gigabert,\n\t author = {Lan, W... | [
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us \n",
"## GigaBERT-v4\nGigaBERT-v4 is a continued pre-training of GigaBERT-v3 on code-switched data, showing improved zero-shot transfer performance from English to Arabic on information extraction (IE) tasks. More detail... |
text-generation | transformers |
# Rick DialoGPT Model | {"tags": ["conversational"]} | lapacc33/DialoGPT-medium-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model | [
"# Rick DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT Model"
] |
null | null | ERROR: type should be string, got "https://camerasaigon24h.com\nhttps://cameragiamsat360.com\nhttps://lapdatcameracongty.vn\nhttps://lapdatcamerawifi.vn\nhttps://lapcamerawifi.com\nhttps://giacameraquansat.com\nhttps://cameraquansatre.com\nhttps://cameraanninhwifi.com\n\nhttps://camerawifigiadinh.com/\nhttps://lapcameratanphu.com\nhttp://camerathehemoi.com\nhttp://lapcameratanbinh.com\nhttp://lapcamerabinhtan.com\nhttp://lapcameraquan2giare.com\nhttp://cameraquan12.com\nhttp://cameraquan3giare.com\nhttp://lapdatcameraquan4.com\nhttp://lapdatcameraquan10.com\nhttp://lapdatcameraquan7.com\nhttp://camerabinhthanh.com\nhttp://lapcameraquan9giare.com\nhttp://lapdatcameraquan11.com\nhttp://lapcameragiarethuduc.com\nhttp://lapdatcameraquan6.com\nhttp://lapdatcameraquan5.com\nhttp://lapcameraquan1.com\nhttp://cameraquan8.com\nhttp://cameranhatranggiare.com\nhttp://lapcamerahocmon.com\nhttp://lapcameragiaregovap.com\nhttp://lapcameraphunhuan.com\nhttp://cameragiarebinhduong.com\nhttp://phanphoicameragiare.com\nhttp://camerawifigiadinh.com/\nhttp://cameraphanthietgiare.com/" | {} | lapcameraatp/cameragiamsat | null | [
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#region-us
| URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL | [] | [
"TAGS\n#region-us \n"
] |
text-classification | transformers | # Danish BERT fine-tuned for Sentiment Analysis (Polarity)
This model detects the polarity ('positive', 'neutral', 'negative') of Danish texts.
It is trained and tested on tweets annotated by [Alexandra Institute](https://github.com/alexandrainst).
Here is an example of how to load the model in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("larskjeldgaard/senda")
model = AutoModelForSequenceClassification.from_pretrained("larskjeldgaard/senda")
# create 'senda' sentiment analysis pipeline
senda_pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
senda_pipeline("Sikke en dejlig dag det er i dag")
```
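The pipeline also accepts a batch of sentences, and can return scores for every label instead of only the top one. A small sketch (depending on your `transformers` version you may need `top_k=None` instead of `return_all_scores=True`):
```python
# Batch classification with per-label probabilities
senda_pipeline(
    ["Sikke en dejlig dag det er i dag", "Det var en forfærdelig oplevelse"],
    return_all_scores=True,
)
```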
| {"language": "da", "license": "cc-by-4.0", "tags": ["danish", "bert", "sentiment", "polarity"], "widget": [{"text": "Sikke en dejlig dag det er i dag"}]} | larskjeldgaard/senda | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"danish",
"sentiment",
"polarity",
"da",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"da"
] | TAGS
#transformers #pytorch #jax #bert #text-classification #danish #sentiment #polarity #da #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| # Danish BERT fine-tuned for Sentiment Analysis (Polarity)
This model detects polarity ('positive', 'neutral', 'negative') of danish texts.
It is trained and tested on Tweets annotated by Alexandra Institute.
Here is an example on how to load the model in PyTorch using the Transformers library:
| [
"# Danish BERT fine-tuned for Sentiment Analysis (Polarity)\nThis model detects polarity ('positive', 'neutral', 'negative') of danish texts.\n\nIt is trained and tested on Tweets annotated by Alexandra Institute.\n\nHere is an example on how to load the model in PyTorch using the Transformers library:"
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #danish #sentiment #polarity #da #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Danish BERT fine-tuned for Sentiment Analysis (Polarity)\nThis model detects polarity ('positive', 'neutral', 'negative') of danish texts... |
fill-mask | transformers |
# LASSL bert-ko-base
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("lassl/bert-ko-base")
tokenizer = AutoTokenizer.from_pretrained("lassl/bert-ko-base")
```
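Because this is a masked-language model, the fill-mask pipeline is a quick way to sanity-check it. A minimal sketch using the same sentence as the widget:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="lassl/bert-ko-base")
fill_mask("대한민국의 수도는 [MASK] 입니다.")
```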
## Evaluation
Evaluation results will be released soon.
## Corpora
This model was trained on 702,437 examples (containing 3,596,465,664 tokens), extracted from the corpora listed below. For details of the training configuration, see `config.json`.
```bash
corpora/
├── [707M] kowiki_latest.txt
├── [ 26M] modu_dialogue_v1.2.txt
├── [1.3G] modu_news_v1.1.txt
├── [9.7G] modu_news_v2.0.txt
├── [ 15M] modu_np_v1.1.txt
├── [1008M] modu_spoken_v1.2.txt
├── [6.5G] modu_written_v1.0.txt
└── [413M] petition.txt
```
| {"language": "ko", "license": "apache-2.0", "tags": ["fill-mask", "korean", "lassl"], "mask_token": "[MASK]", "widget": [{"text": "\ub300\ud55c\ubbfc\uad6d\uc758 \uc218\ub3c4\ub294 [MASK] \uc785\ub2c8\ub2e4."}]} | lassl/bert-ko-base | null | [
"transformers",
"pytorch",
"bert",
"pretraining",
"fill-mask",
"korean",
"lassl",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"ko"
] | TAGS
#transformers #pytorch #bert #pretraining #fill-mask #korean #lassl #ko #license-apache-2.0 #endpoints_compatible #region-us
|
# LASSL bert-ko-base
## How to use
## Evaluation
Evaulation results will be released soon.
## Corpora
This model was trained from 702,437 examples (whose have 3,596,465,664 tokens). 702,437 examples are extracted from below corpora. If you want to get information for training, you should see 'URL'.
| [
"# LASSL bert-ko-base",
"## How to use",
"## Evaluation\r\nEvaulation results will be released soon.",
"## Corpora\r\nThis model was trained from 702,437 examples (whose have 3,596,465,664 tokens). 702,437 examples are extracted from below corpora. If you want to get information for training, you should see '... | [
"TAGS\n#transformers #pytorch #bert #pretraining #fill-mask #korean #lassl #ko #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LASSL bert-ko-base",
"## How to use",
"## Evaluation\r\nEvaulation results will be released soon.",
"## Corpora\r\nThis model was trained from 702,437 examples (whose h... |
fill-mask | transformers |
# LASSL bert-ko-small
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("lassl/bert-ko-small")
tokenizer = AutoTokenizer.from_pretrained("lassl/bert-ko-small")
```
## Evaluation
Evaluation results will be released soon.
## Corpora
This model was trained on 702,437 examples (containing 3,596,465,664 tokens), extracted from the corpora listed below. For details of the training configuration, see `config.json`.
```bash
corpora/
├── [707M] kowiki_latest.txt
├── [ 26M] modu_dialogue_v1.2.txt
├── [1.3G] modu_news_v1.1.txt
├── [9.7G] modu_news_v2.0.txt
├── [ 15M] modu_np_v1.1.txt
├── [1008M] modu_spoken_v1.2.txt
├── [6.5G] modu_written_v1.0.txt
└── [413M] petition.txt
```
| {"language": "ko", "license": "apache-2.0", "tags": ["fill-mask", "korean", "lassl"], "mask_token": "[MASK]", "widget": [{"text": "\ub300\ud55c\ubbfc\uad6d\uc758 \uc218\ub3c4\ub294 [MASK] \uc785\ub2c8\ub2e4."}]} | lassl/bert-ko-small | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"pretraining",
"fill-mask",
"korean",
"lassl",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"ko"
] | TAGS
#transformers #pytorch #safetensors #bert #pretraining #fill-mask #korean #lassl #ko #license-apache-2.0 #endpoints_compatible #region-us
|
# LASSL bert-ko-small
## How to use
## Evaluation
Evaulation results will be released soon.
## Corpora
This model was trained from 702,437 examples (whose have 3,596,465,664 tokens). 702,437 examples are extracted from below corpora. If you want to get information for training, you should see 'URL'.
| [
"# LASSL bert-ko-small",
"## How to use",
"## Evaluation\nEvaulation results will be released soon.",
"## Corpora\nThis model was trained from 702,437 examples (whose have 3,596,465,664 tokens). 702,437 examples are extracted from below corpora. If you want to get information for training, you should see 'URL... | [
"TAGS\n#transformers #pytorch #safetensors #bert #pretraining #fill-mask #korean #lassl #ko #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LASSL bert-ko-small",
"## How to use",
"## Evaluation\nEvaulation results will be released soon.",
"## Corpora\nThis model was trained from 702,437 example... |
fill-mask | transformers |
# LASSL roberta-ko-small
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("lassl/roberta-ko-small")
tokenizer = AutoTokenizer.from_pretrained("lassl/roberta-ko-small")
```
## Evaluation
The pretrained Korean `roberta-ko-small` model was trained with the [LASSL](https://github.com/lassl/lassl) framework. The performance below was evaluated on 2021/12/15.
| nsmc | klue_nli | klue_sts | korquadv1 | klue_mrc | avg |
| ---- | -------- | -------- | --------- | ---- | -------- |
| 87.8846 | 66.3086 | 83.8353 | 83.1780 | 42.4585 | 72.7330 |
## Corpora
This model was trained on 6,860,062 examples (containing 3,512,351,744 tokens), extracted from the corpora listed below. For details of the training configuration, see `config.json`.
```bash
corpora/
├── [707M] kowiki_latest.txt
├── [ 26M] modu_dialogue_v1.2.txt
├── [1.3G] modu_news_v1.1.txt
├── [9.7G] modu_news_v2.0.txt
├── [ 15M] modu_np_v1.1.txt
├── [1008M] modu_spoken_v1.2.txt
├── [6.5G] modu_written_v1.0.txt
└── [413M] petition.txt
```
| {"language": "ko", "license": "apache-2.0", "tags": ["korean", "lassl"], "mask_token": "<mask>", "widget": [{"text": "\ub300\ud55c\ubbfc\uad6d\uc758 \uc218\ub3c4\ub294 <mask> \uc785\ub2c8\ub2e4."}]} | lassl/roberta-ko-small | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"korean",
"lassl",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"ko"
] | TAGS
#transformers #pytorch #roberta #fill-mask #korean #lassl #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| LASSL roberta-ko-small
======================
How to use
----------
Evaluation
----------
Pretrained 'roberta-ko-small' on korean language was trained by LASSL framework. Below performance was evaluated at 2021/12/15.
Corpora
-------
This model was trained from 6,860,062 examples (whose have 3,512,351,744 tokens). 6,860,062 examples are extracted from below corpora. If you want to get information for training, you should see 'URL'.
| [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #korean #lassl #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8945
- Mae: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
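Given that the checkpoint was fine-tuned on the English portion of `amazon_reviews_multi` (star-rating prediction), a minimal inference sketch could look like the following; the exact label names depend on how the classes were mapped during training, so treat the output labels as placeholders:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="laurauzcategui/xlm-roberta-base-finetuned-marc-en",
)
classifier("The product broke after two days, very disappointed.")
```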
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 1.1411 | 1.0 | 235 | 0.9358 | 0.5 |
| 0.9653 | 2.0 | 470 | 0.8945 | 0.5 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]} | laurauzcategui/xlm-roberta-base-finetuned-marc-en | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-marc-en
==================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8945
* Mae: 0.5
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_... |
null | null | # Supervised Continuous Bag of Words model trained with Uruguayan news from Twitter
Model trained with Facebook's fasttext library. | {} | leandrodzp/cbow_uruguayan_news | null | [
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#region-us
| # Supervised Continous Bag of words model trained with Uruguayan news from Twitter
Model trained with Facebook's fasttext library. | [
"# Supervised Continous Bag of words model trained with Uruguayan news from Twitter\nModel trained with Facebook's fasttext library."
] | [
"TAGS\n#region-us \n",
"# Supervised Continous Bag of words model trained with Uruguayan news from Twitter\nModel trained with Facebook's fasttext library."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# celera_relevance
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3072
- Train Sparse Categorical Accuracy: 0.8813
- Validation Loss: 0.4371
- Validation Sparse Categorical Accuracy: 0.8295
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
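The card does not document the label set, but since this is a TensorFlow sequence-classification checkpoint built on Chinese RoBERTa, inference could look roughly like the sketch below (the input sentence and the meaning of the predicted class index are assumptions, not documented by the authors):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("leetdavid/celera_relevance")
model = TFAutoModelForSequenceClassification.from_pretrained("leetdavid/celera_relevance")

inputs = tokenizer("这条新闻与公司业务高度相关。", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class index
```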
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.4060 | 0.8274 | 0.3665 | 0.8440 | 0 |
| 0.3388 | 0.8594 | 0.3639 | 0.8585 | 1 |
| 0.3072 | 0.8813 | 0.4371 | 0.8295 | 2 |
### Framework versions
- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "hfl/chinese-roberta-wwm-ext", "model-index": [{"name": "celera_relevance", "results": []}]} | leetdavid/celera_relevance | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| celera\_relevance
=================
This model is a fine-tuned version of hfl/chinese-roberta-wwm-ext on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.3072
* Train Sparse Categorical Accuracy: 0.8813
* Validation Loss: 0.4371
* Validation Sparse Categorical Accuracy: 0.8295
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.0
* TensorFlow 2.7.0
* Datasets 1.18.1
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework... | [
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# importance_model
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4867
- Train Sparse Categorical Accuracy: 0.8389
- Validation Loss: 0.6060
- Validation Sparse Categorical Accuracy: 0.8016
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.7037 | 0.7614 | 0.6077 | 0.7964 | 0 |
| 0.5683 | 0.8120 | 0.5615 | 0.8106 | 1 |
| 0.4867 | 0.8389 | 0.6060 | 0.8016 | 2 |
### Framework versions
- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "hfl/chinese-roberta-wwm-ext", "model-index": [{"name": "importance_model", "results": []}]} | leetdavid/importance_model | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| importance\_model
=================
This model is a fine-tuned version of hfl/chinese-roberta-wwm-ext on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.4867
* Train Sparse Categorical Accuracy: 0.8389
* Validation Loss: 0.6060
* Validation Sparse Categorical Accuracy: 0.8016
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.0
* TensorFlow 2.7.0
* Datasets 1.18.1
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework... | [
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# market_positivity
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4959
- Train Sparse Categorical Accuracy: 0.8060
- Validation Loss: 0.4484
- Validation Sparse Categorical Accuracy: 0.8187
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.6595 | 0.7184 | 0.5732 | 0.7479 | 0 |
| 0.4959 | 0.8060 | 0.4484 | 0.8187 | 1 |
### Framework versions
- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "hfl/chinese-roberta-wwm-ext", "model-index": [{"name": "market_positivity", "results": []}]} | leetdavid/market_positivity | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| market\_positivity
==================
This model is a fine-tuned version of hfl/chinese-roberta-wwm-ext on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.4959
* Train Sparse Categorical Accuracy: 0.8060
* Validation Loss: 0.4484
* Validation Sparse Categorical Accuracy: 0.8187
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.0
* TensorFlow 2.7.0
* Datasets 1.18.1
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework... | [
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* o... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# market_positivity_model
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5776
- Train Sparse Categorical Accuracy: 0.7278
- Validation Loss: 0.6460
- Validation Sparse Categorical Accuracy: 0.6859
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.7207 | 0.6394 | 0.6930 | 0.6811 | 0 |
| 0.6253 | 0.7033 | 0.6549 | 0.6872 | 1 |
| 0.5776 | 0.7278 | 0.6460 | 0.6859 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "hfl/chinese-roberta-wwm-ext", "model-index": [{"name": "market_positivity_model", "results": []}]} | leetdavid/market_positivity_model | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| market\_positivity\_model
=========================
This model is a fine-tuned version of hfl/chinese-roberta-wwm-ext on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.5776
* Train Sparse Categorical Accuracy: 0.7278
* Validation Loss: 0.6460
* Validation Sparse Categorical Accuracy: 0.6859
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework... | [
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# relevance-model
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3134
- Train Binary Accuracy: 0.8773
- Validation Loss: 0.3633
- Validation Binary Accuracy: 0.8541
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Binary Accuracy | Validation Loss | Validation Binary Accuracy | Epoch |
|:----------:|:---------------------:|:---------------:|:--------------------------:|:-----:|
| 0.3980 | 0.8289 | 0.3739 | 0.8541 | 0 |
| 0.3446 | 0.8606 | 0.3614 | 0.8505 | 1 |
| 0.3134 | 0.8773 | 0.3633 | 0.8541 | 2 |
### Framework versions
- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "hfl/chinese-roberta-wwm-ext", "model-index": [{"name": "relevance-model", "results": []}]} | leetdavid/relevance-model | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| relevance-model
===============
This model is a fine-tuned version of hfl/chinese-roberta-wwm-ext on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.3134
* Train Binary Accuracy: 0.8773
* Validation Loss: 0.3633
* Validation Binary Accuracy: 0.8541
* Epoch: 2
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.0
* TensorFlow 2.7.0
* Datasets 1.18.1
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework... | [
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-hfl/chinese-roberta-wwm-ext #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5608
- Matthews Correlation: 0.5062
## Model description
More information needed
## Intended uses & limitations
More information needed
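Since the fine-tuning task is CoLA (linguistic acceptability), the natural use is as a binary acceptable/unacceptable classifier. A minimal sketch; unless an `id2label` mapping was saved with the checkpoint, the raw outputs will be generic `LABEL_0`/`LABEL_1`:
```python
from transformers import pipeline

cola = pipeline(
    "text-classification",
    model="leeyujin/distilbert-base-uncased-finetuned-cola",
)
cola(["The book was written by John.", "The book was written John by."])
```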
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4851 | 0.4301 |
| No log | 2.0 | 268 | 0.4619 | 0.4891 |
| No log | 3.0 | 402 | 0.5447 | 0.4965 |
| 0.3828 | 4.0 | 536 | 0.5608 | 0.5062 |
| 0.3828 | 5.0 | 670 | 0.5702 | 0.5029 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5062132225102124, "name": "Matthews Correlation"}]}]}]} | leeyujin/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5608
* Matthews Correlation: 0.5062
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.8.1+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Traini... | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-0... |
image-classification | transformers |
# ResNet-50
Pretrained model on [ImageNet](http://www.image-net.org/). The ResNet architecture was introduced in
[this paper](https://arxiv.org/abs/1512.03385).
## Intended uses
You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation or
object detection, to name a few).
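For quick experiments, the image-classification pipeline is the easiest entry point. A minimal sketch, assuming the repository ships the preprocessing configuration the pipeline expects; because the model code is custom, `trust_remote_code=True` may be required:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="leftthomas/resnet50",
    trust_remote_code=True,
)
# Example image from this card's widget
classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
```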
## Evaluation results
This model has a top-1 accuracy of 76.13% and a top-5 accuracy of 92.86% on the ImageNet evaluation set.
| {"license": "afl-3.0", "tags": ["image-classification", "resnet"], "datasets": ["imagenet"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]} | leftthomas/resnet50 | null | [
"transformers",
"pytorch",
"resnet",
"image-classification",
"custom_code",
"dataset:imagenet",
"arxiv:1512.03385",
"license:afl-3.0",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"1512.03385"
] | [] | TAGS
#transformers #pytorch #resnet #image-classification #custom_code #dataset-imagenet #arxiv-1512.03385 #license-afl-3.0 #autotrain_compatible #region-us
|
# ResNet-50
Pretrained model on ImageNet. The ResNet architecture was introduced in
this paper.
## Intended uses
You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation or
object detection, to name a few).
## Evaluation results
This model has a top1-accuracy of 76.13% and a top-5 accuracy of 92.86% in the evaluation set of ImageNet.
| [
"# ResNet-50\r\n\r\nPretrained model on ImageNet. The ResNet architecture was introduced in\r\nthis paper.",
"## Intended uses\r\n\r\nYou can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head\r\nto fine-tune it on a downstream task (another classification task ... | [
"TAGS\n#transformers #pytorch #resnet #image-classification #custom_code #dataset-imagenet #arxiv-1512.03385 #license-afl-3.0 #autotrain_compatible #region-us \n",
"# ResNet-50\r\n\r\nPretrained model on ImageNet. The ResNet architecture was introduced in\r\nthis paper.",
"## Intended uses\r\n\r\nYou can use th... |
text2text-generation | transformers | An mt5-base model whose vocabulary and word-embedding matrix were truncated so that only Chinese and English tokens are retained (a minimal loading sketch is shown below).
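A minimal loading sketch (the class names below are the standard mT5 ones and are an assumption about this checkpoint; as a pre-trained base model it is intended for further fine-tuning, so raw generations will not be meaningful):
```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("lemon234071/t5-base-Chinese")
model = MT5ForConditionalGeneration.from_pretrained("lemon234071/t5-base-Chinese")

inputs = tokenizer("今天天气很好。", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```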
https://github.com/lemon234071/TransformerBaselines | {} | lemon234071/t5-base-Chinese | null | [
"transformers",
"pytorch",
"jax",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| A mt5-base model that the vocab and word embedding are truncated, only Chinese and English characters are retained.
URL | [] | [
"TAGS\n#transformers #pytorch #jax #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition | superb |
# Fine-tuned s3prl model for ASR | {"library_name": "superb", "tags": ["automatic-speech-recognition", "osanseviero/hubert_base"], "datasets": ["superb"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | leo19941227/superb-s3prl-osanseviero__hubert_base-asr-c61a5cff | null | [
"superb",
"tensorboard",
"automatic-speech-recognition",
"osanseviero/hubert_base",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #automatic-speech-recognition #osanseviero/hubert_base #dataset-superb #region-us
|
# Fine-tuned s3prl model for ASR | [
"# Fine-tuned s3prl model for ASR"
] | [
"TAGS\n#superb #tensorboard #automatic-speech-recognition #osanseviero/hubert_base #dataset-superb #region-us \n",
"# Fine-tuned s3prl model for ASR"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-ner
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the fdner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1016
- Precision: 0.9146
- Recall: 0.9414
- F1: 0.9278
- Accuracy: 0.9751
## Model description
More information needed
## Intended uses & limitations
More information needed
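For a fine-tuned Chinese NER head, the token-classification pipeline is the simplest entry point. A minimal sketch; the entity types come from the `fdner` label set, which this card does not enumerate, and the example sentence is only an illustration:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="leonadase/bert-base-chinese-finetuned-ner",
    aggregation_strategy="simple",
)
ner("故障现象:发动机起动后怠速不稳,伴有异响。")
```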
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 2 | 0.9181 | 0.1271 | 0.1255 | 0.1263 | 0.7170 |
| No log | 2.0 | 4 | 0.8048 | 0.1919 | 0.2385 | 0.2127 | 0.7669 |
| No log | 3.0 | 6 | 0.7079 | 0.2422 | 0.3264 | 0.2781 | 0.7980 |
| No log | 4.0 | 8 | 0.6201 | 0.3505 | 0.4854 | 0.4070 | 0.8338 |
| No log | 5.0 | 10 | 0.5462 | 0.3898 | 0.4812 | 0.4307 | 0.8611 |
| No log | 6.0 | 12 | 0.4851 | 0.4749 | 0.5941 | 0.5279 | 0.8802 |
| No log | 7.0 | 14 | 0.4338 | 0.5213 | 0.6151 | 0.5643 | 0.8936 |
| No log | 8.0 | 16 | 0.3843 | 0.5663 | 0.6611 | 0.6100 | 0.9076 |
| No log | 9.0 | 18 | 0.3451 | 0.6255 | 0.6987 | 0.6601 | 0.9214 |
| No log | 10.0 | 20 | 0.3058 | 0.6719 | 0.7197 | 0.6949 | 0.9293 |
| No log | 11.0 | 22 | 0.2783 | 0.6808 | 0.7406 | 0.7094 | 0.9344 |
| No log | 12.0 | 24 | 0.2497 | 0.7050 | 0.7699 | 0.7360 | 0.9427 |
| No log | 13.0 | 26 | 0.2235 | 0.7519 | 0.8117 | 0.7807 | 0.9506 |
| No log | 14.0 | 28 | 0.2031 | 0.7713 | 0.8326 | 0.8008 | 0.9552 |
| No log | 15.0 | 30 | 0.1861 | 0.7915 | 0.8577 | 0.8233 | 0.9593 |
| No log | 16.0 | 32 | 0.1726 | 0.8031 | 0.8703 | 0.8353 | 0.9613 |
| No log | 17.0 | 34 | 0.1619 | 0.8320 | 0.8912 | 0.8606 | 0.9641 |
| No log | 18.0 | 36 | 0.1521 | 0.8571 | 0.9038 | 0.8798 | 0.9674 |
| No log | 19.0 | 38 | 0.1420 | 0.8710 | 0.9038 | 0.8871 | 0.9695 |
| No log | 20.0 | 40 | 0.1352 | 0.8795 | 0.9163 | 0.8975 | 0.9700 |
| No log | 21.0 | 42 | 0.1281 | 0.8755 | 0.9121 | 0.8934 | 0.9712 |
| No log | 22.0 | 44 | 0.1209 | 0.8916 | 0.9289 | 0.9098 | 0.9728 |
| No log | 23.0 | 46 | 0.1155 | 0.8924 | 0.9372 | 0.9143 | 0.9733 |
| No log | 24.0 | 48 | 0.1115 | 0.904 | 0.9456 | 0.9243 | 0.9746 |
| No log | 25.0 | 50 | 0.1087 | 0.9116 | 0.9498 | 0.9303 | 0.9746 |
| No log | 26.0 | 52 | 0.1068 | 0.9146 | 0.9414 | 0.9278 | 0.9740 |
| No log | 27.0 | 54 | 0.1054 | 0.9146 | 0.9414 | 0.9278 | 0.9743 |
| No log | 28.0 | 56 | 0.1036 | 0.9146 | 0.9414 | 0.9278 | 0.9743 |
| No log | 29.0 | 58 | 0.1022 | 0.9146 | 0.9414 | 0.9278 | 0.9746 |
| No log | 30.0 | 60 | 0.1016 | 0.9146 | 0.9414 | 0.9278 | 0.9751 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"tags": ["generated_from_trainer"], "datasets": ["fdner"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-chinese-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "fdner", "type": "fdner", "args": "fdner"}, "metrics": [{"type": "precision", "value": 0.9146341463414634, "name": "Precision"}, {"type": "recall", "value": 0.9414225941422594, "name": "Recall"}, {"type": "f1", "value": 0.9278350515463917, "name": "F1"}, {"type": "accuracy", "value": 0.9750636132315522, "name": "Accuracy"}]}]}]} | leonadase/bert-base-chinese-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:fdner",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-fdner #model-index #autotrain_compatible #endpoints_compatible #region-us
| bert-base-chinese-finetuned-ner
===============================
This model is a fine-tuned version of bert-base-chinese on the fdner dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1016
* Precision: 0.9146
* Recall: 0.9414
* F1: 0.9278
* Accuracy: 0.9751
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 10
* eval\_batch\_size: 10
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30",
"### Train... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-fdner #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\... |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9210
- Recall: 0.9357
- F1: 0.9283
- Accuracy: 0.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
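Because the fine-tuning data is CoNLL-2003, the checkpoint should predict the usual person/organization/location/miscellaneous entity types (assuming the label mapping was saved with the model). A minimal sketch:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="leonadase/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
ner("Hugging Face is based in New York City.")
```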
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2341 | 1.0 | 878 | 0.0734 | 0.9118 | 0.9206 | 0.9162 | 0.9799 |
| 0.0546 | 2.0 | 1756 | 0.0591 | 0.9210 | 0.9350 | 0.9279 | 0.9829 |
| 0.0297 | 3.0 | 2634 | 0.0611 | 0.9210 | 0.9357 | 0.9283 | 0.9832 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9210439378923027, "name": "Precision"}, {"type": "recall", "value": 0.9356751314464705, "name": "Recall"}, {"type": "f1", "value": 0.9283018867924528, "name": "F1"}, {"type": "accuracy", "value": 0.983176322938345, "name": "Accuracy"}]}]}]} | leonadase/distilbert-base-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0611
* Precision: 0.9210
* Recall: 0.9357
* F1: 0.9283
* Accuracy: 0.9832
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* le... |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-Robust - Finetuned on Librispeech (960 hours)
## Note : Model has not been initialized. If you want to use it without further finetuning, do a forward pass first to recalculate the normalized weights of the positional convolutional layer :
```python
import torch

# `model` is the checkpoint loaded via transformers (see the usage sketch below)
with torch.no_grad():
    model(torch.randn((1, 300_000)))
```
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained on 16 kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-sourced audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
When using the model, make sure that your speech input is also sampled at 16 kHz.
Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
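A minimal inference sketch is shown below. It assumes this repository ships processor/tokenizer files (if not, the ones from the upstream `facebook/wav2vec2-large-robust-ft-libri-960h` checkpoint should be interchangeable); the audio file name is illustrative:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "leonardvorbeck/wav2vec2-large-robust-LS960"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# dummy forward pass: recomputes the weight-normalised positional conv weights (see note above)
with torch.no_grad():
    model(torch.randn((1, 300_000)))

# load an utterance and resample to the expected 16 kHz
waveform, sr = torchaudio.load("sample.wav")  # illustrative file name
waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```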
[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
**Abstract**
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. | {"language": "en", "license": "apache-2.0", "tags": ["speech", "automatic-speech-recognition", "CTC", "Attention", "wav2vec2"], "datasets": ["libri_light", "common_voice", "switchboard", "fisher"]} | leonardvorbeck/wav2vec2-large-robust-LS960 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"CTC",
"Attention",
"en",
"dataset:libri_light",
"dataset:common_voice",
"dataset:switchboard",
"dataset:fisher",
"arxiv:2104.01027",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2104.01027"
] | [
"en"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #speech #CTC #Attention #en #dataset-libri_light #dataset-common_voice #dataset-switchboard #dataset-fisher #arxiv-2104.01027 #license-apache-2.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Large-Robust - Finetuned on Librispeech (960 hours)
## Note : Model has not been initialized. If you want to use it without further finetuning, do a forward pass first to recalculate the normalized weights of the positional convolutional layer :
Facebook's Wav2Vec2
The base model was pretrained on 16 kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- Libri-Light: open-source audio books from the LibriVox project; clean, read-out audio data
- CommonVoice: crowd-sourced audio data; read-out text snippets
- Switchboard: telephone speech corpus; noisy telephone data
- Fisher: conversational telephone speech; noisy telephone data
When using the model, make sure that your speech input is also sampled at 16 kHz.
Check out this blog for more information.
Paper Robust Wav2Vec2
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
Abstract
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under URL
# Usage
See this notebook for more information on how to fine-tune the model. | [
"# Wav2Vec2-Large-Robust - Finetuned on Librispeech (960 hours)",
"## Note : Model has not been initialized. If you want to use it without further finetuning, do a forward pass first to recalculate the normalized weights of the positional convolutional layer :\n\n\n\nFacebook's Wav2Vec2\n\nThe base model pretrain... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #speech #CTC #Attention #en #dataset-libri_light #dataset-common_voice #dataset-switchboard #dataset-fisher #arxiv-2104.01027 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-Robust - Finetuned on Librispeech (960 hou... |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-Robust - Finetuned on Switchboard (300 hours)
## Note : Model has not been initialized. If you want to use it without further finetuning, do a forward pass first to recalculate the normalized weights of the positional convolutional layer :
```python
import torch

# `model` is the Wav2Vec2 checkpoint loaded via transformers
with torch.no_grad():
    model(torch.randn((1, 300_000)))
```
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained on 16 kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-sourced audio data; read-out text snippets
- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
When using the model, make sure that your speech input is also sampled at 16 kHz.
Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
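As a short sketch of the 16 kHz requirement — the file name and the telephone-rate source audio are illustrative, and if the feature-extractor config is missing from this repo, the upstream `facebook/wav2vec2-large-robust-ft-swbd-300h` one should match:

```python
import torchaudio
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
    "leonardvorbeck/wav2vec2-large-robust-SB300"
)

waveform, sr = torchaudio.load("call_segment.wav")  # e.g. 8 kHz telephone audio
if sr != 16_000:
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = feature_extractor(
    waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt"
)
print(inputs.input_values.shape)
```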
[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
**Abstract**
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. | {"language": "en", "license": "apache-2.0", "tags": ["speech", "automatic-speech-recognition", "CTC", "Attention", "wav2vec2"], "datasets": ["libri_light", "common_voice", "switchboard", "fisher"]} | leonardvorbeck/wav2vec2-large-robust-SB300 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"CTC",
"Attention",
"en",
"dataset:libri_light",
"dataset:common_voice",
"dataset:switchboard",
"dataset:fisher",
"arxiv:2104.01027",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2104.01027"
] | [
"en"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #speech #CTC #Attention #en #dataset-libri_light #dataset-common_voice #dataset-switchboard #dataset-fisher #arxiv-2104.01027 #license-apache-2.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Large-Robust - Finetuned on Switchboard (300 hours)
## Note : Model has not been initialized. If you want to use it without further finetuning, do a forward pass first to recalculate the normalized weights of the positional convolutional layer :
Facebook's Wav2Vec2
The base model was pretrained on 16 kHz sampled speech audio.
Speech datasets from multiple domains were used to pretrain the model:
- Libri-Light: open-source audio books from the LibriVox project; clean, read-out audio data
- CommonVoice: crowd-sourced audio data; read-out text snippets
- Switchboard: telephone speech corpus; noisy telephone data
- Fisher: conversational telephone speech; noisy telephone data
When using the model, make sure that your speech input is also sampled at 16 kHz.
Check out this blog for more information.
Paper Robust Wav2Vec2
Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
Abstract
Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
The original model can be found under URL
# Usage
See this notebook for more information on how to fine-tune the model. | [
"# Wav2Vec2-Large-Robust - Finetuned on Switchboard (300 hours)",
"## Note : Model has not been initialized. If you want to use it without further finetuning, do a forward pass first to recalculate the normalized weights of the positional convolutional layer :\n\n\n\nFacebook's Wav2Vec2\n\nThe base model pretrain... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #speech #CTC #Attention #en #dataset-libri_light #dataset-common_voice #dataset-switchboard #dataset-fisher #arxiv-2104.01027 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-Robust - Finetuned on Switchboard (300 hou... |
text-classification | transformers |
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2287
- Accuracy: 0.918
- F1: 0.9182
## Model description
More information needed
## Intended uses & limitations
More information needed
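A minimal inference sketch with the `text-classification` pipeline (the example sentence is illustrative; the emotion label names come from the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lewiswatson/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```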
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8478 | 1.0 | 250 | 0.3294 | 0.9015 | 0.8980 |
| 0.2616 | 2.0 | 500 | 0.2287 | 0.918 | 0.9182 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.918, "name": "Accuracy"}, {"type": "f1", "value": 0.9182094401352938, "name": "F1"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9185, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFmYmNlNzU0NzNlMGU4NDI1ZjAyMzRjY2U4NzZkMjVkNmM5Zjk2ZGNmNjBiZmY0YjY1Zjg3MzViMmRlMmRiOSIsInZlcnNpb24iOjF9.7VJ4JGkOHZ7jp_hA9Jx0ToQ74OBp918a1OVZ3qpuv1ZV1qkPrCVW9_g72v0QjmICdlHvHrBwvKywdzv-It6RCg"}, {"type": "precision", "value": 0.8948630809230339, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDRhYjBjYzViMGY2MjE4OGU2OWZlYTUzNDljMjllYTAyMGI4Y2FhODQxOWU2N2NkNTYyOGJhZjA4MmFkOWFiOCIsInZlcnNpb24iOjF9.0rf2OHpdMViVl-vFQIE0g5qFmpvSfWa1Igs9Ala_T0foNk1rD4IR_bLDHqbU57HWDDYFKK2EKfV9KK19-pONBg"}, {"type": "precision", "value": 0.9185, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTM0YjhmZDVhYTlhZWQ3ZGQwOTRjNGI0NTU0OTFlZjFlMTE5ODQwY2E2ZTZhZmMxYjA5NDc0MzgxMjFkZjNmMyIsInZlcnNpb24iOjF9.n1LvyMO5EkZ5H7zkB533gP8w7FMpv8TxgaeaqiM-fAHmrMsF_-Dkc0X5tjI5_QQGU2aqXOHdThmWI1ohelJoDw"}, {"type": "precision", "value": 0.9190547804558933, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmM5NDVmMDcwZjVhYWIyNTI1Njk3M2Y4ZDg0N2Q5NzU2NTU3YmZlNjEzNjcyY2VmODhhMWY5MGExZjViMjMzYSIsInZlcnNpb24iOjF9.gAvnEt3NSkc5Mp0JhezC6pfsa2nXVcvD-3dfFcRy_F4S-iv8u-WjC2sj5S3ieYmw5zZlgFVLiWj3N9WclLceBg"}, {"type": "recall", "value": 0.860108882009274, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDQ3ZjM3NGM4NzVjYzVkMWYxNmMzYjM1ODVhOWMwODk2NmE3NjcwNDRhMmQ0YTQ1NzdkNTNkZTEwYTBhMmIyYyIsInZlcnNpb24iOjF9.niXajj933x2yuG_AorT3Yp7_60MgHy-eXkwpjp1ERCknWcxJ5BB38-tJdP9ambP3QeGJYtjPlXVeQLpaQ7rdAw"}, {"type": "recall", "value": 0.9185, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjE4OWM0OTVkMDllN2JiNjUxMWNlOWUzNTNkYmU3M2U1YzIyODBkNjk5YTBhMmFmMzM5Mzk1NjRkNjRmMjUxZSIsInZlcnNpb24iOjF9.S0di5PwvB-9NpPh6d1VOBUZOqIxVdyfPeUIc5NCTZ6-hc4NrWyAsrs_-3ybbhnws6ZqgQh8S-oCLPj142J0LCA"}, {"type": "recall", "value": 0.9185, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWI0Zjc5ZGIwMzdhMzRiYjgxYzcxZWVjMDczZTcxMWZkYTljOTI0MjVkOTU0MDdiNDYzMjkyNThmNmUwMWQxYSIsInZlcnNpb24iOjF9.fdOWpzsUjzuC_jL4Iy4AY-gloMO3_cuxwvFs-2ViJU4RLn7xnJNqdID5hyuoSlytpYyk8yf0J8tImddj_V4qBg"}, {"type": "f1", "value": 0.8727941247828231, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQ1ZGEwOTMxYjAzNjgxYmMzZGM1ZjkwNmNiYzdmOWE3MGI5NzY5NjM3ZDljZTVmZWQ4YThlMTExYjE2MzkxNyIsInZlcnNpb24iOjF9.y4K4-ICKWoib_dtJkrTjPrrrWVQO4vMJ4OZeXu4yrCHBEwc5Pa-605oDLjujZcVI5Vn2lE3piUUJn_Ko_eRKBQ"}, {"type": "f1", "value": 0.9185, "name": "F1 Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjBjYjUzZTlkYzJjZDhkMjM4MjBlZWYwNjA4NTZlZjY2Njc0ZDgyZjYyNjU5ZmM0YzY3ODFlN2ZlMWRiZDZmYiIsInZlcnNpb24iOjF9.WXwc2VTkkUDPCY5JxnHFPduRa_iViuxS3MvNiH4Od2kRNnIYxlFY2wo1yT3UQukAnz69Uq6M_aSi6a7qnxt7Bg"}, {"type": "f1", "value": 0.9177368694234422, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGMxMzBjOTNhOWVmZDg0NjlmMmFhY2RmYzc0YzRlMTkyN2E4NTVmYzdkYWEwMDljY2U5ZmQ5YmM5ZjlhYWNlMiIsInZlcnNpb24iOjF9.XcschKnQYuy1KCgM-eTPJxHaTyj4iRkmdc8Pyxa3i1b_7a8FOr5vBUdijrnh1sEj4Cg08yrM5o59sGWRz_ZuDg"}, {"type": "loss", "value": 0.21989187598228455, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTY0MDUwNGUyYTA1NjIyZTkzMzc5ODI5ZTE2ZDY5M2I3NzM2ZTZhNTQxODY5ZGY4MmUzZGFmYTU3M2FmZTc1ZCIsInZlcnNpb24iOjF9.y7Ylg_yZ-pqRohxawrTNQU6DpVlVP7bBNwsoOvpzcPJncNR2CG94edcvi4F6w86EcDsPEm0ab4XK3elAAhC6Dw"}]}]}]} | lewiswatson/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2287
* Accuracy: 0.918
* F1: 0.9182
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.17.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.4
* Tokenizers 0.11.6
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn... |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased-finetuned-imdb
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
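A minimal `fill-mask` sketch (the example sentence is illustrative; the MiniLM tokenizer uses the BERT-style `[MASK]` token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="lewtun/MiniLM-L12-H384-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```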
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2464 | 1.0 | 391 | 4.2951 |
| 4.2302 | 2.0 | 782 | 4.0023 |
| 4.0726 | 3.0 | 1173 | 3.9328 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imdb"]} | lewtun/MiniLM-L12-H384-uncased-finetuned-imdb | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #dataset-imdb #license-mit #autotrain_compatible #endpoints_compatible #region-us
| MiniLM-L12-H384-uncased-finetuned-imdb
======================================
This model is a fine-tuned version of microsoft/MiniLM-L12-H384-uncased on the imdb dataset.
It achieves the following results on the evaluation set:
* Loss: 3.9328
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.3
* Pytorch 1.9.1+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_pr... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #dataset-imdb #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_si... |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
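A sketch of how these settings map onto `TrainingArguments`; the `output_dir` and the per-epoch evaluation strategy are assumptions, while the remaining values mirror the list above:

```python
from transformers import TrainingArguments

# Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-imdb",  # assumption: any local path works
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results below
)
```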
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2244 | 1.0 | 958 | 2.0726 |
| 2.1537 | 2.0 | 1916 | 2.0381 |
| 2.1183 | 3.0 | 2874 | 2.0284 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.1+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"]} | lewtun/bert-base-uncased-finetuned-imdb | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #dataset-imdb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-base-uncased-finetuned-imdb
================================
This model is a fine-tuned version of bert-base-uncased on the imdb dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0284
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.3
* Pytorch 1.9.1+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_pr... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #dataset-imdb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_bat... |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0603
- Precision: 0.9408
- Recall: 0.9520
- F1: 0.9464
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0884 | 1.0 | 1756 | 0.0658 | 0.9145 | 0.9337 | 0.9240 | 0.9827 |
| 0.0375 | 2.0 | 3512 | 0.0618 | 0.9366 | 0.9490 | 0.9427 | 0.9864 |
| 0.0216 | 3.0 | 5268 | 0.0603 | 0.9408 | 0.9520 | 0.9464 | 0.9865 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9407949442873773, "name": "Precision"}, {"type": "recall", "value": 0.9520363513968361, "name": "Recall"}, {"type": "f1", "value": 0.9463822668339608, "name": "F1"}, {"type": "accuracy", "value": 0.9865485371166186, "name": "Accuracy"}]}]}]} | lewtun/bert-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| bert-finetuned-ner
==================
This model is a fine-tuned version of bert-base-cased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0603
* Precision: 0.9408
* Recall: 0.9520
* F1: 0.9464
* Accuracy: 0.9865
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning... |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
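A minimal `question-answering` sketch (the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="lewtun/bert-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```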
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad", "lewtun/autoevaluate__squad"], "model-index": [{"name": "bert-finetuned-squad", "results": []}]} | lewtun/bert-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"dataset:lewtun/autoevaluate__squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #dataset-lewtun/autoevaluate__squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-finetuned-squad
This model is a fine-tuned version of bert-base-cased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| [
"# bert-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"#... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #dataset-lewtun/autoevaluate__squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-finetuned-squad\n\nThis model is a fine-tuned version of bert-base-cased on the squad dataset.",
"## M... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-test-01
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7510
- Accuracy: 0.39
- F1: 0.2188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 2 | 1.7634 | 0.39 | 0.2188 |
| No log | 2.0 | 4 | 1.7510 | 0.39 | 0.2188 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion-test-01", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.39, "name": "Accuracy"}, {"type": "f1", "value": 0.21884892086330932, "name": "F1"}]}]}]} | lewtun/distilbert-base-uncased-finetuned-emotion-test-01 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion-test-01
=================================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7510
* Accuracy: 0.39
* F1: 0.2188
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn... |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7106 | 1.0 | 157 | 2.4854 |
| 2.5716 | 2.0 | 314 | 2.4161 |
| 2.5408 | 3.0 | 471 | 2.4454 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "model-index": [{"name": "distilbert-base-uncased-finetuned-imdb", "results": []}]} | lewtun/distilbert-base-uncased-finetuned-imdb | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #dataset-imdb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-imdb
======================================
This model is a fine-tuned version of distilbert-base-uncased on the imdb dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4286
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_pr... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #dataset-imdb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train... |
question-answering | null |
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
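For reference, a minimal sketch of that metric call — the example id, answer text, and answer offset are dummy values, and newer library stacks would use `evaluate.load("squad")` instead:

```python
from datasets import load_metric  # datasets 1.x API

squad_metric = load_metric("squad")
predictions = [{"id": "0", "prediction_text": "Denver Broncos"}]
references = [{"id": "0", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```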
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["question-answering"], "datasets": ["squad"], "metrics": ["squad"], "thumbnail": "https://github.com/karanchahal/distiller/blob/master/distiller.jpg"} | lewtun/distilbert-base-uncased-finetuned-squad-d5716d28 | null | [
"pytorch",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"1910.01108"
] | [
"en"
] | TAGS
#pytorch #question-answering #en #dataset-squad #arxiv-1910.01108 #license-apache-2.0 #region-us
| DistilBERT with a second step of distillation
=============================================
Model description
-----------------
This model replicates the "DistilBERT (D)" model from Table 2 of the DistilBERT paper. In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: 'distilbert-base-uncased'
* Teacher: 'lewtun/bert-base-uncased-finetuned-squad-v1'
Training data
-------------
This model was trained on the SQuAD v1.1 dataset which can be obtained from the 'datasets' library as follows:
Training procedure
------------------
Eval results
------------
DistilBERT paper: Exact Match 79.1, F1 86.9
Ours: Exact Match 78.4, F1 86.5
The scores were calculated using the 'squad' metric from 'datasets'.
### BibTeX entry and citation info
| [
"### BibTeX entry and citation info"
] | [
"TAGS\n#pytorch #question-answering #en #dataset-squad #arxiv-1910.01108 #license-apache-2.0 #region-us \n",
"### BibTeX entry and citation info"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dummy-translation
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model_index": [{"name": "dummy-translation", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}}]}]} | lewtun/dummy-translation | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# dummy-translation
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ro on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
| [
"# dummy-translation\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ro on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedur... | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# dummy-translation\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ro on an unkown dataset.",
"## Model description\n\nMore information needed",
"#... |
null | transformers |
# LitMetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| {"license": "mit", "tags": ["satflow", "forecasting", "timeseries", "remote-sensing"]} | lewtun/litmetnet-test-01 | null | [
"transformers",
"pytorch",
"satflow",
"forecasting",
"timeseries",
"remote-sensing",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #satflow #forecasting #timeseries #remote-sensing #license-mit #endpoints_compatible #region-us
|
# LitMetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| [
"# LitMetNet",
"## Model description\n\n[More information needed]",
"## Intended uses & limitations\n\n[More information needed]",
"## How to use\n\n[More information needed]",
"## Limitations and bias\n\n[More information needed]",
"## Training data\n\n[More information needed]",
"## Training procedure... | [
"TAGS\n#transformers #pytorch #satflow #forecasting #timeseries #remote-sensing #license-mit #endpoints_compatible #region-us \n",
"# LitMetNet",
"## Model description\n\n[More information needed]",
"## Intended uses & limitations\n\n[More information needed]",
"## How to use\n\n[More information needed]",
... |
translation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6772
- Bleu: 38.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
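A minimal translation sketch (the example sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="lewtun/marian-finetuned-kde4-en-to-fr")
print(translator("Unable to import the requested file format.")[0]["translation_text"])
```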
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "datasets": ["kde4"], "metrics": ["bleu"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 38.988820814501665, "name": "Bleu"}]}]}]} | lewtun/marian-finetuned-kde4-en-to-fr | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #translation #generated_from_trainer #dataset-kde4 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6772
- Bleu: 38.9888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| [
"# marian-finetuned-kde4-en-to-fr\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.6772\n- Bleu: 38.9888",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore infor... | [
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #translation #generated_from_trainer #dataset-kde4 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# marian-finetuned-kde4-en-to-fr\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-e... |
null | transformers |
# Model Card for MetNet
| {"tags": ["autonlp", "evaluation", "benchmark"]} | lewtun/metnet-test-3 | null | [
"transformers",
"pytorch",
"autonlp",
"evaluation",
"benchmark",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #autonlp #evaluation #benchmark #endpoints_compatible #region-us
|
# Model Card for MetNet
| [
"# Model Card for MetNet"
] | [
"TAGS\n#transformers #pytorch #autonlp #evaluation #benchmark #endpoints_compatible #region-us \n",
"# Model Card for MetNet"
] |
null | transformers |
# Model Card for MetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| {"license": "mit", "tags": ["satflow"]} | lewtun/metnet-test-4 | null | [
"transformers",
"pytorch",
"satflow",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #satflow #license-mit #endpoints_compatible #region-us
|
# Model Card for MetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| [
"# Model Card for MetNet",
"## Model description\n\n[More information needed]",
"## Intended uses & limitations\n\n[More information needed]",
"## How to use\n\n[More information needed]",
"## Limitations and bias\n\n[More information needed]",
"## Training data\n\n[More information needed]",
"## Traini... | [
"TAGS\n#transformers #pytorch #satflow #license-mit #endpoints_compatible #region-us \n",
"# Model Card for MetNet",
"## Model description\n\n[More information needed]",
"## Intended uses & limitations\n\n[More information needed]",
"## How to use\n\n[More information needed]",
"## Limitations and bias\n\... |
null | transformers |
# MetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| {"license": "mit", "tags": ["satflow"]} | lewtun/metnet-test-5 | null | [
"transformers",
"pytorch",
"satflow",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #satflow #license-mit #endpoints_compatible #region-us
|
# MetNet
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| [
"# MetNet",
"## Model description\n\n[More information needed]",
"## Intended uses & limitations\n\n[More information needed]",
"## How to use\n\n[More information needed]",
"## Limitations and bias\n\n[More information needed]",
"## Training data\n\n[More information needed]",
"## Training procedure\n\... | [
"TAGS\n#transformers #pytorch #satflow #license-mit #endpoints_compatible #region-us \n",
"# MetNet",
"## Model description\n\n[More information needed]",
"## Intended uses & limitations\n\n[More information needed]",
"## How to use\n\n[More information needed]",
"## Limitations and bias\n\n[More informat... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# minilm-finetuned-emotion
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3891
- F1: 0.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.3957 | 1.0 | 250 | 1.0134 | 0.6088 |
| 0.8715 | 2.0 | 500 | 0.6892 | 0.8493 |
| 0.6085 | 3.0 | 750 | 0.4943 | 0.8920 |
| 0.4626 | 4.0 | 1000 | 0.4096 | 0.9078 |
| 0.3961 | 5.0 | 1250 | 0.3891 | 0.9118 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.6.0
- Datasets 1.15.1
- Tokenizers 0.10.3
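The card above gives no inference snippet; a minimal usage sketch with the standard `transformers` pipeline API (the model id comes from this card, the input sentence and everything else are illustrative assumptions) could look like this:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline("text-classification", model="lewtun/minilm-finetuned-emotion")

# Returns the predicted emotion label and its score for the input text
print(classifier("I am so happy today!"))
```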
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["f1"], "model-index": [{"name": "minilm-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "f1", "value": 0.9117582218338629, "name": "F1"}]}]}]} | lewtun/minilm-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-emotion #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
| minilm-finetuned-emotion
========================
This model is a fine-tuned version of microsoft/MiniLM-L12-H384-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3891
* F1: 0.9118
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.6.0
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_prec... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-emotion #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2... |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-mlsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 1.1475
- Rouge2: 0.1284
- Rougel: 1.0634
- Rougelsum: 1.0778
- Gen Len: 3.7939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| nan | 1.0 | 808 | nan | 1.1475 | 0.1284 | 1.0634 | 1.0778 | 3.7939 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
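No usage example is included in the card; a hedged summarization sketch with the generic `transformers` pipeline API (model id from the card, the Spanish input text is an illustrative assumption) might be:

```python
from transformers import pipeline

# Load the fine-tuned mT5 checkpoint for summarization
summarizer = pipeline("summarization", model="lewtun/mt5-small-finetuned-mlsum")

texto = "El Ayuntamiento ha anunciado hoy un nuevo plan de movilidad urbana para el centro de la ciudad."
# Generate a short summary; note the evaluation loss above is NaN, so outputs may be degenerate
print(summarizer(texto, max_length=48, min_length=5))
```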
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mlsum"], "metrics": ["rouge"], "model-index": [{"name": "mt5-small-finetuned-mlsum", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "mlsum", "type": "mlsum", "args": "es"}, "metrics": [{"type": "rouge", "value": 1.1475, "name": "Rouge1"}]}]}]} | lewtun/mt5-small-finetuned-mlsum | null | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:mlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #dataset-mlsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-small-finetuned-mlsum
=========================
This model is a fine-tuned version of google/mt5-small on the mlsum dataset.
It achieves the following results on the evaluation set:
* Loss: nan
* Rouge1: 1.1475
* Rouge2: 0.1284
* Rougel: 1.0634
* Rougelsum: 1.0778
* Gen Len: 3.7939
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.3
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precis... | [
"TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #dataset-mlsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during tra... |
image-classification | transformers |
# oz-fauna
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dingo

#### koala

#### kookaburra

#### possum

#### tasmanian devil
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | lewtun/oz-fauna | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# oz-fauna
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo.
Report any issues with the demo at the github repo.
## Example Images
#### dingo
!dingo
#### koala
!koala
#### kookaburra
!kookaburra
#### possum
!possum
#### tasmanian devil
!tasmanian devil | [
"# oz-fauna\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### dingo\n\n!dingo",
"#### koala\n\n!koala",
"#### kookaburra\n\n!kookaburra",
"#### possum\n\n!possum",
"... | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# oz-fauna\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo.\n\nReport any issues with the demo at the... |
null | transformers |
# Perceiver
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| {"license": "mit", "tags": ["satflow", "forecasting", "timeseries", "remote-sensing"]} | lewtun/perceriver-test-01 | null | [
"transformers",
"pytorch",
"satflow",
"forecasting",
"timeseries",
"remote-sensing",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #satflow #forecasting #timeseries #remote-sensing #license-mit #endpoints_compatible #region-us
|
# Perceiver
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
| [
"# Perceiver",
"## Model description\n\n[More information needed]",
"## Intended uses & limitations\n\n[More information needed]",
"## How to use\n\n[More information needed]",
"## Limitations and bias\n\n[More information needed]",
"## Training data\n\n[More information needed]",
"## Training procedure... | [
"TAGS\n#transformers #pytorch #satflow #forecasting #timeseries #remote-sensing #license-mit #endpoints_compatible #region-us \n",
"# Perceiver",
"## Model description\n\n[More information needed]",
"## Intended uses & limitations\n\n[More information needed]",
"## How to use\n\n[More information needed]",
... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.925
- F1: 0.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8221 | 1.0 | 250 | 0.3106 | 0.9125 | 0.9102 |
| 0.2537 | 2.0 | 500 | 0.2147 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
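For readers who want to reproduce the hyperparameters listed above, a rough `TrainingArguments` equivalent (a sketch under the stated settings, not the exact script used to train this model; `output_dir` is an arbitrary choice) would be:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in this card
training_args = TrainingArguments(
    output_dir="results",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```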
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "results", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.925, "name": "Accuracy"}, {"type": "f1", "value": 0.9251012149383893, "name": "F1"}]}]}]} | lewtun/results | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| results
=======
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2147
* Accuracy: 0.925
* F1: 0.9251
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.1+cu102
* Datasets 1.13.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi
This model was trained from scratch on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3595
- Accuracy: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.103 | 1.0 | 1250 | 0.2864 | 0.928 |
| 0.0407 | 2.0 | 2500 | 0.3595 | 0.9285 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model_index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9285}}]}]} | lewtun/roberta-base-bne-finetuned-amazon_reviews_multi-finetuned-amazon_reviews_multi | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #autotrain_compatible #endpoints_compatible #region-us
| roberta-base-bne-finetuned-amazon\_reviews\_multi-finetuned-amazon\_reviews\_multi
==================================================================================
This model was trained from scratch on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3595
* Accuracy: 0.9285
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* tr... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2306
- Accuracy: 0.9307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1978 | 1.0 | 1250 | 0.1750 | 0.9325 |
| 0.0951 | 2.0 | 2500 | 0.2306 | 0.9307 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model_index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.93075}}]}]} | lewtun/roberta-base-bne-finetuned-amazon_reviews_multi | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| roberta-base-bne-finetuned-amazon\_reviews\_multi
=================================================
This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2306
* Accuracy: 0.9307
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\... |
null | superb |
# Test for superb using hubert downstream SD
## Usage
```python
import io
import soundfile as sf
from urllib.request import urlopen

# PreTrainedModel is defined in this repository's model.py
from model import PreTrainedModel

model = PreTrainedModel()

# Download a sample WAV file and decode it into a numpy array
url = "https://huggingface.co/datasets/lewtun/s3prl-sd-dummy/raw/main/audio.wav"
data, samplerate = sf.read(io.BytesIO(urlopen(url).read()))

# Run speaker diarization on the raw waveform
print(model(data))

``` | {"library_name": "superb", "tags": ["superb", "speaker-diarization", "benchmark:superb"]} | lewtun/s3prl-sd-hubert-dummy | null | [
"superb",
"speaker-diarization",
"benchmark:superb",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #speaker-diarization #benchmark-superb #region-us
|
# Test for superb using hubert downstream SD
## Usage
| [
"# Test for superb using hubert downstream SD",
"## Usage"
] | [
"TAGS\n#superb #speaker-diarization #benchmark-superb #region-us \n",
"# Test for superb using hubert downstream SD",
"## Usage"
] |
null | null | # This is a test! | {} | lewtun/superb-dummy-asr-push-to-hub | null | [
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#region-us
| # This is a test! | [
"# This is a test!"
] | [
"TAGS\n#region-us \n",
"# This is a test!"
] |
null | null | Here is some latex:
$ \LaTeX $
$$ \frac{\mathrm{A\,fox}}{23} $$ | {} | lewtun/superb-dummy-asr | null | [
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#region-us
| Here is some latex:
$ \LaTeX $
$$ \frac{\mathrm{A\,fox}}{23} $$ | [] | [
"TAGS\n#region-us \n"
] |
automatic-speech-recognition | superb |
# Test for s3prl push to hub after fine-tuning | {"library_name": "superb", "tags": ["superb", "automatic-speech-recognition"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-hubert-asr | null | [
"superb",
"automatic-speech-recognition",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #automatic-speech-recognition #region-us
|
# Test for s3prl push to hub after fine-tuning | [
"# Test for s3prl push to hub after fine-tuning"
] | [
"TAGS\n#superb #automatic-speech-recognition #region-us \n",
"# Test for s3prl push to hub after fine-tuning"
] |
automatic-speech-recognition | superb |
# Fine-tuned s3prl model for ASR | {"library_name": "superb", "tags": ["automatic-speech-recognition", "osanseviero/hubert_base"], "datasets": ["superb"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-osanseviero__hubert_base-asr-50f7ee76 | null | [
"superb",
"tensorboard",
"automatic-speech-recognition",
"osanseviero/hubert_base",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #automatic-speech-recognition #osanseviero/hubert_base #dataset-superb #region-us
|
# Fine-tuned s3prl model for ASR | [
"# Fine-tuned s3prl model for ASR"
] | [
"TAGS\n#superb #tensorboard #automatic-speech-recognition #osanseviero/hubert_base #dataset-superb #region-us \n",
"# Fine-tuned s3prl model for ASR"
] |
automatic-speech-recognition | superb |
# Test for s3prl push to hub after fine-tuning | {"library_name": "superb", "tags": ["superb", "automatic-speech-recognition"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-osanseviero__hubert_base-asr-67be9268 | null | [
"superb",
"tensorboard",
"automatic-speech-recognition",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #automatic-speech-recognition #region-us
|
# Test for s3prl push to hub after fine-tuning | [
"# Test for s3prl push to hub after fine-tuning"
] | [
"TAGS\n#superb #tensorboard #automatic-speech-recognition #region-us \n",
"# Test for s3prl push to hub after fine-tuning"
] |
automatic-speech-recognition | superb |
# Test for s3prl push to hub after fine-tuning | {"library_name": "superb", "tags": ["superb", "automatic-speech-recognition"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-osanseviero__hubert_base-asr-700ddb7b | null | [
"superb",
"tensorboard",
"automatic-speech-recognition",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #automatic-speech-recognition #region-us
|
# Test for s3prl push to hub after fine-tuning | [
"# Test for s3prl push to hub after fine-tuning"
] | [
"TAGS\n#superb #tensorboard #automatic-speech-recognition #region-us \n",
"# Test for s3prl push to hub after fine-tuning"
] |
automatic-speech-recognition | superb |
# Test for s3prl push to hub after fine-tuning | {"library_name": "superb", "tags": ["superb", "automatic-speech-recognition"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-osanseviero__hubert_base-asr-a03c2ae5 | null | [
"superb",
"tensorboard",
"automatic-speech-recognition",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #automatic-speech-recognition #region-us
|
# Test for s3prl push to hub after fine-tuning | [
"# Test for s3prl push to hub after fine-tuning"
] | [
"TAGS\n#superb #tensorboard #automatic-speech-recognition #region-us \n",
"# Test for s3prl push to hub after fine-tuning"
] |
automatic-speech-recognition | superb |
# Fine-tuned s3prl model for ASR | {"library_name": "superb", "tags": ["automatic-speech-recognition", "osanseviero/hubert_base"], "datasets": ["superb"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-osanseviero__hubert_base-asr-ca6de67e | null | [
"superb",
"tensorboard",
"automatic-speech-recognition",
"osanseviero/hubert_base",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #automatic-speech-recognition #osanseviero/hubert_base #dataset-superb #region-us
|
# Fine-tuned s3prl model for ASR | [
"# Fine-tuned s3prl model for ASR"
] | [
"TAGS\n#superb #tensorboard #automatic-speech-recognition #osanseviero/hubert_base #dataset-superb #region-us \n",
"# Fine-tuned s3prl model for ASR"
] |
automatic-speech-recognition | superb |
# Fine-tuned s3prl model for ASR | {"library_name": "superb", "tags": ["automatic-speech-recognition", "osanseviero/hubert_base"], "datasets": ["superb"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-osanseviero__hubert_base-asr-cbcd177a | null | [
"superb",
"tensorboard",
"automatic-speech-recognition",
"osanseviero/hubert_base",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #automatic-speech-recognition #osanseviero/hubert_base #dataset-superb #region-us
|
# Fine-tuned s3prl model for ASR | [
"# Fine-tuned s3prl model for ASR"
] | [
"TAGS\n#superb #tensorboard #automatic-speech-recognition #osanseviero/hubert_base #dataset-superb #region-us \n",
"# Fine-tuned s3prl model for ASR"
] |
automatic-speech-recognition | superb |
# Test for s3prl push to hub after fine-tuning | {"library_name": "superb", "tags": ["superb", "automatic-speech-recognition"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-osanseviero__hubert_base-asr | null | [
"superb",
"automatic-speech-recognition",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #automatic-speech-recognition #region-us
|
# Test for s3prl push to hub after fine-tuning | [
"# Test for s3prl push to hub after fine-tuning"
] | [
"TAGS\n#superb #automatic-speech-recognition #region-us \n",
"# Test for s3prl push to hub after fine-tuning"
] |
null | superb |
# Fine-tuned s3prl model for SD | {"library_name": "superb", "tags": ["speaker-diarization", "osanseviero/hubert_base"], "datasets": ["superb"], "benchmark": "superb", "task": "sd"} | lewtun/superb-s3prl-osanseviero__hubert_base-diarization-7f28b8b5 | null | [
"superb",
"tensorboard",
"speaker-diarization",
"osanseviero/hubert_base",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #speaker-diarization #osanseviero/hubert_base #dataset-superb #region-us
|
# Fine-tuned s3prl model for SD | [
"# Fine-tuned s3prl model for SD"
] | [
"TAGS\n#superb #tensorboard #speaker-diarization #osanseviero/hubert_base #dataset-superb #region-us \n",
"# Fine-tuned s3prl model for SD"
] |
automatic-speech-recognition | superb |
# Fine-tuned s3prl model for ASR | {"library_name": "superb", "tags": ["automatic-speech-recognition", "superb-test-org/test-submission-with-weights"], "datasets": ["superb"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-superb-test-org__test-submission-with-weights-asr-ceaac01d | null | [
"superb",
"tensorboard",
"automatic-speech-recognition",
"superb-test-org/test-submission-with-weights",
"dataset:superb",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #tensorboard #automatic-speech-recognition #superb-test-org/test-submission-with-weights #dataset-superb #region-us
|
# Fine-tuned s3prl model for ASR | [
"# Fine-tuned s3prl model for ASR"
] | [
"TAGS\n#superb #tensorboard #automatic-speech-recognition #superb-test-org/test-submission-with-weights #dataset-superb #region-us \n",
"# Fine-tuned s3prl model for ASR"
] |
automatic-speech-recognition | superb |
# Test for s3prl push to hub after fine-tuning | {"library_name": "superb", "tags": ["superb", "automatic-speech-recognition"], "benchmark": "superb", "task": "asr", "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}]} | lewtun/superb-s3prl-wav2vec2-asr | null | [
"superb",
"automatic-speech-recognition",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#superb #automatic-speech-recognition #region-us
|
# Test for s3prl push to hub after fine-tuning | [
"# Test for s3prl push to hub after fine-tuning"
] | [
"TAGS\n#superb #automatic-speech-recognition #region-us \n",
"# Test for s3prl push to hub after fine-tuning"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9934
- Mae: 0.4867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1514 | 1.0 | 308 | 1.0455 | 0.5221 |
| 0.9997 | 2.0 | 616 | 0.9934 | 0.4867 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-de", "results": []}]} | lewtun/xlm-roberta-base-finetuned-marc-de | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-marc-de
==================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9934
* Mae: 0.4867
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en-dummy
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8931
- Mae: 0.4634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1258 | 1.0 | 235 | 0.9538 | 0.4390 |
| 0.9445 | 2.0 | 470 | 0.8931 | 0.4634 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en-dummy", "results": []}]} | lewtun/xlm-roberta-base-finetuned-marc-en-dummy | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-marc-en-dummy
========================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8931
* Mae: 0.4634
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en-hslu
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8826
- Mae: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1121 | 1.0 | 235 | 0.9400 | 0.5732 |
| 0.9487 | 2.0 | 470 | 0.8826 | 0.5 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en-hslu", "results": []}]} | lewtun/xlm-roberta-base-finetuned-marc-en-hslu | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-marc-en-hslu
=======================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8826
* Mae: 0.5
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8850
- Mae: 0.4390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1589 | 1.0 | 235 | 0.9769 | 0.5122 |
| 0.974 | 2.0 | 470 | 0.8850 | 0.4390 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
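The card reports MAE on star ratings but no inference snippet; a hedged sketch using the generic Auto classes (the review text is made up, and the interpretation of class ids as ratings is an assumption, since the card does not document the label mapping):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "lewtun/xlm-roberta-base-finetuned-marc-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I loved this product, would buy again!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The index of the highest logit is the predicted class id
print(logits.argmax(dim=-1).item())
```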
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]} | lewtun/xlm-roberta-base-finetuned-marc-en | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| xlm-roberta-base-finetuned-marc-en
==================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8850
* Mae: 0.4390
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.1+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* ... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9932
- Mae: 0.4838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.05 | 1.0 | 860 | 1.0007 | 0.5074 |
| 0.9166 | 2.0 | 1720 | 0.9932 | 0.4838 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc", "results": []}]} | lewtun/xlm-roberta-base-finetuned-marc | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
| xlm-roberta-base-finetuned-marc
===============================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9932
* Mae: 0.4838
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.1+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* ... |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-portuguese-ner-archive
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)
It achieves the following results on the evaluation set:
- Loss: 0.1140
- Precision: 0.9147
- Recall: 0.9483
- F1: 0.9312
- Accuracy: 0.9700
## Model description
This model was fine-tuned on a token classification task (NER) on Portuguese archival documents. The annotated labels are: Date, Profession, Person, Place, Organization.
### Datasets
All the training and evaluation data is available at: http://ner.epl.di.uminho.pt/
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 192 | 0.1438 | 0.8917 | 0.9392 | 0.9148 | 0.9633 |
| 0.2454 | 2.0 | 384 | 0.1222 | 0.8985 | 0.9417 | 0.9196 | 0.9671 |
| 0.0526 | 3.0 | 576 | 0.1098 | 0.9150 | 0.9481 | 0.9312 | 0.9698 |
| 0.0372 | 4.0 | 768 | 0.1140 | 0.9147 | 0.9483 | 0.9312 | 0.9700 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
### Citation
```bibtex
@Article{make4010003,
AUTHOR = {Cunha, Luís Filipe and Ramalho, José Carlos},
TITLE = {NER in Archival Finding Aids: Extended},
JOURNAL = {Machine Learning and Knowledge Extraction},
VOLUME = {4},
YEAR = {2022},
NUMBER = {1},
PAGES = {42--65},
URL = {https://www.mdpi.com/2504-4990/4/1/3},
ISSN = {2504-4990},
ABSTRACT = {The amount of information preserved in Portuguese archives has increased over the years. These documents represent a national heritage of high importance, as they portray the country’s history. Currently, most Portuguese archives have made their finding aids available to the public in digital format, however, these data do not have any annotation, so it is not always easy to analyze their content. In this work, Named Entity Recognition solutions were created that allow the identification and classification of several named entities from the archival finding aids. These named entities translate into crucial information about their context and, with high confidence results, they can be used for several purposes, for example, the creation of smart browsing tools by using entity linking and record linking techniques. In order to achieve high result scores, we annotated several corpora to train our own Machine Learning algorithms in this context domain. We also used different architectures, such as CNNs, LSTMs, and Maximum Entropy models. Finally, all the created datasets and ML models were made available to the public with a developed web platform, NER@DI.},
DOI = {10.3390/make4010003}
}
```
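The card lists the entity types (Date, Profession, Person, Place, Organization) but no inference code; a minimal sketch with the standard token-classification pipeline (the Portuguese example sentence is an assumption, not taken from the dataset):

```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="lfcc/bert-portuguese-ner-archive",
    aggregation_strategy="simple",
)

print(ner("Manuel de Sousa, tabelião em Braga, lavrou a escritura em 1612."))
```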
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-portuguese-ner-archive", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9700325118974698}}]}]} | lfcc/bert-portuguese-ner-archive | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
| bert-portuguese-ner-archive
===========================
This model is a fine-tuned version of neuralmind/bert-base-portuguese-cased
It achieves the following results on the evaluation set:
* Loss: 0.1140
* Precision: 0.9147
* Recall: 0.9483
* F1: 0.9312
* Accuracy: 0.9700
Model description
-----------------
This model was fine-tuned on a token classification task (NER) on Portuguese archival documents. The annotated labels are: Date, Profession, Person, Place, Organization.
### Datasets
All the training and evaluation data is available at: URL
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.10.0.dev0
* Pytorch 1.9.0+cu111
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Datasets\n\n\nAll the training and evaluation data is available at: URL",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and ... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Datasets\n\n\nAll the training and evaluation data is available at: URL",
"### Training hyperparameters\n\n\nThe following hyperparameters ... |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-pt-archive
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0869
- Precision: 0.9280
- Recall: 0.9541
- F1: 0.9409
- Accuracy: 0.9767
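For quick reference, a minimal inference sketch (a generic `transformers` token-classification recipe, not taken from the training code; the sentence below is invented):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "lfcc/bert-large-pt-archive"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Invented example sentence in Portuguese
sentence = "Auto de medição de uma propriedade em Braga, lavrado em 1802."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id.item()])
```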
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0665 | 1.0 | 765 | 0.1020 | 0.8928 | 0.9566 | 0.9236 | 0.9696 |
| 0.0392 | 2.0 | 1530 | 0.0781 | 0.9229 | 0.9586 | 0.9404 | 0.9757 |
| 0.0201 | 3.0 | 2295 | 0.0809 | 0.9278 | 0.9550 | 0.9412 | 0.9767 |
| 0.0152 | 4.0 | 3060 | 0.0869 | 0.9280 | 0.9541 | 0.9409 | 0.9767 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-large-pt-archive", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9766762474673703}}]}]} | lfcc/bert-large-pt-archive | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
| bert-large-pt-archive
=====================
This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0869
* Precision: 0.9280
* Recall: 0.9541
* F1: 0.9409
* Accuracy: 0.9767
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.10.0.dev0
* Pytorch 1.9.0+cu111
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size:... |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# portuguese-archival-finding-aids
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1812
- Precision: 0.8624
- Recall: 0.9557
- F1: 0.9067
- Accuracy: 0.9618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 192 | 0.1565 | 0.8511 | 0.9327 | 0.8900 | 0.9563 |
| 0.1849 | 2.0 | 384 | 0.1594 | 0.8634 | 0.9543 | 0.9065 | 0.9619 |
| 0.0454 | 3.0 | 576 | 0.1812 | 0.8624 | 0.9557 | 0.9067 | 0.9618 |
### Framework versions
- Transformers 4.10.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "portuguese-archival-finding-aids", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9617770479839446}}]}]} | lfcc/bert-multilingual-pt-archive | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| portuguese-archival-finding-aids
================================
This model is a fine-tuned version of bert-base-multilingual-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1812
* Precision: 0.8624
* Recall: 0.9557
* F1: 0.9067
* Accuracy: 0.9618
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.10.0.dev0
* Pytorch 1.9.0+cu111
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\... |
text-generation | transformers | # This model is probably not what you're looking for. | {} | lg/fexp_1 | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
| # This model is probably not what you're looking for. | [
"# This model is probably not what you're looking for."
] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"# This model is probably not what you're looking for."
] |
text-generation | transformers | **This model is provided with no guarantees whatsoever; use at your own risk.**
This is a Neo2.7B model fine-tuned on GitHub data scraped by an EleutherAI member (filtered for Python only) for 20k steps. A better code model is coming soon™ (hopefully, maybe); this model was created mostly as a test of infrastructure code. | {} | lg/ghpy_20k | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
| This model is provided with no guarantees whatsoever; use at your own risk.
This is a Neo2.7B model fine-tuned on GitHub data scraped by an EleutherAI member (filtered for Python only) for 20k steps. A better code model is coming soon™ (hopefully, maybe); this model was created mostly as a test of infrastructure code. | [] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | transformers | # This model is probably not what you're looking for. | {} | lg/ghpy_40k | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #endpoints_compatible #region-us
| # This model is probably not what you're looking for. | [
"# This model is probably not what you're looking for."
] | [
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n",
"# This model is probably not what you're looking for."
] |
text-generation | transformers | # This model is probably not what you're looking for. | {} | lg/openinstruct_1k1 | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
| # This model is probably not what you're looking for. | [
"# This model is probably not what you're looking for."
] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"# This model is probably not what you're looking for."
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WavLM-large-CORAA-pt
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the [CORAA dataset](https://github.com/nilc-nlp/CORAA).
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Wer: 0.3840
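For reference, a minimal transcription sketch (this assumes the checkpoint exposes a CTC head and a Wav2Vec2-style processor, which is the usual setup for fine-tuned WavLM ASR models, and `example.wav` is a placeholder path; it is not code from the training run):

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, WavLMForCTC

model_name = "lgris/WavLM-large-CORAA-pt"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = WavLMForCTC.from_pretrained(model_name)

# Load a placeholder audio file and resample it to the 16 kHz expected by the model
speech, sr = torchaudio.load("example.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```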
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
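For readers who want to set up a comparable run, the list above maps roughly onto the standard `TrainingArguments` fields (an illustrative sketch, not the original training script; `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="wavlm-large-coraa-pt",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=40000,
    fp16=True,                       # mixed precision (native AMP)
)
```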
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.04 | 1000 | 1.9230 | 0.9960 |
| 5.153 | 0.08 | 2000 | 1.3733 | 0.8444 |
| 5.153 | 0.13 | 3000 | 1.1992 | 0.7362 |
| 1.367 | 0.17 | 4000 | 1.1289 | 0.6957 |
| 1.367 | 0.21 | 5000 | 1.0357 | 0.6470 |
| 1.1824 | 0.25 | 6000 | 1.0216 | 0.6201 |
| 1.1824 | 0.29 | 7000 | 0.9338 | 0.6036 |
| 1.097 | 0.33 | 8000 | 0.9149 | 0.5760 |
| 1.097 | 0.38 | 9000 | 0.8885 | 0.5541 |
| 1.0254 | 0.42 | 10000 | 0.8678 | 0.5366 |
| 1.0254 | 0.46 | 11000 | 0.8349 | 0.5323 |
| 0.9782 | 0.5 | 12000 | 0.8230 | 0.5155 |
| 0.9782 | 0.54 | 13000 | 0.8245 | 0.5049 |
| 0.9448 | 0.59 | 14000 | 0.7802 | 0.4990 |
| 0.9448 | 0.63 | 15000 | 0.7650 | 0.4900 |
| 0.9092 | 0.67 | 16000 | 0.7665 | 0.4796 |
| 0.9092 | 0.71 | 17000 | 0.7568 | 0.4795 |
| 0.8764 | 0.75 | 18000 | 0.7403 | 0.4615 |
| 0.8764 | 0.8 | 19000 | 0.7219 | 0.4644 |
| 0.8498 | 0.84 | 20000 | 0.7180 | 0.4502 |
| 0.8498 | 0.88 | 21000 | 0.7017 | 0.4436 |
| 0.8278 | 0.92 | 22000 | 0.6992 | 0.4395 |
| 0.8278 | 0.96 | 23000 | 0.7021 | 0.4329 |
| 0.8077 | 1.0 | 24000 | 0.6892 | 0.4265 |
| 0.8077 | 1.05 | 25000 | 0.6940 | 0.4248 |
| 0.7486 | 1.09 | 26000 | 0.6767 | 0.4202 |
| 0.7486 | 1.13 | 27000 | 0.6734 | 0.4150 |
| 0.7459 | 1.17 | 28000 | 0.6650 | 0.4152 |
| 0.7459 | 1.21 | 29000 | 0.6559 | 0.4078 |
| 0.7304 | 1.26 | 30000 | 0.6536 | 0.4088 |
| 0.7304 | 1.3 | 31000 | 0.6537 | 0.4025 |
| 0.7183 | 1.34 | 32000 | 0.6462 | 0.4008 |
| 0.7183 | 1.38 | 33000 | 0.6381 | 0.3973 |
| 0.7059 | 1.42 | 34000 | 0.6266 | 0.3930 |
| 0.7059 | 1.46 | 35000 | 0.6280 | 0.3921 |
| 0.6983 | 1.51 | 36000 | 0.6248 | 0.3897 |
| 0.6983 | 1.55 | 37000 | 0.6275 | 0.3872 |
| 0.6892 | 1.59 | 38000 | 0.6199 | 0.3852 |
| 0.6892 | 1.63 | 39000 | 0.6180 | 0.3842 |
| 0.691 | 1.67 | 40000 | 0.6144 | 0.3840 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| {"language": ["pt"], "license": "apache-2.0", "tags": ["generated_from_trainer", "pt"], "model-index": [{"name": "WavLM-large-CORAA-pt", "results": []}]} | lgris/WavLM-large-CORAA-pt | null | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"generated_from_trainer",
"pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wavlm #automatic-speech-recognition #generated_from_trainer #pt #license-apache-2.0 #endpoints_compatible #region-us
| WavLM-large-CORAA-pt
====================
This model is a fine-tuned version of microsoft/wavlm-large on the CORAA dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6144
* Wer: 0.3840
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* training\_steps: 40000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=... | [
"TAGS\n#transformers #pytorch #wavlm #automatic-speech-recognition #generated_from_trainer #pt #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_b... |
automatic-speech-recognition | transformers |
# Wav2vec 2.0 for Portuguese in 8kHz
This is a fine-tuned model from [facebook/wav2vec2-base-10k-voxpopuli](https://huggingface.co/facebook/wav2vec2-base-10k-voxpopuli)
Datasets used to fine-tune the model:
CETUC: contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the CETEN-Folha corpus.
Common Voice 7.0: is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages. In this project, volunteers donate and validate speech using the official site.
Lapsbm: "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control.
Multilingual Librispeech (MLS): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like LibriVox. The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese used in this work (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
Multilingual TEDx: a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
Sidney (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old with fields such as place of birth, age, gender, education, and occupation;
VoxForge: is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz
VoxPopuli | {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/base_10k_8khz_pt | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
|
# Wav2vec 2.0 for Portuguese in 8kHz
This is a fine-tuned model from facebook/wav2vec2-base-10k-voxpopuli
Datasets used to fine-tune the model:
CETUC: contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the CETEN-Folha corpus.
Common Voice 7.0: is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages. In this project, volunteers donate and validate speech using the official site.
Lapsbm: "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control.
Multilingual Librispeech (MLS): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like LibriVox. The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese used in this work (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
Multilingual TEDx: a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
Sidney (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old with fields such as place of birth, age, gender, education, and occupation;
VoxForge: is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz
VoxPopuli | [
"# Wav2vec 2.0 for Portuguese in 8kHz\n\nThis is a fine-tuned model from facebook/wav2vec2-base-10k-voxpopuli\n\nDatasets used to fine-tune the model:\nCETUC: contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonet... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Wav2vec 2.0 for Po... |
automatic-speech-recognition | transformers |
# cetuc100-xlsr: Wav2vec 2.0 with CETUC Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz) dataset. This dataset contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 94h | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| cetuc\_100 (demonstration below)| 0.446 | 0.856 | 0.089 | 0.967 | 1.172 | 0.929 | 0.902 | 0.765 |
| cetuc\_100 + 4-gram (demonstration below)|0.339 | 0.734 | 0.076 | 0.961 | 1.188 | 1.227 | 0.801 | 0.760 |
## Demonstration
```python
MODEL_NAME = "lgris/cetuc100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.44677581829220825
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.8561919899139065
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.08955808080808081
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.9670008790979718
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 1.1723738343632861
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.929976436317539
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.9020183982683985
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.3396346663354827
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.7341013242719512
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07612373737373737
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.960908940243212
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 1.188118540533579
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 1.2271077178339618
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.800196158008658
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/bp-cetuc100-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
| cetuc100-xlsr: Wav2vec 2.0 with CETUC Dataset
=============================================
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the CETUC dataset. This dataset contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the CETEN-Folha corpus.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
#### Summary
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
### Tests
#### CETUC
```
CETUC WER: 0.44677581829220825
```
#### Common Voice
```
CV WER: 0.8561919899139065
```
#### LaPS
```
Laps WER: 0.08955808080808081
```
#### MLS
```
MLS WER: 0.9670008790979718
```
#### SID
```
Sid WER: 1.1723738343632861
```
#### TEDx
```
TEDx WER: 0.929976436317539
```
#### VoxForge
```
VoxForge WER: 0.9020183982683985
```
### Tests with LM
#### CETUC
```
CETUC WER: 0.3396346663354827
```
#### Common Voice
```
CV WER: 0.7341013242719512
```
#### LaPS
```
Laps WER: 0.07612373737373737
```
#### MLS
```
MLS WER: 0.960908940243212
```
#### SID
```
Sid WER: 1.188118540533579
```
#### TEDx
```
TEDx WER: 1.2271077178339618
```
#### VoxForge
```
VoxForge WER: 0.800196158008658
```
| [
"#### Summary\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.44677581829220825\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.8561919899139065\n\n```",
"#### LaPS\n\n\n\n`... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Summary\n\n\n\n... |
automatic-speech-recognition | transformers |
# commonvoice10-xlsr: Wav2vec 2.0 with Common Voice Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [Common Voice 7.0](https://commonvoice.mozilla.org/pt) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | 37.8h | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| commonvoice10 (demonstration below) | 0.133 | 0.189 | 0.165 | 0.189 | 0.247 | 0.474 | 0.251 | 0.235 |
| commonvoice10 + 4-gram (demonstration below) | 0.060 | 0.117 | 0.088 | 0.136 | 0.181 | 0.394 | 0.227 | 0.171 |
## Demonstration
```python
MODEL_NAME = "lgris/commonvoice10-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.13291846056190185
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.18909733896486755
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.1655429292929293
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1894711228284466
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.2471983709551264
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.4739658565194102
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.2510294913419914
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.060609303416680915
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.11758415681158373
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.08815340909090909
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1359966791836458
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1818429601530829
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.39469326522731385
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.22779897186147183
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/bp-commonvoice10-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
| commonvoice10-xlsr: Wav2vec 2.0 with Common Voice Dataset
=========================================================
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the Common Voice 7.0 dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
#### Summary
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
### Tests
#### CETUC
```
CETUC WER: 0.13291846056190185
```
#### Common Voice
```
CV WER: 0.18909733896486755
```
#### LaPS
```
Laps WER: 0.1655429292929293
```
#### MLS
```
MLS WER: 0.1894711228284466
```
#### SID
```
Sid WER: 0.2471983709551264
```
#### TEDx
```
TEDx WER: 0.4739658565194102
```
#### VoxForge
```
VoxForge WER: 0.2510294913419914
```
### Tests with LM
#### CETUC
```
CETUC WER: 0.060609303416680915
```
#### Common Voice
```
CV WER: 0.11758415681158373
```
#### LaPS
```
Laps WER: 0.08815340909090909
```
#### MLS
```
MLS WER: 0.1359966791836458
```
#### SID
```
Sid WER: 0.1818429601530829
```
#### TEDx
```
TEDx WER: 0.39469326522731385
```
#### VoxForge
```
VoxForge WER: 0.22779897186147183
```
| [
"#### Summary\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.13291846056190185\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.18909733896486755\n\n```",
"#### LaPS\n\n\n\n... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Summary\n\n\n\n... |
automatic-speech-recognition | transformers |
# commonvoice100-xlsr: Wav2vec 2.0 with Common Voice Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [Common Voice 7.0](https://commonvoice.mozilla.org/pt) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | 37.8h | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| commonvoice\_100 (demonstration below) |0.088 | 0.126 | 0.121 | 0.173 | 0.177 | 0.424 | 0.145 | 0.179 |
| commonvoice\_100 + 4-gram (demonstration below) |0.057 | 0.095 | 0.076 | 0.138 | 0.146 | 0.382 | 0.130 | 0.146|
## Demonstration
```python
MODEL_NAME = "lgris/commonvoice100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.08868880057404624
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.12601035333655114
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.12149621212121209
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.173594387890256
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1775290775992294
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.4245704568241374
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.14541801948051947
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.05764220069547976
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09569130510737103
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07688131313131312
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.13814768877494732
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.14652459944499036
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.38196090002435623
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.13054112554112554
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/bp-commonvoice100-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
| commonvoice100-xlsr: Wav2vec 2.0 with Common Voice Dataset
==========================================================
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the Common Voice 7.0 dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
#### Summary
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
### Tests
#### CETUC
```
CETUC WER: 0.08868880057404624
```
#### Common Voice
```
CV WER: 0.12601035333655114
```
#### LaPS
```
Laps WER: 0.12149621212121209
```
#### MLS
```
MLS WER: 0.173594387890256
```
#### SID
```
Sid WER: 0.1775290775992294
```
#### TEDx
```
TEDx WER: 0.4245704568241374
```
#### VoxForge
```
VoxForge WER: 0.14541801948051947
```
### Tests with LM
#### CETUC
```
CETUC WER: 0.05764220069547976
```
#### Common Voice
```
CV WER: 0.09569130510737103
```
#### LaPS
```
Laps WER: 0.07688131313131312
```
#### MLS
```
MLS WER: 0.13814768877494732
```
#### SID
```
Sid WER: 0.14652459944499036
```
#### TEDx
```
TEDx WER: 0.38196090002435623
```
#### VoxForge
```
VoxForge WER: 0.13054112554112554
```
| [
"#### Summary\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.08868880057404624\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.12601035333655114\n\n```",
"#### LaPS\n\n\n\n... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Summary\n\n\n\n... |
automatic-speech-recognition | transformers |
# lapsbm1-xlsr: Wav2vec 2.0 with LaPSBM Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [LaPS BM](https://github.com/falabrasil/gitlab-resources) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| lapsbm1\_100 (demonstration below) | 0.111 | 0.418 | 0.145 | 0.299 | 0.562 | 0.580 | 0.469 | 0.369 |
| lapsbm1\_100 + 4-gram (demonstration below) | 0.061 | 0.305 | 0.089 | 0.201 | 0.452 | 0.525 | 0.381 | 0.287 |
## Demonstration
```python
MODEL_NAME = "lgris/lapsbm1-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.11147816967489037
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.41880890234535906
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.1451893939393939
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.29958960206171104
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.5626767414610376
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.5807549973642049
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.4693479437229436
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.06157628194513477
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.3051714756833442
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.0893623737373737
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.20062044237806004
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.4522665618175908
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.5256707813182246
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.38106331168831165
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/bp-lapsbm1-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
| lapsbm1-xlsr: Wav2vec 2.0 with LaPSBM Dataset
=============================================
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the LaPS BM dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
#### Summary
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
### Tests
#### CETUC
```
CETUC WER: 0.11147816967489037
```
#### Common Voice
```
CV WER: 0.41880890234535906
```
#### LaPS
```
Laps WER: 0.1451893939393939
```
#### MLS
```
MLS WER: 0.29958960206171104
```
#### SID
```
Sid WER: 0.5626767414610376
```
#### TEDx
```
TEDx WER: 0.5807549973642049
```
#### VoxForge
```
VoxForge WER: 0.4693479437229436
```
### Tests with LM
#### CETUC
```
CETUC WER: 0.06157628194513477
```
#### Common Voice
```
CV WER: 0.3051714756833442
```
#### LaPS
```
Laps WER: 0.0893623737373737
```
#### MLS
```
MLS WER: 0.20062044237806004
```
#### SID
```
Sid WER: 0.4522665618175908
```
#### TEDx
```
TEDx WER: 0.5256707813182246
```
#### VoxForge
```
VoxForge WER: 0.38106331168831165
```
| [
"#### Summary\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.11147816967489037\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.41880890234535906\n\n```",
"#### LaPS\n\n\n\n... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Summary\n\n\n\n... |
automatic-speech-recognition | transformers |
# mls100-xlsr: Wav2vec 2.0 with MLS Dataset
This is a the demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [Multilingual Librispeech in Portuguese (MLS)](http://www.openslr.org/94/) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | 161h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | 161h | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| mls100 (demonstration below) | 0.192 | 0.260 | 0.162 | 0.163 | 0.268 | 0.492 | 0.268 | 0.258 |
| mls100 + 4-gram (demonstration below) | 0.087 | 0.173 | 0.077 | 0.126 | 0.245 | 0.415 | 0.218 | 0.191 |
## Demonstration
```python
MODEL_NAME = "lgris/mls100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
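
`load_data` above expects each prepared test split to be a plain CSV manifest. A small sketch of the assumed layout, with column names taken from the helpers above and hypothetical paths and sentences:

```python
# Hypothetical test.csv consumed by load_data: one row per utterance with a
# "path" column (WAV location) and a "sentence" column (reference transcript).
import csv

rows = [
    {"path": "mls_dataset/audio/0001.wav", "sentence": "exemplo de transcrição"},
    {"path": "mls_dataset/audio/0002.wav", "sentence": "outra frase de teste"},
]
with open("mls_dataset/test.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["path", "sentence"])
    writer.writeheader()
    writer.writerows(rows)
```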
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
```python
%cd bp_dataset/
```
/content/bp_dataset
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.192586382955233
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.2604333640312866
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.16259469696969692
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.16343014413283674
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.2682880375992515
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.49252836581485837
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.2686972402597403
### Tests with LM
```python
!rm -rf ~/.cache
%cd /content/
!gdown --id 1d13Onxy9ubmJZORZ8FO2vnsnl36QMiUc # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
%cd bp_dataset/
```
/content/bp_dataset
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.0878818926974661
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.173303354010221
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07691919191919189
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.12624377042839321
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.24545473435776916
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.4156272215612955
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.21832386363636366
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/bp-mls100-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
| mls100-xlsr: Wav2vec 2.0 with MLS Dataset
=========================================
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the Multilingual Librispeech in Portuguese (MLS) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
#### Summary
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
```
/content/bp_dataset
```
### Tests
#### CETUC
```
CETUC WER: 0.192586382955233
```
#### Common Voice
```
CV WER: 0.2604333640312866
```
#### LaPS
```
Laps WER: 0.16259469696969692
```
#### MLS
```
MLS WER: 0.16343014413283674
```
#### SID
```
Sid WER: 0.2682880375992515
```
#### TEDx
```
TEDx WER: 0.49252836581485837
```
#### VoxForge
```
VoxForge WER: 0.2686972402597403
```
### Tests with LM
```
/content/bp_dataset
```
#### CETUC
```
CETUC WER: 0.0878818926974661
```
#### Common Voice
```
CV WER: 0.173303354010221
```
#### LaPS
```
Laps WER: 0.07691919191919189
```
#### MLS
```
MLS WER: 0.12624377042839321
```
#### SID
```
Sid WER: 0.24545473435776916
```
#### TEDx
```
TEDx WER: 0.4156272215612955
```
#### VoxForge
```
VoxForge WER: 0.21832386363636366
```
| [
"#### Summary\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets\n\n\n\n```\n/content/bp_dataset\n\n```",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.192586382955233\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.2604333640... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Summary\n\n\n\n... |
automatic-speech-recognition | transformers |
# sid10-xlsr: Wav2vec 2.0 with Sidney Dataset
This is a the demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [Sidney](https://igormq.github.io/datasets/) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | 7.2h | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total | 7.2h| -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| sid\_10 (demonstration below) |0.186 | 0.327 | 0.207 | 0.505 | 0.124 | 0.835 | 0.472 | 0.379|
| sid\_10 + 4-gram (demonstration below) |0.096 | 0.223 | 0.115 | 0.432 | 0.101 | 0.791 | 0.348 | 0.301|
## Demonstration
```python
MODEL_NAME = "lgris/sid10-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
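
Note that `calc_metrics` reports the mean of per-utterance error rates. A pooled, corpus-level WER (total edits over total reference words) can differ slightly, since long utterances then weigh more; a small sketch of that alternative with jiwer, for comparison only:

```python
import jiwer

def corpus_wer(truths, hypos):
    # jiwer pools edit counts across the whole list of sentence pairs,
    # so long utterances contribute proportionally more than in a plain average.
    return jiwer.wer(list(truths), list(hypos))
```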
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.18623689076557778
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.3279775395502392
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.20780303030303032
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.5056711598536057
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.1247776617710105
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.8350609256842175
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.47242153679653687
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.09677271347353278
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.22363215674470321
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.1154924242424242
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.4322369152606427
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.10080313085145765
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.7911789829264236
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.34786255411255407
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/bp-sid10-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
| sid10-xlsr: Wav2vec 2.0 with Sidney Dataset
===========================================
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the Sidney dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
#### Summary
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
### Tests
#### CETUC
```
CETUC WER: 0.18623689076557778
```
#### Common Voice
```
CV WER: 0.3279775395502392
```
#### LaPS
```
Laps WER: 0.20780303030303032
```
#### MLS
```
MLS WER: 0.5056711598536057
```
#### SID
```
Sid WER: 0.1247776617710105
```
#### TEDx
```
TEDx WER: 0.8350609256842175
```
#### VoxForge
```
VoxForge WER: 0.47242153679653687
```
### Tests with LM
#### CETUC
```
CETUC WER: 0.09677271347353278
```
#### Common Voice
```
CV WER: 0.22363215674470321
```
#### LaPS
```
Laps WER: 0.1154924242424242
```
#### MLS
```
MLS WER: 0.4322369152606427
```
#### SID
```
Sid WER: 0.10080313085145765
```
#### TEDx
```
TEDx WER: 0.7911789829264236
```
#### VoxForge
```
VoxForge WER: 0.34786255411255407
```
| [
"#### Summary\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.18623689076557778\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.3279775395502392\n\n```",
"#### LaPS\n\n\n\n`... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Summary\n\n\n\n... |
automatic-speech-recognition | transformers |
# tedx100-xlsr: Wav2vec 2.0 with TEDx Dataset
This is a the demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [TEDx multilingual in Portuguese](http://www.openslr.org/100) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 148.8h| -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | | -- | 0.1h |
| Total |148.8h | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| tedx\_100 (demonstration below) |0.138 | 0.369 | 0.169 | 0.165 | 0.794 | 0.222 | 0.395 | 0.321|
| tedx\_100 + 4-gram (demonstration below) |0.123 | 0.414 | 0.171 | 0.152 | 0.982 | 0.215 | 0.395 | 0.350|
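
The 4-gram rescoring helps on some test sets here and hurts on others (notably SID). One knob not explored in this card is the decoder weighting; below is a hedged sketch of setting the LM weight and word-insertion bonus when building the pyctcdecode decoder. The alpha/beta values are placeholders, and the ARPA path assumes the file downloaded in the LM tests further down:

```python
from transformers import Wav2Vec2Processor
from pyctcdecode import build_ctcdecoder

processor = Wav2Vec2Processor.from_pretrained("lgris/tedx100-xlsr")
vocab = processor.tokenizer.get_vocab()
labels = [k.lower() for k, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="pt-BR-wiki.word.4-gram.arpa",  # same ARPA used below
    alpha=0.5,  # LM weight (placeholder, not tuned for this model)
    beta=1.0,   # word-insertion bonus (placeholder)
)
```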
## Demonstration
```python
MODEL_NAME = "lgris/tedx100-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.13846663354859937
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.36960721735520236
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.16941287878787875
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.16586103382107384
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.7943364822145216
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.22221476803982182
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.39486066017315996
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.12338749517028079
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.4146185693398481
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.17142676767676762
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.15212081808962674
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.982518441309493
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.21567860841157235
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.3952218614718614
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/bp-tedx100-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
| tedx100-xlsr: Wav2vec 2.0 with TEDx Dataset
===========================================
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the TEDx multilingual in Portuguese dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
#### Summary
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
### Tests
#### CETUC
```
CETUC WER: 0.13846663354859937
```
#### Common Voice
```
CV WER: 0.36960721735520236
```
#### LaPS
```
Laps WER: 0.16941287878787875
```
#### MLS
```
MLS WER: 0.16586103382107384
```
#### SID
```
Sid WER: 0.7943364822145216
```
#### TEDx
```
TEDx WER: 0.22221476803982182
```
#### VoxForge
```
VoxForge WER: 0.39486066017315996
```
### Tests with LM
#### CETUC
```
CETUC WER: 0.12338749517028079
```
#### Common Voice
```
CV WER: 0.4146185693398481
```
#### LaPS
```
Laps WER: 0.17142676767676762
```
#### MLS
```
MLS WER: 0.15212081808962674
```
#### SID
```
Sid WER: 0.982518441309493
```
#### TEDx
```
TEDx WER: 0.21567860841157235
```
#### VoxForge
```
VoxForge WER: 0.3952218614718614
```
| [
"#### Summary\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.13846663354859937\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.36960721735520236\n\n```",
"#### LaPS\n\n\n\n... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Summary\n\n\n\n... |
automatic-speech-recognition | transformers |
# voxforge1-xlsr: Wav2vec 2.0 with VoxForge Dataset
This is a the demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the [VoxForge](http://www.voxforge.org/) dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | | -- | 5.4h |
| Common Voice | | -- | 9.5h |
| LaPS BM | | -- | 0.1h |
| MLS | | -- | 3.7h |
| Multilingual TEDx (Portuguese) | | -- | 1.8h |
| SID | | -- | 1.0h |
| VoxForge | 3.9h | -- | 0.1h |
| Total | 3.9h | -- | 21.6h |
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| voxforge\_1 (demonstration below) | 0.468 | 0.608 | 0.503 | 0.505 | 0.717 | 0.731 | 0.561 | 0.584 |
| voxforge\_1 + 4-gram (demonstration below) | 0.322 | 0.471 | 0.356 | 0.378 | 0.586 | 0.637 | 0.428 | 0.454 |
## Demonstration
```python
MODEL_NAME = "lgris/voxforge1-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
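
`map_to_array` assumes the prepared test sets are already at 16 kHz and only tags the sampling rate. If the audio came at other rates, an explicit resampling step would be needed; a minimal sketch of the audio-handling part only, assuming torchaudio as installed above:

```python
import torchaudio

def map_to_array_resampled(batch, target_sr=16_000):
    # Audio handling only; text normalization stays as in map_to_array above.
    speech, sr = torchaudio.load(batch["path"])
    if sr != target_sr:
        speech = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sr)(speech)
    batch["speech"] = speech.squeeze(0).numpy()
    batch["sampling_rate"] = target_sr
    return batch
```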
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.4684840205331983
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.6080167359840954
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.5037468434343434
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.505595213971485
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.7177723323755854
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.7309431974873112
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.5613906926406929
### Tests with LM
```python
# !find -type f -name "*.wav" -delete
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.32184971297675896
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.4707820098981609
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.356227904040404
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.3786376653384398
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.5864959640811857
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.6368727228726417
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.4279924242424241
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"]} | lgris/bp-voxforge1-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us
| voxforge1-xlsr: Wav2vec 2.0 with VoxForge Dataset
=================================================
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the VoxForge dataset.
In this notebook the model is tested against other available Brazilian Portuguese datasets.
#### Summary
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
### Tests
#### CETUC
```
CETUC WER: 0.4684840205331983
```
#### Common Voice
```
CV WER: 0.6080167359840954
```
#### LaPS
```
Laps WER: 0.5037468434343434
```
#### MLS
```
MLS WER: 0.505595213971485
```
#### SID
```
Sid WER: 0.7177723323755854
```
#### TEDx
```
TEDx WER: 0.7309431974873112
```
#### VoxForge
```
VoxForge WER: 0.5613906926406929
```
### Tests with LM
#### CETUC
```
CETUC WER: 0.32184971297675896
```
#### Common Voice
```
CV WER: 0.4707820098981609
```
#### LaPS
```
Laps WER: 0.356227904040404
```
#### MLS
```
MLS WER: 0.3786376653384398
```
#### SID
```
Sid WER: 0.5864959640811857
```
#### TEDx
```
TEDx WER: 0.6368727228726417
```
#### VoxForge
```
VoxForge WER: 0.4279924242424241
```
| [
"#### Summary\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.4684840205331983\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.6080167359840954\n\n```",
"#### LaPS\n\n\n\n``... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #license-apache-2.0 #endpoints_compatible #region-us \n",
"#### Summary\n\n\n\n... |
automatic-speech-recognition | transformers |
# bp400-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
**Paper:** https://arxiv.org/abs/2107.11414
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): is a project proposed by the Mozilla Foundation with the goal of creating a large open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt).
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on public-domain audiobook recordings such as [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly the Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) aged 17 to 59, with metadata fields such as place of birth, age, gender, education, and occupation;
- [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and testing, respectively. We also built test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 93.9h | -- | 5.4h |
| Common Voice | 37.6h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h |
| SID | 5.0h | -- | 1.0h |
| VoxForge | 2.8h | -- | 0.1h |
| Total | 437.2h | 8.9h | 21.6h |
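
The per-dataset preparation scripts are not part of this card; purely as an illustration, here is a hedged sketch of how per-dataset train manifests (same "path"/"sentence" layout as the test CSVs used below) could be merged into one training manifest. All file names are hypothetical:

```python
import pandas as pd

parts = ["cetuc", "commonvoice", "lapsbm", "mls", "tedx", "sid", "voxforge"]
frames = [pd.read_csv(f"{name}_dataset/train.csv") for name in parts]  # hypothetical paths
pd.concat(frames, ignore_index=True).to_csv("bp400_train.csv", index=False)
```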
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/drive/folders/1eRUExXRF2XK8JxUjIzbLBkLa5wuR3nig?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_400 (demonstration below) | 0.052 | 0.140 | 0.074 | 0.117 | 0.121 | 0.245 | 0.118 | 0.124 |
| bp\_400 + 3-gram | 0.033 | 0.095 | 0.046 | 0.123 | 0.112 | 0.212 | 0.123 | 0.106 |
| bp\_400 + 4-gram (demonstration below) | **0.030** | 0.096 | 0.043 | **0.106** | 0.118 | 0.229 | **0.117** | **0.105** |
| bp\_400 + 5-gram | 0.033 | 0.094 | 0.043 | 0.123 | **0.111** | **0.210** | 0.123 | **0.105** |
| bp\_400 + Transf. | 0.032 | **0.092** | **0.036** | 0.130 | 0.115 | 0.215 | 0.125 | 0.106 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|alguém sabe a que horas começa o jantar | alguém sabe a que horas **começo** jantar |
|lila covas ainda não sabe o que vai fazer no fundo|**lilacovas** ainda não sabe o que vai fazer no fundo|
|que tal um pouco desse bom spaghetti|**quetá** um pouco **deste** bom **ispaguete**|
|hong kong em cantonês significa porto perfumado|**rongkong** **en** **cantones** significa porto perfumado|
|vamos hackear esse problema|vamos **rackar** esse problema|
|apenas a poucos metros há uma estação de ônibus|apenas **ha** poucos metros **á** uma estação de ônibus|
|relâmpago e trovão sempre andam juntos|**relampagotrevão** sempre andam juntos|
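
For a concrete sense of the metric, a small sketch computing WER on the first example pair above with jiwer:

```python
import jiwer

ref = "alguém sabe a que horas começa o jantar"
hyp = "alguém sabe a que horas começo jantar"
print(jiwer.wer(ref, hyp))  # 1 substitution + 1 deletion over 8 reference words -> 0.25
```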
## Demonstration
```python
MODEL_NAME = "lgris/bp400-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
    data_files = {'test': f'{dataset}/test.csv'}
    dataset = load_dataset('csv', data_files=data_files)["test"]
    return dataset.map(map_to_array)
```
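`load_data` expects each dataset directory to contain a `test.csv` with at least a `path` column (location of the audio file) and a `sentence` column (reference transcription); the row below is a hypothetical illustration:

```python
# Hypothetical test.csv layout expected by load_data / map_to_array:
#
#   path,sentence
#   cetuc_dataset/audios/exemplo_0001.wav,alguém sabe a que horas começa o jantar
#
# ds = load_data('cetuc_dataset')   # used this way in the test sections below
```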
### Model
```python
class STT:

    def __init__(self,
                 model_name,
                 device='cuda' if torch.cuda.is_available() else 'cpu',
                 lm=None):
        self.model_name = model_name
        self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
        self.processor = Wav2Vec2Processor.from_pretrained(model_name)
        # CTC vocabulary sorted by token id; lowercased keys are used by the LM decoder.
        self.vocab_dict = self.processor.tokenizer.get_vocab()
        self.sorted_dict = {
            k.lower(): v for k, v in sorted(self.vocab_dict.items(),
                                            key=lambda item: item[1])
        }
        self.device = device
        self.lm = lm
        if self.lm:
            self.lm_decoder = build_ctcdecoder(
                list(self.sorted_dict.keys()),
                self.lm
            )

    def batch_predict(self, batch):
        features = self.processor(batch["speech"],
                                  sampling_rate=batch["sampling_rate"][0],
                                  padding=True,
                                  return_tensors="pt")
        input_values = features.input_values.to(self.device)
        attention_mask = features.attention_mask.to(self.device)
        with torch.no_grad():
            logits = self.model(input_values, attention_mask=attention_mask).logits
        if self.lm:
            # Beam-search decoding with the n-gram language model (pyctcdecode).
            logits = logits.cpu().numpy()
            batch["predicted"] = []
            for sample_logits in logits:
                batch["predicted"].append(self.lm_decoder.decode(sample_logits))
        else:
            # Greedy (argmax) CTC decoding.
            pred_ids = torch.argmax(logits, dim=-1)
            batch["predicted"] = self.processor.batch_decode(pred_ids)
        return batch
```
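`batch_predict` is written for batched `datasets.map` calls; for a single file it can be wrapped as below (a sketch, where `sample.wav` is a hypothetical 16 kHz audio path):

```python
def transcribe_file(stt, path):
    # Build a one-element "batch" in the format batch_predict expects.
    speech, sr = torchaudio.load(path)
    batch = {
        "speech": [speech.squeeze(0).numpy()],
        "sampling_rate": [sr],  # batch_predict reads the first element
    }
    return stt.batch_predict(batch)["predicted"][0]

# Example (hypothetical file):
# stt = STT(MODEL_NAME)
# print(transcribe_file(stt, "sample.wav"))
```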
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.05159104708285062
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.14031426198658084
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07432133838383838
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.11678793514817509
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.12152357273433984
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24666815906766504
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11873106060606062
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
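`build_ctcdecoder` also accepts language-model weighting parameters. If the defaults do not suit a particular LM, the decoder can be rebuilt with explicit values (the numbers below are illustrative placeholders, not the settings behind the reported results):

```python
stt.lm_decoder = build_ctcdecoder(
    list(stt.sorted_dict.keys()),
    'pt-BR-wiki.word.4-gram.arpa',
    alpha=0.5,  # weight of the LM score (illustrative value)
    beta=1.0,   # word-insertion bonus (illustrative value)
)
```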
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.030266462438593742
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09577710237417715
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.043617424242424235
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.10642133314350002
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.11839021001747055
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.22929952467810416
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11716314935064935
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch", "hf-asr-leaderboard"], "datasets": ["common_voice", "mls", "cetuc", "lapsbm", "voxforge", "tedx", "sid"], "metrics": ["wer"], "model-index": [{"name": "bp400-xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "pt"}, "metrics": [{"type": "wer", "value": 14.0, "name": "Test WER"}]}]}]} | lgris/bp400-xlsr | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"hf-asr-leaderboard",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
... | null | 2022-03-02T23:29:05+00:00 | [
"2107.11414",
"2012.03411"
] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #hf-asr-leaderboard #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #arxiv-2107.11414 #arxiv-2012.03411 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| bp400-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
==============================================================
Paper: URL
This is a the demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets:
* CETUC: contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the CETEN-Folha corpus.
* Common Voice 7.0: is a project proposed by Mozilla Foundation with the goal to create a wide open dataset in different languages. In this project, volunteers donate and validate speech using the oficial site.
* Lapsbm: "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control.
* Multilingual Librispeech (MLS): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like LibriVox. The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese used in this work (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
* Multilingual TEDx: a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
* Sidney (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old with fields such as place of birth, age, gender, education, and occupation;
* VoxForge: is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except Common Voice dev/test sets, that were used for validation/test respectively. We also made test sets for all the gathered datasets.
The original model was fine-tuned using fairseq. This notebook uses a converted version of the original one. The link to the original fairseq model is available here.
#### Summary
#### Transcription examples
Demonstration
-------------
### Imports and dependencies
### Helpers
### Model
### Download datasets
### Tests
#### CETUC
```
CETUC WER: 0.05159104708285062
```
#### Common Voice
```
CV WER: 0.14031426198658084
```
#### LaPS
```
Laps WER: 0.07432133838383838
```
#### MLS
```
MLS WER: 0.11678793514817509
```
#### SID
```
Sid WER: 0.12152357273433984
```
#### TEDx
```
TEDx WER: 0.24666815906766504
```
#### VoxForge
```
VoxForge WER: 0.11873106060606062
```
### Tests with LM
### Cetuc
```
CETUC WER: 0.030266462438593742
```
#### Common Voice
```
CV WER: 0.09577710237417715
```
#### LaPS
```
Laps WER: 0.043617424242424235
```
#### MLS
```
MLS WER: 0.10642133314350002
```
#### SID
```
Sid WER: 0.11839021001747055
```
#### TEDx
```
TEDx WER: 0.22929952467810416
```
#### VoxForge
```
VoxForge WER: 0.11716314935064935
```
| [
"#### Summary",
"#### Transcription examples\n\n\n\nDemonstration\n-------------",
"### Imports and dependencies",
"### Helpers",
"### Model",
"### Download datasets",
"### Tests",
"#### CETUC\n\n\n\n```\nCETUC WER: 0.05159104708285062\n\n```",
"#### Common Voice\n\n\n\n```\nCV WER: 0.14031426198658... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #pt #portuguese-speech-corpus #PyTorch #hf-asr-leaderboard #dataset-common_voice #dataset-mls #dataset-cetuc #dataset-lapsbm #dataset-voxforge #dataset-tedx #dataset-sid #arxiv-2107.11414 #arxiv-2012.03411 #license-apache-2.0 #mode... |