pipeline_tag stringclasses 48 values | library_name stringclasses 198 values | text stringlengths 1 900k | metadata stringlengths 2 438k | id stringlengths 5 122 | last_modified null | tags listlengths 1 1.84k | sha null | created_at stringlengths 25 25 | arxiv listlengths 0 201 | languages listlengths 0 1.83k | tags_str stringlengths 17 9.34k | text_str stringlengths 0 389k | text_lists listlengths 0 722 | processed_texts listlengths 1 723 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-JES-cnn_dailymail
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1452
- Rouge1: 43.9753
- Rouge2: 19.7191
- Rougel: 33.6236
- Rougelsum: 41.1683
- Gen Len: 80.1767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6.0
- mixed_precision_training: Native AMP
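These settings correspond one-to-one to fields of `Seq2SeqTrainingArguments`. A hedged reconstruction (the `output_dir` name and the `predict_with_generate` flag are illustrative assumptions; Adam's betas and epsilon are the Trainer defaults, so they need no explicit arguments):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration above; output_dir is an assumed name.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-JES-cnn_dailymail",
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6.0,
    fp16=True,                  # "Native AMP" mixed-precision training
    predict_with_generate=True  # needed for ROUGE / Gen Len during eval
)
```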
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.2949 | 1.0 | 71779 | 1.2080 | 11.7171 | 3.3284 | 11.3209 | 11.4022 | 20.0 |
| 1.191 | 2.0 | 143558 | 1.1615 | 11.8484 | 3.363 | 11.4175 | 11.5037 | 20.0 |
| 1.0907 | 3.0 | 215337 | 1.1452 | 12.6221 | 3.773 | 12.1226 | 12.2359 | 20.0 |
| 0.9798 | 4.0 | 287116 | 1.1670 | 12.4306 | 3.7329 | 11.9497 | 12.0617 | 20.0 |
| 0.9112 | 5.0 | 358895 | 1.1667 | 12.5404 | 3.7842 | 12.0541 | 12.1643 | 20.0 |
| 0.8358 | 6.0 | 430674 | 1.1997 | 12.5153 | 3.778 | 12.0382 | 12.1332 | 20.0 |
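As a sanity check, the step counts in the table constrain the size of the "unknown dataset": 71,779 optimizer steps per epoch at batch size 4 (assuming a single device and no gradient accumulation) imply between 287,113 and 287,116 training examples, consistent with the CNN/DailyMail train split of 287,113 articles that the model name suggests.

```python
def dataset_size_range(steps_per_epoch: int, batch_size: int) -> range:
    """Dataset sizes n with ceil(n / batch_size) == steps_per_epoch,
    assuming one optimizer step per batch on a single device."""
    hi = steps_per_epoch * batch_size
    lo = hi - batch_size + 1
    return range(lo, hi + 1)

sizes = dataset_size_range(71779, 4)
print(min(sizes), max(sizes))  # 287113 287116
```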
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
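The card gives no usage example; here is a hedged sketch (the checkpoint id is taken from this card, the generation parameters are illustrative, and running this downloads the model):

```python
from transformers import pipeline

# Summarization pipeline for the checkpoint described in this card.
summarizer = pipeline("summarization", model="jogonba2/bart-JES-cnn_dailymail")

article = "(Insert a news article here, e.g. from the CNN/DailyMail test split.)"
result = summarizer(article, max_length=128, min_length=30, num_beams=4)
print(result[0]["summary_text"])
```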
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"]} | jogonba2/bart-JES-cnn_dailymail | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| bart-JES-cnn\_dailymail
=======================
This model is a fine-tuned version of facebook/bart-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1452
* Rouge1: 43.9753
* Rouge2: 19.7191
* Rougel: 33.6236
* Rougelsum: 41.1683
* Gen Len: 80.1767
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 6.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.7.1+cu110
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 6.0\n* mixed\\_prec... | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\... |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-archeologie
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments; it underperforms the models published in the paper (which use [MBartHez](https://huggingface.co/moussaKam/mbarthez) with HAL/Wiki pre-training and copy mechanisms).
It achieves the following results on the evaluation set:
- Loss: 2.0733
- Rouge1: 37.1845
- Rouge2: 16.9534
- Rougel: 28.8416
- Rougelsum: 29.077
- Gen Len: 34.4028
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.4832 | 1.0 | 108 | 2.4237 | 22.6662 | 10.009 | 19.8729 | 19.8814 | 15.8333 |
| 2.557 | 2.0 | 216 | 2.2328 | 24.8102 | 11.9911 | 20.4773 | 20.696 | 19.0139 |
| 2.2702 | 3.0 | 324 | 2.2002 | 25.6482 | 11.6191 | 21.8383 | 21.9341 | 18.1944 |
| 2.1119 | 4.0 | 432 | 2.1266 | 25.5806 | 11.9765 | 21.3973 | 21.3503 | 19.4306 |
| 1.9582 | 5.0 | 540 | 2.1072 | 25.6578 | 12.2709 | 22.182 | 22.0548 | 19.1528 |
| 1.8137 | 6.0 | 648 | 2.1008 | 26.5272 | 11.4033 | 22.359 | 22.3259 | 19.4722 |
| 1.7725 | 7.0 | 756 | 2.1074 | 25.0405 | 11.1773 | 21.1369 | 21.1847 | 19.1806 |
| 1.6772 | 8.0 | 864 | 2.0959 | 26.5237 | 11.6028 | 22.5018 | 22.3931 | 19.3333 |
| 1.5798 | 9.0 | 972 | 2.0976 | 27.7443 | 11.9898 | 22.4052 | 22.2954 | 19.7222 |
| 1.4753 | 10.0 | 1080 | 2.0733 | 28.3502 | 12.9162 | 22.6352 | 22.6015 | 19.8194 |
| 1.4646 | 11.0 | 1188 | 2.1091 | 27.9198 | 12.8591 | 23.0718 | 23.0779 | 19.6111 |
| 1.4082 | 12.0 | 1296 | 2.1036 | 28.8509 | 13.0987 | 23.4189 | 23.5044 | 19.4861 |
| 1.2862 | 13.0 | 1404 | 2.1222 | 28.6641 | 12.8157 | 22.6799 | 22.7051 | 19.8611 |
| 1.2612 | 14.0 | 1512 | 2.1487 | 26.9709 | 11.6084 | 22.0312 | 22.0543 | 19.875 |
| 1.2327 | 15.0 | 1620 | 2.1808 | 28.218 | 12.6239 | 22.7372 | 22.7881 | 19.7361 |
| 1.2264 | 16.0 | 1728 | 2.1778 | 26.7393 | 11.4474 | 21.6057 | 21.555 | 19.7639 |
| 1.1848 | 17.0 | 1836 | 2.1995 | 27.6902 | 12.1082 | 22.0406 | 22.0101 | 19.6806 |
| 1.133 | 18.0 | 1944 | 2.2038 | 27.0402 | 12.1846 | 21.7793 | 21.7513 | 19.8056 |
| 1.168 | 19.0 | 2052 | 2.2116 | 27.5149 | 11.9876 | 22.1113 | 22.1527 | 19.7222 |
| 1.1206 | 20.0 | 2160 | 2.2133 | 28.2321 | 12.677 | 22.749 | 22.8485 | 19.5972 |
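Note that the headline Loss of 2.0733 matches epoch 10, the minimum validation loss in the table, so the reported evaluation appears to use the best checkpoint rather than the final one. A quick check against the table:

```python
# (epoch, validation_loss) pairs copied from the table above.
history = [
    (1, 2.4237), (2, 2.2328), (3, 2.2002), (4, 2.1266), (5, 2.1072),
    (6, 2.1008), (7, 2.1074), (8, 2.0959), (9, 2.0976), (10, 2.0733),
    (11, 2.1091), (12, 2.1036), (13, 2.1222), (14, 2.1487), (15, 2.1808),
    (16, 2.1778), (17, 2.1995), (18, 2.2038), (19, 2.2116), (20, 2.2133),
]

best_epoch, best_loss = min(history, key=lambda pair: pair[1])
print(best_epoch, best_loss)  # 10 2.0733 -- the loss reported at the top
```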
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"]} | jogonba2/barthez-deft-archeologie | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #mbart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| barthez-deft-archeologie
========================
This model is a fine-tuned version of moussaKam/barthez on an unknown dataset.
Note: this model is one of the preliminary experiments; it underperforms the models published in the paper (which use MBartHez with HAL/Wiki pre-training and copy mechanisms).
It achieves the following results on the evaluation set:
* Loss: 2.0733
* Rouge1: 37.1845
* Rouge2: 16.9534
* Rougel: 28.8416
* Rougelsum: 29.077
* Gen Len: 34.4028
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.7.1+cu110
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0\n* mixed\\_pre... | [
"TAGS\n#transformers #pytorch #mbart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch... |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-chimie
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments; it underperforms the models published in the paper (which use [MBartHez](https://huggingface.co/moussaKam/mbarthez) with HAL/Wiki pre-training and copy mechanisms).
It achieves the following results on the evaluation set:
- Loss: 2.0710
- Rouge1: 31.8947
- Rouge2: 16.7563
- Rougel: 23.5428
- Rougelsum: 23.4918
- Gen Len: 38.5256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.8022 | 1.0 | 118 | 2.5491 | 16.8208 | 7.0027 | 13.957 | 14.0479 | 19.1538 |
| 2.9286 | 2.0 | 236 | 2.3074 | 17.5356 | 7.8717 | 14.4874 | 14.5044 | 19.9487 |
| 2.5422 | 3.0 | 354 | 2.2322 | 19.6491 | 9.4156 | 15.9467 | 15.9433 | 19.7051 |
| 2.398 | 4.0 | 472 | 2.1500 | 18.7166 | 9.859 | 15.7535 | 15.8036 | 19.9231 |
| 2.2044 | 5.0 | 590 | 2.1372 | 19.978 | 10.6235 | 16.1348 | 16.1274 | 19.6154 |
| 1.9405 | 6.0 | 708 | 2.0992 | 20.226 | 10.551 | 16.6928 | 16.7211 | 19.9744 |
| 1.8544 | 7.0 | 826 | 2.0841 | 19.8869 | 10.8456 | 16.1072 | 16.097 | 19.8846 |
| 1.7536 | 8.0 | 944 | 2.0791 | 19.3017 | 9.4921 | 16.1541 | 16.2167 | 19.859 |
| 1.6914 | 9.0 | 1062 | 2.0710 | 21.3848 | 10.4088 | 17.1963 | 17.2254 | 19.8846 |
| 1.654 | 10.0 | 1180 | 2.1069 | 22.3811 | 10.7987 | 18.7595 | 18.761 | 19.9231 |
| 1.5899 | 11.0 | 1298 | 2.0919 | 20.8546 | 10.6958 | 16.8637 | 16.9499 | 19.8077 |
| 1.4661 | 12.0 | 1416 | 2.1065 | 22.3677 | 11.7472 | 18.262 | 18.3 | 19.9744 |
| 1.4205 | 13.0 | 1534 | 2.1164 | 20.5845 | 10.7825 | 16.9972 | 17.0216 | 19.9359 |
| 1.3797 | 14.0 | 1652 | 2.1240 | 22.2561 | 11.303 | 17.5064 | 17.5815 | 19.9744 |
| 1.3724 | 15.0 | 1770 | 2.1187 | 23.2825 | 11.912 | 18.5208 | 18.5499 | 19.9359 |
| 1.3404 | 16.0 | 1888 | 2.1394 | 22.1305 | 10.5258 | 17.772 | 17.8202 | 19.9744 |
| 1.2846 | 17.0 | 2006 | 2.1502 | 21.567 | 11.0557 | 17.2562 | 17.2974 | 20.0 |
| 1.2871 | 18.0 | 2124 | 2.1572 | 22.5871 | 11.702 | 18.2906 | 18.3826 | 19.9744 |
| 1.2422 | 19.0 | 2242 | 2.1613 | 23.0935 | 11.6824 | 18.6087 | 18.6777 | 19.9744 |
| 1.2336 | 20.0 | 2360 | 2.1581 | 22.6789 | 11.4363 | 18.1661 | 18.2346 | 19.9487 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
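The Rouge1 score reported above is unigram-overlap F1 (scaled by 100). The card's values come from the packaged `rouge` metric, but the idea can be sketched in a few lines (simplified: whitespace tokenization, no stemming):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap, whitespace tokens, no stemming."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("le site archeologique est vaste", "le site est tres vaste")
print(round(100 * score, 2))  # 80.0
```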
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"]} | jogonba2/barthez-deft-chimie | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #mbart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| barthez-deft-chimie
===================
This model is a fine-tuned version of moussaKam/barthez on an unknown dataset.
Note: this model is one of the preliminary experiments; it underperforms the models published in the paper (which use MBartHez with HAL/Wiki pre-training and copy mechanisms).
It achieves the following results on the evaluation set:
* Loss: 2.0710
* Rouge1: 31.8947
* Rouge2: 16.7563
* Rougel: 23.5428
* Rougelsum: 23.4918
* Gen Len: 38.5256
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.7.1+cu110
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0\n* mixed\\_pre... | [
"TAGS\n#transformers #pytorch #mbart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch... |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-linguistique
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments; it underperforms the models published in the paper (which use [MBartHez](https://huggingface.co/moussaKam/mbarthez) with HAL/Wiki pre-training and copy mechanisms).
It achieves the following results on the evaluation set:
- Loss: 1.7596
- Rouge1: 41.989
- Rouge2: 22.4524
- Rougel: 32.7966
- Rougelsum: 32.7953
- Gen Len: 22.1549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.0569 | 1.0 | 108 | 2.0282 | 31.6993 | 14.9483 | 25.5565 | 25.4379 | 18.3803 |
| 2.2892 | 2.0 | 216 | 1.8553 | 35.2563 | 18.019 | 28.3135 | 28.2927 | 18.507 |
| 1.9062 | 3.0 | 324 | 1.7696 | 37.4613 | 18.1488 | 28.9959 | 29.0134 | 19.5352 |
| 1.716 | 4.0 | 432 | 1.7641 | 37.6903 | 18.7496 | 30.1097 | 30.1027 | 18.9577 |
| 1.5722 | 5.0 | 540 | 1.7781 | 38.1013 | 19.8291 | 29.8142 | 29.802 | 19.169 |
| 1.4655 | 6.0 | 648 | 1.7661 | 38.3557 | 20.3309 | 30.5068 | 30.4728 | 19.3662 |
| 1.3507 | 7.0 | 756 | 1.7596 | 39.7409 | 20.2998 | 31.0849 | 31.1152 | 19.3944 |
| 1.2874 | 8.0 | 864 | 1.7706 | 37.7846 | 20.3457 | 30.6826 | 30.6321 | 19.4789 |
| 1.2641 | 9.0 | 972 | 1.7848 | 38.7421 | 19.5701 | 30.5798 | 30.6305 | 19.3944 |
| 1.1192 | 10.0 | 1080 | 1.8008 | 40.3313 | 20.3378 | 31.8325 | 31.8648 | 19.5493 |
| 1.0724 | 11.0 | 1188 | 1.8450 | 38.9612 | 20.5719 | 31.4496 | 31.3144 | 19.8592 |
| 1.0077 | 12.0 | 1296 | 1.8364 | 36.5997 | 18.46 | 29.1808 | 29.1705 | 19.7324 |
| 0.9362 | 13.0 | 1404 | 1.8677 | 38.0371 | 19.2321 | 30.3893 | 30.3926 | 19.6338 |
| 0.8868 | 14.0 | 1512 | 1.9154 | 36.4737 | 18.5314 | 29.325 | 29.3634 | 19.6479 |
| 0.8335 | 15.0 | 1620 | 1.9344 | 35.7583 | 18.0687 | 27.9666 | 27.8675 | 19.8028 |
| 0.8305 | 16.0 | 1728 | 1.9556 | 37.2137 | 18.2199 | 29.5959 | 29.5799 | 19.9577 |
| 0.8057 | 17.0 | 1836 | 1.9793 | 36.6834 | 17.8505 | 28.6701 | 28.7145 | 19.7324 |
| 0.7869 | 18.0 | 1944 | 1.9994 | 37.5918 | 19.1984 | 28.8569 | 28.8278 | 19.7606 |
| 0.7549 | 19.0 | 2052 | 2.0117 | 37.3278 | 18.5169 | 28.778 | 28.7737 | 19.8028 |
| 0.7497 | 20.0 | 2160 | 2.0189 | 37.7513 | 19.1813 | 29.3675 | 29.402 | 19.6901 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
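The per-epoch Gen Len column sits near 20 while the headline Gen Len is 22.15; a plausible explanation (an inference, not stated in the card) is that `generate()` in Transformers 4.10 defaults to `max_length=20` during the in-training evaluations. A hedged inference sketch that sets the limit explicitly (checkpoint id taken from this card; downloading requires network access):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "jogonba2/barthez-deft-linguistique"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Texte scientifique à résumer ...", return_tensors="pt")
# Without an explicit max_length, generation would stop at 20 tokens,
# which matches the Gen Len column in the training table above.
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```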
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"]} | jogonba2/barthez-deft-linguistique | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #mbart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| barthez-deft-linguistique
=========================
This model is a fine-tuned version of moussaKam/barthez on an unknown dataset.
Note: this model is one of the preliminary experiments; it underperforms the models published in the paper (which use MBartHez with HAL/Wiki pre-training and copy mechanisms).
It achieves the following results on the evaluation set:
* Loss: 1.7596
* Rouge1: 41.989
* Rouge2: 22.4524
* Rougel: 32.7966
* Rougelsum: 32.7953
* Gen Len: 22.1549
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.7.1+cu110
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0\n* mixed\\_pre... | [
"TAGS\n#transformers #pytorch #mbart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch... |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# barthez-deft-sciences_de_l_information
This model is a fine-tuned version of [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) on an unknown dataset.
**Note**: this model is one of the preliminary experiments; it underperforms the models published in the paper (which use [MBartHez](https://huggingface.co/moussaKam/mbarthez) with HAL/Wiki pre-training and copy mechanisms).
It achieves the following results on the evaluation set:
- Loss: 2.0258
- Rouge1: 34.5672
- Rouge2: 16.7861
- Rougel: 27.5573
- Rougelsum: 27.6099
- Gen Len: 17.8857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.3405 | 1.0 | 106 | 2.3682 | 31.3511 | 12.1973 | 25.6977 | 25.6851 | 14.9714 |
| 2.4219 | 2.0 | 212 | 2.1891 | 30.1154 | 13.3459 | 25.4854 | 25.5403 | 14.0429 |
| 2.0789 | 3.0 | 318 | 2.0994 | 32.153 | 15.3865 | 26.1859 | 26.1672 | 15.2 |
| 1.869 | 4.0 | 424 | 2.0258 | 34.5797 | 16.4194 | 27.6909 | 27.7201 | 16.9857 |
| 1.6569 | 5.0 | 530 | 2.0417 | 34.3854 | 16.5237 | 28.7036 | 28.8258 | 15.2429 |
| 1.5414 | 6.0 | 636 | 2.0503 | 33.1768 | 15.4851 | 27.2818 | 27.2884 | 16.0143 |
| 1.4461 | 7.0 | 742 | 2.0293 | 35.4273 | 16.118 | 27.3622 | 27.393 | 16.6857 |
| 1.3435 | 8.0 | 848 | 2.0336 | 35.3471 | 15.9695 | 27.668 | 27.6749 | 17.2 |
| 1.2624 | 9.0 | 954 | 2.0779 | 35.9201 | 17.2547 | 27.409 | 27.3293 | 17.1857 |
| 1.1807 | 10.0 | 1060 | 2.1301 | 35.7061 | 15.9138 | 27.3968 | 27.4716 | 17.1286 |
| 1.0972 | 11.0 | 1166 | 2.1726 | 34.3194 | 16.1313 | 27.0367 | 27.0737 | 17.1429 |
| 1.0224 | 12.0 | 1272 | 2.1704 | 34.9278 | 16.7958 | 27.8754 | 27.932 | 16.6571 |
| 1.0181 | 13.0 | 1378 | 2.2458 | 34.472 | 15.9111 | 28.2938 | 28.2946 | 16.7571 |
| 0.9769 | 14.0 | 1484 | 2.3405 | 35.1592 | 16.3135 | 29.0956 | 29.0858 | 16.5429 |
| 0.8866 | 15.0 | 1590 | 2.3303 | 34.8732 | 15.6709 | 27.5858 | 27.6169 | 16.2429 |
| 0.8888 | 16.0 | 1696 | 2.2976 | 35.3034 | 16.8011 | 27.7988 | 27.7569 | 17.5143 |
| 0.8358 | 17.0 | 1802 | 2.3349 | 35.505 | 16.8851 | 28.3651 | 28.413 | 16.8143 |
| 0.8026 | 18.0 | 1908 | 2.3738 | 35.2328 | 17.0358 | 28.544 | 28.6211 | 16.6143 |
| 0.7487 | 19.0 | 2014 | 2.4103 | 34.0793 | 15.4468 | 27.8057 | 27.8586 | 16.7286 |
| 0.7722 | 20.0 | 2120 | 2.3991 | 34.8116 | 15.8706 | 27.9173 | 27.983 | 16.9286 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"]} | jogonba2/barthez-deft-sciences_de_l_information | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #mbart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| barthez-deft-sciences\_de\_l\_information
=========================================
This model is a fine-tuned version of moussaKam/barthez on an unknown dataset.
Note: this model is one of the preliminary experiments; it underperforms the models published in the paper (which use MBartHez with HAL/Wiki pre-training and copy mechanisms).
It achieves the following results on the evaluation set:
* Loss: 2.0258
* Rouge1: 34.5672
* Rouge2: 16.7861
* Rougel: 27.5573
* Rougelsum: 27.6099
* Gen Len: 17.8857
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 20.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.7.1+cu110
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20.0\n* mixed\\_pre... | [
"TAGS\n#transformers #pytorch #mbart #text2text-generation #generated_from_trainer #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch... |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbarthez-davide_articles-copy_enhanced
This model is a fine-tuned version of [moussaKam/mbarthez](https://huggingface.co/moussaKam/mbarthez) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4905
- Rouge1: 36.548
- Rouge2: 19.6282
- Rougel: 30.2513
- Rougelsum: 30.2765
- Gen Len: 25.7238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6706 | 1.0 | 33552 | 1.5690 | 31.2477 | 16.5455 | 26.9855 | 26.9754 | 18.6217 |
| 1.3446 | 2.0 | 67104 | 1.5060 | 32.1108 | 17.1408 | 27.7833 | 27.7703 | 18.9115 |
| 1.3245 | 3.0 | 100656 | 1.4905 | 32.9084 | 17.7027 | 28.2912 | 28.2975 | 18.9801 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"]} | jogonba2/mbarthez-copy_mechanism-hal_articles | null | [
"transformers",
"pytorch",
"mbart",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #mbart #generated_from_trainer #license-apache-2.0 #model-index #endpoints_compatible #region-us
| mbarthez-davide\_articles-copy\_enhanced
========================================
This model is a fine-tuned version of moussaKam/mbarthez on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4905
* Rouge1: 36.548
* Rouge2: 19.6282
* Rougel: 30.2513
* Rougelsum: 30.2765
* Gen Len: 25.7238
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.7.1+cu110
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_prec... | [
"TAGS\n#transformers #pytorch #mbart #generated_from_trainer #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed... |
text-generation | transformers |
# Arya DialoGPT Model | {"tags": ["conversational"]} | jogp10/DialoGPT-medium-arya | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Arya DialoGPT Model | [
"# Arya DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Arya DialoGPT Model"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8790
- Wer: 1.3448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"language": ["ab"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | joheras/xls-r-ab-spanish | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"ab"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us
|
#
This model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8790
- Wer: 1.3448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 156.8790\n- Wer: 1.3448",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore inform... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us \n",
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB datase... |
sentence-similarity | sentence-transformers |
# DeCLUTR-base
## Model description
The "DeCLUTR-base" model from our paper: [DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations](https://arxiv.org/abs/2006.03659).
## Intended uses & limitations
The model is intended to be used as a universal sentence encoder, similar to [Google's Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/4) or [Sentence Transformers](https://github.com/UKPLab/sentence-transformers).
#### How to use
Please see [our repo](https://github.com/JohnGiorgi/DeCLUTR) for full details. A simple example is shown below.
##### With [SentenceTransformers](https://www.sbert.net/)
```python
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("johngiorgi/declutr-base")
# Prepare some text to embed
texts = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
# Embed the text
embeddings = model.encode(texts)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
##### With 🤗 Transformers
```python
import torch
from scipy.spatial.distance import cosine
from transformers import AutoModel, AutoTokenizer
# Load the model
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-base")
model = AutoModel.from_pretrained("johngiorgi/declutr-base")
# Prepare some text to embed
text = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
# Embed the text
with torch.no_grad():
sequence_output = model(**inputs)[0]
# Mean pool the token-level embeddings to get sentence-level embeddings
embeddings = torch.sum(
sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1
) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
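The masked mean-pooling step above (zeroing out padding positions before averaging over the sequence) can be illustrated with toy NumPy arrays — the shapes and values below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical token-level output: batch of 1, 3 tokens, hidden size 2,
# where the last token is padding.
sequence_output = np.array([[[1.0, 2.0],
                             [3.0, 4.0],
                             [9.0, 9.0]]])   # (batch, seq_len, hidden)
attention_mask = np.array([[1, 1, 0]])       # (batch, seq_len)

# Zero out padded positions, then sum over the sequence axis...
masked = sequence_output * attention_mask[..., None]
summed = masked.sum(axis=1)
# ...and divide by the number of real tokens (clamped to avoid division by zero)
counts = np.clip(attention_mask.sum(axis=1, keepdims=True), 1e-9, None)
embeddings = summed / counts

print(embeddings)  # [[2. 3.]] — the padded token is ignored
```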
### BibTeX entry and citation info
```bibtex
@inproceedings{giorgi-etal-2021-declutr,
title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
year = 2021,
month = aug,
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
publisher = {Association for Computational Linguistics},
address = {Online},
pages = {879--895},
doi = {10.18653/v1/2021.acl-long.72},
url = {https://aclanthology.org/2021.acl-long.72}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "datasets": ["openwebtext"], "pipeline_tag": "sentence-similarity"} | johngiorgi/declutr-base | null | [
"sentence-transformers",
"pytorch",
"jax",
"roberta",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:openwebtext",
"arxiv:2006.03659",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2006.03659"
] | [
"en"
] | TAGS
#sentence-transformers #pytorch #jax #roberta #feature-extraction #sentence-similarity #en #dataset-openwebtext #arxiv-2006.03659 #license-apache-2.0 #endpoints_compatible #region-us
|
# DeCLUTR-base
## Model description
The "DeCLUTR-base" model from our paper: DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations.
## Intended uses & limitations
The model is intended to be used as a universal sentence encoder, similar to Google's Universal Sentence Encoder or Sentence Transformers.
#### How to use
Please see our repo for full details. A simple example is shown below.
##### With SentenceTransformers
##### With Transformers
### BibTeX entry and citation info
| [
"# DeCLUTR-base",
"## Model description\n\nThe \"DeCLUTR-base\" model from our paper: DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations.",
"## Intended uses & limitations\n\nThe model is intended to be used as a universal sentence encoder, similar to Google's Universal Sentence Encoder... | [
"TAGS\n#sentence-transformers #pytorch #jax #roberta #feature-extraction #sentence-similarity #en #dataset-openwebtext #arxiv-2006.03659 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DeCLUTR-base",
"## Model description\n\nThe \"DeCLUTR-base\" model from our paper: DeCLUTR: Deep Contrastive Learn... |
sentence-similarity | sentence-transformers |
# DeCLUTR-sci-base
## Model description
This is the [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) model, with extended pretraining on over 2 million scientific papers from [S2ORC](https://github.com/allenai/s2orc/) using the self-supervised training strategy presented in [DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations](https://arxiv.org/abs/2006.03659).
## Intended uses & limitations
The model is intended to be used as a sentence encoder, similar to [Google's Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/4) or [Sentence Transformers](https://github.com/UKPLab/sentence-transformers). It is particularly suitable for scientific text.
#### How to use
Please see [our repo](https://github.com/JohnGiorgi/DeCLUTR) for full details. A simple example is shown below.
##### With [SentenceTransformers](https://www.sbert.net/)
```python
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("johngiorgi/declutr-sci-base")
# Prepare some text to embed
texts = [
"Oncogenic KRAS mutations are common in cancer.",
"Notably, c-Raf has recently been found essential for development of K-Ras-driven NSCLCs.",
]
# Embed the text
embeddings = model.encode(texts)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
##### With 🤗 Transformers
```python
import torch
from scipy.spatial.distance import cosine
from transformers import AutoModel, AutoTokenizer
# Load the model
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-sci-base")
model = AutoModel.from_pretrained("johngiorgi/declutr-sci-base")
# Prepare some text to embed
text = [
"Oncogenic KRAS mutations are common in cancer.",
"Notably, c-Raf has recently been found essential for development of K-Ras-driven NSCLCs.",
]
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
# Embed the text
with torch.no_grad():
sequence_output = model(**inputs)[0]
# Mean pool the token-level embeddings to get sentence-level embeddings
embeddings = torch.sum(
sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1
) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
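The last line converts SciPy's cosine *distance* into a similarity. Without SciPy, cosine similarity is just the dot product of the two vectors divided by the product of their norms — a minimal sketch using made-up low-dimensional vectors (real embeddings from this model are much higher-dimensional):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity, i.e. 1 - cosine distance."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for sentence embeddings
u = [1.0, 0.0, 1.0]
v = [1.0, 1.0, 0.0]
print(cosine_similarity(u, v))  # 0.5 (dot = 1, both norms = sqrt(2))
```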
### BibTeX entry and citation info
```bibtex
@inproceedings{giorgi-etal-2021-declutr,
title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
year = 2021,
month = aug,
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
publisher = {Association for Computational Linguistics},
address = {Online},
pages = {879--895},
doi = {10.18653/v1/2021.acl-long.72},
url = {https://aclanthology.org/2021.acl-long.72}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "datasets": ["s2orc"], "pipeline_tag": "sentence-similarity"} | johngiorgi/declutr-sci-base | null | [
"sentence-transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"arxiv:2006.03659",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2006.03659"
] | [
"en"
] | TAGS
#sentence-transformers #pytorch #jax #bert #feature-extraction #sentence-similarity #en #dataset-s2orc #arxiv-2006.03659 #license-apache-2.0 #endpoints_compatible #region-us
|
# DeCLUTR-sci-base
## Model description
This is the allenai/scibert_scivocab_uncased model, with extended pretraining on over 2 million scientific papers from S2ORC using the self-supervised training strategy presented in DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations.
## Intended uses & limitations
The model is intended to be used as a sentence encoder, similar to Google's Universal Sentence Encoder or Sentence Transformers. It is particularly suitable for scientific text.
#### How to use
Please see our repo for full details. A simple example is shown below.
##### With SentenceTransformers
##### With Transformers
### BibTeX entry and citation info
| [
"# DeCLUTR-sci-base",
"## Model description\n\nThis is the allenai/scibert_scivocab_uncased model, with extended pretraining on over 2 million scientific papers from S2ORC using the self-supervised training strategy presented in DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations.",
"## ... | [
"TAGS\n#sentence-transformers #pytorch #jax #bert #feature-extraction #sentence-similarity #en #dataset-s2orc #arxiv-2006.03659 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DeCLUTR-sci-base",
"## Model description\n\nThis is the allenai/scibert_scivocab_uncased model, with extended pretraining o... |
sentence-similarity | sentence-transformers |
# DeCLUTR-small
## Model description
The "DeCLUTR-small" model from our paper: [DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations](https://arxiv.org/abs/2006.03659).
## Intended uses & limitations
The model is intended to be used as a universal sentence encoder, similar to [Google's Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/4) or [Sentence Transformers](https://github.com/UKPLab/sentence-transformers).
#### How to use
Please see [our repo](https://github.com/JohnGiorgi/DeCLUTR) for full details. A simple example is shown below.
##### With [SentenceTransformers](https://www.sbert.net/)
```python
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("johngiorgi/declutr-small")
# Prepare some text to embed
texts = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
# Embed the text
embeddings = model.encode(texts)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
##### With 🤗 Transformers
```python
import torch
from scipy.spatial.distance import cosine
from transformers import AutoModel, AutoTokenizer
# Load the model
tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-small")
model = AutoModel.from_pretrained("johngiorgi/declutr-small")
# Prepare some text to embed
text = [
"A smiling costumed woman is holding an umbrella.",
"A happy woman in a fairy costume holds an umbrella.",
]
inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
# Embed the text
with torch.no_grad():
sequence_output = model(**inputs)[0]
# Mean pool the token-level embeddings to get sentence-level embeddings
embeddings = torch.sum(
sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1
) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9)
# Compute a semantic similarity via the cosine distance
semantic_sim = 1 - cosine(embeddings[0], embeddings[1])
```
### BibTeX entry and citation info
```bibtex
@inproceedings{giorgi-etal-2021-declutr,
title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
year = 2021,
month = aug,
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
publisher = {Association for Computational Linguistics},
address = {Online},
pages = {879--895},
doi = {10.18653/v1/2021.acl-long.72},
url = {https://aclanthology.org/2021.acl-long.72}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "datasets": ["openwebtext"], "pipeline_tag": "sentence-similarity"} | johngiorgi/declutr-small | null | [
"sentence-transformers",
"pytorch",
"jax",
"roberta",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:openwebtext",
"arxiv:2006.03659",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2006.03659"
] | [
"en"
] | TAGS
#sentence-transformers #pytorch #jax #roberta #feature-extraction #sentence-similarity #en #dataset-openwebtext #arxiv-2006.03659 #license-apache-2.0 #endpoints_compatible #region-us
|
# DeCLUTR-small
## Model description
The "DeCLUTR-small" model from our paper: DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations.
## Intended uses & limitations
The model is intended to be used as a universal sentence encoder, similar to Google's Universal Sentence Encoder or Sentence Transformers.
#### How to use
Please see our repo for full details. A simple example is shown below.
##### With SentenceTransformers
##### With Transformers
### BibTeX entry and citation info
| [
"# DeCLUTR-small",
"## Model description\n\nThe \"DeCLUTR-small\" model from our paper: DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations.",
"## Intended uses & limitations\n\nThe model is intended to be used as a universal sentence encoder, similar to Google's Universal Sentence Encod... | [
"TAGS\n#sentence-transformers #pytorch #jax #roberta #feature-extraction #sentence-similarity #en #dataset-openwebtext #arxiv-2006.03659 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# DeCLUTR-small",
"## Model description\n\nThe \"DeCLUTR-small\" model from our paper: DeCLUTR: Deep Contrastive Lea... |
text-generation | transformers | ## GPT-2 for Skript
## Complete your Skript automatically via a finetuned GPT-2 model
Training loss of `0.57` after about 2 epochs (in total).
1.2 million lines of Skript are in the dataset.
Inference Colab: https://colab.research.google.com/drive/1ujtLt7MOk7Nsag3q-BYK62Kpoe4Lr4PE | {} | johnpaulbin/gpt2-skript-1m-v5 | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ## GPT-2 for Skript
## Complete your Skript automatically via a finetuned GPT-2 model
Training loss of '0.57' after about 2 epochs (in total).
1.2 million lines of Skript are in the dataset.
Inference Colab: URL | [
"## GPT-2 for Skript",
"## Complete your Skript automatically via a finetuned GPT-2 model\n\n'0.57' Training loss on about 2 epochs (in total)\n\n1.2 million lines of Skript is inside the dataset.\n\nInference Colab: URL"
] | [
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## GPT-2 for Skript",
"## Complete your Skript automatically via a finetuned GPT-2 model\n\n'0.57' Training loss on about 2 epochs (in total)\n\n1.2 million l... |
text-generation | transformers | GPT-2 Skript 80k lines. v3
Training loss: `0.594200`
1.5 GB
Inferencing colab: https://colab.research.google.com/drive/1uTAPLa1tuNXFpG0qVLSseMro6iU9-xNc | {} | johnpaulbin/gpt2-skript-80-v3 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GPT-2 Skript 80k lines. v3
Training loss: '0.594200'
1.5 GB
Inferencing colab: URL | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | GPT-2 for the Minecraft plugin Skript (80,000 lines, <3 GB: GPT-2 Large model finetune)
Inferencing Colab: https://colab.research.google.com/drive/1uTAPLa1tuNXFpG0qVLSseMro6iU9-xNc | {} | johnpaulbin/gpt2-skript-80 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GPT-2 for the Minecraft plugin Skript (80,000 lines, <3 GB: GPT-2 Large model finetune)
Inferencing Colab: URL | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | GPT-2 for the Minecraft plugin Skript (50,000 lines, 3 GB: GPT-2 Large model finetune)
Inference Colab: https://colab.research.google.com/drive/1z8dwtNP8Kj3evEOmKmGBHK_vmP30lgiY | {} | johnpaulbin/gpt2-skript-base | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GPT-2 for the Minecraft plugin Skript (50,000 lines, 3 GB: GPT-2 Large model finetune)
Inference Colab: URL | [] | [
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Trained on ~400 YouTube titles of meme compilations.
WARNING: may produce offensive content. | {} | johnpaulbin/meme-titles | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Trained on ~400 YouTube titles of meme compilations.
WARNING: may produce offensive content. | [] | [
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Monkey D Luffy DialoGPT Model | {"tags": ["conversational"]} | jollmimmim/DialoGPT-small-monkeydluffy | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Monkey D Luffy DialoGPT Model | [
"# Monkey D Luffy DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Monkey D Luffy DialoGPT Model"
] |
text2text-generation | transformers | Just a test
| {} | jonatasgrosman/bartuque-bart-base-pretrained-mm-2 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| Just a test
| [] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers | Just a test
| {} | jonatasgrosman/bartuque-bart-base-pretrained-r-2 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| Just a test
| [] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers | Just a test
| {} | jonatasgrosman/bartuque-bart-base-pretrained-rm-2 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| Just a test
| [] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers | Just a test
| {} | jonatasgrosman/bartuque-bart-base-random-r-2 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| Just a test
| [] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers | testing
| {} | jonatasgrosman/paraphrase | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| testing
| [] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned wav2vec2 large model for speech recognition in English
Fine-tuned [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on English using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-english"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "SHE'LL BE ALL RIGHT." | SHELL BE ALL RIGHT |
| SIX | SIX |
| "ALL'S WELL THAT ENDS WELL." | ALLAS WELL THAT ENDS WELL |
| DO YOU MEAN IT? | W MEAN IT |
| THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE, BUT STILL CAUSES REGRESSIONS. | THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE BUT STILL CAUSES REGRESTION |
| HOW IS MOZILLA GOING TO HANDLE AMBIGUITIES LIKE QUEUE AND CUE? | HOW IS MOSILLA GOING TO BANDL AND BE WHIT IS LIKE QU AND QU |
| "I GUESS YOU MUST THINK I'M KINDA BATTY." | RUSTION AS HAME AK AN THE POT |
| NO ONE NEAR THE REMOTE MACHINE YOU COULD RING? | NO ONE NEAR THE REMOTE MACHINE YOU COULD RING |
| SAUCE FOR THE GOOSE IS SAUCE FOR THE GANDER. | SAUCE FOR THE GUCE IS SAUCE FOR THE GONDER |
| GROVES STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD. | GRAFS STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD |
## Evaluation
The model can be evaluated as follows on the English (en) test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-english"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-06-17). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-english | **18.98%** | **8.29%** |
| jonatasgrosman/wav2vec2-large-english | 21.53% | 9.66% |
| facebook/wav2vec2-large-960h-lv60-self | 22.03% | 10.39% |
| facebook/wav2vec2-large-960h-lv60 | 23.97% | 11.14% |
| boris/xlsr-en-punctuation | 29.10% | 10.75% |
| facebook/wav2vec2-large-960h | 32.79% | 16.03% |
| facebook/wav2vec2-base-960h | 39.86% | 19.89% |
| facebook/wav2vec2-base-100h | 51.06% | 25.06% |
| elgeish/wav2vec2-large-lv60-timit-asr | 59.96% | 34.28% |
| facebook/wav2vec2-base-10k-voxpopuli-ft-en | 66.41% | 36.76% |
| elgeish/wav2vec2-base-timit-asr | 68.78% | 36.81% |
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021wav2vec2-large-english,
title={Fine-tuned wav2vec2 large model for speech recognition in {E}nglish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-english}},
year={2021}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "Wav2Vec2 English by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice en", "type": "common_voice", "args": "en"}, "metrics": [{"type": "wer", "value": 21.53, "name": "Test WER"}, {"type": "cer", "value": 9.66, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-english | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"en",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #en #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Fine-tuned wav2vec2 large model for speech recognition in English
=================================================================
Fine-tuned facebook/wav2vec2-large on English using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the English (en) test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-06-17). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
Model: jonatasgrosman/wav2vec2-large-xlsr-53-english, WER: 18.98%, CER: 8.29%
Model: jonatasgrosman/wav2vec2-large-english, WER: 21.53%, CER: 9.66%
Model: facebook/wav2vec2-large-960h-lv60-self, WER: 22.03%, CER: 10.39%
Model: facebook/wav2vec2-large-960h-lv60, WER: 23.97%, CER: 11.14%
Model: boris/xlsr-en-punctuation, WER: 29.10%, CER: 10.75%
Model: facebook/wav2vec2-large-960h, WER: 32.79%, CER: 16.03%
Model: facebook/wav2vec2-base-960h, WER: 39.86%, CER: 19.89%
Model: facebook/wav2vec2-base-100h, WER: 51.06%, CER: 25.06%
Model: elgeish/wav2vec2-large-lv60-timit-asr, WER: 59.96%, CER: 34.28%
Model: facebook/wav2vec2-base-10k-voxpopuli-ft-en, WER: 66.41%, CER: 36.76%
Model: elgeish/wav2vec2-base-timit-asr, WER: 68.78%, CER: 36.81%
If you want to cite this model, you can use this:
| [] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #en #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned French Voxpopuli wav2vec2 large model for speech recognition in French
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) on French using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-fr-voxpopuli-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-large-fr-voxpopuli-french"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "CE DERNIER A ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE." | CE DERNIER A ÉVOLÉ TOUT AU LONG DE L'HISTOIRE ROMAINE |
| CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ACHÉMÉNIDE ET SEPT DES SASSANIDES. | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNESTIE ACHÉMÉNIDE ET SEPT DES SACENNIDES |
| "J'AI DIT QUE LES ACTEURS DE BOIS AVAIENT, SELON MOI, BEAUCOUP D'AVANTAGES SUR LES AUTRES." | JAI DIT QUE LES ACTEURS DE BOIS AVAIENT SELON MOI BEAUCOUP DAVANTAGE SUR LES AUTRES |
| LES PAYS-BAS ONT REMPORTÉ TOUTES LES ÉDITIONS. | LE PAYS-BAS ON REMPORTÉ TOUTES LES ÉDITIONS |
| IL Y A MAINTENANT UNE GARE ROUTIÈRE. | IL A MAINTENANT GULA E RETIREN |
| HUIT | HUIT |
| DANS L’ATTENTE DU LENDEMAIN, ILS NE POUVAIENT SE DÉFENDRE D’UNE VIVE ÉMOTION | DANS LATTENTE DU LENDEMAIN IL NE POUVAIT SE DÉFENDRE DUNE VIVE ÉMOTION |
| LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES. | LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZ ÉPISODES |
| ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES. | ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES |
| ZÉRO | ZÉRO |
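The `torch.argmax` plus `processor.batch_decode` step in the script above is greedy CTC decoding: per-frame ids are collapsed when repeated, the CTC blank is dropped, and the remaining ids are mapped back to characters. A pure-Python sketch of the collapse step (the toy vocabulary and blank id below are assumptions for illustration, not the model's real ones):

```python
# Toy vocabulary for illustration only; the real model carries its own
# id-to-character map and blank id inside the processor.
BLANK_ID = 0
ID_TO_CHAR = {1: "B", 2: "O", 3: "N"}

def ctc_greedy_decode(ids, blank_id=BLANK_ID):
    """Collapse repeated ids, drop blanks, then map ids to characters."""
    out, prev = [], None
    for i in ids:
        if i != prev and i != blank_id:
            out.append(ID_TO_CHAR[i])
        prev = i
    return "".join(out)

# Per-frame argmax ids: blanks (0) separate segments, repeats collapse.
print(ctc_greedy_decode([1, 1, 0, 2, 2, 2, 0, 3, 3]))  # BON
```

Note that a blank between two identical ids keeps both characters, which is how CTC represents doubled letters.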
## Evaluation
The model can be evaluated as follows on the French (fr) test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-large-fr-voxpopuli-french"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
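The `CHARS_TO_IGNORE` list above is joined and escaped into a single regex character class, so all listed punctuation is stripped (and the text uppercased) before scoring. A small self-contained illustration of that normalization step, using an abbreviated list (the example sentence is my own):

```python
import re

# Abbreviated version of the CHARS_TO_IGNORE list used in the evaluation script.
CHARS_TO_IGNORE = [",", "?", ".", "!", ";", ":", '"', "«", "»", "…"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"

def normalize(sentence):
    """Strip ignored punctuation and uppercase, as the evaluation script does."""
    return re.sub(chars_to_ignore_regex, "", sentence).upper()

print(normalize("Où est la gare?"))  # OÙ EST LA GARE
```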
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-16). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-french | **15.90%** | **5.29%** |
| jonatasgrosman/wav2vec2-large-fr-voxpopuli-french | 17.62% | 6.04% |
| Ilyes/wav2vec2-large-xlsr-53-french | 19.67% | 6.70% |
| Nhut/wav2vec2-large-xlsr-french | 24.09% | 8.42% |
| facebook/wav2vec2-large-xlsr-53-french | 25.45% | 10.35% |
| MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French | 28.22% | 9.70% |
| Ilyes/wav2vec2-large-xlsr-53-french_punctuation | 29.80% | 11.79% |
| facebook/wav2vec2-base-10k-voxpopuli-ft-fr | 61.06% | 33.31% |
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021voxpopuli-fr-wav2vec2-large-french,
title={Fine-tuned {F}rench {V}oxpopuli wav2vec2 large model for speech recognition in {F}rench},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-fr-voxpopuli-french}},
year={2021}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "Voxpopuli Wav2Vec2 French by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fr", "type": "common_voice", "args": "fr"}, "metrics": [{"type": "wer", "value": 17.62, "name": "Test WER"}, {"type": "cer", "value": 6.04, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-fr-voxpopuli-french | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"fr"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Fine-tuned French Voxpopuli wav2vec2 large model for speech recognition in French
=================================================================================
Fine-tuned facebook/wav2vec2-large-fr-voxpopuli on French using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the French (fr) test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-16). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
Model: jonatasgrosman/wav2vec2-large-xlsr-53-french, WER: 15.90%, CER: 5.29%
Model: jonatasgrosman/wav2vec2-large-fr-voxpopuli-french, WER: 17.62%, CER: 6.04%
Model: Ilyes/wav2vec2-large-xlsr-53-french, WER: 19.67%, CER: 6.70%
Model: Nhut/wav2vec2-large-xlsr-french, WER: 24.09%, CER: 8.42%
Model: facebook/wav2vec2-large-xlsr-53-french, WER: 25.45%, CER: 10.35%
Model: MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French, WER: 28.22%, CER: 9.70%
Model: Ilyes/wav2vec2-large-xlsr-53-french\_punctuation, WER: 29.80%, CER: 11.79%
Model: facebook/wav2vec2-base-10k-voxpopuli-ft-fr, WER: 61.06%, CER: 33.31%
If you want to cite this model, you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-arabic")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ar"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-arabic"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ألديك قلم ؟ | ألديك قلم |
| ليست هناك مسافة على هذه الأرض أبعد من يوم أمس. | ليست نالك مسافة على هذه الأرض أبعد من يوم الأمس م |
| إنك تكبر المشكلة. | إنك تكبر المشكلة |
| يرغب أن يلتقي بك. | يرغب أن يلتقي بك |
| إنهم لا يعرفون لماذا حتى. | إنهم لا يعرفون لماذا حتى |
| سيسعدني مساعدتك أي وقت تحب. | سيسئدنيمساعدتك أي وقد تحب |
| أَحَبُّ نظريّة علمية إليّ هي أن حلقات زحل مكونة بالكامل من الأمتعة المفقودة. | أحب نظرية علمية إلي هي أن حل قتزح المكوينا بالكامل من الأمت عن المفقودة |
| سأشتري له قلماً. | سأشتري له قلما |
| أين المشكلة ؟ | أين المشكل |
| وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْأَرْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ | ولله يسجد ما في السماوات وما في الأرض من دابة والملائكة وهم لا يستكبرون |
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ar"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-arabic"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
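CER is the same edit-distance computation as WER but over characters rather than words, which is why it is much lower and often the more robust metric for Arabic, where tokenization and diacritics vary. A minimal sketch of the computation (function names are my own; the linked `cer.py` additionally handles chunking):

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (here: strings of characters)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, start=1):
            # deletion, insertion, substitution (or match)
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def char_error_rate(predictions, references):
    """Corpus-level CER: character edit distance over reference character count."""
    errors = sum(edit_distance(r, p) for p, r in zip(predictions, references))
    return errors / sum(len(r) for r in references)

# First row of the prediction table above: the model dropped the trailing " ؟".
print(char_error_rate(["ألديك قلم"], ["ألديك قلم ؟"]))  # 2 errors / 11 chars ≈ 0.18
```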
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-14). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-arabic | **39.59%** | **18.18%** |
| bakrianoo/sinai-voice-ar-stt | 45.30% | 21.84% |
| othrif/wav2vec2-large-xlsr-arabic | 45.93% | 20.51% |
| kmfoda/wav2vec2-large-xlsr-arabic | 54.14% | 26.07% |
| mohammed/wav2vec2-large-xlsr-arabic | 56.11% | 26.79% |
| anas/wav2vec2-large-xlsr-arabic | 62.02% | 27.09% |
| elgeish/wav2vec2-large-xlsr-53-arabic | 100.00% | 100.56% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-arabic,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {A}rabic},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-arabic}},
year={2021}
}
``` | {"language": "ar", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "arabic_speech_corpus"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Arabic by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ar", "type": "common_voice", "args": "ar"}, "metrics": [{"type": "wer", "value": 39.59, "name": "Test WER"}, {"type": "cer", "value": 18.18, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-arabic | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ar",
"dataset:common_voice",
"dataset:arabic_speech_corpus",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ar #dataset-common_voice #dataset-arabic_speech_corpus #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Arabic
===============================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Arabic using the train and validation splits of Common Voice 6.1 and Arabic Speech Corpus.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the Arabic test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-14). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
Model: jonatasgrosman/wav2vec2-large-xlsr-53-arabic, WER: 39.59%, CER: 18.18%
Model: bakrianoo/sinai-voice-ar-stt, WER: 45.30%, CER: 21.84%
Model: othrif/wav2vec2-large-xlsr-arabic, WER: 45.93%, CER: 20.51%
Model: kmfoda/wav2vec2-large-xlsr-arabic, WER: 54.14%, CER: 26.07%
Model: mohammed/wav2vec2-large-xlsr-arabic, WER: 56.11%, CER: 26.79%
Model: anas/wav2vec2-large-xlsr-arabic, WER: 62.02%, CER: 27.09%
Model: elgeish/wav2vec2-large-xlsr-53-arabic, WER: 100.00%, CER: 100.56%
If you want to cite this model, you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ar #dataset-common_voice #dataset-arabic_speech_corpus #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Chinese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chinese using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice), [CSS10](https://github.com/Kyubyong/css10) and [ST-CMDS](http://www.openslr.org/38/).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "zh-CN"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| 宋朝末年年间定居粉岭围。 | 宋朝末年年间定居分定为 |
| 渐渐行动不便 | 建境行动不片 |
| 二十一年去世。 | 二十一年去世 |
| 他们自称恰哈拉。 | 他们自称家哈<unk> |
| 局部干涩的例子包括有口干、眼睛干燥、及阴道干燥。 | 菊物干寺的例子包括有口肝眼睛干照以及阴到干<unk> |
| 嘉靖三十八年,登进士第三甲第二名。 | 嘉靖三十八年登进士第三甲第二名 |
| 这一名称一直沿用至今。 | 这一名称一直沿用是心 |
| 同时乔凡尼还得到包税合同和许多明矾矿的经营权。 | 同时桥凡妮还得到包税合同和许多民繁矿的经营权 |
| 为了惩罚西扎城和塞尔柱的结盟,盟军在抵达后将外城烧毁。 | 为了曾罚西扎城和塞尔素的节盟盟军在抵达后将外曾烧毁 |
| 河内盛产黄色无鱼鳞的鳍射鱼。 | 合类生场环色无鱼林的骑射鱼 |
## Evaluation
The model can be evaluated as follows on the Chinese (zh-CN) test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "zh-CN"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
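Note that WER is computed over whitespace-delimited tokens, and Common Voice Chinese sentences contain almost no spaces, so a single wrong character can make an entire sentence-long "word" count as an error; this is why the CER below is far more informative than the WER for this model. A small illustration of the granularity difference, reusing a pair from the prediction table above:

```python
# Pair taken from the prediction table above (3 wrong characters out of 11).
reference = "宋朝末年年间定居粉岭围"
prediction = "宋朝末年年间定居分定为"

# Whitespace tokenization: the whole sentence is a single "word", so any
# character mistake makes that word (and the sentence's WER) 100% wrong.
print(len(reference.split()))  # 1

# Character tokenization: only 3 of the 11 characters differ.
diffs = sum(r != p for r, p in zip(reference, prediction))
print(f"{diffs} / {len(reference)}")  # 3 / 11
```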
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-13). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn | **82.37%** | **19.03%** |
| ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt | 84.01% | 20.95% |
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr53-large-chinese,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {C}hinese},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn}},
year={2021}
}
``` | {"language": "zh", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Chinese (zh-CN) by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice zh-CN", "type": "common_voice", "args": "zh-CN"}, "metrics": [{"type": "wer", "value": 82.37, "name": "Test WER"}, {"type": "cer", "value": 19.03, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"zh",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #zh #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Chinese
================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Chinese using the train and validation splits of Common Voice 6.1, CSS10 and ST-CMDS.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the Chinese (zh-CN) test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-13). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
Model: jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn, WER: 82.37%, CER: 19.03%
Model: ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt, WER: 84.01%, CER: 20.95%
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #zh #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Dutch
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dutch using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
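If your recordings use a different sampling rate, resample them to 16 kHz before inference. In practice you would use `librosa.load(path, sr=16_000)` (as in the scripts below) or `torchaudio`; purely for illustration, here is a toy pure-Python linear-interpolation resampler (the helper name `resample_to_16k` is made up for this sketch):

```python
import math

def resample_to_16k(samples, orig_sr, target_sr=16_000):
    """Naive linear-interpolation resampler -- illustration only;
    prefer librosa or torchaudio for real audio."""
    if orig_sr == target_sr:
        return list(samples)
    n_target = int(round(len(samples) * target_sr / orig_sr))
    out = []
    for n in range(n_target):
        t = n * orig_sr / target_sr          # fractional position in the source signal
        i = int(t)
        frac = t - i
        nxt = samples[min(i + 1, len(samples) - 1)]
        out.append(samples[i] * (1 - frac) + nxt * frac)
    return out

# one second of a 440 Hz tone sampled at 44.1 kHz -> 16 kHz
sr = 44_100
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
resampled = resample_to_16k(tone, sr)
print(len(resampled))  # 16000
```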
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-dutch")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "nl"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-dutch"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| DE ABORIGINALS ZIJN DE OORSPRONKELIJKE BEWONERS VAN AUSTRALIË. | DE ABBORIGENALS ZIJN DE OORSPRONKELIJKE BEWONERS VAN AUSTRALIË |
| MIJN TOETSENBORD ZIT VOL STOF. | MIJN TOETSENBORD ZIT VOL STOF |
| ZE HAD DE BANK BESCHADIGD MET HAAR SKATEBOARD. | ZE HAD DE BANK BESCHADIGD MET HAAR SCHEETBOORD |
| WAAR LAAT JIJ JE ONDERHOUD DOEN? | WAAR LAAT JIJ HET ONDERHOUD DOEN |
| NA HET LEZEN VAN VELE BEOORDELINGEN HAD ZE EINDELIJK HAAR OOG LATEN VALLEN OP EEN LAPTOP MET EEN QWERTY TOETSENBORD. | NA HET LEZEN VAN VELE BEOORDELINGEN HAD ZE EINDELIJK HAAR OOG LATEN VALLEN OP EEN LAPTOP MET EEN QUERTITOETSEMBORD |
| DE TAMPONS ZIJN OP. | DE TAPONT ZIJN OP |
| MARIJKE KENT OLIVIER NU AL MEER DAN TWEE JAAR. | MAARRIJKEN KENT OLIEVIER NU AL MEER DAN TWEE JAAR |
| HET VOEREN VAN BROOD AAN EENDEN IS EIGENLIJK ONGEZOND VOOR DE BEESTEN. | HET VOEREN VAN BEUROT AAN EINDEN IS EIGENLIJK ONGEZOND VOOR DE BEESTEN |
| PARKET MOET JE STOFZUIGEN, TEGELS MOET JE DWEILEN. | PARKET MOET JE STOF ZUIGEN MAAR TEGELS MOET JE DWEILEN |
| IN ONZE BUURT KENT IEDEREEN ELKAAR. | IN ONZE BUURT KENT IEDEREEN ELKAAR |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-dutch --dataset mozilla-foundation/common_voice_6_0 --config nl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-dutch --dataset speech-recognition-community-v2/dev_data --config nl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr53-large-dutch,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {D}utch},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-dutch}},
year={2021}
}
``` | {"language": "nl", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "nl", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Dutch by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice nl", "type": "common_voice", "args": "nl"}, "metrics": [{"type": "wer", "value": 15.72, "name": "Test WER"}, {"type": "cer", "value": 5.35, "name": "Test CER"}, {"type": "wer", "value": 12.84, "name": "Test WER (+LM)"}, {"type": "cer", "value": 4.64, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 35.79, "name": "Dev WER"}, {"type": "cer", "value": 17.67, "name": "Dev CER"}, {"type": "wer", "value": 31.54, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 16.37, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-dutch | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"nl",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"doi:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #nl #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #doi-10.57967/hf/0203 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Dutch
==============================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Dutch using the train and validation splits of Common Voice 6.1 and CSS10.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #nl #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #doi-10.57967/hf/0203 #license-apache-2.0 #model-index... |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in English
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-english"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
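For context, `processor.batch_decode` on the argmax ids amounts to greedy CTC decoding: collapse consecutive repeats, then drop the blank token. A toy sketch of that collapse rule (the blank id and the vocabulary below are made up for illustration):

```python
def ctc_greedy_collapse(ids, blank_id=0):
    """Collapse repeated ids, then drop blanks -- the core of greedy CTC decoding."""
    out = []
    prev = None
    for i in ids:
        if i != prev and i != blank_id:
            out.append(i)
        prev = i
    return out

vocab = {1: "C", 2: "A", 3: "T"}
ids = [1, 1, 0, 2, 2, 0, 0, 3]   # frame-level argmax ids
print("".join(vocab[i] for i in ctc_greedy_collapse(ids)))  # CAT
```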
| Reference | Prediction |
| ------------- | ------------- |
| "SHE'LL BE ALL RIGHT." | SHE'LL BE ALL RIGHT |
| SIX | SIX |
| "ALL'S WELL THAT ENDS WELL." | ALL AS WELL THAT ENDS WELL |
| DO YOU MEAN IT? | DO YOU MEAN IT |
| THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE, BUT STILL CAUSES REGRESSIONS. | THE NEW PATCH IS LESS INVASIVE THAN THE OLD ONE BUT STILL CAUSES REGRESSION |
| HOW IS MOZILLA GOING TO HANDLE AMBIGUITIES LIKE QUEUE AND CUE? | HOW IS MOSLILLAR GOING TO HANDLE ANDBEWOOTH HIS LIKE Q AND Q |
| "I GUESS YOU MUST THINK I'M KINDA BATTY." | RUSTIAN WASTIN PAN ONTE BATTLY |
| NO ONE NEAR THE REMOTE MACHINE YOU COULD RING? | NO ONE NEAR THE REMOTE MACHINE YOU COULD RING |
| SAUCE FOR THE GOOSE IS SAUCE FOR THE GANDER. | SAUCE FOR THE GUICE IS SAUCE FOR THE GONDER |
| GROVES STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD. | GRAFS STARTED WRITING SONGS WHEN SHE WAS FOUR YEARS OLD |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset mozilla-foundation/common_voice_6_0 --config en --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr53-large-english,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {E}nglish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english}},
year={2021}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "en", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 English by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice en", "type": "common_voice", "args": "en"}, "metrics": [{"type": "wer", "value": 19.06, "name": "Test WER"}, {"type": "cer", "value": 7.69, "name": "Test CER"}, {"type": "wer", "value": 14.81, "name": "Test WER (+LM)"}, {"type": "cer", "value": 6.84, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "en"}, "metrics": [{"type": "wer", "value": 27.72, "name": "Dev WER"}, {"type": "cer", "value": 11.65, "name": "Dev CER"}, {"type": "wer", "value": 20.85, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 11.01, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-english | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_vo... | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #en #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in English
================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on English using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #en #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoin... |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-finnish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fi"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-finnish"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| MYSTEERIMIES OLI OPPINUT MORAALINSA TARUISTA, ELOKUVISTA JA PELEISTÄ. | MYSTEERIMIES OLI OPPINUT MORALINSA TARUISTA ELOKUVISTA JA PELEISTÄ |
| ÄÄNESTIN MIETINNÖN PUOLESTA! | ÄÄNESTIN MIETINNÖN PUOLESTA |
| VAIN TUNTIA AIKAISEMMIN OLIMME MIEHENI KANSSA TUNTENEET SUURINTA ILOA. | PAIN TUNTIA AIKAISEMMIN OLIN MIEHENI KANSSA TUNTENEET SUURINTA ILAA |
| ENSIMMÄISELLE MIEHELLE SAI KOLME LASTA. | ENSIMMÄISELLE MIEHELLE SAI KOLME LASTA |
| ÄÄNESTIN MIETINNÖN PUOLESTA, SILLÄ POHJIMMILTAAN SIINÄ VASTUSTETAAN TÄTÄ SUUNTAUSTA. | ÄÄNESTIN MIETINNÖN PUOLESTA SILLÄ POHJIMMILTAAN SIINÄ VASTOTTETAAN TÄTÄ SUUNTAUSTA |
| TÄHDENLENTOJENKO VARALTA MINÄ SEN OLISIN TÄNNE KUSKANNUT? | TÄHDEN LENTOJENKO VARALTA MINÄ SEN OLISIN TÄNNE KUSKANNUT |
| SIITÄ SE TULEE. | SIITA SE TULEE |
| NIIN, KUULUU KIROUS, JA KAUHEA KARJAISU. | NIIN KUULUU KIROUS JA KAUHEA KARJAISU |
| ARKIT KUN OVAT NÄES ELEMENTTIRAKENTEISIA. | ARKIT KUN OVAT MÄISS' ELÄMÄTTEROKENTEISIÄ |
| JÄIN ALUKSEN SISÄÄN, MUTTA KUULIN OVEN LÄPI, ETTÄ ULKOPUOLELLA ALKOI TAPAHTUA. | JAKALOKSEHÄN SISÄL MUTTA KUULIN OVENLAPI ETTÄ ULKA KUOLLALLA ALKOI TAPAHTUA |
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fi"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-finnish"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-21). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| aapot/wav2vec2-large-xlsr-53-finnish | **32.51%** | **5.34%** |
| Tommi/wav2vec2-large-xlsr-53-finnish | 35.22% | 5.81% |
| vasilis/wav2vec2-large-xlsr-53-finnish | 38.24% | 6.49% |
| jonatasgrosman/wav2vec2-large-xlsr-53-finnish | 41.60% | 8.23% |
| birgermoell/wav2vec2-large-xlsr-finnish | 53.51% | 9.18% |
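Roughly speaking, the WER and CER reported above are Levenshtein edit distance normalized by reference length — over words for WER, over characters for CER. A minimal self-contained sketch (not the `wer.py`/`cer.py` scripts used above, which additionally handle batching):

```python
def edit_distance(ref, hyp):
    # classic Levenshtein dynamic program over a sequence of tokens
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1]

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

# one substituted word out of three; one substituted character out of fourteen
print(round(wer("SIITÄ SE TULEE", "SIITA SE TULEE"), 3))  # 0.333
print(round(cer("SIITÄ SE TULEE", "SIITA SE TULEE"), 3))  # 0.071
```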
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr53-large-finnish,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {F}innish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-finnish}},
year={2021}
}
``` | {"language": "fi", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Finnish by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fi", "type": "common_voice", "args": "fi"}, "metrics": [{"type": "wer", "value": 41.6, "name": "Test WER"}, {"type": "cer", "value": 8.23, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-finnish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fi",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"fi"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fi #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Finnish
================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Finnish using the train and validation splits of Common Voice 6.1 and CSS10.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the Finnish test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-21). Note that the table below may show results that differ from those already reported; this may be due to specifics of the other evaluation scripts used.
Model: aapot/wav2vec2-large-xlsr-53-finnish, WER: 32.51%, CER: 5.34%
Model: Tommi/wav2vec2-large-xlsr-53-finnish, WER: 35.22%, CER: 5.81%
Model: vasilis/wav2vec2-large-xlsr-53-finnish, WER: 38.24%, CER: 6.49%
Model: jonatasgrosman/wav2vec2-large-xlsr-53-finnish, WER: 41.60%, CER: 8.23%
Model: birgermoell/wav2vec2-large-xlsr-finnish, WER: 53.51%, CER: 9.18%
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fi #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in French
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on French using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-french"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| "CE DERNIER A ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE." | CE DERNIER ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE |
| CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ACHÉMÉNIDE ET SEPT DES SASSANIDES. | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ASHEMÉNID ET SEPT DES SASANDNIDES |
| "J'AI DIT QUE LES ACTEURS DE BOIS AVAIENT, SELON MOI, BEAUCOUP D'AVANTAGES SUR LES AUTRES." | JAI DIT QUE LES ACTEURS DE BOIS AVAIENT SELON MOI BEAUCOUP DAVANTAGES SUR LES AUTRES |
| LES PAYS-BAS ONT REMPORTÉ TOUTES LES ÉDITIONS. | LE PAYS-BAS ON REMPORTÉ TOUTES LES ÉDITIONS |
| IL Y A MAINTENANT UNE GARE ROUTIÈRE. | IL AMNARDIGAD LE TIRAN |
| HUIT | HUIT |
| DANS L’ATTENTE DU LENDEMAIN, ILS NE POUVAIENT SE DÉFENDRE D’UNE VIVE ÉMOTION | DANS L'ATTENTE DU LENDEMAIN IL NE POUVAIT SE DÉFENDRE DUNE VIVE ÉMOTION |
| LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES. | LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES |
| ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES. | ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES |
| ZÉRO | ZEGO |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-french --dataset mozilla-foundation/common_voice_6_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-french --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr53-large-french,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {F}rench},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-french}},
year={2021}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "fr", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 French by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice fr", "type": "common_voice", "args": "fr"}, "metrics": [{"type": "wer", "value": 17.65, "name": "Test WER"}, {"type": "cer", "value": 4.89, "name": "Test CER"}, {"type": "wer", "value": 13.59, "name": "Test WER (+LM)"}, {"type": "cer", "value": 3.91, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 34.35, "name": "Dev WER"}, {"type": "cer", "value": 14.09, "name": "Dev CER"}, {"type": "wer", "value": 24.72, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 12.33, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-french | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"fr",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | null | 2022-03-02T23:29:05+00:00 | [] | [
"fr"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #fr #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in French
===============================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on French using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #fr #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible... |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in German
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
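If your audio comes at a different sampling rate, resample it before feeding the model. A naive linear-interpolation sketch using only NumPy (illustration only — in practice `librosa.load(..., sr=16_000)`, as used in the inference scripts on this card, handles resampling with proper anti-aliasing):

```python
import numpy as np

def resample_linear(audio, orig_sr, target_sr):
    """Naive linear-interpolation resampler (illustration only --
    prefer librosa/torchaudio, which apply proper anti-aliasing)."""
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.arange(len(audio)) / orig_sr
    new_t = np.arange(n_target) / target_sr
    return np.interp(new_t, old_t, audio)

# One second of a 440 Hz tone at 44.1 kHz, brought down to the 16 kHz the model expects.
audio_44k = np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100)
audio_16k = resample_linear(audio_44k, 44_100, 16_000)
print(len(audio_16k))  # 16000
```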
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-german")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "de"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-german"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ZIEHT EUCH BITTE DRAUSSEN DIE SCHUHE AUS. | ZIEHT EUCH BITTE DRAUSSEN DIE SCHUHE AUS |
| ES KOMMT ZUM SHOWDOWN IN GSTAAD. | ES KOMMT ZUG STUNDEDAUTENESTERKT |
| IHRE FOTOSTRECKEN ERSCHIENEN IN MODEMAGAZINEN WIE DER VOGUE, HARPER’S BAZAAR UND MARIE CLAIRE. | IHRE FOTELSTRECKEN ERSCHIENEN MIT MODEMAGAZINEN WIE DER VALG AT DAS BASIN MA RIQUAIR |
| FELIPE HAT EINE AUCH FÜR MONARCHEN UNGEWÖHNLICH LANGE TITELLISTE. | FELIPPE HAT EINE AUCH FÜR MONACHEN UNGEWÖHNLICH LANGE TITELLISTE |
| ER WURDE ZU EHREN DES REICHSKANZLERS OTTO VON BISMARCK ERRICHTET. | ER WURDE ZU EHREN DES REICHSKANZLERS OTTO VON BISMARCK ERRICHTET M |
| WAS SOLLS, ICH BIN BEREIT. | WAS SOLL'S ICH BIN BEREIT |
| DAS INTERNET BESTEHT AUS VIELEN COMPUTERN, DIE MITEINANDER VERBUNDEN SIND. | DAS INTERNET BESTEHT AUS VIELEN COMPUTERN DIE MITEINANDER VERBUNDEN SIND |
| DER URANUS IST DER SIEBENTE PLANET IN UNSEREM SONNENSYSTEM. | DER URANUS IST DER SIEBENTE PLANET IN UNSEREM SONNENSYSTEM |
| DIE WAGEN ERHIELTEN EIN EINHEITLICHES ERSCHEINUNGSBILD IN WEISS MIT ROTEM FENSTERBAND. | DIE WAGEN ERHIELTEN EIN EINHEITLICHES ERSCHEINUNGSBILD IN WEISS MIT ROTEM FENSTERBAND |
| SIE WAR DIE COUSINE VON CARL MARIA VON WEBER. | SIE WAR DIE COUSINE VON KARL-MARIA VON WEBER |
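The `torch.argmax` + `processor.batch_decode` step in the script above performs greedy CTC decoding: consecutive repeated ids are collapsed and blank tokens are removed. A minimal sketch of that collapse (the token ids and blank id here are illustrative, not taken from the actual vocabulary):

```python
def ctc_greedy_collapse(ids, blank_id=0):
    """Collapse consecutive repeats, then drop blanks -- the core of CTC greedy decoding."""
    out = []
    prev = None
    for token_id in ids:
        if token_id != prev and token_id != blank_id:
            out.append(token_id)
        prev = token_id
    return out

# [7, 7, 0, 7, 3, 3] -> [7, 7, 3]: the blank (0) separates the two runs of 7,
# so they survive as two distinct emissions.
print(ctc_greedy_collapse([7, 7, 0, 7, 3, 3]))
```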
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-german --dataset mozilla-foundation/common_voice_6_0 --config de --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-german,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {G}erman},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german}},
year={2021}
}
``` | {"language": "de", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "de", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 German by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice de", "type": "common_voice", "args": "de"}, "metrics": [{"type": "wer", "value": 12.06, "name": "Test WER"}, {"type": "cer", "value": 2.92, "name": "Test CER"}, {"type": "wer", "value": 8.74, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.28, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 32.75, "name": "Dev WER"}, {"type": "cer", "value": 13.64, "name": "Dev CER"}, {"type": "wer", "value": 26.6, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 12.58, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-german | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"de",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | null | 2022-03-02T23:29:05+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #de #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in German
===============================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on German using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #de #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible... |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-greek")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "el"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-greek"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ΤΟ ΒΑΣΙΛΌΠΟΥΛΟ, ΠΟΥ ΜΟΙΆΖΕΙ ΛΕΟΝΤΑΡΆΚΙ ΚΑΙ ΑΕΤΟΥΔΆΚΙ | ΤΟ ΒΑΣΙΛΌΠΟΥΛΟ ΠΟΥ ΜΙΑΣΕ ΛΙΟΝΤΑΡΑΚΉ ΚΑΙ ΑΪΤΟΥΔΆΚΙ |
| ΣΥΝΆΜΑ ΞΕΠΡΌΒΑΛΑΝ ΑΠΌ ΜΈΣΑ ΑΠΌ ΤΑ ΔΈΝΤΡΑ, ΔΕΞΙΆ, ΑΡΜΑΤΩΜΈΝΟΙ ΚΑΒΑΛΑΡΈΟΙ. | ΣΥΝΆΜΑ ΚΑΙ ΤΡΌΒΑΛΑΝ ΑΠΌ ΜΈΣΑ ΑΠΌ ΤΑ ΔΈΝΤΡΑ ΔΕΞΙΆ ΑΡΜΑΤΩΜΈΝΟΙ ΚΑΒΑΛΑΡΈΟΙ |
| ΤΑ ΣΥΣΚΕΥΑΣΜΈΝΑ ΒΙΟΛΟΓΙΚΆ ΛΑΧΑΝΙΚΆ ΔΕΝ ΠΕΡΙΈΧΟΥΝ ΣΥΝΤΗΡΗΤΙΚΆ ΚΑΙ ΟΡΜΌΝΕΣ | ΤΑ ΣΥΣΚΕΦΑΣΜΈΝΑ ΒΙΟΛΟΓΙΚΆ ΛΑΧΑΝΙΚΆ ΔΕΝ ΠΕΡΙΈΧΟΥΝ ΣΙΔΗΡΗΤΙΚΆ ΚΑΙ ΟΡΜΌΝΕΣ |
| ΑΚΟΛΟΥΘΉΣΕΤΕ ΜΕ! | ΑΚΟΛΟΥΘΉΣΤΕ ΜΕ |
| ΚΑΙ ΠΟΎ ΜΠΟΡΏ ΝΑ ΤΟΝ ΒΡΩ; | Ε ΠΟΎ ΜΠΟΡΏ ΝΑ ΤΙ ΕΒΡΩ |
| ΝΑΙ! ΑΠΟΚΡΊΘΗΚΕ ΤΟ ΠΑΙΔΊ | ΝΑΙ ΑΠΟΚΡΊΘΗΚΕ ΤΟ ΠΑΙΔΊ |
| ΤΟ ΠΑΛΆΤΙ ΜΟΥ ΤΟ ΠΡΟΜΉΘΕΥΕ. | ΤΟ ΠΑΛΆΤΙ ΜΟΥ ΤΟ ΠΡΟΜΉΘΕΥΕ |
| ΉΛΘΕ ΜΉΝΥΜΑ ΑΠΌ ΤΟ ΘΕΊΟ ΒΑΣΙΛΙΆ; | ΉΛΘΑ ΜΕΊΝΕΙ ΜΕ ΑΠΌ ΤΟ ΘΕΊΟ ΒΑΣΊΛΙΑ |
| ΠΑΡΑΚΆΤΩ, ΈΝΑ ΡΥΆΚΙ ΜΟΥΡΜΟΎΡΙΖΕ ΓΛΥΚΆ, ΚΥΛΏΝΤΑΣ ΤΑ ΚΡΥΣΤΑΛΛΈΝΙΑ ΝΕΡΆ ΤΟΥ ΑΝΆΜΕΣΑ ΣΤΑ ΠΥΚΝΆ ΧΑΜΌΔΕΝΤΡΑ. | ΠΑΡΑΚΆΤΩ ΈΝΑ ΡΥΆΚΙ ΜΟΥΡΜΟΎΡΙΖΕ ΓΛΥΚΆ ΚΥΛΏΝΤΑΣ ΤΑ ΚΡΥΣΤΑΛΛΈΝΙΑ ΝΕΡΆ ΤΟΥ ΑΝΆΜΕΣΑ ΣΤΑ ΠΥΚΡΆ ΧΑΜΌΔΕΝΤΡΑ |
| ΠΡΆΓΜΑΤΙ, ΕΊΝΑΙ ΑΣΤΕΊΟ ΝΑ ΠΆΡΕΙ Ο ΔΙΆΒΟΛΟΣ | ΠΡΆΓΜΑΤΗ ΕΊΝΑΙ ΑΣΤΕΊΟ ΝΑ ΠΆΡΕΙ Ο ΔΙΆΒΟΛΟΣ |
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import re
import librosa
import warnings
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "el"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-greek"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the preprocessed test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
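The text normalization in the evaluation script above amounts to stripping the listed punctuation characters and upper-casing. A standalone illustration with a shortened character list (the full list is in the script):

```python
import re

CHARS_TO_IGNORE = [",", "?", ".", "!", ";", ":"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"

def normalize(sentence):
    # Same cleanup applied to the reference transcripts before scoring.
    return re.sub(chars_to_ignore_regex, "", sentence).upper()

print(normalize("hello, world!"))  # HELLO WORLD
```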
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show results that differ from those already reported; this may be due to specificities of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| lighteternal/wav2vec2-large-xlsr-53-greek | **10.13%** | **2.66%** |
| jonatasgrosman/wav2vec2-large-xlsr-53-greek | 11.62% | 3.36% |
| vasilis/wav2vec2-large-xlsr-53-greek | 19.09% | 5.88% |
| PereLluis13/wav2vec2-large-xlsr-53-greek | 20.16% | 5.71% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-greek,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {G}reek},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-greek}},
year={2021}
}
``` | {"language": "el", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Greek by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice el", "type": "common_voice", "args": "el"}, "metrics": [{"type": "wer", "value": 11.62, "name": "Test WER"}, {"type": "cer", "value": 3.36, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-greek | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"el",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"el"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #el #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Greek
==============================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Greek using the train and validation splits of Common Voice 6.1 and CSS10.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the Greek test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show results that differ from those already reported; this may be due to specificities of the other evaluation scripts used.
Model: lighteternal/wav2vec2-large-xlsr-53-greek, WER: 10.13%, CER: 2.66%
Model: jonatasgrosman/wav2vec2-large-xlsr-53-greek, WER: 11.62%, CER: 3.36%
Model: vasilis/wav2vec2-large-xlsr-53-greek, WER: 19.09%, CER: 5.88%
Model: PereLluis13/wav2vec2-large-xlsr-53-greek, WER: 20.16%, CER: 5.71%
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #el #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Hungarian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-hungarian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "hu"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-hungarian"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| BÜSZKÉK VAGYUNK A MAGYAR EMBEREK NAGYSZERŰ SZELLEMI ALKOTÁSAIRA. | BÜSZKÉK VAGYUNK A MAGYAR EMBEREK NAGYSZERŰ SZELLEMI ALKOTÁSAIRE |
| A NEMZETSÉG TAGJAI KÖZÜL EZT TERMESZTIK A LEGSZÉLESEBB KÖRBEN ÍZLETES TERMÉSÉÉRT. | A NEMZETSÉG TAGJAI KÖZÜL ESZSZERMESZTIK A LEGSZELESEBB KÖRBEN IZLETES TERMÉSSÉÉRT |
| A VÁROSBA VÁGYÓDOTT A LEGJOBBAN, ÉPPEN MERT ODA NEM JUTHATOTT EL SOHA. | A VÁROSBA VÁGYÓDOTT A LEGJOBBAN ÉPPEN MERT ODA NEM JUTHATOTT EL SOHA |
| SÍRJA MÁRA MEGSEMMISÜLT. | SIMGI A MANDO MEG SEMMICSEN |
| MINDEN ZENESZÁMOT DRÁGAKŐNEK NEVEZETT. | MINDEN ZENA SZÁMODRAGAKŐNEK NEVEZETT |
| ÍGY MÚLT EL A DÉLELŐTT. | ÍGY MÚLT EL A DÍN ELŐTT |
| REMEK POFA! | A REMEG PUFO |
| SZEMET SZEMÉRT, FOGAT FOGÉRT. | SZEMET SZEMÉRT FOGADD FOGÉRT |
| BIZTOSAN LAKIK ITT NÉHÁNY ATYÁMFIA. | BIZTOSAN LAKIKÉT NÉHANY ATYAMFIA |
| A SOROK KÖZÖTT OLVAS. | A SOROG KÖZÖTT OLVAS |
## Evaluation
The model can be evaluated as follows on the Hungarian test data of Common Voice.
```python
import torch
import re
import librosa
import warnings
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "hu"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-hungarian"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the preprocessed test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
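The WER reported below is the word-level edit distance divided by the number of reference words. A self-contained sketch of that metric (the repository's `wer.py` may differ in details such as chunked computation):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, normalized by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four -> 25% WER (example taken from the table above).
print(word_error_rate("A SOROK KÖZÖTT OLVAS", "A SOROG KÖZÖTT OLVAS"))  # 0.25
```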
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show results that differ from those already reported; this may be due to specificities of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-hungarian | **31.40%** | **6.20%** |
| anton-l/wav2vec2-large-xlsr-53-hungarian | 42.39% | 9.39% |
| gchhablani/wav2vec2-large-xlsr-hu | 46.42% | 10.04% |
| birgermoell/wav2vec2-large-xlsr-hungarian | 46.93% | 10.31% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-hungarian,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {H}ungarian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-hungarian}},
year={2021}
}
``` | {"language": "hu", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Hungarian by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hu", "type": "common_voice", "args": "hu"}, "metrics": [{"type": "wer", "value": 31.4, "name": "Test WER"}, {"type": "cer", "value": 6.2, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-hungarian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hu",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"hu"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hu #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Hungarian
==================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Hungarian using the train and validation splits of Common Voice 6.1 and CSS10.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the Hungarian test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show results that differ from those already reported; this may be due to specificities of the other evaluation scripts used.
Model: jonatasgrosman/wav2vec2-large-xlsr-53-hungarian, WER: 31.40%, CER: 6.20%
Model: anton-l/wav2vec2-large-xlsr-53-hungarian, WER: 42.39%, CER: 9.39%
Model: gchhablani/wav2vec2-large-xlsr-hu, WER: 46.42%, CER: 10.04%
Model: birgermoell/wav2vec2-large-xlsr-hungarian, WER: 46.93%, CER: 10.31%
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hu #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Italian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-italian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "it"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-italian"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| POI LEI MORÌ. | POI LEI MORÌ |
| IL LIBRO HA SUSCITATO MOLTE POLEMICHE A CAUSA DEI SUOI CONTENUTI. | IL LIBRO HA SUSCITATO MOLTE POLEMICHE A CAUSA DEI SUOI CONTENUTI |
| "FIN DALL'INIZIO LA SEDE EPISCOPALE È STATA IMMEDIATAMENTE SOGGETTA ALLA SANTA SEDE." | FIN DALL'INIZIO LA SEDE EPISCOPALE È STATA IMMEDIATAMENTE SOGGETTA ALLA SANTA SEDE |
| IL VUOTO ASSOLUTO? | IL VUOTO ASSOLUTO |
| DOPO ALCUNI ANNI, EGLI DECISE DI TORNARE IN INDIA PER RACCOGLIERE ALTRI INSEGNAMENTI. | DOPO ALCUNI ANNI EGLI DECISE DI TORNARE IN INDIA PER RACCOGLIERE ALTRI INSEGNAMENTI |
| SALVATION SUE | SALVATION SOO |
| IN QUESTO MODO, DECIO OTTENNE IL POTERE IMPERIALE. | IN QUESTO MODO DECHO OTTENNE IL POTERE IMPERIALE |
| SPARTA NOVARA ACQUISISCE IL TITOLO SPORTIVO PER GIOCARE IN PRIMA CATEGORIA. | PARCANOVARACFILISCE IL TITOLO SPORTIVO PER GIOCARE IN PRIMA CATEGORIA |
| IN SEGUITO, KYGO E SHEAR HANNO PROPOSTO DI CONTINUARE A LAVORARE SULLA CANZONE. | IN SEGUITO KIGO E SHIAR HANNO PROPOSTO DI CONTINUARE A LAVORARE SULLA CANZONE |
| ALAN CLARKE | ALAN CLARK |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-italian --dataset mozilla-foundation/common_voice_6_0 --config it --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-italian --dataset speech-recognition-community-v2/dev_data --config it --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-italian,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {I}talian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian}},
year={2021}
}
```
| {"language": "it", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard", "it", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Italian by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice it", "type": "common_voice", "args": "it"}, "metrics": [{"type": "wer", "value": 9.41, "name": "Test WER"}, {"type": "cer", "value": 2.29, "name": "Test CER"}, {"type": "wer", "value": 6.91, "name": "Test WER (+LM)"}, {"type": "cer", "value": 1.83, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "it"}, "metrics": [{"type": "wer", "value": 21.78, "name": "Dev WER"}, {"type": "cer", "value": 7.94, "name": "Dev CER"}, {"type": "wer", "value": 15.82, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 6.83, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-italian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"it",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | null | 2022-03-02T23:29:05+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #it #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Italian
================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Italian using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #it #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible... |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice), [CSS10](https://github.com/Kyubyong/css10) and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-japanese")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ja"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-japanese"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| 祖母は、おおむね機嫌よく、サイコロをころがしている。 | 人母は重にきね起くさいがしている |
| 財布をなくしたので、交番へ行きます。 | 財布をなく手端ので勾番へ行きます |
| 飲み屋のおやじ、旅館の主人、医者をはじめ、交際のある人にきいてまわったら、みんな、私より収入が多いはずなのに、税金は安い。 | ノ宮屋のお親じ旅館の主に医者をはじめ交際のアル人トに聞いて回ったらみんな私より収入が多いはなうに税金は安い |
| 新しい靴をはいて出かけます。 | だらしい靴をはいて出かけます |
| このためプラズマ中のイオンや電子の持つ平均運動エネルギーを温度で表現することがある | このためプラズマ中のイオンや電子の持つ平均運動エネルギーを温度で表弁することがある |
| 松井さんはサッカーより野球のほうが上手です。 | 松井さんはサッカーより野球のほうが上手です |
| 新しいお皿を使います。 | 新しいお皿を使います |
| 結婚以来三年半ぶりの東京も、旧友とのお酒も、夜行列車も、駅で寝て、朝を待つのも久しぶりだ。 | 結婚ル二来三年半降りの東京も吸とのお酒も野越者も駅で寝て朝を待つの久しぶりた |
| これまで、少年野球、ママさんバレーなど、地域スポーツを支え、市民に密着してきたのは、無数のボランティアだった。 | これまで少年野球<unk>三バレーなど地域スポーツを支え市民に満着してきたのは娘数のボランティアだった |
| 靴を脱いで、スリッパをはきます。 | 靴を脱いでスイパーをはきます |
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ja"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-japanese"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-10). Note that the table below may show different results from those already reported; this may be due to specificities of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-japanese | **81.80%** | **20.16%** |
| vumichien/wav2vec2-large-xlsr-japanese | 1108.86% | 23.40% |
| qqhann/w2v_hf_jsut_xlsr53 | 1012.18% | 70.77% |
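The WER reported above is the word-level Levenshtein (edit) distance between prediction and reference, divided by the number of reference words; CER is the same quantity computed over characters. A small self-contained sketch of both metrics (not the `wer.py`/`cer.py` scripts linked above, which also support chunked computation):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev_diag, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(dp[j] + 1,             # deletion
                      dp[j - 1] + 1,         # insertion
                      prev_diag + (r != h))  # substitution / match
            prev_diag, dp[j] = dp[j], cur
    return dp[-1]

def wer(reference, hypothesis):
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

print(wer("the cat sat", "the cat sat down"))  # one insertion over three reference words
```

A WER above 100% (as in the table) is possible: the edit distance can exceed the reference length when the hypothesis contains many insertions, which is common when word segmentation differs between scripts.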
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-japanese,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {J}apanese},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-japanese}},
year={2021}
}
``` | {"language": "ja", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Japanese by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ja", "type": "common_voice", "args": "ja"}, "metrics": [{"type": "wer", "value": 81.8, "name": "Test WER"}, {"type": "cer", "value": 20.16, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-japanese | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ja",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ja #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Japanese
=================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Japanese using the train and validation splits of Common Voice 6.1, CSS10 and JSUT.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the Japanese test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-10). Note that the table below may show different results from those already reported; this may be due to specificities of the other evaluation scripts used.
Model: jonatasgrosman/wav2vec2-large-xlsr-53-japanese, WER: 81.80%, CER: 20.16%
Model: vumichien/wav2vec2-large-xlsr-japanese, WER: 1108.86%, CER: 23.40%
Model: qqhann/w2v\_hf\_jsut\_xlsr53, WER: 1012.18%, CER: 70.77%
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ja #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Persian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Persian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-persian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fa"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-persian"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| از مهمونداری کنار بکشم | از مهمانداری کنار بکشم |
| برو از مهرداد بپرس. | برو از ماقدعاد به پرس |
| خب ، تو چیكار می كنی؟ | خوب تو چیکار می کنی |
| مسقط پایتخت عمان در عربی به معنای محل سقوط است | مسقط پایتخت عمان در عربی به بعنای محل سقوط است |
| آه، نه اصلاُ! | اهنه اصلا |
| توانست | توانست |
| قصیده فن شعر میگوید ای دوستان | قصیده فن شعر میگوید ایدوستون |
| دو استایل متفاوت دارین | دوبوست داریل و متفاوت بری |
| دو روز قبل از کریسمس ؟ | اون مفتود پش پشش |
| ساعت های کاری چیست؟ | این توری که موشیکل خب |
## Evaluation
The model can be evaluated as follows on the Persian test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fa"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-persian"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show different results from those already reported; this may be due to specificities of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-persian | **30.12%** | **7.37%** |
| m3hrdadfi/wav2vec2-large-xlsr-persian-v2 | 33.85% | 8.79% |
| m3hrdadfi/wav2vec2-large-xlsr-persian | 34.37% | 8.98% |
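Before scoring, the evaluation script above strips the characters in `CHARS_TO_IGNORE` and upper-cases both references and predictions, so the reported metrics are punctuation- and case-insensitive. A reduced sketch of that normalization (the ignore list here is shortened for illustration):

```python
import re

# Shortened ignore list for illustration; the full script uses a much longer one.
CHARS_TO_IGNORE = [",", "?", ".", "!", ";", ":", '"', "«", "»", "؟", "،"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"

def normalize(sentence):
    """Strip ignored punctuation and upper-case, mirroring the evaluation script."""
    return re.sub(chars_to_ignore_regex, "", sentence).upper()

print(normalize("hello, world!"))          # HELLO WORLD
print(normalize("برو از مهرداد بپرس."))    # trailing period removed
```

Note that `.upper()` is a no-op for Persian script, but it matters for mixed-script sentences and keeps the pipeline identical across languages.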
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-persian,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {P}ersian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-persian}},
year={2021}
}
``` | {"language": "fa", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Persian by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fa", "type": "common_voice", "args": "fa"}, "metrics": [{"type": "wer", "value": 30.12, "name": "Test WER"}, {"type": "cer", "value": 7.37, "name": "Test CER"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-persian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fa",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fa #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Persian
================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Persian using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
The model can be evaluated as follows on the Persian test data of Common Voice.
Test Result:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show different results from those already reported; this may be due to specificities of the other evaluation scripts used.
Model: jonatasgrosman/wav2vec2-large-xlsr-53-persian, WER: 30.12%, CER: 7.37%
Model: m3hrdadfi/wav2vec2-large-xlsr-persian-v2, WER: 33.85%, CER: 8.79%
Model: m3hrdadfi/wav2vec2-large-xlsr-persian, WER: 34.37%, CER: 8.98%
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fa #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Polish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
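In the snippets below, the 16 kHz requirement is handled by `librosa.load(batch["path"], sr=16_000)`, which resamples on load. As a rough illustration of what resampling does, here is a naive linear-interpolation resampler in pure Python (no anti-aliasing filter, so it is for intuition only; real pipelines should use librosa or torchaudio):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampling (no anti-aliasing; illustration only)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate           # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Downsampling 48 kHz audio to 16 kHz keeps one sample in three.
signal_48k = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
print(resample_linear(signal_48k, 48_000, 16_000))
```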
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-polish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "pl"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-polish"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| """CZY DRZWI BYŁY ZAMKNIĘTE?""" | PRZY DRZWI BYŁY ZAMKNIĘTE |
| GDZIEŻ TU POWÓD DO WYRZUTÓW? | WGDZIEŻ TO POM DO WYRYDÓ |
| """O TEM JEDNAK NIE BYŁO MOWY.""" | O TEM JEDNAK NIE BYŁO MOWY |
| LUBIĘ GO. | LUBIĄ GO |
| — TO MI NIE POMAGA. | TO MNIE NIE POMAGA |
| WCIĄŻ LUDZIE WYSIADAJĄ PRZED ZAMKIEM, Z MIASTA, Z PRAGI. | WCIĄŻ LUDZIE WYSIADAJĄ PRZED ZAMKIEM Z MIASTA Z PRAGI |
| ALE ON WCALE INACZEJ NIE MYŚLAŁ. | ONY MONITCENIE PONACZUŁA NA MASU |
| A WY, CO TAK STOICIE? | A WY CO TAK STOICIE |
| A TEN PRZYRZĄD DO CZEGO SŁUŻY? | A TEN PRZYRZĄD DO CZEGO SŁUŻY |
| NA JUTRZEJSZYM KOLOKWIUM BĘDZIE PIĘĆ PYTAŃ OTWARTYCH I TEST WIELOKROTNEGO WYBORU. | NAJUTRZEJSZYM KOLOKWIUM BĘDZIE PIĘĆ PYTAŃ OTWARTYCH I TEST WIELOKROTNEGO WYBORU |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-polish --dataset mozilla-foundation/common_voice_6_0 --config pl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-polish --dataset speech-recognition-community-v2/dev_data --config pl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-polish,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {P}olish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-polish}},
year={2021}
}
``` | {"language": "pl", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "pl", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Polish by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice pl", "type": "common_voice", "args": "pl"}, "metrics": [{"type": "wer", "value": 14.21, "name": "Test WER"}, {"type": "cer", "value": 3.49, "name": "Test CER"}, {"type": "wer", "value": 10.98, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.93, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "pl"}, "metrics": [{"type": "wer", "value": 33.18, "name": "Dev WER"}, {"type": "cer", "value": 15.92, "name": "Dev CER"}, {"type": "wer", "value": 29.31, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 15.17, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-polish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"pl",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pl"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #pl #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Polish
===============================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Polish using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #pl #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible... |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-portuguese")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "pt"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-portuguese"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| NEM O RADAR NEM OS OUTROS INSTRUMENTOS DETECTARAM O BOMBARDEIRO STEALTH. | NEMHUM VADAN OS OLTWES INSTRUMENTOS DE TTÉÃN UM BOMBERDEIRO OSTER |
| PEDIR DINHEIRO EMPRESTADO ÀS PESSOAS DA ALDEIA | E DIR ENGINHEIRO EMPRESTAR AS PESSOAS DA ALDEIA |
| OITO | OITO |
| TRANCÁ-LOS | TRANCAUVOS |
| REALIZAR UMA INVESTIGAÇÃO PARA RESOLVER O PROBLEMA | REALIZAR UMA INVESTIGAÇÃO PARA RESOLVER O PROBLEMA |
| O YOUTUBE AINDA É A MELHOR PLATAFORMA DE VÍDEOS. | YOUTUBE AINDA É A MELHOR PLATAFOMA DE VÍDEOS |
| MENINA E MENINO BEIJANDO NAS SOMBRAS | MENINA E MENINO BEIJANDO NAS SOMBRAS |
| EU SOU O SENHOR | EU SOU O SENHOR |
| DUAS MULHERES QUE SENTAM-SE PARA BAIXO LENDO JORNAIS. | DUAS MIERES QUE SENTAM-SE PARA BAICLANE JODNÓI |
| EU ORIGINALMENTE ESPERAVA | EU ORIGINALMENTE ESPERAVA |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-portuguese --dataset mozilla-foundation/common_voice_6_0 --config pt --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-portuguese --dataset speech-recognition-community-v2/dev_data --config pt --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
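The evaluation script above reports WER (word error rate) and CER. As a rough illustration of what WER measures — this is a plain-Python sketch for intuition, not the metric implementation used by `eval.py` — WER is the word-level edit distance between reference and prediction, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("OITO", "OITO"))               # 0.0 (exact match)
print(wer("TRANCÁ-LOS", "TRANCAUVOS"))   # 1.0 (the single word is substituted)
```

Lowering WER by rescoring with a language model is what produces the "+LM" numbers reported for this model.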
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-portuguese,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {P}ortuguese},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-portuguese}},
year={2021}
}
``` | {"language": "pt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "pt", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Portuguese by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice pt", "type": "common_voice", "args": "pt"}, "metrics": [{"type": "wer", "value": 11.31, "name": "Test WER"}, {"type": "cer", "value": 3.74, "name": "Test CER"}, {"type": "wer", "value": 9.01, "name": "Test WER (+LM)"}, {"type": "cer", "value": 3.21, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "pt"}, "metrics": [{"type": "wer", "value": 42.1, "name": "Dev WER"}, {"type": "cer", "value": 17.93, "name": "Dev CER"}, {"type": "wer", "value": 36.92, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 16.88, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-portuguese | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"pt",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #pt #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Portuguese
===================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #pt #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible... |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Russian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Russian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
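The 16kHz requirement above matters because the model was pretrained on 16kHz audio. In the inference script below, `librosa.load(..., sr=16_000)` resamples automatically; conceptually, resampling re-evaluates the waveform at new sample positions. A naive linear-interpolation sketch of that idea (`resample_linear` is a hypothetical helper for illustration only — production code should use librosa or torchaudio, which apply proper band-limited filtering):

```python
def resample_linear(samples, orig_sr, target_sr):
    """Toy resampler: linearly interpolate the waveform at the new sample positions."""
    if orig_sr == target_sr or len(samples) < 2:
        return list(samples)
    n_out = round(len(samples) * target_sr / orig_sr)
    step = (len(samples) - 1) / (n_out - 1)  # input-position advance per output sample
    out = []
    for i in range(n_out):
        pos = i * step
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Doubling the sampling rate doubles the number of samples
print(len(resample_linear([0.0, 0.5, 1.0, 0.5], 8000, 16000)))  # 8
```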
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-russian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ru"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-russian"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ОН РАБОТАТЬ, А ЕЕ НЕ УДЕРЖАТЬ НИКАК — БЕГАЕТ ЗА КЛЁШЕМ КАЖДОГО БУЛЬВАРНИКА. | ОН РАБОТАТЬ А ЕЕ НЕ УДЕРЖАТ НИКАК БЕГАЕТ ЗА КЛЕШОМ КАЖДОГО БУЛЬБАРНИКА |
| ЕСЛИ НЕ БУДЕТ ВОЗРАЖЕНИЙ, Я БУДУ СЧИТАТЬ, ЧТО АССАМБЛЕЯ СОГЛАСНА С ЭТИМ ПРЕДЛОЖЕНИЕМ. | ЕСЛИ НЕ БУДЕТ ВОЗРАЖЕНИЙ Я БУДУ СЧИТАТЬ ЧТО АССАМБЛЕЯ СОГЛАСНА С ЭТИМ ПРЕДЛОЖЕНИЕМ |
| ПАЛЕСТИНЦАМ НЕОБХОДИМО СНАЧАЛА УСТАНОВИТЬ МИР С ИЗРАИЛЕМ, А ЗАТЕМ ДОБИВАТЬСЯ ПРИЗНАНИЯ ГОСУДАРСТВЕННОСТИ. | ПАЛЕСТИНЦАМ НЕОБХОДИМО СНАЧАЛА УСТАНОВИТЬ С НИ МИР ФЕЗРЕЛЕМ А ЗАТЕМ ДОБИВАТЬСЯ ПРИЗНАНИЯ ГОСУДАРСТВЕНСКИ |
| У МЕНЯ БЫЛО ТАКОЕ ЧУВСТВО, ЧТО ЧТО-ТО ТАКОЕ ОЧЕНЬ ВАЖНОЕ Я ПРИБАВЛЯЮ. | У МЕНЯ БЫЛО ТАКОЕ ЧУВСТВО ЧТО ЧТО-ТО ТАКОЕ ОЧЕНЬ ВАЖНОЕ Я ПРЕДБАВЛЯЕТ |
| ТОЛЬКО ВРЯД ЛИ ПОЙМЕТ. | ТОЛЬКО ВРЯД ЛИ ПОЙМЕТ |
| ВРОНСКИЙ, СЛУШАЯ ОДНИМ УХОМ, ПЕРЕВОДИЛ БИНОКЛЬ С БЕНУАРА НА БЕЛЬ-ЭТАЖ И ОГЛЯДЫВАЛ ЛОЖИ. | ЗЛАЗКИ СЛУШАЮ ОТ ОДНИМ УХАМ ТЫ ВОТИ В ВИНОКОТ СПИЛА НА ПЕРЕТАЧ И ОКЛЯДЫВАЛ БОСУ |
| К СОЖАЛЕНИЮ, СИТУАЦИЯ ПРОДОЛЖАЕТ УХУДШАТЬСЯ. | К СОЖАЛЕНИЮ СИТУАЦИИ ПРОДОЛЖАЕТ УХУЖАТЬСЯ |
| ВСЁ ЖАЛОВАНИЕ УХОДИЛО НА ДОМАШНИЕ РАСХОДЫ И НА УПЛАТУ МЕЛКИХ НЕПЕРЕВОДИВШИХСЯ ДОЛГОВ. | ВСЕ ЖАЛОВАНИЕ УХОДИЛО НА ДОМАШНИЕ РАСХОДЫ И НА УПЛАТУ МЕЛКИХ НЕ ПЕРЕВОДИВШИХСЯ ДОЛГОВ |
| ТЕПЕРЬ ДЕЛО, КОНЕЧНО, ЗА ТЕМ, ЧТОБЫ ПРЕВРАТИТЬ СЛОВА В ДЕЛА. | ТЕПЕРЬ ДЕЛАЮ КОНЕЧНО ЗАТЕМ ЧТОБЫ ПРЕВРАТИТЬ СЛОВА В ДЕЛА |
| ДЕВЯТЬ | ЛЕВЕТЬ |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-russian --dataset mozilla-foundation/common_voice_6_0 --config ru --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-russian --dataset speech-recognition-community-v2/dev_data --config ru --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use the following BibTeX entry:
```bibtex
@misc{grosman2021xlsr53-large-russian,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {R}ussian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-russian}},
year={2021}
}
``` | {"language": "ru", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "ru", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Russian by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice ru", "type": "common_voice", "args": "ru"}, "metrics": [{"type": "wer", "value": 13.3, "name": "Test WER"}, {"type": "cer", "value": 2.88, "name": "Test CER"}, {"type": "wer", "value": 9.57, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.24, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 40.22, "name": "Dev WER"}, {"type": "cer", "value": 14.8, "name": "Dev CER"}, {"type": "wer", "value": 33.61, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 13.5, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-russian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"ru",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | null | 2022-03-02T23:29:05+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #ru #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Russian
================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Russian using the train and validation splits of Common Voice 6.1 and CSS10.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #ru #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible... |
automatic-speech-recognition | transformers |
# Fine-tuned XLSR-53 large model for speech recognition in Spanish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-spanish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "es"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-spanish"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| HABITA EN AGUAS POCO PROFUNDAS Y ROCOSAS. | HABITAN AGUAS POCO PROFUNDAS Y ROCOSAS |
| OPERA PRINCIPALMENTE VUELOS DE CABOTAJE Y REGIONALES DE CARGA. | OPERA PRINCIPALMENTE VUELO DE CARBOTAJES Y REGIONALES DE CARGAN |
| PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN. | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN |
| TRES | TRES |
| REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA, PARA CONTINUAR LUEGO EN ESPAÑA. | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA PARA CONTINUAR LUEGO EN ESPAÑA |
| EN LOS AÑOS QUE SIGUIERON, ESTE TRABAJO ESPARTA PRODUJO DOCENAS DE BUENOS JUGADORES. | EN LOS AÑOS QUE SIGUIERON ESTE TRABAJO ESPARTA PRODUJO DOCENA DE BUENOS JUGADORES |
| SE ESTÁ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS. | SE ESTÓ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS |
| SÍ | SÍ |
| "FUE ""SACADA"" DE LA SERIE EN EL EPISODIO ""LEAD"", EN QUE ALEXANDRA CABOT REGRESÓ." | FUE SACADA DE LA SERIE EN EL EPISODIO LEED EN QUE ALEXANDRA KAOT REGRESÓ |
| SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOKA, EN LA PROVINCIA DE BIOKO SUR. | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOCA EN LA PROVINCIA DE PÍOCOSUR |
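The `processor.batch_decode` call above performs greedy CTC decoding: take the argmax token at each audio frame, collapse consecutive repeats, then drop the CTC blank token. A toy sketch of that rule, using a hypothetical 3-symbol vocabulary (the real model's vocabulary and blank id differ):

```python
def ctc_greedy_decode(frame_ids, id_to_char, blank_id=0):
    """Collapse repeated per-frame predictions, then drop CTC blanks."""
    chars = []
    prev = None
    for i in frame_ids:
        if i != prev and i != blank_id:  # a new, non-blank symbol starts here
            chars.append(id_to_char[i])
        prev = i
    return "".join(chars)

vocab = {1: "S", 2: "Í"}  # hypothetical vocabulary; id 0 is the blank
print(ctc_greedy_decode([1, 1, 0, 2, 2, 2], vocab))  # SÍ
```

Note how the blank separates genuinely repeated characters: `[1, 0, 1]` decodes to `"SS"`, while `[1, 1]` collapses to a single `"S"`.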
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset mozilla-foundation/common_voice_6_0 --config es --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use the following BibTeX entry:
```bibtex
@misc{grosman2021xlsr53-large-spanish,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {S}panish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish}},
year={2021}
}
``` | {"language": "es", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "es", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Spanish by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice es", "type": "common_voice", "args": "es"}, "metrics": [{"type": "wer", "value": 8.82, "name": "Test WER"}, {"type": "cer", "value": 2.58, "name": "Test CER"}, {"type": "wer", "value": 6.27, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.06, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "es"}, "metrics": [{"type": "wer", "value": 30.19, "name": "Dev WER"}, {"type": "cer", "value": 13.56, "name": "Dev CER"}, {"type": "wer", "value": 24.71, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 12.61, "name": "Dev CER (+LM)"}]}]}]} | jonatasgrosman/wav2vec2-large-xlsr-53-spanish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"es",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"lice... | null | 2022-03-02T23:29:05+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #es #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Fine-tuned XLSR-53 large model for speech recognition in Spanish
================================================================
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Spanish using the train and validation splits of Common Voice 6.1.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the HuggingSound library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #es #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in Dutch
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Dutch using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-dutch")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "nl"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-dutch"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
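In the script above, `processor(..., padding=True)` right-pads the variable-length audio arrays in a batch to a common length and returns an `attention_mask` marking which samples are real. A plain-Python sketch of that idea (`pad_batch` is a hypothetical helper, for illustration only):

```python
def pad_batch(sequences, pad_value=0.0):
    """Right-pad sequences to the longest length and build an attention mask."""
    max_len = max(len(s) for s in sequences)
    padded = [list(s) + [pad_value] * (max_len - len(s)) for s in sequences]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in sequences]
    return padded, mask

batch, mask = pad_batch([[0.1, 0.2, 0.3], [0.4]])
print(batch)  # [[0.1, 0.2, 0.3], [0.4, 0.0, 0.0]]
print(mask)   # [[1, 1, 1], [1, 0, 0]]
```

Passing the mask to the model (as `attention_mask` above) lets it ignore the padded region during inference.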
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-dutch --dataset mozilla-foundation/common_voice_8_0 --config nl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-dutch --dataset speech-recognition-community-v2/dev_data --config nl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-dutch,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {D}utch},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-dutch}},
year={2022}
}
``` | {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "nl", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 Dutch by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "nl"}, "metrics": [{"type": "wer", "value": 10.38, "name": "Test WER"}, {"type": "cer", "value": 3.04, "name": "Test CER"}, {"type": "wer", "value": 6.83, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.31, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 31.12, "name": "Dev WER"}, {"type": "cer", "value": 15.92, "name": "Dev CER"}, {"type": "wer", "value": 23.95, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 14.18, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "nl"}, "metrics": [{"type": "wer", "value": 20.41, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-dutch | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"nl",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"nl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in Dutch
Fine-tuned facebook/wav2vec2-xls-r-1b on Dutch using the train and validation splits of Common Voice 8.0, Multilingual LibriSpeech, and Voxpopuli.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in Dutch\n\nFine-tuned facebook/wav2vec2-xls-r-1b on Dutch using the train and validation splits of Common Voice 8.0, Multilingual LibriSpeech, and Voxpopuli.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nThis model has been fine... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #nl #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in English
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on English using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual LibriSpeech](https://www.openslr.org/94/), [TED-LIUMv3](https://www.openslr.org/51/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-english")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "en"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-english"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-english --dataset mozilla-foundation/common_voice_8_0 --config en --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-english --dataset speech-recognition-community-v2/dev_data --config en --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-english,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {E}nglish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-english}},
year={2022}
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "en", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 English by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "config": "en", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 21.05, "name": "Test WER"}, {"type": "cer", "value": 8.44, "name": "Test CER"}, {"type": "wer", "value": 17.31, "name": "Test WER (+LM)"}, {"type": "cer", "value": 7.77, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "en"}, "metrics": [{"type": "wer", "value": 20.53, "name": "Dev WER"}, {"type": "cer", "value": 9.31, "name": "Dev CER"}, {"type": "wer", "value": 17.7, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 8.93, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "en"}, "metrics": [{"type": "wer", "value": 17.88, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-english | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #en #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in English
Fine-tuned facebook/wav2vec2-xls-r-1b on English using the train and validation splits of Common Voice 8.0, Multilingual LibriSpeech, TED-LIUMv3, and Voxpopuli.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in English\n\nFine-tuned facebook/wav2vec2-xls-r-1b on English using the train and validation splits of Common Voice 8.0, Multilingual LibriSpeech, TED-LIUMv3, and Voxpopuli.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nThis mod... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #en #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in French
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on French using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [MediaSpeech](https://www.openslr.org/108/), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-french")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "fr"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-french"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-french --dataset mozilla-foundation/common_voice_8_0 --config fr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-french --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
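The eval script above reports WER and CER for the model. As a rough illustration of how word error rate is computed — a minimal pure-Python sketch, not the project's actual `eval.py` implementation, which uses proper metric libraries:

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over words,
# normalized by reference length. Illustrative only -- not the logic of
# the project's eval.py.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance table
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

CER follows the same recurrence over characters instead of words.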
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr-1b-french,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {F}rench},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-french}},
year={2022}
}
``` | {"language": ["fr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "fr", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 French by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "fr"}, "metrics": [{"type": "wer", "value": 16.85, "name": "Test WER"}, {"type": "cer", "value": 4.66, "name": "Test CER"}, {"type": "wer", "value": 16.32, "name": "Test WER (+LM)"}, {"type": "cer", "value": 4.21, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 22.34, "name": "Dev WER"}, {"type": "cer", "value": 9.88, "name": "Dev CER"}, {"type": "wer", "value": 17.16, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 9.38, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "fr"}, "metrics": [{"type": "wer", "value": 19.15, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-french | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"fr"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #fr #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in French
Fine-tuned facebook/wav2vec2-xls-r-1b on French using the train and validation splits of Common Voice 8.0, MediaSpeech, Multilingual TEDx, Multilingual LibriSpeech, and Voxpopuli.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in French\n\nFine-tuned facebook/wav2vec2-xls-r-1b on French using the train and validation splits of Common Voice 8.0, MediaSpeech, Multilingual TEDx, Multilingual LibriSpeech, and Voxpopuli.\nWhen using this model, make sure that your speech input is sampled at ... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #fr #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in German
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on German using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-german")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "de"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-german"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
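In the script above, `padding=True` makes the processor pad every waveform in the batch to the longest one and return an attention mask so the model ignores the padded samples. A schematic pure-Python version of that behavior (illustrative only — the real logic lives inside `Wav2Vec2Processor`):

```python
# Schematic batch padding, mimicking what padding=True does in the
# processor call above. The real implementation is in Wav2Vec2Processor.
def pad_batch(sequences, pad_value=0.0):
    max_len = max(len(s) for s in sequences)
    input_values, attention_mask = [], []
    for s in sequences:
        pad = max_len - len(s)
        input_values.append(list(s) + [pad_value] * pad)  # right-pad samples
        attention_mask.append([1] * len(s) + [0] * pad)   # 1 = real, 0 = pad
    return input_values, attention_mask
```

The returned mask is what gets passed as `attention_mask` to the model.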
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-german --dataset mozilla-foundation/common_voice_8_0 --config de --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr-1b-german,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {G}erman},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-german}},
year={2022}
}
``` | {"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "de", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 German by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "de"}, "metrics": [{"type": "wer", "value": 10.95, "name": "Test WER"}, {"type": "cer", "value": 2.72, "name": "Test CER"}, {"type": "wer", "value": 8.13, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.18, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 22.68, "name": "Dev WER"}, {"type": "cer", "value": 9.17, "name": "Dev CER"}, {"type": "wer", "value": 17.07, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 8.45, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "de"}, "metrics": [{"type": "wer", "value": 19.67, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-german | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #de #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in German
Fine-tuned facebook/wav2vec2-xls-r-1b on German using the train and validation splits of Common Voice 8.0, Multilingual TEDx, Multilingual LibriSpeech, and Voxpopuli.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in German\n\nFine-tuned facebook/wav2vec2-xls-r-1b on German using the train and validation splits of Common Voice 8.0, Multilingual TEDx, Multilingual LibriSpeech, and Voxpopuli.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nThi... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #de #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in Italian
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Italian using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-italian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "it"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-italian"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
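The `torch.argmax` plus `batch_decode` step above amounts to greedy CTC decoding: consecutive repeated token ids are collapsed and blank tokens are removed before mapping ids back to characters. A pure-Python sketch of that collapse step (the token ids here are illustrative, not the model's actual vocabulary):

```python
# Greedy CTC collapse: merge consecutive repeated ids, then drop blanks.
# blank_id=0 is an illustrative assumption, not the model's real blank id.
def ctc_collapse(ids, blank_id=0):
    out, prev = [], None
    for i in ids:
        if i != prev:          # collapse consecutive repeats
            if i != blank_id:  # drop CTC blank tokens
                out.append(i)
        prev = i
    return out
```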
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-italian --dataset mozilla-foundation/common_voice_8_0 --config it --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-italian --dataset speech-recognition-community-v2/dev_data --config it --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr-1b-italian,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {I}talian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-italian}},
year={2022}
}
``` | {"language": ["it"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "it", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 Italian by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "it"}, "metrics": [{"type": "wer", "value": 9.04, "name": "Test WER"}, {"type": "cer", "value": 2.2, "name": "Test CER"}, {"type": "wer", "value": 6.75, "name": "Test WER (+LM)"}, {"type": "cer", "value": 1.76, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "it"}, "metrics": [{"type": "wer", "value": 23.38, "name": "Dev WER"}, {"type": "cer", "value": 9.41, "name": "Dev CER"}, {"type": "wer", "value": 15.84, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 8.93, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "it"}, "metrics": [{"type": "wer", "value": 18.34, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-italian | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"it",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #it #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in Italian
Fine-tuned facebook/wav2vec2-xls-r-1b on Italian using the train and validation splits of Common Voice 8.0, Multilingual TEDx, Multilingual LibriSpeech, and Voxpopuli.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in Italian\n\nFine-tuned facebook/wav2vec2-xls-r-1b on Italian using the train and validation splits of Common Voice 8.0, Multilingual TEDx, Multilingual LibriSpeech, and Voxpopuli.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nT... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #it #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in Polish
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Polish using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-polish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "pl"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-polish"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-polish --dataset mozilla-foundation/common_voice_8_0 --config pl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-polish --dataset speech-recognition-community-v2/dev_data --config pl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
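The `--chunk_length_s 5.0 --stride_length_s 1.0` flags split long recordings into overlapping windows before inference, so no audio is truncated at chunk edges. A sketch of how such chunk boundaries could be derived (illustrative; the eval script's exact chunking logic may differ, and `stride_s` is assumed smaller than `chunk_s`):

```python
# Split an audio length (in samples) into overlapping windows, mimicking
# chunk_length_s / stride_length_s. Illustrative only; assumes
# stride_s < chunk_s so the loop always advances.
def chunk_bounds(n_samples, sr=16_000, chunk_s=5.0, stride_s=1.0):
    chunk = int(chunk_s * sr)
    step = chunk - int(stride_s * sr)  # advance leaves stride_s of overlap
    bounds, start = [], 0
    while start < n_samples:
        bounds.append((start, min(start + chunk, n_samples)))
        if start + chunk >= n_samples:  # last window reaches the end
            break
        start += step
    return bounds
```

Each `(start, end)` pair indexes into the 16 kHz sample array; transcriptions of overlapping windows are then merged downstream.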
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr-1b-polish,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {P}olish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-polish}},
year={2022}
}
``` | {"language": ["pl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "pl", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 Polish by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "pl"}, "metrics": [{"type": "wer", "value": 11.01, "name": "Test WER"}, {"type": "cer", "value": 2.55, "name": "Test CER"}, {"type": "wer", "value": 7.32, "name": "Test WER (+LM)"}, {"type": "cer", "value": 1.95, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "pl"}, "metrics": [{"type": "wer", "value": 26.31, "name": "Dev WER"}, {"type": "cer", "value": 13.85, "name": "Dev CER"}, {"type": "wer", "value": 20.33, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 13.0, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "pl"}, "metrics": [{"type": "wer", "value": 22.77, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-polish | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"pl",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #pl #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in Polish
Fine-tuned facebook/wav2vec2-xls-r-1b on Polish using the train and validation splits of Common Voice 8.0, Multilingual LibriSpeech, and Voxpopuli.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in Polish\n\nFine-tuned facebook/wav2vec2-xls-r-1b on Polish using the train and validation splits of Common Voice 8.0, Multilingual LibriSpeech, and Voxpopuli.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nThis model has been fi... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #pl #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in Portuguese
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Portuguese using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [CORAA](https://github.com/nilc-nlp/CORAA), [Multilingual TEDx](http://www.openslr.org/100), and [Multilingual LibriSpeech](https://www.openslr.org/94/).
When using this model, make sure that your speech input is sampled at 16kHz.
This model was fine-tuned with the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, thanks to the GPU credits generously given by [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-portuguese")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "pt"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-portuguese"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
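The `sr=16_000` argument in `librosa.load` above resamples whatever the file's native rate is down (or up) to the 16 kHz the model expects. As a rough idea of what resampling does — a naive linear-interpolation sketch, illustrative only, since librosa uses a much higher-quality resampler:

```python
# Naive linear-interpolation resampling to 16 kHz. Illustrative only:
# librosa.load(..., sr=16_000) uses a far better resampling algorithm.
def resample_linear(samples, sr_in, sr_out=16_000):
    if sr_in == sr_out:
        return list(samples)
    n_out = int(len(samples) * sr_out / sr_in)
    out = []
    for k in range(n_out):
        pos = k * sr_in / sr_out  # fractional index into the source signal
        i = int(pos)
        frac = pos - i
        nxt = samples[i + 1] if i + 1 < len(samples) else samples[i]
        out.append(samples[i] * (1 - frac) + nxt * frac)
    return out
```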
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-portuguese --dataset mozilla-foundation/common_voice_8_0 --config pt --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-portuguese --dataset speech-recognition-community-v2/dev_data --config pt --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model, you can use this:
```bibtex
@misc{grosman2021xlsr-1b-portuguese,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {P}ortuguese},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese}},
year={2022}
}
``` | {"language": ["pt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "pt", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 Portuguese by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "pt"}, "metrics": [{"type": "wer", "value": 8.7, "name": "Test WER"}, {"type": "cer", "value": 2.55, "name": "Test CER"}, {"type": "wer", "value": 6.04, "name": "Test WER (+LM)"}, {"type": "cer", "value": 1.98, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "pt"}, "metrics": [{"type": "wer", "value": 24.23, "name": "Dev WER"}, {"type": "cer", "value": 11.3, "name": "Dev CER"}, {"type": "wer", "value": 19.41, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 10.19, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "pt"}, "metrics": [{"type": "wer", "value": 18.8, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-portuguese | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"pt",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #pt #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in Portuguese
Fine-tuned facebook/wav2vec2-xls-r-1b on Portuguese using the train and validation splits of Common Voice 8.0, CORAA, Multilingual TEDx, and Multilingual LibriSpeech.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in Portuguese\n\nFine-tuned facebook/wav2vec2-xls-r-1b on Portuguese using the train and validation splits of Common Voice 8.0, CORAA, Multilingual TEDx, and Multilingual LibriSpeech.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #pt #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in Russian
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Russian using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Golos](https://www.openslr.org/114/), and [Multilingual TEDx](http://www.openslr.org/100).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-russian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "ru"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-russian"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-russian --dataset mozilla-foundation/common_voice_8_0 --config ru --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-russian --dataset speech-recognition-community-v2/dev_data --config ru --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
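The WER figures in the results table come from the evaluation script invoked above. As a rough illustration of what word error rate measures (this is not the actual `eval.py` implementation, which may also apply text normalization), WER is the word-level Levenshtein edit distance divided by the reference length:

```python
# Hedged sketch: word error rate (WER) via Levenshtein edit distance.
# Illustration only -- not the eval.py implementation.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("привет как дела", "привет как дела"))  # 0.0
print(wer("a b c d", "a x c"))                    # 0.5 (1 substitution + 1 deletion over 4 words)
```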
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-russian,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {R}ussian},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-russian}},
year={2022}
}
``` | {"language": ["ru"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "ru"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 Russian by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ru"}, "metrics": [{"type": "wer", "value": 9.82, "name": "Test WER"}, {"type": "cer", "value": 2.3, "name": "Test CER"}, {"type": "wer", "value": 7.08, "name": "Test WER (+LM)"}, {"type": "cer", "value": 1.87, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 23.96, "name": "Dev WER"}, {"type": "cer", "value": 8.88, "name": "Dev CER"}, {"type": "wer", "value": 15.88, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 7.42, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 14.23, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-russian | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"ru",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"ru"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #ru #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in Russian
Fine-tuned facebook/wav2vec2-xls-r-1b on Russian using the train and validation splits of Common Voice 8.0, Golos, and Multilingual TEDx.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in Russian\n\nFine-tuned facebook/wav2vec2-xls-r-1b on Russian using the train and validation splits of Common Voice 8.0, Golos, and Multilingual TEDx.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nThis model has been fine-tuned ... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #ru #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
automatic-speech-recognition | transformers |
# Fine-tuned XLS-R 1B model for speech recognition in Spanish
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Spanish using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [MediaSpeech](https://www.openslr.org/108/), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-spanish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "es"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-spanish"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-spanish --dataset mozilla-foundation/common_voice_8_0 --config es --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-spanish --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
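The CER numbers in the results table measure edit distance at the character level rather than the word level. A minimal illustrative sketch (not the actual `eval.py` code, which may normalize casing, punctuation, and whitespace before scoring):

```python
# Hedged sketch: character error rate (CER) via Levenshtein edit distance,
# computed with a rolling-row dynamic program. Illustration only.
def cer(reference: str, hypothesis: str) -> float:
    ref, hyp = list(reference), list(hypothesis)
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cur[j] = min(prev[j] + 1,                       # deletion
                         cur[j - 1] + 1,                    # insertion
                         prev[j - 1] + (0 if r == h else 1))  # substitution
        prev = cur
    return prev[-1] / max(len(ref), 1)

print(cer("hola", "hola"))  # 0.0
print(cer("gato", "pato"))  # 0.25
```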
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-spanish,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {S}panish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-spanish}},
year={2022}
}
``` | {"language": ["es"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "es", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R Wav2Vec2 Spanish by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "es"}, "metrics": [{"type": "wer", "value": 9.97, "name": "Test WER"}, {"type": "cer", "value": 2.85, "name": "Test CER"}, {"type": "wer", "value": 6.74, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.24, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "es"}, "metrics": [{"type": "wer", "value": 24.79, "name": "Dev WER"}, {"type": "cer", "value": 9.7, "name": "Dev CER"}, {"type": "wer", "value": 16.37, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 8.84, "name": "Dev CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "es"}, "metrics": [{"type": "wer", "value": 16.67, "name": "Test WER"}]}]}]} | jonatasgrosman/wav2vec2-xls-r-1b-spanish | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:... | null | 2022-03-02T23:29:05+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #es #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Fine-tuned XLS-R 1B model for speech recognition in Spanish
Fine-tuned facebook/wav2vec2-xls-r-1b on Spanish using the train and validation splits of Common Voice 8.0, MediaSpeech, Multilingual TEDx, Multilingual LibriSpeech, and Voxpopuli.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the HuggingSound tool, and thanks to the GPU credits generously given by the OVHcloud :)
## Usage
Using the HuggingSound library:
Writing your own inference script:
## Evaluation Commands
1. To evaluate on 'mozilla-foundation/common_voice_8_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev_data'
If you want to cite this model you can use this:
| [
"# Fine-tuned XLS-R 1B model for speech recognition in Spanish\n\nFine-tuned facebook/wav2vec2-xls-r-1b on Spanish using the train and validation splits of Common Voice 8.0, MediaSpeech, Multilingual TEDx, Multilingual LibriSpeech, and Voxpopuli.\nWhen using this model, make sure that your speech input is sampled a... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #es #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Fine-tuned XLS-R 1B model for sp... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2159
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8494 | 1.0 | 250 | 0.3134 | 0.907 | 0.9051 |
| 0.2504 | 2.0 | 500 | 0.2159 | 0.923 | 0.9231 |
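The Accuracy and F1 columns above can be understood with a small self-contained sketch. This is an illustration only — the reported values come from the Trainer's metric computation (typically via scikit-learn), and the F1 shown is assumed here to be the support-weighted average:

```python
# Hedged sketch of accuracy and weighted F1 over integer label lists.
# Not the Trainer's compute_metrics -- an illustration of the reported metrics.
from collections import Counter

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    support = Counter(y_true)
    total = 0.0
    for lab in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[lab] * f1  # weight each class by its support
    return total / len(y_true)

print(accuracy([0, 0, 1, 1], [0, 1, 1, 1]))  # 0.75
```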
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.923, "name": "Accuracy"}, {"type": "f1", "value": 0.9230733583303665, "name": "F1"}]}]}]} | jonc/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2159
* Accuracy: 0.923
* F1: 0.9231
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn... |
feature-extraction | transformers |
# Icelandic ConvBERT-Base
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
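As context for the vocabulary-size figure, a WordPiece tokenizer splits each word greedily into the longest subwords present in its vocabulary. A toy sketch of that matching rule (the vocabulary below is invented for illustration; the real model ships its 32,105-entry vocab in its tokenizer files):

```python
# Toy sketch of greedy longest-match WordPiece tokenization.
# The vocabulary here is hypothetical, purely for illustration.
def wordpiece(word, vocab, unk="[UNK]"):
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # continuation pieces are prefixed
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return [unk]  # no piece matched: whole word becomes unknown
        tokens.append(piece)
        start = end
    return tokens

vocab = {"ís", "##lensk", "##a", "orð"}
print(wordpiece("íslenska", vocab))  # ['ís', '##lensk', '##a']
```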
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. | {"language": ["is"], "license": "cc-by-4.0", "datasets": ["igc"]} | jonfd/convbert-base-igc-is | null | [
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"is",
"dataset:igc",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"is"
] | TAGS
#transformers #pytorch #tf #convbert #feature-extraction #is #dataset-igc #license-cc-by-4.0 #endpoints_compatible #region-us
|
# Icelandic ConvBERT-Base
This model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture. | [
"# Icelandic ConvBERT-Base\nThis model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.",
"# Acknowledgments\nThis research was supported with Cloud TPUs from Google's TPU Rese... | [
"TAGS\n#transformers #pytorch #tf #convbert #feature-extraction #is #dataset-igc #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# Icelandic ConvBERT-Base\nThis model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a Word... |
feature-extraction | transformers |
# Icelandic ConvBERT-Small
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a Unigram tokenizer with a vocabulary size of 96,000.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. | {"language": ["is"], "license": "cc-by-4.0", "datasets": ["igc"]} | jonfd/convbert-small-igc-is | null | [
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"is",
"dataset:igc",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"is"
] | TAGS
#transformers #pytorch #tf #convbert #feature-extraction #is #dataset-igc #license-cc-by-4.0 #endpoints_compatible #region-us
|
# Icelandic ConvBERT-Small
This model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a Unigram tokenizer with a vocabulary size of 96,000.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture. | [
"# Icelandic ConvBERT-Small\nThis model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a Unigram tokenizer with a vocabulary size of 96,000.",
"# Acknowledgments\nThis research was supported with Cloud TPUs from Google's TPU Resea... | [
"TAGS\n#transformers #pytorch #tf #convbert #feature-extraction #is #dataset-igc #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# Icelandic ConvBERT-Small\nThis model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a Uni... |
null | transformers |
# Icelandic ELECTRA-Base
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. | {"language": ["is"], "license": "cc-by-4.0", "datasets": ["igc"]} | jonfd/electra-base-igc-is | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"is",
"dataset:igc",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"is"
] | TAGS
#transformers #pytorch #electra #pretraining #is #dataset-igc #license-cc-by-4.0 #endpoints_compatible #region-us
|
# Icelandic ELECTRA-Base
This model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture. | [
"# Icelandic ELECTRA-Base\nThis model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.",
"# Acknowledgments\nThis research was supported with Cloud TPUs from Google's TPU Resea... | [
"TAGS\n#transformers #pytorch #electra #pretraining #is #dataset-igc #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# Icelandic ELECTRA-Base\nThis model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokeniz... |
null | transformers |
# Icelandic ELECTRA-Small
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. | {"language": ["is"], "license": "cc-by-4.0", "datasets": ["igc"]} | jonfd/electra-small-igc-is | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"is",
"dataset:igc",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"is"
] | TAGS
#transformers #pytorch #electra #pretraining #is #dataset-igc #license-cc-by-4.0 #endpoints_compatible #region-us
|
# Icelandic ELECTRA-Small
This model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture. | [
"# Icelandic ELECTRA-Small\nThis model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.",
"# Acknowledgments\nThis research was supported with Cloud TPUs from Google's TPU Rese... | [
"TAGS\n#transformers #pytorch #electra #pretraining #is #dataset-igc #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# Icelandic ELECTRA-Small\nThis model was pretrained on the Icelandic Gigaword Corpus, which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokeni... |
null | transformers |
# Icelandic-Norwegian ELECTRA-Small
This model was pretrained on the following corpora:
* The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
* The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic and Norwegian text obtained from .is and .no domains, respectively
The total size of the corpus after document-level deduplication and filtering was 7.41B tokens, split equally between the two languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 64,105 for 1.1 million steps, and otherwise with default settings.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. | {"language": ["is", "no"], "license": "cc-by-4.0", "datasets": ["igc", "ic3", "jonfd/ICC", "mc4"]} | jonfd/electra-small-is-no | null | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"is",
"no",
"dataset:igc",
"dataset:ic3",
"dataset:jonfd/ICC",
"dataset:mc4",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"is",
"no"
] | TAGS
#transformers #pytorch #tf #electra #pretraining #is #no #dataset-igc #dataset-ic3 #dataset-jonfd/ICC #dataset-mc4 #license-cc-by-4.0 #endpoints_compatible #region-us
|
# Icelandic-Norwegian ELECTRA-Small
This model was pretrained on the following corpora:
* The Icelandic Gigaword Corpus (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The Icelandic Crawled Corpus (ICC)
* The Multilingual Colossal Clean Crawled Corpus (mC4) - Icelandic and Norwegian text obtained from .is and .no domains, respectively
The total size of the corpus after document-level deduplication and filtering was 7.41B tokens, split equally between the two languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 64,105 for 1.1 million steps, and otherwise with default settings.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture. | [
"# Icelandic-Norwegian ELECTRA-Small\nThis model was pretrained on the following corpora:\n* The Icelandic Gigaword Corpus (IGC)\n* The Icelandic Common Crawl Corpus (IC3)\n* The Icelandic Crawled Corpus (ICC)\n* The Multilingual Colossal Clean Crawled Corpus (mC4) - Icelandic and Norwegian text obtained from .is a... | [
"TAGS\n#transformers #pytorch #tf #electra #pretraining #is #no #dataset-igc #dataset-ic3 #dataset-jonfd/ICC #dataset-mc4 #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# Icelandic-Norwegian ELECTRA-Small\nThis model was pretrained on the following corpora:\n* The Icelandic Gigaword Corpus (IGC)\n* The... |
null | transformers |
# Nordic ELECTRA-Small
This model was pretrained on the following corpora:
* The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
* The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic, Norwegian, Swedish and Danish text obtained from .is, .no, .se and .dk domains, respectively
The total size of the corpus after document-level deduplication and filtering was 14.82B tokens, split equally between the four languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 96,105 for one million steps with a batch size of 256, and otherwise with default settings.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. | {"language": ["is", "no", "sv", "da"], "license": "cc-by-4.0", "datasets": ["igc", "ic3", "jonfd/ICC", "mc4"]} | jonfd/electra-small-nordic | null | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"is",
"no",
"sv",
"da",
"dataset:igc",
"dataset:ic3",
"dataset:jonfd/ICC",
"dataset:mc4",
"license:cc-by-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"is",
"no",
"sv",
"da"
] | TAGS
#transformers #pytorch #tf #electra #pretraining #is #no #sv #da #dataset-igc #dataset-ic3 #dataset-jonfd/ICC #dataset-mc4 #license-cc-by-4.0 #endpoints_compatible #has_space #region-us
|
# Nordic ELECTRA-Small
This model was pretrained on the following corpora:
* The Icelandic Gigaword Corpus (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The Icelandic Crawled Corpus (ICC)
* The Multilingual Colossal Clean Crawled Corpus (mC4) - Icelandic, Norwegian, Swedish and Danish text obtained from .is, .no, .se and .dk domains, respectively
The total size of the corpus after document-level deduplication and filtering was 14.82B tokens, split equally between the four languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 96,105 for one million steps with a batch size of 256, and otherwise with default settings.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture. | [
"# Nordic ELECTRA-Small\nThis model was pretrained on the following corpora:\n* The Icelandic Gigaword Corpus (IGC)\n* The Icelandic Common Crawl Corpus (IC3)\n* The Icelandic Crawled Corpus (ICC)\n* The Multilingual Colossal Clean Crawled Corpus (mC4) - Icelandic, Norwegian, Swedish and Danish text obtained from .... | [
"TAGS\n#transformers #pytorch #tf #electra #pretraining #is #no #sv #da #dataset-igc #dataset-ic3 #dataset-jonfd/ICC #dataset-mc4 #license-cc-by-4.0 #endpoints_compatible #has_space #region-us \n",
"# Nordic ELECTRA-Small\nThis model was pretrained on the following corpora:\n* The Icelandic Gigaword Corpus (IGC)\... |
text-classification | transformers | ---
Epoch Training Loss Validation Loss F1 Roc Auc Accuracy
1 0.115400 0.099458 0.888763 0.920410 0.731760
2 0.070400 0.080343 0.911700 0.943234 0.781116 | {} | joniponi/bert-finetuned-sem_eval-english | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| ---
Epoch Training Loss Validation Loss F1 Roc Auc Accuracy
1 0.115400 0.099458 0.888763 0.920410 0.731760
2 0.070400 0.080343 0.911700 0.943234 0.781116 | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null | The following model is trained on the SUM partition of 20% overlapping mixtures | {} | jonpodtu/02sparseOverlapConvTasNet_SUM_2spk_8k | null | [
"pytorch",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#pytorch #region-us
| The following model is trained on the SUM partition of 20% overlapping mixtures | [] | [
"TAGS\n#pytorch #region-us \n"
] |
text-generation | transformers | # Summary
The app was conceived with the idea of recreating and generating new dialogs for existing games.
In order to generate a dataset for training, the steps followed were:
1. Download from [Assassins Creed Fandom Wiki](https://assassinscreed.fandom.com/wiki/Special:Export) from the category "Memories relived using the Animus HR-8.5".
2. Keep only text elements from XML.
3. Keep only the dialog section.
4. Parse wikimarkup with [wikitextparser](https://pypi.org/project/wikitextparser/).
5. Clean description of dialog's context.
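As an illustrative sketch of steps 3–5 above (the sample text, speaker names, and regex are assumptions for demonstration, not the actual preprocessing script), keeping only "Speaker: line" turns and dropping context descriptions might look like:

```python
import re

# Hypothetical excerpt of a dialog section after wiki markup removal (step 4).
dialog_section = """
Kassandra: Where are we headed?
Barnabas: To Kephallonia, of course!
(A description of the scene, to be removed in step 5)
Kassandra: Then set sail.
"""

def extract_turns(text):
    """Keep only 'Speaker: utterance' turns, dropping parenthesised descriptions."""
    turns = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("("):
            continue
        match = re.match(r"(?P<speaker>[^:]+):\s*(?P<utterance>.+)", line)
        if match:
            turns.append((match.group("speaker"), match.group("utterance")))
    return turns

print(extract_turns(dialog_section))
```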
Due to the small size of the dataset obtained, a transfer learning approach was considered based on a pretrained ["Dialog GPT" model](https://huggingface.co/microsoft/DialoGPT-small). | {} | jonx18/DialoGPT-small-Creed-Odyssey | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Summary
The app was conceived with the idea of recreating and generating new dialogs for existing games.
In order to generate a dataset for training, the steps followed were:
1. Download from Assassins Creed Fandom Wiki from the category "Memories relived using the Animus HR-8.5".
2. Keep only text elements from XML.
3. Keep only the dialog section.
4. Parse wikimarkup with wikitextparser.
5. Clean description of dialog's context.
Due to the small size of the dataset obtained, a transfer learning approach was considered based on a pretrained "Dialog GPT" model. | [
"# Summary\nThe app was conceived with the idea of recreating and generating new dialogs for existing games.\nIn order to generate a dataset for training, the steps followed were:\n1. Download from Assassins Creed Fandom Wiki from the category \"Memories relived using the Animus HR-8.5\".\n2. Keep only text elements f... | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Summary\nThe app was conceived with the idea of recreating and generating new dialogs for existing games.\nIn order to generate a dataset for training the steps followe... |
token-classification | transformers | * Fine-tuning the "KLUE/roberta-large" model for CER (Company Entity Recognition) with a custom dataset
* The custom dataset is composed of news data
```python
label_list = ['O',"B-PER","I-PER","B-ORG","I-ORG","B-COM","I-COM","B-LOC","I-LOC","B-DAT","I-DAT","B-TIM","I-TIM","B-QNT","I-QNT"]
refer_list = ['0','1','2','3','4','5','6','7','8','9','10','11','12','13','14']
```
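As a small illustration (not part of the original training code), the mapping between `label_list` and the integer ids in `refer_list` is purely positional:

```python
label_list = ['O', "B-PER", "I-PER", "B-ORG", "I-ORG", "B-COM", "I-COM",
              "B-LOC", "I-LOC", "B-DAT", "I-DAT", "B-TIM", "I-TIM", "B-QNT", "I-QNT"]

# Each label's id is its index in label_list, matching refer_list's '0'..'14'.
label2id = {label: i for i, label in enumerate(label_list)}
id2label = {i: label for label, i in label2id.items()}

print(label2id["B-PER"])  # 1
print(label2id["B-COM"])  # 5
```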
- EX: "B-PER" : 1 , "B-COM" : 5 | {} | joonhan/roberta-roa | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us
| * Fine-tuning the "KLUE/roberta-large" model for CER (Company Entity Recognition) with a custom dataset
* The custom dataset is composed of news data
- EX: "B-PER" : 1 , "B-COM" : 5 | [] | [
"TAGS\n#transformers #pytorch #roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 15.037146%
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py | {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "apache-2.0", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "PyTorch"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pt", "type": "common_voice", "args": "pt"}, "metrics": [{"type": "wer", "value": "15.037146%", "name": "Test WER"}]}]}]} | joaoalvarenga/model-sid-voxforge-cv-cetuc-0 | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the Common Voice dataset.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
Test Result (wer): 15.037146%
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found at: URL | [
"# Wav2Vec2-Large-XLSR-53-Portuguese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the Common Voice dataset.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Portuguese test data of Common Vo... | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Portuguese\n\nFine-tuned facebo... |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 15.037146%
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py | {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "apache-2.0", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "PyTorch"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pt", "type": "common_voice", "args": "pt"}, "metrics": [{"type": "wer", "value": "15.037146%", "name": "Test WER"}]}]}]} | joaoalvarenga/wav2vec2-cv-coral-30ep | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the Common Voice dataset.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
Test Result (wer): 15.037146%
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found at: URL | [
"# Wav2Vec2-Large-XLSR-53-Portuguese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the Common Voice dataset.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Portuguese test data of Common Vo... | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Portuguese\n\nFine-tuned facebo... |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-100k-VoxPopuli-Portuguese
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
You need to install Enelvo, an open-source spell corrector trained on Twitter user posts:
`pip install enelvo`
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from enelvo import normaliser
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-100k-voxpopuli-pt")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
norm = normaliser.Normaliser()
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = [norm.normalise(i) for i in processor.batch_decode(pred_ids)]
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 19.735723%
## Training
The Common Voice `train`, `validation` datasets were used for training.
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "apache-2.0", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "PyTorch", "voxpopuli"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "JoaoAlvarenga Wav2Vec2 Large 100k VoxPopuli Portuguese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pt", "type": "common_voice", "args": "pt"}, "metrics": [{"type": "wer", "value": "19.735723%", "name": "Test WER"}]}]}]} | joaoalvarenga/wav2vec2-large-100k-voxpopuli-pt | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"apache-2.0",
"portuguese-speech-corpus",
"PyTorch",
"voxpopuli",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #PyTorch #voxpopuli #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-100k-VoxPopuli-Portuguese
Fine-tuned facebook/wav2vec2-large-100k-voxpopuli on Portuguese using the Common Voice dataset.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
You need to install Enelvo, an open-source spell corrector trained on Twitter user posts
'pip install enelvo'
Test Result (wer): 19.735723%
## Training
The Common Voice 'train', 'validation' datasets were used for training.
| [
"# Wav2Vec2-Large-100k-VoxPopuli-Portuguese\n\nFine-tuned facebook/wav2vec2-large-100k-voxpopuli on Portuguese using the Common Voice dataset.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Portuguese test dat... | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #PyTorch #voxpopuli #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-100k-VoxPopuli-Portuguese\n\nFine-tuned facebook/wa... |
automatic-speech-recognition | null |
# Wav2Vec2-Large-XLSR-53-Spanish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "es", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "es", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-53-spanish")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]' # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: Training
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-spanish/blob/main/fine-tuning.py | {"language": "es", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "es", "apache-2.0", "spanish-speech-corpus", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "PyTorch"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "JoaoAlvarenga XLSR Wav2Vec2 Large 53 Spanish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ES", "type": "common_voice", "args": "es"}, "metrics": [{"type": "wer", "value": "Training", "name": "Test WER"}]}]}]} | joaoalvarenga/wav2vec2-large-xlsr-53-spanish | null | [
"audio",
"speech",
"wav2vec2",
"es",
"apache-2.0",
"spanish-speech-corpus",
"automatic-speech-recognition",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"es"
] | TAGS
#audio #speech #wav2vec2 #es #apache-2.0 #spanish-speech-corpus #automatic-speech-recognition #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #region-us
|
# Wav2Vec2-Large-XLSR-53-Spanish
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Spanish using the Common Voice dataset.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
Test Result (wer): Training
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found at: URL | [
"# Wav2Vec2-Large-XLSR-53-Spanish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Spanish using the Common Voice dataset.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Portuguese test data of Common Voice.\n... | [
"TAGS\n#audio #speech #wav2vec2 #es #apache-2.0 #spanish-speech-corpus #automatic-speech-recognition #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Spanish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Spanish using the Common Vo... |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Italian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "it", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Italian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 13.914924%
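For intuition, the WER reported above is the word-level edit distance between prediction and reference, divided by the number of reference words; the `wer` metric loaded in the script computes the same quantity at corpus level. A minimal, dependency-free sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by the
    (non-empty) reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,                # deletion
                           dp[i][j - 1] + 1,                # insertion
                           dp[i - 1][j - 1] + substitution)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("il gatto è sul tavolo", "il gato è sul tavolo"))  # 0.2
```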
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-italian/blob/main/fine_tuning.py
| {"language": "it", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "it", "apache-2.0", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "PyTorch"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "JoaoAlvarenga XLSR Wav2Vec2 Large 53 Italian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice it", "type": "common_voice", "args": "it"}, "metrics": [{"type": "wer", "value": "13.914924%", "name": "Test WER"}]}]}]} | joaoalvarenga/wav2vec2-large-xlsr-italian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"it",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #it #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Italian
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Italian using the Common Voice dataset.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Italian test data of Common Voice.
Test Result (wer): 13.914924%
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found at: URL
| [
"# Wav2Vec2-Large-XLSR-53-Italian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Italian using the Common Voice dataset.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Italian test data of Common Voice.\n\n\... | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #it #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Italian\n\nFine-tuned facebook/... |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 15.037146%
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "apache-2.0", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "PyTorch"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese A", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pt", "type": "common_voice", "args": "pt"}, "metrics": [{"type": "wer", "value": "15.037146%", "name": "Test WER"}]}]}]} | joaoalvarenga/wav2vec2-large-xlsr-portuguese-a | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the Common Voice dataset.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
Test Result (wer): 15.037146%
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found at: URL
| [
"# Wav2Vec2-Large-XLSR-53-Portuguese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the Common Voice dataset.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Portuguese test data of Common Vo... | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Portuguese\n\nFine-tuned facebo... |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
You need to install Enelvo, an open-source spell corrector trained on Twitter user posts:
`pip install enelvo`
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from enelvo import normaliser
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
norm = normaliser.Normaliser()
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = [norm.normalise(i) for i in processor.batch_decode(pred_ids)]
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (wer)**: 13.766801%
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
| {"language": "pt", "license": "apache-2.0", "tags": ["audio", "speech", "wav2vec2", "pt", "apache-2.0", "portuguese-speech-corpus", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "PyTorch"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "JoaoAlvarenga XLSR Wav2Vec2 Large 53 Portuguese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pt", "type": "common_voice", "args": "pt"}, "metrics": [{"type": "wer", "value": "13.766801%", "name": "Test WER"}]}]}]} | joaoalvarenga/wav2vec2-large-xlsr-portuguese | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"apache-2.0",
"portuguese-speech-corpus",
"xlsr-fine-tuning-week",
"PyTorch",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the Common Voice dataset.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
You need to install Enelvo, an open-source spell corrector trained on Twitter user posts
'pip install enelvo'
Test Result (wer): 13.766801%
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found at: URL
| [
"# Wav2Vec2-Large-XLSR-53-Portuguese\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Portuguese using the Common Voice dataset.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Portuguese test data of Common Vo... | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #pt #apache-2.0 #portuguese-speech-corpus #xlsr-fine-tuning-week #PyTorch #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Portuguese\n\nFine-tuned facebo... |
text-generation | transformers |
### About NegaNetizen
Trained on conversations from a friend for use within their Discord server.
### How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained('jordanhagan/DialoGPT-medium-NegaNetizen')
# Let's chat for 5 lines
for step in range(5):
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response, limiting the total chat history to 200 tokens (max_length)
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print the last output tokens from the bot
print("NNR: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
| {"language": ["en"], "tags": ["conversational", "gpt2"], "datasets": ["Discord transcripts"]} | jordanhagan/DialoGPT-medium-NegaNetizen | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
### About NegaNetizen
Trained on conversations from a friend for use within their Discord server.
### How to use
| [
"### About NegaNetizen\nTrained on conversations from a friend for use within their discord server.",
"### How to use"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### About NegaNetizen\nTrained on conversations from a friend for use within their discord server.",
"### How to use"
] |
null | opennmt |
### Introduction
This repository contains a description of how to use OpenNMT for the Grammar Error Correction (GEC) task. The idea is to approach GEC as a translation task.
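Framing GEC as translation means the training data consists of (ungrammatical, corrected) sentence pairs in the same language. For illustration (toy pairs drawn from the usage example below, not from clang8):

```python
# GEC as monolingual "translation": source = noisy sentence, target = correction.
train_pairs = [
    ("The water are hot.", "The water is hot."),
    ("My friends are going to be late.", "My friends are going to be late."),  # already correct
    ("Today mine mother is in Barcelona.", "Today my mother is in Barcelona."),
]
for source, target in train_pairs:
    print(f"{source} -> {target}")
```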
### Usage
Install the necessary dependencies:
```bash
pip3 install ctranslate2 pyonmttok
```
Simple tokenization & translation using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download
model_dir = snapshot_download(repo_id="jordimas/gec-opennmt-english", revision="main")
tokenizer=pyonmttok.Tokenizer(mode="none", sp_model_path = model_dir + "/sp_m.model")
tokenized=tokenizer.tokenize("The water are hot. My friends are going to be late. Today mine mother is in Barcelona.")
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
# Model
The model has been trained on the [clang8](https://github.com/google-research-datasets/clang8) corpus for the English language.
Details:
* Model: TransformerBase
* Tokenizer: SentencePiece
* BLEU = 85.50
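The reported BLEU compares the model's corrections against reference corrections on held-out data. For intuition only, here is a toy sentence-level BLEU with uniform n-gram weights and no smoothing (real evaluations should use a library such as sacrebleu):

```python
import math
from collections import Counter

def sentence_bleu(hypothesis: str, reference: str, max_n: int = 4) -> float:
    """Toy sentence-level BLEU: geometric mean of 1..max_n n-gram precisions
    times a brevity penalty. Unsmoothed, so any zero precision zeroes the score."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(count, ref_ngrams[gram]) for gram, count in hyp_ngrams.items())
        total = sum(hyp_ngrams.values())
        if not overlap or not total:
            return 0.0
        log_precisions.append(math.log(overlap / total))
    brevity_penalty = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return 100 * brevity_penalty * math.exp(sum(log_precisions) / max_n)

print(sentence_bleu("the water is hot", "the water is hot"))  # 100.0
```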
# Papers
Relevant papers:
* [Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task](https://aclanthology.org/N18-1055.pdf)
* [A Simple Recipe for Multilingual Grammatical Error Correction](https://arxiv.org/pdf/2106.03830.pdf)
# Contact
Email address: Jordi Mas: jmas@softcatala.org
| {"language": ["en"], "license": "mit", "library_name": "opennmt", "tags": ["gec"], "metrics": ["bleu"], "inference": false} | jordimas/gec-opennmt-english | null | [
"opennmt",
"gec",
"en",
"arxiv:2106.03830",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2106.03830"
] | [
"en"
] | TAGS
#opennmt #gec #en #arxiv-2106.03830 #license-mit #region-us
|
### Introduction
This repository contains a description of how to use OpenNMT for the Grammar Error Correction (GEC) task. The idea is to approach GEC as a translation task.
### Usage
Install the necessary dependencies:
Simple tokenization & translation using Python:
# Model
The model has been trained on the clang8 corpus for the English language.
Details:
* Model: TransformerBase
* Tokenizer: SentencePiece
* BLEU = 85.50
# Papers
Relevant papers:
* Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task
* A Simple Recipe for Multilingual Grammatical Error Correction
# Contact
Email address: Jordi Mas: jmas@URL
| [
"### Introduction\n\nThis repository contains a description on how to use OpenNMT on the Grammar Error Correction (GEC) task. The idea is to approach GEC as a translation task",
"### Usage\n\nInstall the necessary dependencies:\n\n\n\n\n\nSimple tokenization & translation using Python:",
"# Model\n\nThe model ha... | [
"TAGS\n#opennmt #gec #en #arxiv-2106.03830 #license-mit #region-us \n",
"### Introduction\n\nThis repository contains a description on how to use OpenNMT on the Grammar Error Correction (GEC) task. The idea is to approach GEC as a translation task",
"### Usage\n\nInstall the necessary dependencies:\n\n\n\n\n\nSi... |
null | null | test | {} | jordn/thing | null | [
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#region-us
| test | [] | [
"TAGS\n#region-us \n"
] |
text-classification | transformers | This model is a bert for sequence classification model fine-tuned on the MedDialogue dataset. Basically, the task is just to predict if a given sentence in the corpus was spoken by the patient or doctor. | {} | josephgatto/paint_doctor_speaker_identification | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| This model is a bert for sequence classification model fine-tuned on the MedDialogue dataset. Basically, the task is just to predict if a given sentence in the corpus was spoken by the patient or doctor. | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Alfred DialoGPT | {"tags": ["conversational"]} | josephmagnayon/DialoGPT-medium-Alfred | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Alfred DialoGPT | [
"# Alfred DialoGPT"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Alfred DialoGPT"
] |
text-generation | transformers |
# HumanChat Model | {"tags": ["conversational"]} | josepjulia/RepoHumanChatBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# HumanChat Model | [
"# HumanChat Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# HumanChat Model"
] |
text-generation | transformers |
# Josh DialoGPT medium Bot | {"tags": ["conversational"]} | josh8/DialoGPT-medium-josh | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Josh DialoGPT medium Bot | [
"# Josh DialoGPT medium Bot"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Josh DialoGPT medium Bot"
] |
text-generation | transformers |
# Josh DialoGPT Model | {"tags": ["conversational"]} | josh8/DialoGPT-small-josh | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Josh DialoGPT Model | [
"# Josh DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Josh DialoGPT Model"
] |
summarization | transformers | # mt5-small-spanish-summarization
## Model description
This is an mt5-small model fine-tuned to generate headlines from the bodies of Spanish news articles.
## Training data
The model was trained on 58,425 news articles extracted from the La Razón (31,477) and Público (26,948) newspapers. The articles belong to the following categories: "España", "Cultura", "Economía", "Igualdad" and "Política".
## Training procedure
It was trained for 2 epochs on a Google Colab Tesla P100-PCIE-16GB GPU.
### Hyperparameters
{evaluation_strategy = "epoch",
learning_rate = 2e-4,
per_device_train_batch_size = 6,
per_device_eval_batch_size = 6,
weight_decay = 0.01,
save_total_limit = 3,
num_train_epochs = 2,
predict_with_generate = True,
fp16 = False}
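As a rough sanity check, the hyperparameters above imply the following number of optimizer steps, assuming the full corpus (La Razón + Público) was used for training with no held-out split — the card does not state the actual split, so this is an estimate:

```python
import math

n_articles = 31477 + 26948         # La Razón + Público = 58425
batch_size = 6                     # per_device_train_batch_size, single GPU
steps_per_epoch = math.ceil(n_articles / batch_size)
total_steps = steps_per_epoch * 2  # num_train_epochs = 2
print(steps_per_epoch, total_steps)  # 9738 19476
```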
## Eval results
| metric | score |
| --- | ----- |
| rouge1 | 44.03 |
| rouge2 | 28.2900 |
| rougeL | 40.54 |
| rougeLsum | 40.5587 |
### BibTeX entry and citation info
```bibtex
@inproceedings{ mt5lrpjosmunpen,
year={2020},
}
``` | {"language": ["es"], "license": "apache-2.0", "tags": ["summarization", "mt5", "spanish"], "datasets": ["larazonpublico", "es"], "metrics": ["rouge"], "widget": [{"text": "La Guardia Civil ha desarticulado un grupo organizado dedicado a copiar en los examenes teoricos para la obtencion del permiso de conducir. Para ello, empleaban receptores y camaras de alta tecnologia y operaban desde la misma sede del Centro de examenes de la Direccion General de Trafico (DGT) en Mostoles. Es lo que han llamado la Operacion pinga. El grupo desarticulado ofrecia el servicio de transporte y tecnologia para copiar y poder aprobar. Por dicho servicio cobraban 1.000 euros. Los investigadores sorprendieron in fraganti a una mujer intentando copiar en el examen. Portaba una chaqueta con dispositivos electronicos ocultos, concretamente un telefono movil al que estaba conectada una camara que habia sido insertada en la parte frontal de la chaqueta para transmitir online el examen y que orientada al ordenador del Centro de Examenes en el que aparecen las preguntas, permitia visualizar las imagenes en otro ordenador alojado en el interior de un vehiculo estacionado en las inmediaciones del centro. En este vehiculo, se encontraban el resto del grupo desarticulado con varios ordenadores portatiles y tablets abiertos y conectados a paginas de test de la DGT para consultar las respuestas. Estos, comunicaban con la mujer que estaba en el aula haciendo el examen a traves de un diminuto receptor bluetooth que portaba en el interior de su oido. Luis de Lama, portavoz de la Guardia Civil de Trafico destaca que los ciudadanos, eran de origen chino, y copiaban en el examen utilizando la tecnologia facilitada por una organizacion. 
Destaca que, ademas de parte del fraude que supone copiar en un examen muchos de estos ciudadanos desconocian el idioma, no hablan ni entienden el espa\u00f1ol lo que supone un grave riesgo para la seguridad vial por desconocer las se\u00f1ales y letreros que avisan en carretera de muchas incidencias. "}]} | josmunpen/mt5-small-spanish-summarization | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"spanish",
"es",
"dataset:larazonpublico",
"dataset:es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #spanish #es #dataset-larazonpublico #dataset-es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-small-spanish-summarization
===============================
Model description
-----------------
This is an mt5-small model fine-tuned to generate headlines from the bodies of Spanish news articles.
Training data
-------------
The model was trained on 58,425 news articles extracted from the La Razón (31,477) and Público (26,948) newspapers. The articles belong to the following categories: "España", "Cultura", "Economía", "Igualdad" and "Política".
Training procedure
------------------
It was trained for 2 epochs on a Google Colab Tesla P100-PCIE-16GB GPU.
### Hyperparameters
{evaluation\_strategy = "epoch",
learning\_rate = 2e-4,
per\_device\_train\_batch\_size = 6,
per\_device\_eval\_batch\_size = 6,
weight\_decay = 0.01,
save\_total\_limit = 3,
num\_train\_epochs = 2,
predict\_with\_generate = True,
fp16 = False}
Eval results
------------
### BibTeX entry and citation info
| [
"### Hyperparameters\n\n\n{evaluation\\_strategy = \"epoch\",\nlearning\\_rate = 2e-4,\nper\\_device\\_train\\_batch\\_size = 6,\nper\\_device\\_eval\\_batch\\_size = 6,\nweight\\_decay = 0.01,\nsave\\_total\\_limi t= 3,\nnum\\_train\\_epochs = 2,\npredict\\_with\\_generate = True,\nfp16 = False}\n\n\nEval results\... | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #spanish #es #dataset-larazonpublico #dataset-es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Hyperparameters\n\n\n{evaluation\\_strategy = \"epoch\",\nlearning\\_rate = 2e-4,... |
multiple-choice | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-swag", "results": []}]} | joykirat/bert-base-uncased-finetuned-swag | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
| [
"# bert-base-uncased-finetuned-swag\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training p... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-finetuned-swag\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information ne... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7588
- Matthews Correlation: 0.5230
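The Matthews correlation used on CoLA is a balanced binary-classification measure computed from the confusion matrix. A minimal reference implementation (the counts below are illustrative, not the model's actual CoLA predictions):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Illustrative counts only.
print(matthews_corrcoef(tp=50, tn=40, fp=10, fn=10))  # ≈ 0.633
```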
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5261 | 1.0 | 535 | 0.5125 | 0.4124 |
| 0.3502 | 2.0 | 1070 | 0.5439 | 0.5076 |
| 0.2378 | 3.0 | 1605 | 0.6629 | 0.4946 |
| 0.1809 | 4.0 | 2140 | 0.7588 | 0.5230 |
| 0.1309 | 5.0 | 2675 | 0.8901 | 0.5056 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5229586822934302, "name": "Matthews Correlation"}]}]}]} | jpabbuehl/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7588
* Matthews Correlation: 0.5230
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1446
- Accuracy: 0.929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9345 | 1.0 | 500 | 0.2509 | 0.918 |
| 0.1855 | 2.0 | 1000 | 0.1626 | 0.928 |
| 0.1036 | 3.0 | 1500 | 0.1446 | 0.929 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "sagemaker-distilbert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.929, "name": "Accuracy"}]}]}]} | jpabbuehl/sagemaker-distilbert-emotion | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| sagemaker-distilbert-emotion
============================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1446
* Accuracy: 0.929
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 32
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps... | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3... |
summarization | transformers |
# Samsum Pegasus (Reddit/TIFU) for conversational summaries
## Model description
Pegasus (Reddit/TIFU) for conversational summaries trained on the samsum dataset!
## Training data
The data is the [samsum](https://huggingface.co/datasets/samsum) dataset for conversational summaries.
The initial weights were from [google/pegasus-reddit_tifu](https://huggingface.co/google/pegasus-reddit_tifu). The hypothesis was that starting from weights trained on a larger summarization dataset with casual language, such as Reddit TIFU, would help convergence on the samsum dataset.
## Training procedure
Used the _examples/seq2seq/run_summarization.py_ script from the transformers source at version _4.5.0.dev0_.
n_epochs: 3,\
batch_size: 8, \
max_source_length: 256,\
max_target_length: 128
## Eval results
eval_gen_len: 35.9939,\
eval_loss: 1.4284523725509644,\
eval_rouge1: 46.5613,\
eval_rouge2: 23.6137,\
eval_rougeL: 37.2397,\
eval_rougeLsum: 42.7126,\
eval_samples_per_second: 4.302
## Example
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = "jpcorb20/pegasus-large-reddit_tifu-samsum-256"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
src_text = """Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.\r\nAlexis: Thanks Carter. Yeah, I really enjoyed myself as well.\r\nCarter: If you are up for it, I would really like to see you again soon.\r\nAlexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.\r\nCarter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.\r\nAlexis: Yeah of course. Thanks again for tonight. Carter: Sure. Have a great night.\r\n"""
token_params = dict(max_length=256, truncation=True, padding='longest', return_tensors="pt")
batch = tokenizer(src_text, **token_params)
decode_params = dict(num_beams=5, min_length=16, max_length=128, length_penalty=2)
translated = model.generate(**batch, **decode_params)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text) | {"language": ["en"], "tags": ["pytorch", "google/pegasus-reddit_tifu", "summarization", "samsum"], "datasets": ["samsum"], "metrics": ["rouge"]} | jpcorb20/pegasus-large-reddit_tifu-samsum-256 | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"google/pegasus-reddit_tifu",
"summarization",
"samsum",
"en",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #pegasus #text2text-generation #google/pegasus-reddit_tifu #summarization #samsum #en #dataset-samsum #autotrain_compatible #endpoints_compatible #region-us
|
# Samsum Pegasus (Reddit/TIFU) for conversational summaries
## Model description
Pegasus (Reddit/TIFU) for conversational summaries trained on the samsum dataset!
## Training data
The data is the samsum dataset for conversational summaries.
The initial weights were from the google/pegasus-reddit_tifu. The hypothesis was that starting from weights trained on a larger summarization dataset with casual language, such as Reddit TIFU, would help convergence on the samsum dataset.
## Training procedure
Used the _examples/seq2seq/run_summarization.py_ script from the transformers source at version _4.5.0.dev0_.
n_epochs: 3,\
batch_size: 8, \
max_source_length: 256,\
max_target_length: 128
## Eval results
eval_gen_len: 35.9939,\
eval_loss: 1.4284523725509644,\
eval_rouge1: 46.5613,\
eval_rouge2: 23.6137,\
eval_rougeL: 37.2397,\
eval_rougeLsum: 42.7126,\
eval_samples_per_second: 4.302
## Example
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = "jpcorb20/pegasus-large-reddit_tifu-samsum-256"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
src_text = """Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.\r\nAlexis: Thanks Carter. Yeah, I really enjoyed myself as well.\r\nCarter: If you are up for it, I would really like to see you again soon.\r\nAlexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.\r\nCarter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.\r\nAlexis: Yeah of course. Thanks again for tonight. Carter: Sure. Have a great night.\r\n"""
token_params = dict(max_length=256, truncation=True, padding='longest', return_tensors="pt")
batch = tokenizer(src_text, **token_params)
decode_params = dict(num_beams=5, min_length=16, max_length=128, length_penalty=2)
translated = model.generate(**batch, **decode_params)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text) | [
"# Samsum Pegasus (Reddit/TIFU) for conversational summaries",
"## Model description\n\nPegasus (Reddit/TIFU) for conversational summaries trained on the samsum dataset!",
"## Training data\n\nThe data is the samsum dataset for conversional summaries.\n\nThe initial weigths were from the google/pegasus-reddit_t... | [
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #google/pegasus-reddit_tifu #summarization #samsum #en #dataset-samsum #autotrain_compatible #endpoints_compatible #region-us \n",
"# Samsum Pegasus (Reddit/TIFU) for conversational summaries",
"## Model description\n\nPegasus (Reddit/TIFU) for conver... |
summarization | transformers |
# Samsum Pegasus (Reddit/TIFU) for conversational summaries
## Model description
Pegasus (Reddit/TIFU) for conversational summaries trained on the samsum dataset!
## Training data
The data is the [samsum](https://huggingface.co/datasets/samsum) dataset for conversational summaries.
The initial weights were from [google/pegasus-reddit_tifu](https://huggingface.co/google/pegasus-reddit_tifu). The hypothesis was that starting from weights trained on a larger summarization dataset with casual language, such as Reddit TIFU, would help convergence on the samsum dataset.
## Training procedure
Used the examples/seq2seq/run_summarization.py script from the transformers source at version 4.5.0.dev0.
n_epochs: 3,\
batch_size: 4, \
max_source_length: 512,\
max_target_length: 128
## Eval results
eval_gen_len: 35.89,\
eval_loss: 1.3807392120361328,\
eval_rouge1: 47.3372,\
eval_rouge2: 24.4728,\
eval_rougeL: 37.9078,\
eval_rougeLsum: 43.5744,\
eval_samples_per_second: 2.814
## Example
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = "jpcorb20/pegasus-large-reddit_tifu-samsum-256"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
src_text = """Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.\\r\
Alexis: Thanks Carter. Yeah, I really enjoyed myself as well.\\r\
Carter: If you are up for it, I would really like to see you again soon.\\r\
Alexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.\\r\
Carter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.\\r\
Alexis: Yeah of course. Thanks again for tonight. Carter: Sure. Have a great night.\\r\
"""
token_params = dict(max_length=512, truncation=True, padding='longest', return_tensors="pt")
batch = tokenizer(src_text, **token_params)
decode_params = dict(num_beams=5, min_length=16, max_length=128, length_penalty=2)
translated = model.generate(**batch, **decode_params)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text) | {"language": ["en"], "tags": ["pytorch", "google/pegasus-reddit_tifu", "summarization", "samsum"], "datasets": ["samsum"], "metrics": ["rouge"]} | jpcorb20/pegasus-large-reddit_tifu-samsum-512 | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"google/pegasus-reddit_tifu",
"summarization",
"samsum",
"en",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #pegasus #text2text-generation #google/pegasus-reddit_tifu #summarization #samsum #en #dataset-samsum #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Samsum Pegasus (Reddit/TIFU) for conversational summaries
## Model description
Pegasus (Reddit/TIFU) for conversational summaries trained on the samsum dataset!
## Training data
The data is the samsum dataset for conversational summaries.
The initial weights were from the google/pegasus-reddit_tifu. The hypothesis was that starting from weights trained on a larger summarization dataset with casual language, such as Reddit TIFU, would help convergence on the samsum dataset.
## Training procedure
Used the examples/seq2seq/run_summarization.py script from the transformers source at version 4.5.0.dev0.
n_epochs: 3,\
batch_size: 4, \
max_source_length: 512,\
max_target_length: 128
## Eval results
eval_gen_len: 35.89,\
eval_loss: 1.3807392120361328,\
eval_rouge1: 47.3372,\
eval_rouge2: 24.4728,\
eval_rougeL: 37.9078,\
eval_rougeLsum: 43.5744,\
eval_samples_per_second: 2.814
## Example
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = "jpcorb20/pegasus-large-reddit_tifu-samsum-256"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
src_text = """Carter: Hey Alexis, I just wanted to let you know that I had a really nice time with you tonight.\\r\
Alexis: Thanks Carter. Yeah, I really enjoyed myself as well.\\r\
Carter: If you are up for it, I would really like to see you again soon.\\r\
Alexis: Thanks Carter, I'm flattered. But I have a really busy week coming up.\\r\
Carter: Yeah, no worries. I totally understand. But if you ever want to go grab dinner again, just let me know.\\r\
Alexis: Yeah of course. Thanks again for tonight. Carter: Sure. Have a great night.\\r\
"""
token_params = dict(max_length=512, truncation=True, padding='longest', return_tensors="pt")
batch = tokenizer(src_text, **token_params)
decode_params = dict(num_beams=5, min_length=16, max_length=128, length_penalty=2)
translated = model.generate(**batch, **decode_params)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text) | [
"# Samsum Pegasus (Reddit/TIFU) for conversational summaries",
"## Model description\n\nPegasus (Reddit/TIFU) for conversational summaries trained on the samsum dataset!",
"## Training data\n\nThe data is the samsum dataset for conversional summaries.\n\nThe initial weigths were from the google/pegasus-reddit_t... | [
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #google/pegasus-reddit_tifu #summarization #samsum #en #dataset-samsum #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Samsum Pegasus (Reddit/TIFU) for conversational summaries",
"## Model description\n\nPegasus (Reddit/TIFU)... |
text-classification | transformers | # Distilroberta for toxic comment detection
See my GitHub repo [toxic-comment-server](https://github.com/jpcorb20/toxic-comment-server)
The model was fine-tuned from [DistilRoberta](https://huggingface.co/distilroberta-base) on [Kaggle Toxic Comments](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) with the BCEWithLogitsLoss for multi-label prediction. Thus, apply a sigmoid activation to the logits; the model is not meant to be used with a softmax output (as, e.g., the HF widget does).
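As a minimal sketch of the sigmoid-based multi-label thresholding described above (the logit values below are made-up placeholders, not real model outputs):

```python
import math

def sigmoid(x):
    # Convert a raw logit into an independent per-label probability.
    return 1.0 / (1.0 + math.exp(-x))

labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
logits = [2.0, -1.5, 0.3, -3.0, 1.1, -0.2]  # placeholder logits for one comment

# Unlike softmax, each label is scored independently, so several can fire at once.
probs = [sigmoid(z) for z in logits]
predicted = [label for label, p in zip(labels, probs) if p > 0.5]
print(predicted)  # ['toxic', 'obscene', 'insult']
```

With real model outputs, the same thresholding is applied to each of the six logits returned per comment.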
## Evaluation
F1 scores:
toxic: 0.72
severe_toxic: 0.38
obscene: 0.72
threat: 0.52
insult: 0.69
identity_hate: 0.60
Macro-F1: 0.61 | {} | jpcorb20/toxic-detector-distilroberta | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
| # Distilroberta for toxic comment detection
See my GitHub repo toxic-comment-server
The model was fine-tuned from DistilRoberta on Kaggle Toxic Comments with the BCEWithLogitsLoss for multi-label prediction. Thus, apply a sigmoid activation to the logits; the model is not meant to be used with a softmax output (as, e.g., the HF widget does).
## Evaluation
F1 scores:
toxic: 0.72
severe_toxic: 0.38
obscene: 0.72
threat: 0.52
insult: 0.69
identity_hate: 0.60
Macro-F1: 0.61 | [
"# Distilroberta for toxic comment detection\n\nSee my GitHub repo toxic-comment-server\n\nThe model was trained from DistilRoberta on Kaggle Toxic Comments with the BCEWithLogits loss for Multi-Label prediction. Thus, please use the sigmoid activation on the logits (not made to use the softmax output, e.g. like th... | [
"TAGS\n#transformers #pytorch #jax #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Distilroberta for toxic comment detection\n\nSee my GitHub repo toxic-comment-server\n\nThe model was trained from DistilRoberta on Kaggle Toxic Comments with the BCEWithLogits loss for ... |
text-classification | transformers |
# Longformer-base for Machine-Paraphrase Detection
If you are using this model in your research work, please cite
```
@InProceedings{10.1007/978-3-030-96957-8_34,
author="Wahle, Jan Philip and Ruas, Terry and Folt{\'y}nek, Tom{\'a}{\v{s}} and Meuschke, Norman and Gipp, Bela",
title="Identifying Machine-Paraphrased Plagiarism",
booktitle="Information for a Better World: Shaping the Global Future",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="393--413",
abstract="Employing paraphrasing tools to conceal plagiarized text is a severe threat to academic integrity. To enable the detection of machine-paraphrased text, we evaluate the effectiveness of five pre-trained word embedding models combined with machine learning classifiers and state-of-the-art neural language models. We analyze preprints of research papers, graduation theses, and Wikipedia articles, which we paraphrased using different configurations of the tools SpinBot and SpinnerChief. The best performing technique, Longformer, achieved an average F1 score of 80.99{\%} (F1=99.68{\%} for SpinBot and F1=71.64{\%} for SpinnerChief cases), while human evaluators achieved F1=78.4{\%} for SpinBot and F1=65.6{\%} for SpinnerChief cases. We show that the automated classification alleviates shortcomings of widely-used text-matching systems, such as Turnitin and PlagScan.",
isbn="978-3-030-96957-8"
}
```
This is the checkpoint for Longformer-base after being trained on the [Machine-Paraphrased Plagiarism Dataset](https://doi.org/10.5281/zenodo.3608000)
Additional information about this model:
* [The longformer-base-4096 model page](https://huggingface.co/allenai/longformer-base-4096)
* [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf)
* [Official implementation by AllenAI](https://github.com/allenai/longformer)
The model can be loaded to perform plagiarism detection like so:
```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("jpelhaw/longformer-base-plagiarism-detection")
tokenizer = AutoTokenizer.from_pretrained("jpelhaw/longformer-base-plagiarism-detection")

text = "Plagiarism is the representation of another author's writing, \
thoughts, ideas, or expressions as one's own work."

example = tokenizer(text, return_tensors="pt")
answer = model(**example)
# "plagiarised"
``` | {"language": "en", "tags": ["array", "of", "tags"], "datasets": ["jpwahle/machine-paraphrase-dataset"], "thumbnail": "url to a thumbnail used in social sharing", "widget": [{"text": "Plagiarism is the representation of another author's writing, thoughts, ideas, or expressions as one's own work."}]} | jpwahle/longformer-base-plagiarism-detection | null | [
"transformers",
"pytorch",
"safetensors",
"longformer",
"text-classification",
"array",
"of",
"tags",
"en",
"dataset:jpwahle/machine-paraphrase-dataset",
"arxiv:2004.05150",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"2004.05150"
] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #longformer #text-classification #array #of #tags #en #dataset-jpwahle/machine-paraphrase-dataset #arxiv-2004.05150 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Longformer-base for Machine-Paraphrase Detection
If you are using this model in your research work, please cite
This is the checkpoint for Longformer-base after being trained on the Machine-Paraphrased Plagiarism Dataset
Additional information about this model:
* The longformer-base-4096 model page
* Longformer: The Long-Document Transformer
* Official implementation by AllenAI
The model can be loaded to perform plagiarism detection like so:
| [
"# Longformer-base for Machine-Paraphrase Detection\n\nIf you are using this model in your research work, please cite\n\n\n\nThis is the checkpoint for Longformer-base after being trained on the Machine-Paraphrased Plagiarism Dataset\n\nAdditional information about this model:\n\n* The longformer-base-4096 model pa... | [
"TAGS\n#transformers #pytorch #safetensors #longformer #text-classification #array #of #tags #en #dataset-jpwahle/machine-paraphrase-dataset #arxiv-2004.05150 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Longformer-base for Machine-Paraphrase Detection\n\nIf you are using this model i... |
text2text-generation | transformers |
# T5-large for Word Sense Disambiguation
If you are using this model in your research work, please cite
```bib
@article{wahle2021incorporating,
title={Incorporating Word Sense Disambiguation in Neural Language Models},
author={Wahle, Jan Philip and Ruas, Terry and Meuschke, Norman and Gipp, Bela},
journal={arXiv preprint arXiv:2106.07967},
year={2021}
}
```
This is the checkpoint for T5-large after being trained on the [SemCor 3.0 dataset](http://lcl.uniroma1.it/wsdeval/).
Additional information about this model:
* [The t5-large model page](https://huggingface.co/t5-large)
* [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
* [Official implementation by Google](https://github.com/google-research/text-to-text-transfer-transformer)
The model can be loaded to perform a few-shot classification like so:
```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("jpelhaw/t5-word-sense-disambiguation")
tokenizer = AutoTokenizer.from_pretrained("jpelhaw/t5-word-sense-disambiguation")
input = '''question: which description describes the word " java "\
best in the following context? \
descriptions:[ " A drink consisting of an infusion of ground coffee beans ",
" a platform-independent programming language ", or
" an island in Indonesia to the south of Borneo " ]
context: I like to drink " java " in the morning .'''
example = tokenizer(input, return_tensors="pt")
answer = model.generate(input_ids=example["input_ids"],
                        attention_mask=example["attention_mask"],
                        max_length=135)
answer = tokenizer.decode(answer[0], skip_special_tokens=True)
# "a drink consisting of an infusion of ground coffee beans"
```
| {"language": "en", "tags": ["array", "of", "tags"], "thumbnail": "url to a thumbnail used in social sharing", "widget": [{"text": "question: which description describes the word \" java \" best in the following context? descriptions: [ \" A drink consisting of an infusion of ground coffee beans \" , \" a platform-independent programming lanugage \" , or \" an island in Indonesia to the south of Borneo \" ] context: I like to drink ' java ' in the morning ."}]} | jpwahle/t5-large-word-sense-disambiguation | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"array",
"of",
"tags",
"en",
"arxiv:1910.10683",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [
"1910.10683"
] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #array #of #tags #en #arxiv-1910.10683 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# T5-large for Word Sense Disambiguation
If you are using this model in your research work, please cite
This is the checkpoint for T5-large after being trained on the SemCor 3.0 dataset.
Additional information about this model:
* The t5-large model page
* Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
* Official implementation by Google
The model can be loaded to perform a few-shot classification like so:
| [
"# T5-large for Word Sense Disambiguation\n\nIf you are using this model in your research work, please cite\n\n\n\nThis is the checkpoint for T5-large after being trained on the SemCor 3.0 dataset.\n\nAdditional information about this model:\n\n* The t5-large model page\n* Exploring the Limits of Transfer Learning ... | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #array #of #tags #en #arxiv-1910.10683 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# T5-large for Word Sense Disambiguation\n\nIf you are using this model in your research work, please cite\... |
fill-mask | transformers | # Tensorflow CamemBERT
In this repository you will find different versions of the CamemBERT model for Tensorflow.
## CamemBERT
[CamemBERT](https://camembert-model.fr/) is a state-of-the-art language model for French based on the RoBERTa architecture pretrained on the French subcorpus of the newly available multilingual corpus OSCAR.
## Model Weights
| Model | Downloads |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------- |
| `jplu/tf-camembert-base` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-camembert-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-camembert-base/tf_model.h5) |
## Usage
With Transformers >= 2.4 the Tensorflow models of CamemBERT can be loaded like:
```python
from transformers import TFCamembertModel
model = TFCamembertModel.from_pretrained("jplu/tf-camembert-base")
```
## Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/jplu).
## Acknowledgments
Thanks to all the Huggingface team for the support and their amazing library!
| {} | jplu/tf-camembert-base | null | [
"transformers",
"tf",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #tf #camembert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| Tensorflow CamemBERT
====================
In this repository you will find different versions of the CamemBERT model for Tensorflow.
CamemBERT
---------
CamemBERT is a state-of-the-art language model for French based on the RoBERTa architecture pretrained on the French subcorpus of the newly available multilingual corpus OSCAR.
Model Weights
-------------
Usage
-----
With Transformers >= 2.4 the Tensorflow models of CamemBERT can be loaded like:
Huggingface model hub
---------------------
All models are available on the Huggingface model hub.
Acknowledgments
---------------
Thanks to all the Huggingface team for the support and their amazing library!
| [] | [
"TAGS\n#transformers #tf #camembert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification | transformers |
# XLM-R + NER
This model is [XLM-Roberta-base](https://arxiv.org/abs/1911.02116) fine-tuned on the 40 languages proposed in [XTREME](https://github.com/google-research/xtreme) from [Wikiann](https://aclweb.org/anthology/P17-1178). This is still ongoing work and the results will be updated every time an improvement is reached.
The covered labels are:
```
LOC
ORG
PER
O
```
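The per-token labels above can be grouped into entity spans for span-level evaluation. A minimal sketch, assuming one label per token and consecutive identical non-`O` labels forming one entity (real BIO-style schemes with `B-`/`I-` prefixes need slightly different handling):

```python
def labels_to_spans(tokens, tags):
    """Group consecutive, identical non-'O' tags into (entity_type, text) spans."""
    spans, current_type, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag == current_type and tag != "O":
            current_tokens.append(token)  # extend the current entity
            continue
        if current_type not in (None, "O"):
            spans.append((current_type, " ".join(current_tokens)))
        current_type, current_tokens = tag, [token]
    if current_type not in (None, "O"):  # flush the trailing entity, if any
        spans.append((current_type, " ".join(current_tokens)))
    return spans

tokens = ["Barack", "Obama", "visited", "New", "York", "for", "the", "UN"]
tags   = ["PER", "PER", "O", "LOC", "LOC", "O", "O", "ORG"]
print(labels_to_spans(tokens, tags))
# [('PER', 'Barack Obama'), ('LOC', 'New York'), ('ORG', 'UN')]
```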
## Metrics on evaluation set:
### Average over the 40 languages
Number of documents: 262300
```
precision recall f1-score support
ORG 0.81 0.81 0.81 102452
PER 0.90 0.91 0.91 108978
LOC 0.86 0.89 0.87 121868
micro avg 0.86 0.87 0.87 333298
macro avg 0.86 0.87 0.87 333298
```
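The tables report both micro and macro averages. As a sketch of how the two aggregations differ (the per-class precision/recall/support triples below are made-up placeholders, not the values from the tables):

```python
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Made-up (precision, recall, support) per class -- NOT the real table values.
per_class = {
    "ORG": (0.80, 0.80, 100),
    "PER": (0.90, 0.90, 100),
    "LOC": (0.86, 0.89, 100),
}

# Macro average: mean of the per-class F1 scores, each class weighted equally.
macro_f1 = sum(f1(p, r) for p, r, _ in per_class.values()) / len(per_class)

# Micro average: pool true positives / predictions / gold spans across classes first.
tp = {c: r * s for c, (p, r, s) in per_class.items()}            # true positives per class
pred = {c: tp[c] / p for c, (p, r, s) in per_class.items()}      # predicted spans per class
micro_p = sum(tp.values()) / sum(pred.values())
micro_r = sum(tp.values()) / sum(s for _, _, s in per_class.values())
micro_f1 = f1(micro_p, micro_r)
print(round(macro_f1, 4), round(micro_f1, 4))  # 0.8582 0.8583
```

With equal supports the two averages are close, as in the table above; they diverge when class sizes or per-class scores are imbalanced.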
### Afrikaans
Number of documents: 1000
```
precision recall f1-score support
ORG 0.89 0.88 0.88 582
PER 0.89 0.97 0.93 369
LOC 0.84 0.90 0.86 518
micro avg 0.87 0.91 0.89 1469
macro avg 0.87 0.91 0.89 1469
```
### Arabic
Number of documents: 10000
```
precision recall f1-score support
ORG 0.83 0.84 0.84 3507
PER 0.90 0.91 0.91 3643
LOC 0.88 0.89 0.88 3604
micro avg 0.87 0.88 0.88 10754
macro avg 0.87 0.88 0.88 10754
```
### Basque
Number of documents: 10000
```
precision recall f1-score support
LOC 0.88 0.93 0.91 5228
ORG 0.86 0.81 0.83 3654
PER 0.91 0.91 0.91 4072
micro avg 0.89 0.89 0.89 12954
macro avg 0.89 0.89 0.89 12954
```
### Bengali
Number of documents: 1000
```
precision recall f1-score support
ORG 0.86 0.89 0.87 325
LOC 0.91 0.91 0.91 406
PER 0.96 0.95 0.95 364
micro avg 0.91 0.92 0.91 1095
macro avg 0.91 0.92 0.91 1095
```
### Bulgarian
Number of documents: 1000
```
precision recall f1-score support
ORG 0.86 0.83 0.84 3661
PER 0.92 0.95 0.94 4006
LOC 0.92 0.95 0.94 6449
micro avg 0.91 0.92 0.91 14116
macro avg 0.91 0.92 0.91 14116
```
### Burmese
Number of documents: 100
```
precision recall f1-score support
LOC 0.60 0.86 0.71 37
ORG 0.68 0.63 0.66 30
PER 0.44 0.44 0.44 36
micro avg 0.57 0.65 0.61 103
macro avg 0.57 0.65 0.60 103
```
### Chinese
Number of documents: 10000
```
precision recall f1-score support
ORG 0.70 0.69 0.70 4022
LOC 0.76 0.81 0.78 3830
PER 0.84 0.84 0.84 3706
micro avg 0.76 0.78 0.77 11558
macro avg 0.76 0.78 0.77 11558
```
### Dutch
Number of documents: 10000
```
precision recall f1-score support
ORG 0.87 0.87 0.87 3930
PER 0.95 0.95 0.95 4377
LOC 0.91 0.92 0.91 4813
micro avg 0.91 0.92 0.91 13120
macro avg 0.91 0.92 0.91 13120
```
### English
Number of documents: 10000
```
precision recall f1-score support
LOC 0.83 0.84 0.84 4781
PER 0.89 0.90 0.89 4559
ORG 0.75 0.75 0.75 4633
micro avg 0.82 0.83 0.83 13973
macro avg 0.82 0.83 0.83 13973
```
### Estonian
Number of documents: 10000
```
precision recall f1-score support
LOC 0.89 0.92 0.91 5654
ORG 0.85 0.85 0.85 3878
PER 0.94 0.94 0.94 4026
micro avg 0.90 0.91 0.90 13558
macro avg 0.90 0.91 0.90 13558
```
### Finnish
Number of documents: 10000
```
precision recall f1-score support
ORG 0.84 0.83 0.84 4104
LOC 0.88 0.90 0.89 5307
PER 0.95 0.94 0.94 4519
micro avg 0.89 0.89 0.89 13930
macro avg 0.89 0.89 0.89 13930
```
### French
Number of documents: 10000
```
precision recall f1-score support
LOC 0.90 0.89 0.89 4808
ORG 0.84 0.87 0.85 3876
PER 0.94 0.93 0.94 4249
micro avg 0.89 0.90 0.90 12933
macro avg 0.89 0.90 0.90 12933
```
### Georgian
Number of documents: 10000
```
precision recall f1-score support
PER 0.90 0.91 0.90 3964
ORG 0.83 0.77 0.80 3757
LOC 0.82 0.88 0.85 4894
micro avg 0.84 0.86 0.85 12615
macro avg 0.84 0.86 0.85 12615
```
### German
Number of documents: 10000
```
precision recall f1-score support
LOC 0.85 0.90 0.87 4939
PER 0.94 0.91 0.92 4452
ORG 0.79 0.78 0.79 4247
micro avg 0.86 0.86 0.86 13638
macro avg 0.86 0.86 0.86 13638
```
### Greek
Number of documents: 10000
```
precision recall f1-score support
ORG 0.86 0.85 0.85 3771
LOC 0.88 0.91 0.90 4436
PER 0.91 0.93 0.92 3894
micro avg 0.88 0.90 0.89 12101
macro avg 0.88 0.90 0.89 12101
```
### Hebrew
Number of documents: 10000
```
precision recall f1-score support
PER 0.87 0.88 0.87 4206
ORG 0.76 0.75 0.76 4190
LOC 0.85 0.85 0.85 4538
micro avg 0.83 0.83 0.83 12934
macro avg 0.82 0.83 0.83 12934
```
### Hindi
Number of documents: 1000
```
precision recall f1-score support
ORG 0.78 0.81 0.79 362
LOC 0.83 0.85 0.84 422
PER 0.90 0.95 0.92 427
micro avg 0.84 0.87 0.85 1211
macro avg 0.84 0.87 0.85 1211
```
### Hungarian
Number of documents: 10000
```
precision recall f1-score support
PER 0.95 0.95 0.95 4347
ORG 0.87 0.88 0.87 3988
LOC 0.90 0.92 0.91 5544
micro avg 0.91 0.92 0.91 13879
macro avg 0.91 0.92 0.91 13879
```
### Indonesian
Number of documents: 10000
```
precision recall f1-score support
ORG 0.88 0.89 0.88 3735
LOC 0.93 0.95 0.94 3694
PER 0.93 0.93 0.93 3947
micro avg 0.91 0.92 0.92 11376
macro avg 0.91 0.92 0.92 11376
```
### Italian
Number of documents: 10000
```
precision recall f1-score support
LOC 0.88 0.88 0.88 4592
ORG 0.86 0.86 0.86 4088
PER 0.96 0.96 0.96 4732
micro avg 0.90 0.90 0.90 13412
macro avg 0.90 0.90 0.90 13412
```
### Japanese
Number of documents: 10000
```
precision recall f1-score support
ORG 0.62 0.61 0.62 4184
PER 0.76 0.81 0.78 3812
LOC 0.68 0.74 0.71 4281
micro avg 0.69 0.72 0.70 12277
macro avg 0.69 0.72 0.70 12277
```
### Javanese
Number of documents: 100
```
precision recall f1-score support
ORG 0.79 0.80 0.80 46
PER 0.81 0.96 0.88 26
LOC 0.75 0.75 0.75 40
micro avg 0.78 0.82 0.80 112
macro avg 0.78 0.82 0.80 112
```
### Kazakh
Number of documents: 1000
```
precision recall f1-score support
ORG 0.76 0.61 0.68 307
LOC 0.78 0.90 0.84 461
PER 0.87 0.91 0.89 367
micro avg 0.81 0.83 0.82 1135
macro avg 0.81 0.83 0.81 1135
```
### Korean
Number of documents: 10000
```
precision recall f1-score support
LOC 0.86 0.89 0.88 5097
ORG 0.79 0.74 0.77 4218
PER 0.83 0.86 0.84 4014
micro avg 0.83 0.83 0.83 13329
macro avg 0.83 0.83 0.83 13329
```
### Malay
Number of documents: 1000
```
precision recall f1-score support
ORG 0.87 0.89 0.88 368
PER 0.92 0.91 0.91 366
LOC 0.94 0.95 0.95 354
micro avg 0.91 0.92 0.91 1088
macro avg 0.91 0.92 0.91 1088
```
### Malayalam
Number of documents: 1000
```
precision recall f1-score support
ORG 0.75 0.74 0.75 347
PER 0.84 0.89 0.86 417
LOC 0.74 0.75 0.75 391
micro avg 0.78 0.80 0.79 1155
macro avg 0.78 0.80 0.79 1155
```
### Marathi
Number of documents: 1000
```
precision recall f1-score support
PER 0.89 0.94 0.92 394
LOC 0.82 0.84 0.83 457
ORG 0.84 0.78 0.81 339
micro avg 0.85 0.86 0.85 1190
macro avg 0.85 0.86 0.85 1190
```
### Persian
Number of documents: 10000
```
precision recall f1-score support
PER 0.93 0.92 0.93 3540
LOC 0.93 0.93 0.93 3584
ORG 0.89 0.92 0.90 3370
micro avg 0.92 0.92 0.92 10494
macro avg 0.92 0.92 0.92 10494
```
### Portuguese
Number of documents: 10000
```
precision recall f1-score support
LOC 0.90 0.91 0.91 4819
PER 0.94 0.92 0.93 4184
ORG 0.84 0.88 0.86 3670
micro avg 0.89 0.91 0.90 12673
macro avg 0.90 0.91 0.90 12673
```
### Russian
Number of documents: 10000
```
precision recall f1-score support
PER 0.93 0.96 0.95 3574
LOC 0.87 0.89 0.88 4619
ORG 0.82 0.80 0.81 3858
micro avg 0.87 0.88 0.88 12051
macro avg 0.87 0.88 0.88 12051
```
### Spanish
Number of documents: 10000
```
precision recall f1-score support
PER 0.95 0.93 0.94 3891
ORG 0.86 0.88 0.87 3709
LOC 0.89 0.91 0.90 4553
micro avg 0.90 0.91 0.90 12153
macro avg 0.90 0.91 0.90 12153
```
### Swahili
Number of documents: 1000
```
precision recall f1-score support
ORG 0.82 0.85 0.83 349
PER 0.95 0.92 0.94 403
LOC 0.86 0.89 0.88 450
micro avg 0.88 0.89 0.88 1202
macro avg 0.88 0.89 0.88 1202
```
### Tagalog
Number of documents: 1000
```
precision recall f1-score support
LOC 0.90 0.91 0.90 338
ORG 0.83 0.91 0.87 339
PER 0.96 0.93 0.95 350
micro avg 0.90 0.92 0.91 1027
macro avg 0.90 0.92 0.91 1027
```
### Tamil
Number of documents: 1000
```
precision recall f1-score support
PER 0.90 0.92 0.91 392
ORG 0.77 0.76 0.76 370
LOC 0.78 0.81 0.79 421
micro avg 0.82 0.83 0.82 1183
macro avg 0.82 0.83 0.82 1183
```
### Telugu
Number of documents: 1000
```
precision recall f1-score support
ORG 0.67 0.55 0.61 347
LOC 0.78 0.87 0.82 453
PER 0.73 0.86 0.79 393
micro avg 0.74 0.77 0.76 1193
macro avg 0.73 0.77 0.75 1193
```
### Thai
Number of documents: 10000
```
precision recall f1-score support
LOC 0.63 0.76 0.69 3928
PER 0.78 0.83 0.80 6537
ORG 0.59 0.59 0.59 4257
micro avg 0.68 0.74 0.71 14722
macro avg 0.68 0.74 0.71 14722
```
### Turkish
Number of documents: 10000
```
precision recall f1-score support
PER 0.94 0.94 0.94 4337
ORG 0.88 0.89 0.88 4094
LOC 0.90 0.92 0.91 4929
micro avg 0.90 0.92 0.91 13360
macro avg 0.91 0.92 0.91 13360
```
### Urdu
Number of documents: 1000
```
precision recall f1-score support
LOC 0.90 0.95 0.93 352
PER 0.96 0.96 0.96 333
ORG 0.91 0.90 0.90 326
micro avg 0.92 0.94 0.93 1011
macro avg 0.92 0.94 0.93 1011
```
### Vietnamese
Number of documents: 10000
```
precision recall f1-score support
ORG 0.86 0.87 0.86 3579
LOC 0.88 0.91 0.90 3811
PER 0.92 0.93 0.93 3717
micro avg 0.89 0.90 0.90 11107
macro avg 0.89 0.90 0.90 11107
```
### Yoruba
Number of documents: 100
```
precision recall f1-score support
LOC 0.54 0.72 0.62 36
ORG 0.58 0.31 0.41 35
PER 0.77 1.00 0.87 36
micro avg 0.64 0.68 0.66 107
macro avg 0.63 0.68 0.63 107
```
## Reproduce the results
Download and prepare the dataset from the [XTREME repo](https://github.com/google-research/xtreme#download-the-data). Next, from the root of the transformers repo run:
```
cd examples/ner
python run_tf_ner.py \
--data_dir . \
--labels ./labels.txt \
--model_name_or_path jplu/tf-xlm-roberta-base \
--output_dir model \
--max_seq_length 128 \
--num_train_epochs 2 \
--per_gpu_train_batch_size 16 \
--per_gpu_eval_batch_size 32 \
--do_train \
--do_eval \
--logging_dir logs \
--mode token-classification \
--evaluate_during_training \
--optimizer_name adamw
```
## Usage with pipelines
```python
from transformers import pipeline
nlp_ner = pipeline(
"ner",
model="jplu/tf-xlm-r-ner-40-lang",
tokenizer=(
'jplu/tf-xlm-r-ner-40-lang',
{"use_fast": True}),
framework="tf"
)
text_fr = "Barack Obama est né à Hawaï."
text_en = "Barack Obama was born in Hawaii."
text_es = "Barack Obama nació en Hawai."
text_zh = "巴拉克·奧巴馬(Barack Obama)出生於夏威夷。"
text_ar = "ولد باراك أوباما في هاواي."
nlp_ner(text_fr)
#Output: [{'word': '▁Barack', 'score': 0.9894659519195557, 'entity': 'PER'}, {'word': '▁Obama', 'score': 0.9888848662376404, 'entity': 'PER'}, {'word': '▁Hawa', 'score': 0.998701810836792, 'entity': 'LOC'}, {'word': 'ï', 'score': 0.9987035989761353, 'entity': 'LOC'}]
nlp_ner(text_en)
#Output: [{'word': '▁Barack', 'score': 0.9929141998291016, 'entity': 'PER'}, {'word': '▁Obama', 'score': 0.9930834174156189, 'entity': 'PER'}, {'word': '▁Hawaii', 'score': 0.9986202120780945, 'entity': 'LOC'}]
nlp_ner(text_es)
#Output: [{'word': '▁Barack', 'score': 0.9944776296615601, 'entity': 'PER'}, {'word': '▁Obama', 'score': 0.9949177503585815, 'entity': 'PER'}, {'word': '▁Hawa', 'score': 0.9987911581993103, 'entity': 'LOC'}, {'word': 'i', 'score': 0.9984861612319946, 'entity': 'LOC'}]
nlp_ner(text_zh)
#Output: [{'word': '夏威夷', 'score': 0.9988449215888977, 'entity': 'LOC'}]
nlp_ner(text_ar)
#Output: [{'word': '▁با', 'score': 0.9903655648231506, 'entity': 'PER'}, {'word': 'راك', 'score': 0.9850614666938782, 'entity': 'PER'}, {'word': '▁أوباما', 'score': 0.9850308299064636, 'entity': 'PER'}, {'word': '▁ها', 'score': 0.9477543234825134, 'entity': 'LOC'}, {'word': 'وا', 'score': 0.9428229928016663, 'entity': 'LOC'}, {'word': 'ي', 'score': 0.9319471716880798, 'entity': 'LOC'}]
```
| {"language": ["multilingual", "af", "ar", "bg", "bn", "de", "el", "en", "es", "et", "eu", "fa", "fi", "fr", "he", "hi", "hu", "id", "it", "ja", "jv", "ka", "kk", "ko", "ml", "mr", "ms", "my", "nl", "pt", "ru", "sw", "ta", "te", "th", "tl", "tr", "ur", "vi", "yo", "zh"], "language_bcp47": ["fa-IR"]} | jplu/tf-xlm-r-ner-40-lang | null | [
"transformers",
"tf",
"xlm-roberta",
"token-classification",
"multilingual",
"af",
"ar",
"bg",
"bn",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"he",
"hi",
"hu",
"id",
"it",
"ja",
"jv",
"ka",
"kk",
"ko",
"ml",
"mr",
"ms",
"my",
"nl",
"pt",... | null | 2022-03-02T23:29:05+00:00 | [
"1911.02116"
] | [
"multilingual",
"af",
"ar",
"bg",
"bn",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"he",
"hi",
"hu",
"id",
"it",
"ja",
"jv",
"ka",
"kk",
"ko",
"ml",
"mr",
"ms",
"my",
"nl",
"pt",
"ru",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ur",
"v... | TAGS
#transformers #tf #xlm-roberta #token-classification #multilingual #af #ar #bg #bn #de #el #en #es #et #eu #fa #fi #fr #he #hi #hu #id #it #ja #jv #ka #kk #ko #ml #mr #ms #my #nl #pt #ru #sw #ta #te #th #tl #tr #ur #vi #yo #zh #arxiv-1911.02116 #autotrain_compatible #endpoints_compatible #region-us
|
# XLM-R + NER
This model is a fine-tuned XLM-Roberta-base over the 40 languages proposed in XTREME from Wikiann. This is still an ongoing work and the results will be updated every time an improvement is reached.
The covered labels are:
## Metrics on evaluation set:
### Average over the 40 languages
Number of documents: 262300
### Afrikaans
Number of documents: 1000
### Arabic
Number of documents: 10000
### Basque
Number of documents: 10000
### Bengali
Number of documents: 1000
### Bulgarian
Number of documents: 1000
### Burmese
Number of documents: 100
### Chinese
Number of documents: 10000
### Dutch
Number of documents: 10000
### English
Number of documents: 10000
### Estonian
Number of documents: 10000
### Finnish
Number of documents: 10000
### French
Number of documents: 10000
### Georgian
Number of documents: 10000
### German
Number of documents: 10000
### Greek
Number of documents: 10000
### Hebrew
Number of documents: 10000
### Hindi
Number of documents: 1000
### Hungarian
Number of documents: 10000
### Indonesian
Number of documents: 10000
### Italian
Number of documents: 10000
### Japanese
Number of documents: 10000
### Javanese
Number of documents: 100
### Kazakh
Number of documents: 1000
### Korean
Number of documents: 10000
### Malay
Number of documents: 1000
### Malayalam
Number of documents: 1000
### Marathi
Number of documents: 1000
### Persian
Number of documents: 10000
### Portuguese
Number of documents: 10000
### Russian
Number of documents: 10000
### Spanish
Number of documents: 10000
### Swahili
Number of documents: 1000
### Tagalog
Number of documents: 1000
### Tamil
Number of documents: 1000
### Telugu
Number of documents: 1000
### Thai
Number of documents: 10000
### Turkish
Number of documents: 10000
### Urdu
Number of documents: 1000
### Vietnamese
Number of documents: 10000
### Yoruba
Number of documents: 100
## Reproduce the results
Download and prepare the dataset from the XTREME repo. Next, from the root of the transformers repo run:
## Usage with pipelines
| [
"# XLM-R + NER\n\nThis model is a fine-tuned XLM-Roberta-base over the 40 languages proposed in XTREME from Wikiann. This is still an ongoing work and the results will be updated every time an improvement is reached. \n\nThe covered labels are:",
"## Metrics on evaluation set:",
"### Average over the 40 langua... | [
"TAGS\n#transformers #tf #xlm-roberta #token-classification #multilingual #af #ar #bg #bn #de #el #en #es #et #eu #fa #fi #fr #he #hi #hu #id #it #ja #jv #ka #kk #ko #ml #mr #ms #my #nl #pt #ru #sw #ta #te #th #tl #tr #ur #vi #yo #zh #arxiv-1911.02116 #autotrain_compatible #endpoints_compatible #region-us \n",
"#... |
fill-mask | transformers | # Tensorflow XLM-RoBERTa
In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow.
## XLM-RoBERTa
[XLM-RoBERTa](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/) is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.
## Model Weights
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `jplu/tf-xlm-roberta-base` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/tf_model.h5)
| `jplu/tf-xlm-roberta-large` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5)
## Usage
With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like:
```python
from transformers import TFXLMRobertaModel
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")
```
Or
```
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
```
## Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/jplu).
## Acknowledgments
Thanks to all the Huggingface team for the support and their amazing library!
| {} | jplu/tf-xlm-roberta-base | null | [
"transformers",
"tf",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #tf #xlm-roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| Tensorflow XLM-RoBERTa
======================
In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow.
XLM-RoBERTa
-----------
XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.
Model Weights
-------------
Usage
-----
With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like:
Or
Huggingface model hub
---------------------
All models are available on the Huggingface model hub.
Acknowledgments
---------------
Thanks to all the Huggingface team for the support and their amazing library!
| [] | [
"TAGS\n#transformers #tf #xlm-roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | # Tensorflow XLM-RoBERTa
In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow.
## XLM-RoBERTa
[XLM-RoBERTa](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/) is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.
## Model Weights
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `jplu/tf-xlm-roberta-base` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-base/tf_model.h5)
| `jplu/tf-xlm-roberta-large` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/config.json) • [`tf_model.h5`](https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5)
## Usage
With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like:
```python
from transformers import TFXLMRobertaModel
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")
```
Or
```
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
```
## Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/jplu).
## Acknowledgments
Thanks to all the Huggingface team for the support and their amazing library!
| {} | jplu/tf-xlm-roberta-large | null | [
"transformers",
"tf",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #tf #xlm-roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| Tensorflow XLM-RoBERTa
======================
In this repository you will find different versions of the XLM-RoBERTa model for Tensorflow.
XLM-RoBERTa
-----------
XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.
Model Weights
-------------
Usage
-----
With Transformers >= 2.4 the Tensorflow models of XLM-RoBERTa can be loaded like:
Or
Huggingface model hub
---------------------
All models are available on the Huggingface model hub.
Acknowledgments
---------------
Thanks to all the Huggingface team for the support and their amazing library!
| [] | [
"TAGS\n#transformers #tf #xlm-roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers | First model for storytelling
| {} | jppaolim/homerGPT2 | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| First model for storytelling
| [] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers | Second model for storytelling
| {} | jppaolim/homerGPT2L | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Second model for storytelling
| [] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | jpsxlr8/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# urdu-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "urdu-colab", "results": []}]} | js-rockstar/urdu-colab | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# urdu-colab
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| [
"# urdu-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# urdu-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.",
"## Model description\n\nMore information needed",
... |
text2text-generation | transformers |
Answer generator model of [ELI5-Category Dataset](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/) | {"language": "en", "license": "mit", "datasets": ["eli5_category"]} | jsgao/bart-eli5c | null | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:eli5_category",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #en #dataset-eli5_category #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Answer generator model of ELI5-Category Dataset | [] | [
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #en #dataset-eli5_category #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
feature-extraction | transformers |
Document Retriever model of [ELI5-Category Dataset](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/), need additional projection layer (see GitHub [repo](https://github.com/rexarski/ANLY580-final-project/blob/main/model_deploy/models/eli5c_qa_model.py)) | {"language": "en", "license": "MIT", "datasets": ["eli5_category"]} | jsgao/bert-eli5c-retriever | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"en",
"dataset:eli5_category",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bert #feature-extraction #en #dataset-eli5_category #endpoints_compatible #region-us
|
Document Retriever model of ELI5-Category Dataset, need additional projection layer (see GitHub repo) | [] | [
"TAGS\n#transformers #pytorch #safetensors #bert #feature-extraction #en #dataset-eli5_category #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-German-GPT2
This is an encoder-decoder model for automatic speech recognition trained on the
MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. The encoder was initialized from
[jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) and
the decoder from [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2).
It was trained using a two step process:
* fine-tuning only the cross-attention weights and the decoder using the pre-computed outputs of the Wav2Vec model
* relatively fast training
* also works on small GPU (eg. 8 GB)
* but may take a lot of disk space
* should already yield decent results
* fine-tuning the model end-to-end
* much slower
* needs a bigger GPU
There is also one trick, which seemed to improve performance significantly: adding position embeddings to the
encoder outputs and initializing them with the pre-trained position embeddings of the GPT2 model (See `eval.py`).
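As a rough illustration of that trick (a hypothetical sketch with toy shapes, not the real model code — the actual hidden sizes and the position table come from the pretrained checkpoints):

```python
import numpy as np

# Toy dimensions for illustration only.
seq_len, hidden = 6, 8

encoder_outputs = np.random.randn(seq_len, hidden)  # Wav2Vec2 frame features
wpe = np.random.randn(1024, hidden) * 0.02          # position table, copied from GPT2's at initialization

# One (trainable) position vector is added per encoder time step
# before the decoder's cross-attention sees the features.
encoder_outputs = encoder_outputs + wpe[:seq_len]
```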
The training notebooks are still early drafts. Also, results can probably be improved a lot by using, for example, a learning
rate schedule. | {"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "de", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "Wav2Vec2-Large-XLSR-53-German-GPT2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "de"}, "metrics": [{"type": "wer", "value": 10.02, "name": "Test WER"}, {"type": "cer", "value": 4.7, "name": "Test CER"}]}]}]} | jsnfly/wav2vec2-large-xlsr-53-german-gpt2 | null | [
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"de",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:u... | null | 2022-03-02T23:29:05+00:00 | [] | [
"de"
] | TAGS
#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #de #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-German-GPT2
This is an encoder-decoder model for automatic speech recognition trained on the
MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. The encoder was initialized from
jonatasgrosman/wav2vec2-large-xlsr-53-german and
the decoder from dbmdz/german-gpt2.
It was trained using a two step process:
* fine-tuning only the cross-attention weights and the decoder using the pre-computed outputs of the Wav2Vec model
* relatively fast training
* also works on small GPU (eg. 8 GB)
* but may take a lot of disk space
* should already yield decent results
* fine-tuning the model end-to-end
* much slower
* needs a bigger GPU
There is also one trick, which seemed to improve performance significantly: adding position embeddings to the
encoder outputs and initializing them with the pre-trained position embeddings of the GPT2 model (See 'URL').
The training notebooks are still early drafts. Also, results can probably be improved a lot by using, for example, a learning
rate schedule. | [
"# Wav2Vec2-Large-XLSR-53-German-GPT2\n\nThis is an encoder-decoder model for automatic speech recognition trained on on the\nMOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. The encoder was initialized from\njonatasgrosman/wav2vec2-large-xlsr-53-german and\nthe decoder from dbmdz/german-gpt2.\n\nIt was trained us... | [
"TAGS\n#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #de #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-German... |
automatic-speech-recognition | transformers |
# XLS-R-1b-DE
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - DE dataset. (See `run.sh` for training parameters). | {"language": ["de"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "de", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-1B - German", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "de"}, "metrics": [{"type": "wer", "value": 11.37, "name": "Test WER"}, {"type": "cer", "value": 2.89, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "de"}, "metrics": [{"type": "wer", "value": 31.16, "name": "Dev WER"}, {"type": "cer", "value": 13.41, "name": "Dev CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "de"}, "metrics": [{"type": "wer", "value": 36.79, "name": "Test WER"}]}]}]} | jsnfly/wav2vec2-xls-r-1b-de-cv8 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"de"
] |
token-classification | transformers |
This is a SciBERT-based model fine-tuned to perform Named Entity Recognition for drug names and adverse drug effects.

This model classifies input tokens into one of five classes:
- `B-DRUG`: beginning of a drug entity
- `I-DRUG`: within a drug entity
- `B-EFFECT`: beginning of an AE entity
- `I-EFFECT`: within an AE entity
- `O`: outside either of the above entities
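For readers new to BIO tagging, label sequences like the above can be decoded into entity spans with a few lines of plain Python. This is an illustrative sketch only — the example tokens and labels below are made up, not actual model output:

```python
def decode_bio(tokens, labels):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    entities, current = [], None
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):          # a new entity begins
            if current:
                entities.append(current)
            current = (label[2:], [token])
        elif label.startswith("I-") and current and current[0] == label[2:]:
            current[1].append(token)        # continue the open entity
        else:                               # "O" (or an inconsistent tag) closes it
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(parts)) for etype, parts in entities]

tokens = ["uterine", "hemorrhage", "associated", "with", "misoprostol"]
labels = ["B-EFFECT", "I-EFFECT", "O", "O", "B-DRUG"]
print(decode_bio(tokens, labels))
# [('EFFECT', 'uterine hemorrhage'), ('DRUG', 'misoprostol')]
```

The `pipeline` below returns token-level predictions in this scheme, so a decoder like this (or the pipeline's own aggregation options) is what turns them into readable entities.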
To get started using this model for inference, simply set up an NER `pipeline` like below:
```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    pipeline,
)

model_checkpoint = "jsylee/scibert_scivocab_uncased-finetuned-ner"

model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint,
    num_labels=5,
    id2label={0: 'O', 1: 'B-DRUG', 2: 'I-DRUG', 3: 'B-EFFECT', 4: 'I-EFFECT'},
)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

model_pipeline = pipeline(task="ner", model=model, tokenizer=tokenizer)

print(model_pipeline("Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug."))
```
SciBERT: https://huggingface.co/allenai/scibert_scivocab_uncased
Dataset: https://huggingface.co/datasets/ade_corpus_v2
| {"language": ["en"], "tags": ["Named Entity Recognition", "SciBERT", "Adverse Effect", "Drug", "Medical"], "datasets": ["ade_corpus_v2"], "widget": [{"text": "Abortion, miscarriage or uterine hemorrhage associated with misoprostol (Cytotec), a labor-inducing drug.", "example_title": "Abortion, miscarriage, ..."}, {"text": "Addiction to many sedatives and analgesics, such as diazepam, morphine, etc.", "example_title": "Addiction to many..."}, {"text": "Birth defects associated with thalidomide", "example_title": "Birth defects associated..."}, {"text": "Bleeding of the intestine associated with aspirin therapy", "example_title": "Bleeding of the intestine..."}, {"text": "Cardiovascular disease associated with COX-2 inhibitors (i.e. Vioxx)", "example_title": "Cardiovascular disease..."}]} | jsylee/scibert_scivocab_uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"Named Entity Recognition",
"SciBERT",
"Adverse Effect",
"Drug",
"Medical",
"en",
"dataset:ade_corpus_v2",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [
"en"
] |
text-generation | transformers |
# Rick dialoGPT Model | {"tags": ["conversational"]} | jth1903/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] |
null | transformers |
# outputs
This model is a fine-tuned version of [gerulata/slovakbert](https://huggingface.co/gerulata/slovakbert) on the [ju-bezdek/conll2003-SK-NER](https://huggingface.co/datasets/ju-bezdek/conll2003-SK-NER) dataset.
It achieves the following results on the evaluation (validation) set:
- Loss: 0.1752
- Precision: 0.8190
- Recall: 0.8390
- F1: 0.8288
- Accuracy: 0.9526
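As a quick sanity check, the reported F1 is the harmonic mean of the precision and recall above, which can be verified in a couple of lines (using the rounded values from this card):

```python
precision, recall = 0.8190, 0.8390

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # ~0.8289; matches the reported 0.8288 up to rounding of the inputs
```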
## Model description
More information needed
## Code example
```python
from transformers import pipeline
from spacy import displacy

model_path = "ju-bezdek/slovakbert-conll2003-sk-ner"
ner_pipeline = pipeline(task="ner", model=model_path, aggregation_strategy="max")

input_sentence = "Ruský premiér Viktor Černomyrdin v piatok povedal, že prezident Boris Jeľcin , ktorý je na dovolenke mimo Moskvy , podporil mierový plán šéfa bezpečnosti Alexandra Lebedu pre Čečensko, uviedla tlačová agentúra Interfax"

ner_ents = ner_pipeline(input_sentence)
print(ner_ents)

# Render the recognized entity spans with displaCy.
ent_group_labels = [ner_pipeline.model.config.id2label[i][2:]
                    for i in ner_pipeline.model.config.id2label if i > 0]
options = {"ents": ent_group_labels}
displacy_ents = [{"start": ent["start"], "end": ent["end"], "label": ent["entity_group"]}
                 for ent in ner_ents]
displacy.render({"text": input_sentence, "ents": displacy_ents},
                style="ent", options=options, jupyter=True, manual=True)
```
### Result:
<div>
<span class="tex2jax_ignore"><div class="entities" style="line-height: 2.5; direction: ltr">
<mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Ruský
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">MISC</span>
</mark>
premiér
<mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Viktor Černomyrdin
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span>
</mark>
v piatok povedal, že prezident
<mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Boris Jeľcin,
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span>
</mark>
, ktorý je na dovolenke mimo
<mark class="entity" style="background: #ff9561; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Moskvy
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">LOC</span>
</mark>
, podporil mierový plán šéfa bezpečnosti
<mark class="entity" style="background: #ddd; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Alexandra Lebedu
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">PER</span>
</mark>
pre
<mark class="entity" style="background: #ff9561; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Čečensko,
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">LOC</span>
</mark>
uviedla tlačová agentúra
<mark class="entity" style="background: #7aecec; padding: 0.45em 0.6em; margin: 0 0.25em; line-height: 1; border-radius: 0.35em;">
Interfax
<span style="font-size: 0.8em; font-weight: bold; line-height: 1; border-radius: 0.35em; vertical-align: middle; margin-left: 0.5rem">ORG</span>
</mark>
</div></span>
</div>
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
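The `linear` scheduler above decays the learning rate from its initial value to zero over the full run. A minimal sketch of that schedule, assuming no warmup (none is listed) and using the step count from the results table below (878 steps/epoch × 15 epochs = 13170):

```python
def linear_lr(step, total_steps, base_lr=1e-4):
    """Linearly decay base_lr to 0 over total_steps (no warmup assumed)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total = 13170  # 878 steps per epoch * 15 epochs
print(linear_lr(0, total))        # 0.0001 at the first step
print(linear_lr(total // 2, total))  # half of that midway through training
print(linear_lr(total, total))    # 0.0 at the final step
```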
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3237 | 1.0 | 878 | 0.2541 | 0.7125 | 0.8059 | 0.7563 | 0.9283 |
| 0.1663 | 2.0 | 1756 | 0.2370 | 0.7775 | 0.8090 | 0.7929 | 0.9394 |
| 0.1251 | 3.0 | 2634 | 0.2289 | 0.7732 | 0.8029 | 0.7878 | 0.9385 |
| 0.0984 | 4.0 | 3512 | 0.2818 | 0.7294 | 0.8189 | 0.7715 | 0.9294 |
| 0.0808 | 5.0 | 4390 | 0.3138 | 0.7615 | 0.7900 | 0.7755 | 0.9326 |
| 0.0578 | 6.0 | 5268 | 0.3072 | 0.7548 | 0.8222 | 0.7871 | 0.9370 |
| 0.0481 | 7.0 | 6146 | 0.2778 | 0.7897 | 0.8156 | 0.8025 | 0.9408 |
| 0.0414 | 8.0 | 7024 | 0.3336 | 0.7695 | 0.8201 | 0.7940 | 0.9389 |
| 0.0268 | 9.0 | 7902 | 0.3294 | 0.7868 | 0.8140 | 0.8002 | 0.9409 |
| 0.0204 | 10.0 | 8780 | 0.3693 | 0.7657 | 0.8239 | 0.7938 | 0.9376 |
| 0.016 | 11.0 | 9658 | 0.3816 | 0.7932 | 0.8242 | 0.8084 | 0.9425 |
| 0.0108 | 12.0 | 10536 | 0.3607 | 0.7929 | 0.8256 | 0.8089 | 0.9431 |
| 0.0078 | 13.0 | 11414 | 0.3980 | 0.7915 | 0.8240 | 0.8074 | 0.9423 |
| 0.0062 | 14.0 | 12292 | 0.4096 | 0.7995 | 0.8247 | 0.8119 | 0.9436 |
| 0.0035 | 15.0 | 13170 | 0.4177 | 0.8006 | 0.8251 | 0.8127 | 0.9438 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["ju-bezdek/conll2003-SK-NER"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "outputs", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "ju-bezdek/conll2003-SK-NER", "type": "ju-bezdek/conll2003-SK-NER", "args": "conll2003-SK-NER"}, "metrics": [{"type": "precision", "value": 0.8189727994593682, "name": "Precision"}, {"type": "recall", "value": 0.8389581169955002, "name": "Recall"}, {"type": "f1", "value": 0.8288450029922203, "name": "F1"}, {"type": "accuracy", "value": 0.9526157920337243, "name": "Accuracy"}]}]}]} | ju-bezdek/slovakbert-conll2003-sk-ner | null | [
"transformers",
"pytorch",
"generated_from_trainer",
"dataset:ju-bezdek/conll2003-SK-NER",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] |
image-classification | transformers |
# ice_cream
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### chocolate ice cream

#### vanilla ice cream
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | juanfiguera/ice_cream | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05+00:00 | [] | [] |