null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs15
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs10](https://huggingface.co/A... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs15", "results": []}]} | automatic-speech-recognition | AKulk/wav2vec2-base-timit-epochs15 | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00
# wav2vec2-base-timit-epochs15
This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
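The card gives no usage details, but wav2vec2 ASR checkpoints of this kind are CTC acoustic models: transcription amounts to taking the per-frame argmax over the logits and then collapsing the CTC output (merge consecutive repeats, drop blanks). A minimal sketch of the collapse step, using a made-up toy vocabulary — the real checkpoint's vocabulary and tokenizer differ:

```python
def ctc_collapse(ids, blank_id=0):
    """Collapse a frame-level CTC prediction: merge consecutive
    repeats, then remove blank tokens."""
    out = []
    prev = None
    for i in ids:
        if i != prev and i != blank_id:  # keep first of each run, skip blanks
            out.append(i)
        prev = i
    return out

# Hypothetical vocabulary for illustration (id 0 is the CTC blank).
vocab = {1: "h", 2: "e", 3: "l", 4: "o"}
frame_ids = [1, 1, 0, 2, 2, 3, 0, 3, 4, 4]  # per-frame argmax ids
print("".join(vocab[i] for i in ctc_collapse(frame_ids)))  # hello
```

Note the blank between the two `3`s: that is how CTC represents a genuinely doubled character ("ll") rather than one repeated frame.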
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs5
This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs5", "results": []}]} | automatic-speech-recognition | AKulk/wav2vec2-base-timit-epochs5 | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00
# wav2vec2-base-timit-epochs5
This model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
null | null | transformers |
# summarization_fanpage128
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.5348
- Rouge1: 34.1882
- Rouge2: 15.7866
- Rougel: 25.141
- Rougelsum: 28.4882
- Gen Len: 69.3041
... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/fanpage"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_fanpage128", "results": []}]} | summarization | ARTeLab/it5-summarization-fanpage | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | ["it"]
# summarization_fanpage128
This model is a fine-tuned version of gsarti/it5-base on the Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.5348
- Rouge1: 34.1882
- Rouge2: 15.7866
- Rougel: 25.141
- Rougelsum: 28.4882
- Gen Len: 69.3041
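The scores above are ROUGE overlap metrics. As a rough illustration of what ROUGE-1 measures — not the exact implementation used to produce these numbers, which also applies stemming and normalization — a sketch with naive whitespace tokenization:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference summary
    (naive whitespace tokenization for illustration only)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat", "the cat sat on the mat"), 4))
```

ROUGE-2 is the same computation over bigrams; Gen Len is simply the mean length of the generated summaries.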
## Usage
### Training hyperparameters
null | null | transformers |
# summarization_ilpost
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.6020
- Rouge1: 33.7802
- Rouge2: 16.2953
- Rougel: 27.4797
- Rougelsum: 30.2273
- Gen Len: 45.3175
## U... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/ilpost"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_ilpost", "results": []}]} | summarization | ARTeLab/it5-summarization-ilpost | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/ilpost",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | ["it"]
# summarization_ilpost
This model is a fine-tuned version of gsarti/it5-base on the IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.6020
- Rouge1: 33.7802
- Rouge2: 16.2953
- Rougel: 27.4797
- Rougelsum: 30.2273
- Gen Len: 45.3175
## Usage
### Training hyperparameters
null | null | transformers |
# summarization_mlsum
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on MLSum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 2.0190
- Rouge1: 19.3739
- Rouge2: 5.9753
- Rougel: 16.691
- Rougelsum: 16.7862
- Gen Len: 32.5268
## Usage
```... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/mlsum-it"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_mlsum", "results": []}]} | summarization | ARTeLab/it5-summarization-mlsum | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/mlsum-it",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | ["it"]
# summarization_mlsum
This model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 2.0190
- Rouge1: 19.3739
- Rouge2: 5.9753
- Rougel: 16.691
- Rougelsum: 16.7862
- Gen Len: 32.5268
## Usage
### Training hyperparameters
null | null | transformers |
# mbart-summarization-fanpage
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.1833
- Rouge1: 36.5027
- Rouge2: 17.4428
- Rougel: 26.1734
- Rougelsum: 30.2... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/fanpage"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_fanpage4epoch", "results": []}]} | summarization | ARTeLab/mbart-summarization-fanpage | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | 2022-03-02T23:29:04+00:00 | ["it"]
# mbart-summarization-fanpage
This model is a fine-tuned version of facebook/mbart-large-cc25 on the Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.1833
- Rouge1: 36.5027
- Rouge2: 17.4428
- Rougel: 26.1734
- Rougelsum: 30.2636
- Gen Len: 75.2413
## Usage
### Training hyperparameters
null | null | transformers |
# mbart_summarization_ilpost
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.3640
- Rouge1: 38.9101
- Rouge2: 21.384
- Rougel: 32.0517
- Rougelsum: 35.0743... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/ilpost"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_ilpost", "results": []}]} | summarization | ARTeLab/mbart-summarization-ilpost | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/ilpost",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | 2022-03-02T23:29:04+00:00 | ["it"]
# mbart_summarization_ilpost
This model is a fine-tuned version of facebook/mbart-large-cc25 on the IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.3640
- Rouge1: 38.9101
- Rouge2: 21.384
- Rougel: 32.0517
- Rougelsum: 35.0743
- Gen Len: 39.8843
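ROUGE-L, reported above, scores the longest common subsequence (LCS) between candidate and reference rather than unigram overlap, so it rewards in-order matches without requiring them to be contiguous. A minimal sketch assuming whitespace tokens — the published scores come from a full ROUGE implementation with normalization:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists
    (classic dynamic-programming table)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """LCS-based F1 between candidate and reference summaries."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return 2 * p * r / (p + r)
```

Rougelsum is the same idea applied sentence by sentence and aggregated over the summary.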
## Usage
### Training hyperparameters
null | null | transformers |
# mbart_summarization_mlsum
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on mlsum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 3.3336
- Rouge1: 19.3489
- Rouge2: 6.4028
- Rougel: 16.3497
- Rougelsum: 16.5387
- Gen ... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/mlsum-it"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_mlsum", "results": []}]} | summarization | ARTeLab/mbart-summarization-mlsum | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/mlsum-it",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | 2022-03-02T23:29:04+00:00 | ["it"]
# mbart_summarization_mlsum
This model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 3.3336
- Rouge1: 19.3489
- Rouge2: 6.4028
- Rougel: 16.3497
- Rougelsum: 16.5387
- Gen Len: 33.5945
## Usage
### Training hyperparameters
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "PENGMENGJIE-finetuned-emotion", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]} | text-classification | ASCCCCCCCC/PENGMENGJIE-finetuned-emotion | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00
# PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/ber... | {"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert-base-chinese-finetuned-amazon_zh_20000", "results": []}]} | text-classification | ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00
bert-base-chinese-finetuned-amazon_zh_20000
=============================================
This model is a fine-tuned version of bert-base-chinese on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1683
* Accuracy: 0.5224
* F1: 0.5194
Model description
-----------------
More information needed

### Training hyperparameters

The following hyperparameters were used during training:

* learning_rate: 2e-05
* train_batch_size: 8
* eval_batch_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 2
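The hyperparameters for this model specify `lr_scheduler_type: linear` with a learning rate of 2e-05; with no warmup, that schedule simply decays the rate linearly to zero across training steps. A rough standalone sketch (the actual transformers scheduler also supports warmup steps, omitted here):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Linearly decay base_lr to 0 over total_steps (no warmup assumed)."""
    if step >= total_steps:
        return 0.0
    return base_lr * (1 - step / total_steps)

print(linear_lr(0, 1000))    # full rate at the start
print(linear_lr(500, 1000))  # half the rate at the midpoint
```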
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-chinese-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-ba... | {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-chinese-amazon_zh_20000", "results": []}]} | text-classification | ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000 | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00
distilbert-base-chinese-amazon_zh_20000
=========================================
This model is a fine-tuned version of bert-base-chinese on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1518
* Accuracy: 0.5092
Model description
-----------------
More information needed

### Training hyperparameters

The following hyperparameters were used during training:

* learning_rate: 2e-05
* train_batch_size: 16
* eval_batch_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-multilingual-cased](ht... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-multilingual-cased-amazon_zh_20000", "results": []}]} | text-classification | ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000 | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00
distilbert-base-multilingual-cased-amazon_zh_20000
====================================================
This model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3031
* Accuracy: 0.4406
Model description
-----------------

More information needed

### Training hyperparameters

The following hyperparameters were used during training:

* learning_rate: 2e-05
* train_batch_size: 16
* eval_batch_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggin... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-amazon_zh_20000", "results": []}]} | text-classification | ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-amazon\_zh\_20000
===================================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3516
* Accuracy: 0.414
Model description
-----------------... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_b... | [
57,
98,
4,
33
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\... | [
-0.09505623579025269,
0.06580236554145813,
-0.0021996446885168552,
0.11599233746528625,
0.17974494397640228,
0.02055610716342926,
0.11109361797571182,
0.12300596386194229,
-0.11357199400663376,
0.010416093282401562,
0.118222676217556,
0.18831869959831238,
0.00455077551305294,
0.11346118897... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]} | text-classification | ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### ... | [
"# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"#... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.",
... | [
57,
44,
6,
12,
8,
3,
90,
34
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.##...
-0.07148110121488571,
0.09172748029232025,
-0.0019257401581853628,
0.08142565190792084,
0.17345057427883148,
0.03701271116733551,
0.11829107999801636,
0.08549080789089203,
-0.10427279025316238,
0.040570907294750214,
0.06417793780565262,
0.102497398853302,
0.019179973751306534,
0.0794919505... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilr... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-finetuned-wikitext2", "results": []}]} | fill-mask | AT/distilroberta-base-finetuned-wikitext2 | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of distilroberta-base on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Trainin... | [
"# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Train... | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.",
"## Model descriptio... | [
56,
38,
6,
12,
8,
3,
91,
4,
33
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.## Model description\n... | [
-0.10208059847354889,
0.05119477957487106,
-0.0017580764833837748,
0.08746824413537979,
0.16583506762981415,
0.027064606547355652,
0.1290665864944458,
0.10939839482307434,
-0.1304401010274887,
0.029197368770837784,
0.058949943631887436,
0.08761929720640182,
0.026701809838414192,
0.12448880... |
null | null | transformers |
#Harry Potter DialoGPT Model | {"tags": ["conversational"]} | text-generation | ATGdev/DialoGPT-small-harrypotter | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
51
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
-0.009697278961539268,
0.03208012506365776,
-0.007204889785498381,
0.004809224978089333,
0.16726240515708923,
0.014898733235895634,
0.09765533357858658,
0.13672804832458496,
-0.007841327227652073,
-0.031050153076648712,
0.14490588009357452,
0.20411323010921478,
-0.006439372431486845,
0.066... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-lar... | {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "result", "results": []}]} | fill-mask | AVSilva/bertimbau-large-fine-tuned-md | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# result
This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation dat... | [
"# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7458",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Train... | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:... | [
48,
54,
6,
12,
8,
3,
90,
4,
36
] | [
"passage: TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation s... | [
-0.07977238297462463,
0.08149079233407974,
-0.0019694312941282988,
0.10045184940099716,
0.19418662786483765,
0.03334180638194084,
0.08718033134937286,
0.10544977337121964,
-0.12517957389354706,
0.07235980778932571,
0.07156473398208618,
0.09931442886590958,
0.02753392979502678,
0.1203515604... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-lar... | {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "result", "results": []}]} | fill-mask | AVSilva/bertimbau-large-fine-tuned-sd | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# result
This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation dat... | [
"# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7570",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Train... | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:... | [
48,
54,
6,
12,
8,
3,
90,
4,
36
] | [
"passage: TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation s... | [
-0.08033296465873718,
0.08053704351186752,
-0.0019347650231793523,
0.09992503374814987,
0.19386038184165955,
0.03363344818353653,
0.08701848238706589,
0.10515984892845154,
-0.1253071129322052,
0.07142333686351776,
0.07186280936002731,
0.09966716170310974,
0.027872733771800995,
0.1209220886... |
null | null | transformers |
#Tony Stark DialoGPT model | {"tags": ["conversational"]} | text-generation | AVeryRealHuman/DialoGPT-small-TonyStark | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Tony Stark DialoGPT model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] | [
55
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] | [
-0.0020731890108436346,
0.034266941249370575,
-0.005774117540568113,
0.004248267505317926,
0.14393343031406403,
0.004326540976762772,
0.08896105736494064,
0.14543265104293823,
-0.02302609197795391,
0.005462841596454382,
0.15414410829544067,
0.16123731434345245,
-0.01616818644106388,
0.0662... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmp_znj9o4r
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## M... | {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "tmp_znj9o4r", "results": []}]} | text-classification | AWTStress/stress_classifier | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
|
# tmp_znj9o4r
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
... | [
"# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",... | [
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n",
"# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nM... | [
48,
37,
6,
12,
8,
3,
33,
4,
34
] | [
"passage: TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:## Model description\n\nMore... | [
-0.0718684121966362,
0.023954803124070168,
-0.0005222507752478123,
0.08498486131429672,
0.16690625250339508,
0.018718160688877106,
0.08609471470117569,
0.11229293048381805,
-0.14354625344276428,
-0.006508303340524435,
0.053223006427288055,
0.15455791354179382,
0.020938530564308167,
0.09971... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stress_score
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## ... | {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "stress_score", "results": []}]} | text-classification | AWTStress/stress_score | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
|
# stress_score
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure... | [
"# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed"... | [
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n",
"# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\n... | [
48,
31,
6,
12,
8,
3,
33,
4,
34
] | [
"passage: TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:## Model description\n\nMor... | [
-0.07949482649564743,
0.003999642096459866,
-0.000503388699144125,
0.07455523312091827,
0.18542110919952393,
0.01766563206911087,
0.12547491490840912,
0.11496234685182571,
-0.13007980585098267,
0.019242245703935623,
0.08275371044874191,
0.13955703377723694,
0.02691587433218956,
0.123653575... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wa... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | automatic-speech-recognition | Pinwheel/wav2vec2-base-timit-demo-colab | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4812
* Wer: 0.3557
Model description
-----------------
More information needed
Intended uses & limi... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps... | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 3... | [
56,
130,
4,
33
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size... | [
-0.10822959244251251,
0.099675752222538,
-0.003300065640360117,
0.06340761482715607,
0.10860926657915115,
-0.020167825743556023,
0.1288769543170929,
0.15049001574516296,
-0.09271349757909775,
0.07457399368286133,
0.12636904418468475,
0.1505885273218155,
0.04232662543654442,
0.1459311991930... |
null | null | null |
# FashionMNIST
PyTorch Quick Start | {"tags": ["image-classification", "pytorch", "huggingpics", "some_thing"], "metrics": ["accuracy"], "private": false} | image-classification | Ab0/foo-model | [
"pytorch",
"image-classification",
"huggingpics",
"some_thing",
"model-index",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#pytorch #image-classification #huggingpics #some_thing #model-index #region-us
|
# FashionMNIST
PyTorch Quick Start | [] | [
"TAGS\n#pytorch #image-classification #huggingpics #some_thing #model-index #region-us \n"
] | [
28
] | [
"passage: TAGS\n#pytorch #image-classification #huggingpics #some_thing #model-index #region-us \n"
] | [
-0.035085003823041916,
0.07374754548072815,
-0.00933443196117878,
0.05657544732093811,
0.1682635247707367,
0.10385128855705261,
0.04515744000673294,
0.0970364362001419,
0.184097558259964,
-0.028076693415641785,
0.11240050196647644,
0.07390487939119339,
-0.015689007937908173,
0.044986866414... |
null | null | transformers | # BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comme... | {"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]} | text-classification | Abdou/arabert-base-algerian | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Abdou/dz-sentiment-yt-comments",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
=============================================================
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
59
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
-0.048443201929330826,
0.09173168241977692,
-0.005922634154558182,
0.027874184772372246,
0.15967309474945068,
0.03353098779916763,
0.1223534643650055,
0.1015472337603569,
0.09503154456615448,
-0.05138532817363739,
0.11538945883512497,
0.2223052829504013,
0.012671849690377712,
0.06934879720... |
null | null | transformers | # BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comme... | {"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]} | text-classification | Abdou/arabert-large-algerian | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Abdou/dz-sentiment-yt-comments",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
=============================================================
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
59
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
-0.048443201929330826,
0.09173168241977692,
-0.005922634154558182,
0.027874184772372246,
0.15967309474945068,
0.03353098779916763,
0.1223534643650055,
0.1015472337603569,
0.09503154456615448,
-0.05138532817363739,
0.11538945883512497,
0.2223052829504013,
0.012671849690377712,
0.06934879720... |
null | null | transformers | # BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comme... | {"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]} | text-classification | Abdou/arabert-medium-algerian | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Abdou/dz-sentiment-yt-comments",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
=============================================================
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
59
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
-0.048443201929330826,
0.09173168241977692,
-0.005922634154558182,
0.027874184772372246,
0.15967309474945068,
0.03353098779916763,
0.1223534643650055,
0.1015472337603569,
0.09503154456615448,
-0.05138532817363739,
0.11538945883512497,
0.2223052829504013,
0.012671849690377712,
0.06934879720... |
null | null | transformers | # BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comme... | {"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]} | text-classification | Abdou/arabert-mini-algerian | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Abdou/dz-sentiment-yt-comments",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
=============================================================
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
59
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
-0.048443201929330826,
0.09173168241977692,
-0.005922634154558182,
0.027874184772372246,
0.15967309474945068,
0.03353098779916763,
0.1223534643650055,
0.1015472337603569,
0.09503154456615448,
-0.05138532817363739,
0.11538945883512497,
0.2223052829504013,
0.012671849690377712,
0.06934879720... |
null | null | null | Model details available [here](https://github.com/awasthiabhijeet/PIE) | {} | null | AbhijeetA/PIE | [
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Model details available here | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] | [
0.024608636274933815,
-0.026205500587821007,
-0.009666500613093376,
-0.10395516455173492,
0.08638657629489899,
0.059816278517246246,
0.01882290467619896,
0.020661840215325356,
0.23975107073783875,
-0.005599027033895254,
0.1219947561621666,
0.0015615287702530622,
-0.037353623658418655,
0.03... |
null | null | transformers |
# HarryPotter DialoGPT Model | {"tags": ["conversational"]} | text-generation | AbhinavSaiTheGreat/DialoGPT-small-harrypotter | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# HarryPotter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
51
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
-0.009697278961539268,
0.03208012506365776,
-0.007204889785498381,
0.004809224978089333,
0.16726240515708923,
0.014898733235895634,
0.09765533357858658,
0.13672804832458496,
-0.007841327227652073,
-0.031050153076648712,
0.14490588009357452,
0.20411323010921478,
-0.006439372431486845,
0.066... |
null | null | transformers |
## Pretrained Model BERT: base model (cased)
BERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/1810.04805) and first released in this [repository](https://github.com/google-research/bert). This mode... | {} | text-classification | Abirate/bert_fine_tuned_cola | [
"transformers",
"tf",
"bert",
"text-classification",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [] | TAGS
#transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## Pretrained Model BERT: base model (cased)
BERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.
## P... | [
"## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.",
... | [
"TAGS\n#transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was intr... | [
48,
76,
96,
126,
5,
18,
20
] | [
"passage: TAGS\n#transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was i... | [
-0.0557975098490715,
0.07406655699014664,
-0.0033413756173104048,
0.04518413171172142,
0.06270574033260345,
-0.0457032136619091,
-0.04789060354232788,
0.019957534968852997,
0.011396362446248531,
0.08350992202758789,
0.057704858481884,
0.0367988757789135,
-0.01007442269474268,
0.15551836788... |
null | null | transformers |
# jeff's 100% authorized brain scan | {"tags": ["conversational"]} | text-generation | AccurateIsaiah/DialoGPT-small-jefftastic | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# jeff's 100% authorized brain scan | [
"# jeff's 100% authorized brain scan"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# jeff's 100% authorized brain scan"
] | [
51,
10
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# jeff's 100% authorized brain scan"
] | [
-0.039373498409986496,
0.08864568918943405,
-0.0034721335396170616,
-0.04092903807759285,
0.03520398586988449,
0.010666578076779842,
0.1014297679066658,
0.15743567049503326,
-0.009883549064397812,
0.0465865358710289,
0.10276548564434052,
0.164160817861557,
0.008088107220828533,
0.150872960... |
null | null | transformers |
# Mozark's Brain Uploaded to Hugging Face | {"tags": ["conversational"]} | text-generation | AccurateIsaiah/DialoGPT-small-mozark | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mozark's Brain Uploaded to Hugging Face | [
"# Mozark's Brain Uploaded to Hugging Face"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mozark's Brain Uploaded to Hugging Face"
] | [
51,
13
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Mozark's Brain Uploaded to Hugging Face"
] | [
-0.0031721845734864473,
-0.07610717415809631,
-0.003111664205789566,
0.008046663366258144,
0.19636450707912445,
0.04983844608068466,
0.11955051124095917,
0.15140044689178467,
0.02104688249528408,
0.005679826717823744,
0.06520772725343704,
0.14508789777755737,
0.06304925680160522,
0.0390765... |
null | null | transformers |
# Mozark's Brain Uploaded to Hugging Face but v2 | {"tags": ["conversational"]} | text-generation | AccurateIsaiah/DialoGPT-small-mozarkv2 | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mozark's Brain Uploaded to Hugging Face but v2 | [
"# Mozark's Brain Uploaded to Hugging Face but v2"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mozark's Brain Uploaded to Hugging Face but v2"
] | [
51,
16
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Mozark's Brain Uploaded to Hugging Face but v2"
] | [
0.003168402938172221,
-0.08590717613697052,
-0.002794380998238921,
0.01660344749689102,
0.19002312421798706,
0.054376550018787384,
0.1066361516714096,
0.1636747568845749,
0.01005462184548378,
0.0049474178813397884,
0.06715501844882965,
0.13582928478717804,
0.06739211827516556,
0.0519790723... |
null | null | transformers |
# Un Filtered brain upload of sinclair | {"tags": ["conversational"]} | text-generation | AccurateIsaiah/DialoGPT-small-sinclair | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Un Filtered brain upload of sinclair | [
"# Un Filtered brain upload of sinclair"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Un Filtered brain upload of sinclair"
] | [
51,
9
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Un Filtered brain upload of sinclair"
] | [
-0.01943400502204895,
0.08245186507701874,
-0.00316404877230525,
0.013878962025046349,
0.14190874993801117,
0.05707838758826256,
0.17225104570388794,
0.11887965351343155,
0.021572496742010117,
-0.025342930108308792,
0.11748050898313522,
0.2545144855976105,
0.008744923397898674,
-0.00352248... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... | text-classification | ActivationAI/distilbert-base-uncased-finetuned-emotion | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2128
* Accuracy: 0.928
* F1: 0.9280
Model description
-----------------
Mor... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn... | [
67,
98,
4,
33
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* le... | [
-0.10365526378154755,
0.11108539253473282,
-0.0026109113823622465,
0.1317654550075531,
0.16546793282032013,
0.045472968369722366,
0.1148209348320961,
0.12493137270212173,
-0.08185860514640808,
0.032128069549798965,
0.10837704688310623,
0.1617085337638855,
0.02285127155482769,
0.09674810618... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-anli_r3` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [anli](https://huggingface.co/datasets/anli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["anli"]} | text-classification | AdapterHub/bert-base-uncased-pf-anli_r3 | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:anli",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-t... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, inst... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classificat... | [
34,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classifi... | [
-0.060997553169727325,
-0.02165103517472744,
-0.0028114498127251863,
0.059509895741939545,
0.19203327596187592,
0.026307489722967148,
0.18563374876976013,
0.04678478464484215,
0.08582951873540878,
0.015614373609423637,
0.03765048459172249,
0.09434476494789124,
0.047018587589263916,
0.06509... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-art` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [art](https://huggingface.co/datasets/art/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[ad... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["art"]} | null | AdapterHub/bert-base-uncased-pf-art | [
"adapter-transformers",
"bert",
"en",
"dataset:art",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-trans... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install ... | [
"TAGS\n#adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was c... | [
28,
78,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter wa... | [
-0.05836805701255798,
-0.016129881143569946,
-0.0012639745837077498,
0.038258086889982224,
0.18027223646640778,
0.0383332222700119,
0.10476774722337723,
0.05918099358677864,
0.06616906076669693,
0.0383734256029129,
0.04387135058641434,
0.06531771272420883,
0.05052924528717995,
0.0368785709... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-boolq` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification.
This adapter was created for usage with ... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:qa/boolq", "adapter-transformers"], "datasets": ["boolq"]} | text-classification | AdapterHub/bert-base-uncased-pf-boolq | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:qa/boolq",
"en",
"dataset:boolq",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, in... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a predict... | [
42,
82,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a pred... | [
-0.08009015768766403,
0.025012627243995667,
-0.0031497282907366753,
0.026109052821993828,
0.17533522844314575,
-0.00008857827924657613,
0.12279533594846725,
0.07660476863384247,
0.06595240533351898,
0.03541136533021927,
0.019806064665317535,
0.1014779657125473,
0.048795219510793686,
0.0451... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-cola` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification.
This adapter was created fo... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:lingaccept/cola", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-cola | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:lingaccept/cola",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'a... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFir... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a predictio... | [
37,
82,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a predic... | [
-0.05057975649833679,
-0.0180866327136755,
-0.0033590008970350027,
0.04290704056620598,
0.17661334574222565,
0.012858504429459572,
0.13297739624977112,
0.03554709628224373,
0.056126609444618225,
0.04108665511012077,
0.03707744553685188,
0.09924179315567017,
0.026194410398602486,
0.05632592... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-commonsense_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice.
This adapter was cre... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/csqa", "adapter-transformers"], "datasets": ["commonsense_qa"]} | null | AdapterHub/bert-base-uncased-pf-commonsense_qa | [
"adapter-transformers",
"bert",
"adapterhub:comsense/csqa",
"en",
"dataset:commonsense_qa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, i... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usa... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a ... | [
41,
86,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes... | [
-0.08003511279821396,
-0.005911387037485838,
-0.003213708521798253,
0.007182664703577757,
0.16886401176452637,
0.004110119305551052,
0.1408555954694748,
0.05776885896921158,
0.0748690515756607,
0.04646134749054909,
0.012176153250038624,
0.08266710489988327,
0.06562143564224243,
0.024434775... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-comqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["com_qa"]} | question-answering | AdapterHub/bert-base-uncased-pf-comqa | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:com_qa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question a... | [
36,
82,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for questio... | [
-0.08429394662380219,
-0.01947433315217495,
-0.0027758972719311714,
0.019041793420910835,
0.15931561589241028,
0.02989649400115013,
0.1285819113254547,
0.058574870228767395,
0.07099133729934692,
0.032627493143081665,
0.040478989481925964,
0.07366455346345901,
0.06437762826681137,
0.0198482... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2000` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging.
This adapter was created for ... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:chunk/conll2000", "adapter-transformers"], "datasets": ["conll2000"]} | token-classification | AdapterHub/bert-base-uncased-pf-conll2000 | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:chunk/conll2000",
"en",
"dataset:conll2000",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ada... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset... | [
46,
85,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 data... | [
-0.07606091350317001,
-0.002257005078718066,
-0.0021144866477698088,
0.02533949725329876,
0.1479262411594391,
0.028836091980338097,
0.15827929973602295,
0.051306698471307755,
0.0603700689971447,
0.05995500460267067,
0.007360696326941252,
0.10852662473917007,
0.03753936290740967,
0.05568810... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usag... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ner/conll2003", "adapter-transformers"], "datasets": ["conll2003"]} | token-classification | AdapterHub/bert-base-uncased-pf-conll2003 | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:ner/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and... | [
45,
84,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset ... | [
-0.05573133006691933,
0.026048246771097183,
-0.0023946613073349,
0.03199859336018562,
0.14949557185173035,
0.004119238816201687,
0.13960497081279755,
0.04515539109706879,
0.002041463041678071,
0.06360745429992676,
0.021692151203751564,
0.11077336966991425,
0.027136215940117836,
0.048033583... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003_pos` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for ... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:pos/conll2003", "adapter-transformers"], "datasets": ["conll2003"]} | token-classification | AdapterHub/bert-base-uncased-pf-conll2003_pos | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:pos/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'a... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFir... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset... | [
45,
86,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 data... | [
-0.07241908460855484,
-0.009041129611432552,
-0.002200418384745717,
0.025261467322707176,
0.15555880963802338,
0.018586518242955208,
0.15931209921836853,
0.05340324342250824,
0.05159808322787285,
0.053003422915935516,
0.0017033793264999986,
0.1180972307920456,
0.03738030791282654,
0.047634... |
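Several of these records (the conll2003 NER and POS adapters above) include "a prediction head for tagging". As a hedged sketch of what such a head does — the label set, sizes, and weights below are illustrative, not the card's actual parameters — a tagging head applies a per-token linear projection from hidden states to label logits:

```python
import numpy as np

LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]  # illustrative BIO tag set

def tagging_head(hidden_states, w, b):
    """Per-token linear head: (seq_len, hidden) -> (seq_len, num_labels) logits, then argmax."""
    logits = hidden_states @ w + b
    return [LABELS[i] for i in logits.argmax(axis=-1)]

rng = np.random.default_rng(1)
seq_len, hidden = 4, 8               # tiny sizes for illustration
w = rng.normal(size=(hidden, len(LABELS)))
b = np.zeros(len(LABELS))
states = rng.normal(size=(seq_len, hidden))
tags = tagging_head(states, w, b)
print(len(tags) == seq_len)  # True: one predicted tag per token
```

This is the structural difference from the classification adapters later in the dump: a tagging head emits one label per token, while a classification head pools the sequence into a single label.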
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-copa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice.
This adapter was created for u... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/copa", "adapter-transformers"]} | null | AdapterHub/bert-base-uncased-pf-copa | [
"adapter-transformers",
"bert",
"adapterhub:comsense/copa",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choic... | [
32,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple ch... | [
-0.05988139659166336,
-0.041318658739328384,
-0.002947726519778371,
0.008126992732286453,
0.16489844024181366,
0.041419804096221924,
0.1524876207113266,
0.041865769773721695,
0.037036534398794174,
0.0339941643178463,
0.0470198430120945,
0.09813429415225983,
0.0662112906575203,
0.0037155072... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-cosmos_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice.
This adapter was ... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/cosmosqa", "adapter-transformers"], "datasets": ["cosmos_qa"]} | null | AdapterHub/bert-base-uncased-pf-cosmos_qa | [
"adapter-transformers",
"bert",
"adapterhub:comsense/cosmosqa",
"en",
"dataset:cosmos_qa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, in... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usag... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a pr... | [
41,
86,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a... | [
-0.07311957329511642,
-0.027938606217503548,
-0.0037979057524353266,
0.004727288149297237,
0.16952380537986755,
-0.0041707465425133705,
0.1652962863445282,
0.05034941807389259,
0.06497528403997421,
0.04155777767300606,
0.0019795296248048544,
0.07708828896284103,
0.06741926074028015,
0.0466... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-cq` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the *... | {"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/cq", "adapter-transformers"]} | question-answering | AdapterHub/bert-base-uncased-pf-cq | [
"adapter-transformers",
"bert",
"question-answering",
"adapterhub:qa/cq",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-t... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, inst... | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question ans... | [
37,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question ... | [
-0.08601219952106476,
-0.022721845656633377,
-0.0032341713085770607,
0.015087837353348732,
0.16563794016838074,
0.012257436290383339,
0.10703831166028976,
0.062446121126413345,
0.09911995381116867,
0.03623168170452118,
0.036450114101171494,
0.08161510527133942,
0.04805738478899002,
0.01860... |
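The question-answering records (cq above, and drop/duorc/hotpotqa below) each include "a prediction head for question answering". A minimal sketch of an extractive QA head, under the usual assumptions (everything below is illustrative, not taken from the cards): score every token as a span start and as a span end, then pick the best start/end pair with start ≤ end:

```python
import numpy as np

def qa_span(hidden_states, w_start, w_end):
    """Extractive QA head sketch: start/end logits per token, best valid span by summed score."""
    start_logits = hidden_states @ w_start
    end_logits = hidden_states @ w_end
    best, span = -np.inf, (0, 0)
    for s in range(len(start_logits)):
        for e in range(s, len(end_logits)):   # enforce start <= end
            score = start_logits[s] + end_logits[e]
            if score > best:
                best, span = score, (s, e)
    return span

rng = np.random.default_rng(2)
states = rng.normal(size=(6, 8))   # 6 tokens, hidden size 8 (illustrative)
w_start = rng.normal(size=8)
w_end = rng.normal(size=8)
s, e = qa_span(states, w_start, w_end)
print(0 <= s <= e < 6)  # True: the head always returns a valid start <= end span
```

Production implementations typically also cap the span length and mask out question tokens; this sketch omits both for brevity.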
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-drop` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [drop](https://huggingface.co/datasets/drop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["drop"]} | question-answering | AdapterHub/bert-base-uncased-pf-drop | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:drop",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answer... | [
34,
79,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question ans... | [
-0.0677952989935875,
-0.03230288252234459,
-0.002592234406620264,
0.03912556543946266,
0.15691590309143066,
0.032752133905887604,
0.12018798291683197,
0.07308772206306458,
0.09071703255176544,
0.03737544268369675,
0.05842987820506096,
0.1262783259153366,
0.040039341896772385,
0.03747818619... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_p` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["duorc"]} | question-answering | AdapterHub/bert-base-uncased-pf-duorc_p | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question a... | [
35,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for questio... | [
-0.06836381554603577,
-0.008645700290799141,
-0.0034914924763143063,
0.02984936535358429,
0.15950927138328552,
0.03787956386804581,
0.15004298090934753,
0.04978448897600174,
0.0658622682094574,
0.0400506891310215,
0.051501549780368805,
0.0888567790389061,
0.032561011612415314,
0.0333479084... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_s` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["duorc"]} | question-answering | AdapterHub/bert-base-uncased-pf-duorc_s | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question a... | [
35,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for questio... | [
-0.06480386108160019,
-0.009418697096407413,
-0.00346206477843225,
0.029971061274409294,
0.1585463583469391,
0.03818304464221001,
0.15517280995845795,
0.04898108169436455,
0.06943102926015854,
0.0419064499437809,
0.05138804763555527,
0.081756092607975,
0.030663784593343735,
0.0308057088404... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-emo` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[ada... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["emo"]} | text-classification | AdapterHub/bert-base-uncased-pf-emo | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:emo",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transf... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install '... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.\n... | [
33,
78,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification... | [
-0.02701437845826149,
-0.0506429485976696,
-0.0019794744439423084,
0.0030898062977939844,
0.1835860162973404,
0.06260914355516434,
0.12962926924228668,
0.0461985319852829,
0.08866125345230103,
0.01384005043655634,
0.05992849916219711,
0.13380500674247742,
0.04947569593787193,
0.01682644709... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-emotion` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["emotion"]} | text-classification | AdapterHub/bert-base-uncased-pf-emotion | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:emotion",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapte... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, i... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for class... | [
34,
79,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for cl... | [
-0.06666062772274017,
-0.02281641960144043,
-0.002817473839968443,
0.048429958522319794,
0.18548668920993805,
0.05733742564916611,
0.08845143020153046,
0.062496479600667953,
0.10187183320522308,
0.041000667959451675,
0.024165673181414604,
0.09621437638998032,
0.05510709807276726,
-0.004997... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-fce_error_detection` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.
This adapter was created for usage ... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ged/fce", "adapter-transformers"], "datasets": ["fce_error_detection"]} | token-classification | AdapterHub/bert-base-uncased-pf-fce_error_detection | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:ged/fce",
"en",
"dataset:fce_error_detection",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'a... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFir... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce dat... | [
48,
87,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce ... | [
-0.08894431591033936,
-0.03785998374223709,
-0.0023186183534562588,
0.019648313522338867,
0.18893414735794067,
0.0482783243060112,
0.15592119097709656,
0.07392500340938568,
0.13213586807250977,
0.044608306139707565,
-0.028007900342345238,
0.09239660948514938,
0.036122556775808334,
0.077037... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-hellaswag` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice.
This adapter wa... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/hellaswag", "adapter-transformers"], "datasets": ["hellaswag"]} | null | AdapterHub/bert-base-uncased-pf-hellaswag | [
"adapter-transformers",
"bert",
"adapterhub:comsense/hellaswag",
"en",
"dataset:hellaswag",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, i... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usa... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a ... | [
40,
85,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes... | [
-0.08323977142572403,
-0.04182260110974312,
-0.0036496310494840145,
0.025907311588525772,
0.16924454271793365,
0.035445790737867355,
0.12068334966897964,
0.024176279082894325,
0.025773542001843452,
0.04952065274119377,
0.004901639651507139,
0.08958639204502106,
0.05095302686095238,
-0.0359... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-hotpotqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering.
This adapter was created for ... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["hotpot_qa"]} | question-answering | AdapterHub/bert-base-uncased-pf-hotpotqa | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:hotpot_qa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install ... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nF... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for q... | [
37,
84,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head fo... | [
-0.09529922902584076,
-0.023692937567830086,
-0.0025819079019129276,
0.027944810688495636,
0.15874074399471283,
0.03327721729874611,
0.12470590323209763,
0.0713641494512558,
0.09280839562416077,
0.029594000428915024,
0.028688618913292885,
0.09401710331439972,
0.05842495709657669,
0.0288737... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-imdb` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification.
This adapter was created for ... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/imdb", "adapter-transformers"], "datasets": ["imdb"]} | text-classification | AdapterHub/bert-base-uncased-pf-imdb | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sentiment/imdb",
"en",
"dataset:imdb",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes... | [
43,
82,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and inclu... | [
-0.0756290853023529,
0.001093953032977879,
-0.003225023625418544,
0.018876222893595695,
0.17026397585868835,
0.006392668467015028,
0.17352858185768127,
0.06613791733980179,
0.08152066171169281,
0.047276031225919724,
0.010348938405513763,
0.10951226204633713,
0.048071641474962234,
0.0168285... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-mit_movie_trivia` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.
This adapter... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ner/mit_movie_trivia", "adapter-transformers"]} | token-classification | AdapterHub/bert-base-uncased-pf-mit_movie_trivia | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:ner/mit_movie_trivia",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, ... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Us... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset a... | [
41,
90,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia datase... | [
-0.06728016585111618,
-0.027926914393901825,
-0.001842428115196526,
0.023197481408715248,
0.18482151627540588,
0.017976775765419006,
0.177022784948349,
0.07692977041006088,
0.091158427298069,
0.05131422355771065,
-0.04181302711367607,
0.10296674817800522,
0.03913160413503647,
0.04168137535... |
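Each record ends with a truncated `embeddings` vector (768-dimensional, per the schema in the file head). Assuming these are dense text embeddings of the card content — an assumption, since the dump only records their values and length — comparing two records reduces to cosine similarity over their vectors:

```python
import math


def cosine_similarity(a, b):
    # a, b: embedding vectors, e.g. the `embeddings` field of two records.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Illustrative 3-component prefixes of two records' vectors
# (hypothetical toy values; the real vectors have 768 components).
v_imdb = [-0.0756, 0.0011, -0.0032]
v_movie = [-0.0673, -0.0279, -0.0018]

sim = cosine_similarity(v_imdb, v_movie)
```

With only three components the number is not meaningful, but the computation is identical for the full 768-dimensional vectors.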
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-mnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification.
This adapter was created for usag... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/multinli", "adapter-transformers"], "datasets": ["multi_nli"]} | text-classification | AdapterHub/bert-base-uncased-pf-mnli | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/multinli",
"en",
"dataset:multi_nli",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and include... | [
46,
84,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and incl... | [
-0.055667489767074585,
-0.02241385355591774,
-0.0027240572962909937,
0.030470946803689003,
0.16987644135951996,
0.01909445971250534,
0.21448704600334167,
0.04974238574504852,
0.07144055515527725,
0.052399955689907074,
-0.005643308162689209,
0.11584603041410446,
0.017118044197559357,
0.0409... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-mrpc` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with t... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sts/mrpc", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-mrpc | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sts/mrpc",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for cla... | [
37,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for ... | [
-0.07484020292758942,
-0.017599472776055336,
-0.002444884739816189,
0.028674708679318428,
0.19968575239181519,
0.024499638006091118,
0.13842375576496124,
0.02961345762014389,
0.046421315521001816,
0.04196861386299133,
0.06553839892148972,
0.11059202998876572,
0.02588162012398243,
0.0506472... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-multirc` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification.
This adapter was created for usage... | {"language": ["en"], "tags": ["text-classification", "adapterhub:rc/multirc", "bert", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-multirc | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:rc/multirc",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ada... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head ... | [
36,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction he... | [
-0.056546974927186966,
-0.04545219987630844,
-0.002678577322512865,
0.038605302572250366,
0.18843859434127808,
0.042826712131500244,
0.19662299752235413,
0.0224478617310524,
0.03658852353692055,
0.035585545003414154,
0.04839418828487396,
0.1142050102353096,
0.01902354136109352,
0.018154542... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-newsqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering.
This adapter was created for usage wi... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["newsqa"]} | question-answering | AdapterHub/bert-base-uncased-pf-newsqa | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:newsqa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question ... | [
35,
81,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for questi... | [
-0.06382467597723007,
0.00004553420876618475,
-0.0027437719982117414,
0.007998292334377766,
0.15380030870437622,
0.029713958501815796,
0.15872126817703247,
0.0588369183242321,
0.05361460521817207,
0.024160336703062057,
0.0700095146894455,
0.06472714245319366,
0.046474434435367584,
0.031290... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-pmb_sem_tagging` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usag... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:semtag/pmb", "adapter-transformers"]} | token-classification | AdapterHub/bert-base-uncased-pf-pmb_sem_tagging | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:semtag/pmb",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a predict... | [
38,
88,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a pred... | [
-0.06455903500318527,
-0.02644280157983303,
-0.003579622134566307,
0.010479917749762535,
0.17856314778327942,
-0.0021869137417525053,
0.14723679423332214,
0.04617758467793465,
0.04631425067782402,
0.03906962275505066,
0.013586083427071571,
0.11936941742897034,
0.028938202187418938,
0.03691... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-qnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with t... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/qnli", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-qnli | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/qnli",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for cla... | [
38,
85,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for ... | [
-0.0716300755739212,
0.015563433058559895,
-0.0028344907332211733,
0.039634883403778076,
0.17043009400367737,
0.005544697400182486,
0.13880716264247894,
0.06451492011547089,
0.08222184330224991,
0.03594439849257469,
0.01301303505897522,
0.09273798763751984,
0.045093242079019547,
0.01269599... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-qqp` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["text-classification", "adapter-transformers", "adapterhub:sts/qqp", "bert"]} | text-classification | AdapterHub/bert-base-uncased-pf-qqp | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sts/qqp",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tr... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, insta... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classi... | [
37,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for cla... | [
-0.0847463458776474,
0.027039220556616783,
-0.003209249582141638,
0.02822030335664749,
0.1775553971529007,
0.012232816778123379,
0.1157778725028038,
0.0679129883646965,
0.08767501264810562,
0.03378531336784363,
0.037216249853372574,
0.09023064374923706,
0.054993316531181335,
0.050404243171... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-quail` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["quail"]} | null | AdapterHub/bert-base-uncased-pf-quail | [
"adapter-transformers",
"bert",
"en",
"dataset:quail",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-t... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, inst... | [
"TAGS\n#adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter... | [
29,
80,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adap... | [
-0.06141962856054306,
0.02149612084031105,
-0.0018654053565114737,
0.03203333541750908,
0.18236218392848969,
0.026906387880444527,
0.11845558136701584,
0.0461234413087368,
0.0625784620642662,
0.014263185672461987,
0.05273230001330376,
0.07881125807762146,
0.04854791238903999,
0.00557189201... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-quartz` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with ... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["quartz"]} | null | AdapterHub/bert-base-uncased-pf-quartz | [
"adapter-transformers",
"bert",
"en",
"dataset:quartz",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, in... | [
"TAGS\n#adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adap... | [
29,
80,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis a... | [
-0.06762102991342545,
-0.004624171182513237,
-0.0017360174097120762,
0.03689789026975632,
0.17673854529857635,
0.03599271923303604,
0.13059136271476746,
0.04002678021788597,
0.09517233073711395,
0.039172250777482986,
0.06270283460617065,
0.08523861318826675,
0.04052644595503807,
0.01378002... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-quoref` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering.
This adapter was created for usage wi... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["quoref"]} | question-answering | AdapterHub/bert-base-uncased-pf-quoref | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:quoref",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question ... | [
35,
81,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for questi... | [
-0.07469946891069412,
0.007003358565270901,
-0.003153465921059251,
0.033292513340711594,
0.16727502644062042,
0.02214272879064083,
0.13517725467681885,
0.05700011923909187,
0.07351814955472946,
0.04472759738564491,
0.056770697236061096,
0.07517453283071518,
0.04088360816240311,
0.034448310... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-race` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with th... | {"language": ["en"], "tags": ["adapterhub:rc/race", "bert", "adapter-transformers"], "datasets": ["race"]} | null | AdapterHub/bert-base-uncased-pf-race | [
"adapter-transformers",
"bert",
"adapterhub:rc/race",
"en",
"dataset:race",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple cho... | [
35,
81,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple ... | [
-0.09752549976110458,
0.0028916816227138042,
-0.0016497289761900902,
0.0591721385717392,
0.16825316846370697,
0.04660987854003906,
0.14074592292308807,
0.06099995970726013,
0.0682573914527893,
0.033482033759355545,
0.06939789652824402,
0.07669036090373993,
0.059390146285295486,
0.026054872... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-record` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage wi... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:rc/record", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-record | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:rc/record",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for... | [
36,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head ... | [
-0.0631992444396019,
0.005081566981971264,
-0.002711985493078828,
0.027401452884078026,
0.18088014423847198,
0.028224986046552658,
0.1443743109703064,
0.0505165196955204,
0.0634913444519043,
0.02837553806602955,
0.03721502795815468,
0.10489144176244736,
0.03266352042555809,
0.0212302580475... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-rotten_tomatoes` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classificatio... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/rotten_tomatoes", "adapter-transformers"], "datasets": ["rotten_tomatoes"]} | text-classification | AdapterHub/bert-base-uncased-pf-rotten_tomatoes | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sentiment/rotten_tomatoes",
"en",
"dataset:rotten_tomatoes",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usa... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the se... | [
51,
90,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the... | [
-0.05068407580256462,
0.026295484974980354,
-0.00208571320399642,
0.02990788035094738,
0.1784546971321106,
0.045580703765153885,
0.16682784259319305,
0.1035604476928711,
0.1272713541984558,
0.036320362240076065,
-0.08384685963392258,
0.12920619547367096,
0.030263016000390053,
0.01637761294... |
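Every card in these rows follows the same id pattern, `AdapterHub/bert-base-uncased-pf-<task>` (e.g. `...-pf-rotten_tomatoes` above). A minimal sketch of splitting such an id into its parts — `parse_adapter_id` is a hypothetical helper written for illustration, not part of any library:

```python
def parse_adapter_id(adapter_id: str):
    """Split an id like 'AdapterHub/bert-base-uncased-pf-sst2' into
    (organization, base_model, task). Illustrative helper only; assumes
    the '-pf-' convention used by the cards above."""
    org, name = adapter_id.split("/", 1)
    base, task = name.split("-pf-", 1)
    return org, base, task

print(parse_adapter_id("AdapterHub/bert-base-uncased-pf-rotten_tomatoes"))
# → ('AdapterHub', 'bert-base-uncased', 'rotten_tomatoes')
```

The same split works for every id listed in this dump, since all of them share the `-pf-` separator between base model and task name.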
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-rte` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/rte", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-rte | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/rte",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tr... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, insta... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classi... | [
36,
81,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for cla... | [
-0.04394754767417908,
-0.023261891677975655,
-0.00274193799123168,
0.04945578798651695,
0.16301704943180084,
0.028168978169560432,
0.14820565283298492,
0.059410445392131805,
0.060162220150232315,
0.022935032844543457,
0.022557979449629784,
0.08894114196300507,
0.03695854917168617,
0.036384... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-scicite` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["scicite"]} | text-classification | AdapterHub/bert-base-uncased-pf-scicite | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:scicite",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapte... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, i... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for class... | [
34,
80,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for cl... | [
-0.07331566512584686,
-0.00776580860838294,
-0.002367740962654352,
0.03678159415721893,
0.17926035821437836,
0.034871604293584824,
0.14650532603263855,
0.04922257736325264,
0.12343898415565491,
0.04753324016928673,
0.043908942490816116,
0.10045607388019562,
0.06237318739295006,
0.049983125... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-scitail` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.
This adapter was created for usa... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/scitail", "adapter-transformers"], "datasets": ["scitail"]} | text-classification | AdapterHub/bert-base-uncased-pf-scitail | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/scitail",
"en",
"dataset:scitail",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes... | [
43,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and inclu... | [
-0.05867182835936546,
0.04187051206827164,
-0.003119073109701276,
0.03281543776392937,
0.17163611948490143,
-0.005022227298468351,
0.1579284816980362,
0.06489366292953491,
0.07278335839509964,
0.06319823861122131,
0.016867592930793762,
0.11421796679496765,
0.04654529318213463,
0.0618638955... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-sick` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.
This adapter was created for usage with t... | {"language": ["en"], "tags": ["text-classification", "adapter-transformers", "bert", "adapterhub:nli/sick"], "datasets": ["sick"]} | text-classification | AdapterHub/bert-base-uncased-pf-sick | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/sick",
"en",
"dataset:sick",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a predictio... | [
43,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a predic... | [
-0.059863124042749405,
0.04498805105686188,
-0.0030083300080150366,
0.03957868739962578,
0.18030190467834473,
0.013027384877204895,
0.1386716067790985,
0.07887903600931168,
0.07233333587646484,
0.05712994560599327,
0.023047545924782753,
0.11928104609251022,
0.03693542629480362,
0.038629796... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-snli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["snli"]} | text-classification | AdapterHub/bert-base-uncased-pf-snli | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:snli",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tran... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification... | [
35,
82,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classificat... | [
-0.07116951793432236,
0.023081373423337936,
-0.0030622438061982393,
0.03799805790185928,
0.1788833886384964,
0.009305101819336414,
0.18817760050296783,
0.03769344836473465,
0.06518907099962234,
0.02292129397392273,
0.030085638165473938,
0.11762137711048126,
0.0546862930059433,
0.0752001628... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-social_i_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice.
This adapter was created ... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["social_i_qa"]} | null | AdapterHub/bert-base-uncased-pf-social_i_qa | [
"adapter-transformers",
"bert",
"en",
"dataset:social_i_qa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, instal... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\... | [
"TAGS\n#adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choic... | [
32,
86,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple ch... | [
-0.09970592707395554,
-0.03520241752266884,
-0.0028068784158676863,
0.015052556991577148,
0.17346309125423431,
0.022904587909579277,
0.1288675218820572,
0.03705090656876564,
0.10800201445817947,
0.017926858738064766,
0.01888963207602501,
0.07518205046653748,
0.06508474797010422,
0.04860711... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-squad` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.
This adapter was created for usage... | {"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/squad1", "adapter-transformers"], "datasets": ["squad"]} | question-answering | AdapterHub/bert-base-uncased-pf-squad | [
"adapter-transformers",
"bert",
"question-answering",
"adapterhub:qa/squad1",
"en",
"dataset:squad",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a predic... | [
44,
84,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a pre... | [
-0.07807569950819016,
-0.025752801448106766,
-0.0029690838418900967,
0.02338295243680477,
0.1758498251438141,
-0.004271816462278366,
0.13816401362419128,
0.0704936683177948,
0.09214037656784058,
0.04076017066836357,
0.012860158458352089,
0.08247873932123184,
0.04099889099597931,
0.01106990... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-squad_v2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.
This adapter was created for us... | {"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/squad2", "adapter-transformers"], "datasets": ["squad_v2"]} | question-answering | AdapterHub/bert-base-uncased-pf-squad_v2 | [
"adapter-transformers",
"bert",
"question-answering",
"adapterhub:qa/squad2",
"en",
"dataset:squad_v2",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install ... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nF... | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a ... | [
47,
87,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes... | [
-0.08959303051233292,
-0.0800928846001625,
-0.002109825611114502,
0.014859231188893318,
0.17421025037765503,
0.03986210748553276,
0.12903381884098053,
0.08932571858167648,
0.11410865187644958,
0.03059842810034752,
-0.03957613557577133,
0.08574852347373962,
0.05698542669415474,
0.0260497592... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-sst2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.
This adapter was created fo... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/sst-2", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-sst2 | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sentiment/sst-2",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'a... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFir... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a predictio... | [
38,
84,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a predic... | [
-0.06721916049718857,
0.008885465562343597,
-0.0032678369898349047,
0.02311023883521557,
0.17421379685401917,
0.008396070450544357,
0.1657990664243698,
0.05718222260475159,
0.07172773033380508,
0.053326014429330826,
0.03245044872164726,
0.08169252425432205,
0.05445817485451698,
0.060086403... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-stsb` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sts/sts-b", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-stsb | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sts/sts-b",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, in... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for c... | [
39,
86,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head fo... | [
-0.06751745194196701,
-0.010997641831636429,
-0.0029483523685485125,
0.014848065562546253,
0.1882697343826294,
-0.0008821141091175377,
0.16652396321296692,
0.04079752415418625,
0.05256408452987671,
0.04807468131184578,
0.0402386374771595,
0.07604765892028809,
0.041586436331272125,
0.072558... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-swag` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["swag"]} | null | AdapterHub/bert-base-uncased-pf-swag | [
"adapter-transformers",
"bert",
"en",
"dataset:swag",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tra... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, instal... | [
"TAGS\n#adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter wa... | [
29,
80,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter... | [
-0.059914086014032364,
-0.024620357900857925,
-0.0011084262514486909,
0.019965821877121925,
0.17971013486385345,
0.037087421864271164,
0.12657104432582855,
0.032256849110126495,
0.0726638212800026,
0.02201336994767189,
0.0628814548254013,
0.06951986998319626,
0.04736171290278435,
0.0437791... |
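Rows in this dump pair each chunk of processed card text with a token count (e.g. `[29, 80, 57, 30, 45]` in the record above). A toy sketch of producing such per-chunk counts — whitespace splitting stands in here for the real model tokenizer, which these counts were actually computed with:

```python
def token_counts(chunks):
    # Whitespace split is only a stand-in for the real tokenizer;
    # the dump's counts come from a subword tokenizer.
    return [len(c.split()) for c in chunks]

print(token_counts(["first chunk of text", "second one"]))  # [4, 2]
```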
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-trec` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["trec"]} | text-classification | AdapterHub/bert-base-uncased-pf-trec | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:trec",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tran... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification... | [
34,
79,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classificat... | [
-0.05174366757273674,
-0.046926070004701614,
-0.0024467569310218096,
0.04119610786437988,
0.18585623800754547,
0.042695462703704834,
0.1189441829919815,
0.048163048923015594,
0.052967049181461334,
0.016375400125980377,
0.05344178155064583,
0.1110185980796814,
0.03421730920672417,
0.0246662... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_deprel` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usag... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:deprel/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | token-classification | AdapterHub/bert-base-uncased-pf-ud_deprel | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:deprel/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ew... | [
51,
89,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud... | [
-0.04051785171031952,
-0.04068269208073616,
-0.003224774030968547,
0.012158120051026344,
0.18954716622829437,
0.05829472094774246,
0.17464812099933624,
0.06769035011529922,
0.10020217299461365,
0.023531941697001457,
-0.05858209356665611,
0.09870229661464691,
0.041226826608181,
0.0212598517... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_en_ewt` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing.
This adapter was created for u... | {"language": ["en"], "tags": ["bert", "adapterhub:dp/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | null | AdapterHub/bert-base-uncased-pf-ud_en_ewt | [
"adapter-transformers",
"bert",
"adapterhub:dp/ud_ewt",
"en",
"dataset:universal_dependencies",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us
| Adapter 'AdapterHub/bert-base-uncased-pf-ud\_en\_ewt' for bert-base-uncased
===========================================================================
An adapter for the 'bert-base-uncased' model that was trained on the dp/ud\_ewt dataset and includes a prediction head for dependency parsing.
This adapter was crea... | [] | [
"TAGS\n#adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n"
] | [
36
] | [
"passage: TAGS\n#adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n"
] | [
-0.07226882874965668,
-0.02208099327981472,
-0.00858865212649107,
-0.03436117619276047,
0.0960790365934372,
0.07635773718357086,
0.1282142698764801,
0.016133278608322144,
0.1846838891506195,
-0.05140646547079086,
0.11908279359340668,
0.10398616641759872,
-0.028887808322906494,
0.0125936297... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_pos` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with th... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:pos/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | token-classification | AdapterHub/bert-base-uncased-pf-ud_pos | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:pos/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tra... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, instal... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset... | [
49,
85,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt data... | [
-0.07727738469839096,
-0.043847814202308655,
-0.0026412010192871094,
0.01470312848687172,
0.19099178910255432,
0.053104598075151443,
0.1567039042711258,
0.0649162083864212,
0.11091810464859009,
0.012311178259551525,
-0.027378521859645844,
0.0986839160323143,
0.04211809113621712,
0.06703875... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-wic` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wordsence/wic](https://adapterhub.ml/explore/wordsence/wic/) dataset and includes a prediction head for classification.
This adapter was created for usa... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:wordsence/wic", "adapter-transformers"]} | text-classification | AdapterHub/bert-base-uncased-pf-wic | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:wordsence/wic",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction hea... | [
37,
81,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction ... | [
-0.0692567527294159,
0.016001611948013306,
-0.0028586587868630886,
0.028686312958598137,
0.1482207477092743,
0.012334628961980343,
0.1396930068731308,
0.0379987433552742,
0.06683724373579025,
0.038291752338409424,
0.04393107816576958,
0.07410100847482681,
0.05246869474649429,
0.02592541649... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-wikihop` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering.
This adapter was created for u... | {"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/wikihop", "adapter-transformers"]} | question-answering | AdapterHub/bert-base-uncased-pf-wikihop | [
"adapter-transformers",
"bert",
"question-answering",
"adapterhub:qa/wikihop",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install ... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nF... | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head f... | [
37,
83,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction hea... | [
-0.06845208257436752,
-0.028924813494086266,
-0.0030588589143007994,
0.026663241907954216,
0.14647099375724792,
0.005902788136154413,
0.1159442737698555,
0.0741325318813324,
0.10568834841251373,
0.027075281366705894,
0.026002369821071625,
0.08937019109725952,
0.06102767214179039,
0.0219270... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-winogrande` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice.
This adapter... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/winogrande", "adapter-transformers"], "datasets": ["winogrande"]} | null | AdapterHub/bert-base-uncased-pf-winogrande | [
"adapter-transformers",
"bert",
"adapterhub:comsense/winogrande",
"en",
"dataset:winogrande",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First,... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## U... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and include... | [
40,
85,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and incl... | [
-0.0609574019908905,
0.008045891299843788,
-0.0032151208724826574,
0.03047424741089344,
0.15771332383155823,
0.007417172659188509,
0.15947243571281433,
0.043027013540267944,
0.017253652215003967,
0.033763587474823,
0.02306087501347065,
0.0834675058722496,
0.039416734129190445,
0.0145097412... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-wnut_17` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the *... | {"language": ["en"], "tags": ["token-classification", "bert", "adapter-transformers"], "datasets": ["wnut_17"]} | token-classification | AdapterHub/bert-base-uncased-pf-wnut_17 | [
"adapter-transformers",
"bert",
"token-classification",
"en",
"dataset:wnut_17",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-trans... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install ... | [
"TAGS\n#adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagg... | [
37,
84,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for t... | [
-0.08211524784564972,
0.00914554763585329,
-0.0021787318401038647,
0.048059847205877304,
0.16799697279930115,
0.007411843631416559,
0.11160780489444733,
0.0432235524058342,
0.06079093739390373,
0.026052579283714294,
0.042313262820243835,
0.10137537866830826,
0.03363113850355148,
0.05183324... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-yelp_polarity` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification.
This adapter was cre... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["yelp_polarity"]} | text-classification | AdapterHub/bert-base-uncased-pf-yelp_polarity | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:yelp_polarity",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, ins... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a predict... | [
38,
88,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a pred... | [
-0.05562546104192734,
0.0025414456613361835,
-0.003375509288161993,
0.02222864329814911,
0.19802147150039673,
0.008972598239779472,
0.16032768785953522,
0.03957366570830345,
0.04720002040266991,
0.041491858661174774,
0.044117581099271774,
0.09942679852247238,
0.04799981415271759,
0.0365187... |
null | null | adapter-transformers |
# Adapter `AdapterHub/bioASQyesno` for facebook/bart-base
An [adapter](https://adapterhub.ml) for the `facebook/bart-base` model that was trained on the [qa/bioasq](https://adapterhub.ml/explore/qa/bioasq/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/a... | {"tags": ["adapterhub:qa/bioasq", "adapter-transformers", "bart"]} | null | AdapterHub/bioASQyesno | [
"adapter-transformers",
"bart",
"adapterhub:qa/bioasq",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#adapter-transformers #bart #adapterhub-qa/bioasq #region-us
|
# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base
An adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of tra... | [
"# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers... | [
"TAGS\n#adapter-transformers #bart #adapterhub-qa/bioasq #region-us \n",
"# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage... | [
22,
62,
57,
49,
25
] | [
"passage: TAGS\n#adapter-transformers #bart #adapterhub-qa/bioasq #region-us \n# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\... | [
-0.13004747033119202,
0.010143529623746872,
-0.0017046782886609435,
-0.037370599806308746,
0.08919916301965714,
0.00696165906265378,
0.18489845097064972,
0.10339701920747757,
0.2552741765975952,
0.03196858987212181,
0.04942413792014122,
-0.04417141154408455,
0.03792404383420944,
0.24184118... |
null | null | adapter-transformers |
# Adapter `hSterz/narrativeqa` for facebook/bart-base
An [adapter](https://adapterhub.ml) for the `facebook/bart-base` model that was trained on the [qa/narrativeqa](https://adapterhub.ml/explore/qa/narrativeqa/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter... | {"tags": ["adapterhub:qa/narrativeqa", "adapter-transformers", "bart"], "datasets": ["narrativeqa"]} | null | AdapterHub/narrativeqa | [
"adapter-transformers",
"bart",
"adapterhub:qa/narrativeqa",
"dataset:narrativeqa",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us
|
# Adapter 'hSterz/narrativeqa' for facebook/bart-base
An adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of tr... | [
"# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformer... | [
"TAGS\n#adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us \n",
"# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-transfor... | [
31,
61,
57,
5,
4
] | [
"passage: TAGS\n#adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us \n# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-trans... | [
-0.09562403708696365,
-0.13347218930721283,
-0.0033293410670012236,
-0.025264326483011246,
0.11188049614429474,
0.08142675459384918,
0.19874152541160583,
0.003196367993950844,
0.21527643501758575,
-0.1010078638792038,
0.0653453916311264,
-0.007691832724958658,
0.06498424708843231,
0.142923... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-anli_r3` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [anli](https://huggingface.co/datasets/anli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-tran... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["anli"]} | text-classification | AdapterHub/roberta-base-pf-anli_r3 | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:anli",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base
An adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
... | [
"# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-tr... | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis... | [
35,
73,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nT... | [
-0.0002479679824318737,
-0.04250883311033249,
-0.0018236670875921845,
0.04693203046917915,
0.1944967806339264,
0.02546229027211666,
0.1700686514377594,
0.06769715994596481,
0.021171944215893745,
0.03024192713201046,
0.03730807453393936,
0.08629783242940903,
0.06480363756418228,
0.036679662... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-art` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [art](https://huggingface.co/datasets/art/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transform... | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["art"]} | null | AdapterHub/roberta-base-pf-art | [
"adapter-transformers",
"roberta",
"en",
"dataset:art",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base
An adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_No... | [
"# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transf... | [
"TAGS\n#adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for u... | [
29,
68,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created fo... | [
-0.03226030245423317,
-0.07350148260593414,
-0.0023190376814454794,
0.045486364513635635,
0.17753072082996368,
0.030539512634277344,
0.12709765136241913,
0.06384480744600296,
0.016873590648174286,
0.05171020328998566,
0.01917562261223793,
0.08564990758895874,
0.03977428749203682,
0.0783318... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-boolq` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:qa/boolq", "adapter-transformers"], "datasets": ["boolq"]} | text-classification | AdapterHub/roberta-base-pf-boolq | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:qa/boolq",
"en",
"dataset:boolq",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':... | [
"# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for... | [
43,
72,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head ... | [
-0.03880169615149498,
-0.008361022919416428,
-0.003110624384135008,
0.02814510278403759,
0.17646558582782745,
0.01739988476037979,
0.1325577050447464,
0.07295648753643036,
0.023686567321419716,
0.020771987736225128,
0.040390923619270325,
0.07917774468660355,
0.06608796119689941,
0.04810937... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-cola` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification.
This adapter was created for usage with th... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:lingaccept/cola", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-cola | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:lingaccept/cola",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base
An adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transfor... | [
"# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'ad... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for c... | [
38,
72,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head fo... | [
0.015135260298848152,
-0.05318101495504379,
-0.0029047802090644836,
0.020471647381782532,
0.1736154705286026,
0.025450674816966057,
0.13035018742084503,
0.050126634538173676,
0.0032738028094172478,
0.04222571849822998,
0.05000452324748039,
0.08624926954507828,
0.04538708180189133,
0.023829... |

---

# Adapter `AdapterHub/roberta-base-pf-commonsense_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transformers](https://github.com/adapter-hub/adapter-transformers)** library.

**Metadata:** language: en · dataset: commonsense_qa · tags: roberta, adapterhub:comsense/csqa, adapter-transformers · arXiv: 2104.08247

## Usage

First, install `adapter-transformers` from PyPI (`pip install -U adapter-transformers`).
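A multiple-choice head typically scores one (question, candidate) pair per answer option and picks the argmax. The small helper below illustrates that input layout; the pairing convention is a general assumption about how such heads are fed, not code taken from this card:

```python
from typing import List, Tuple


def build_choice_inputs(question: str, choices: List[str]) -> List[Tuple[str, str]]:
    """Pair a question with each candidate answer.

    A multiple-choice head scores one (question, choice) pair per candidate
    and selects the highest-scoring pair, so the tokenizer is fed the pairs
    together as one batch.
    """
    return [(question, choice) for choice in choices]


pairs = build_choice_inputs(
    "Where would you store a jar of jam?",
    ["pantry", "garage", "bathtub"],
)
# One pair per candidate, all sharing the same question text.
assert len(pairs) == 3
assert pairs[0] == ("Where would you store a jar of jam?", "pantry")
```

Each pair is then tokenized as a sequence pair before being passed through the model with the adapter active.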
"# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, in... | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction h... | [
42,
76,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a predictio... | [
-0.04951002821326256,
-0.02616850472986698,
-0.0036667482927441597,
0.002022108295932412,
0.15496766567230225,
0.021685641258955002,
0.16750618815422058,
0.04858043044805527,
-0.001195669174194336,
0.036430828273296356,
0.042972177267074585,
0.06992753595113754,
0.08095771074295044,
0.0431... |

---

# Adapter `AdapterHub/roberta-base-pf-comqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/adapter-hub/adapter-transformers)** library.

**Metadata:** language: en · dataset: com_qa · tags: question-answering, roberta, adapter-transformers · arXiv: 2104.08247

## Usage

First, install `adapter-transformers` from PyPI (`pip install -U adapter-transformers`).
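A hedged sketch of extractive question answering with this adapter, assuming `adapter-transformers` is installed (providing `AutoModelWithHeads` and `load_adapter`). The span decoding here is deliberately naive — argmax over start/end logits — and is an illustration, not the card's own code:

```python
def answer_question(question: str, context: str,
                    model_name: str = "roberta-base",
                    adapter_id: str = "AdapterHub/roberta-base-pf-comqa") -> str:
    """Extract an answer span from `context` for `question`.

    Assumes adapter-transformers provides AutoModelWithHeads/load_adapter
    and that the QA head returns start_logits/end_logits.
    """
    from transformers import AutoModelWithHeads, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelWithHeads.from_pretrained(model_name)
    model.active_adapters = model.load_adapter(adapter_id, source="hf")

    inputs = tokenizer(question, context, return_tensors="pt")
    outputs = model(**inputs)
    # Naive decoding: most likely start and end token positions.
    start = int(outputs.start_logits.argmax())
    end = int(outputs.end_logits.argmax())
    span_ids = inputs["input_ids"][0][start : end + 1]
    return tokenizer.decode(span_ids, skip_special_tokens=True)


# Example call (downloads weights, so it is not run at import time):
#   print(answer_question("Who wrote Hamlet?", "Hamlet was written by Shakespeare."))
```

A production decoder would additionally mask question tokens and enforce `start <= end`; this sketch keeps only the core flow.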
"# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapte... | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\... | [
37,
72,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.... | [
-0.021959694102406502,
-0.04722658172249794,
-0.0020901167299598455,
0.0055557857267558575,
0.168034628033638,
0.036897074431180954,
0.11963310837745667,
0.0740930438041687,
0.010512378066778183,
0.03627534210681915,
0.0522238090634346,
0.06858167052268982,
0.0709172934293747,
-0.002986696... |

---

# Adapter `AdapterHub/roberta-base-pf-conll2000` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/adapter-hub/adapter-transformers)** library.

**Metadata:** language: en · dataset: conll2000 · tags: token-classification, roberta, adapterhub:chunk/conll2000, adapter-transformers · arXiv: 2104.08247

## Usage

First, install `adapter-transformers` from PyPI (`pip install -U adapter-transformers`).
"# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adap... | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and include... | [
47,
75,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and incl... | [
-0.05101762339472771,
0.016933875158429146,
-0.002670468995347619,
0.02506471984088421,
0.15045151114463806,
0.015266003087162971,
0.1477363556623459,
0.04395647719502449,
-0.05621742829680443,
0.042503394186496735,
0.049829695373773575,
0.10174845159053802,
0.05283960700035095,
0.08295380... |

---

# Adapter `AdapterHub/roberta-base-pf-conll2003` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/adapter-hub/adapter-transformers)** library.

**Metadata:** language: en · dataset: conll2003 · tags: token-classification, roberta, adapterhub:ner/conll2003, adapter-transformers · arXiv: 2104.08247

## Usage

First, install `adapter-transformers` from PyPI (`pip install -U adapter-transformers`).
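A hedged sketch of NER tagging with this adapter, assuming `adapter-transformers` is installed. Mapping predicted ids back to label strings uses `model.config.id2label`, which the loaded tagging head is assumed to populate — that assumption, and the helper itself, are illustrations rather than the card's own code:

```python
def tag_tokens(text: str,
               model_name: str = "roberta-base",
               adapter_id: str = "AdapterHub/roberta-base-pf-conll2003"):
    """Return (token, predicted-label) pairs for `text`.

    Assumes adapter-transformers provides AutoModelWithHeads/load_adapter,
    and that the loaded tagging head fills config.id2label.
    """
    from transformers import AutoModelWithHeads, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelWithHeads.from_pretrained(model_name)
    model.active_adapters = model.load_adapter(adapter_id, source="hf")

    inputs = tokenizer(text, return_tensors="pt")
    logits = model(**inputs).logits[0]           # (seq_len, num_labels)
    pred_ids = logits.argmax(dim=-1).tolist()
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # Fall back to the raw id string if no label mapping is present.
    return [(tok, model.config.id2label.get(i, str(i)))
            for tok, i in zip(tokens, pred_ids)]


# Example call (downloads weights, so it is not run at import time):
#   for tok, label in tag_tokens("Angela Merkel visited Paris."):
#       print(tok, label)
```

Note that predictions are per subword token; aggregating them back to word-level entities would need an extra alignment step.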
"# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapte... | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a ... | [
46,
74,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes... | [
-0.020700078457593918,
0.0017153254011645913,
-0.0031110194977372885,
0.025641117244958878,
0.1425832062959671,
0.01251554861664772,
0.14843446016311646,
0.03310723602771759,
-0.07274292409420013,
0.04675598442554474,
0.05454360693693161,
0.10993572324514389,
0.03931037336587906,
0.0612526... |