modelId | lastModified | tags | pipeline_tag | files | publishedBy | downloads_last_month | library | modelCard
|---|---|---|---|---|---|---|---|---|
AlexaRyck/KEITH | 2021-01-21T15:42:09.000Z | [] | [
".gitattributes"
] | AlexaRyck | 0 | |||
AlexeyIgnatov/albert-xlarge-v2-squad-v2 | 2021-03-26T11:37:40.000Z | [] | [
".gitattributes"
] | AlexeyIgnatov | 1 | |||
Alfia/anekdotes | 2021-02-28T21:02:56.000Z | [] | [
".gitattributes"
] | Alfia | 0 | |||
Amir99/toxic | 2021-04-09T10:47:58.000Z | [] | [
".gitattributes"
] | Amir99 | 0 | |||
AmirServi/MyModel | 2021-03-24T12:57:36.000Z | [] | [
".gitattributes",
"README.md"
] | AmirServi | 0 | |||
Amro-Kamal/gpt | 2020-12-19T13:24:23.000Z | [] | [
".gitattributes"
] | Amro-Kamal | 0 | |||
Amrrs/wav2vec2-large-xlsr-53-tamil | 2021-03-22T07:04:07.000Z | [
"pytorch",
"wav2vec2",
"ta",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
] | automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"vocab.json"
] | Amrrs | 18 | transformers | ---
language: ta
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Tamil by Amrrs
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voic... |
AnnettJaeger/AnneJae | 2021-01-19T17:24:27.000Z | [] | [
".gitattributes"
] | AnnettJaeger | 0 | |||
Anonymous/ReasonBERT-BERT | 2021-05-23T02:33:35.000Z | [
"pytorch",
"bert",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | Anonymous | 13 | transformers | Pre-trained for better reasoning ability; try this if you are working on tasks like QA. For more details, please see https://openreview.net/forum?id=cGB7CMFtrSx
This model is based on bert-base-uncased and pre-trained for text input | |
Anonymous/ReasonBERT-RoBERTa | 2021-05-23T02:34:08.000Z | [
"pytorch",
"roberta",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | Anonymous | 9 | transformers | Pre-trained for better reasoning ability; try this if you are working on tasks like QA. For more details, please see https://openreview.net/forum?id=cGB7CMFtrSx
This model is based on roberta-base and pre-trained for text input | |
Anonymous/ReasonBERT-TAPAS | 2021-05-23T02:34:38.000Z | [
"pytorch",
"tapas",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin"
] | Anonymous | 10 | transformers | Pre-trained for better reasoning ability; try this if you are working on tasks like QA. For more details, please see https://openreview.net/forum?id=cGB7CMFtrSx
This model is based on tapas-base (no_reset) and pre-trained for table input | |
AnonymousNLP/pretrained-model-1 | 2021-05-21T09:27:54.000Z | [
"pytorch",
"gpt2",
"transformers"
] | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | AnonymousNLP | 10 | transformers | ||
AnonymousNLP/pretrained-model-2 | 2021-05-21T09:28:24.000Z | [
"pytorch",
"gpt2",
"transformers"
] | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | AnonymousNLP | 9 | transformers | ||
AnonymousSubmission/pretrained-model-1 | 2021-02-01T09:22:13.000Z | [] | [
".gitattributes"
] | AnonymousSubmission | 0 | |||
Aries/T5_question_answering | 2020-12-11T17:10:33.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | Aries | 12 | transformers | |
Aries/T5_question_generation | 2020-11-28T20:11:38.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | Aries | 73 | transformers | |
ArseniyBolotin/bert-multi-PAD-ner | 2021-05-18T17:06:50.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
] | token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | ArseniyBolotin | 20 | transformers | |
Ashl3y/model_name | 2021-05-14T15:54:02.000Z | [] | [
".gitattributes"
] | Ashl3y | 0 | |||
Ateeb/EmotionDetector | 2021-03-22T18:03:50.000Z | [
"pytorch",
"funnel",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Ateeb | 32 | transformers | |
Ateeb/FullEmotionDetector | 2021-03-22T19:28:37.000Z | [
"pytorch",
"funnel",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Ateeb | 22 | transformers | |
Ateeb/QA | 2021-05-03T11:41:12.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"__init__.py",
"config.json",
"main.py",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt",
"__pycache__/preprocess.cpython-37.pyc"
] | Ateeb | 21 | transformers | |
Ateeb/SquadQA | 2021-05-03T09:47:52.000Z | [] | [
".gitattributes"
] | Ateeb | 0 | |||
Ateeb/asd | 2021-05-03T09:31:28.000Z | [] | [
".gitattributes"
] | Ateeb | 0 | |||
Atlasky/Turkish-Negator | 2021-01-24T09:27:53.000Z | [] | [
".gitattributes",
"README.md"
] | Atlasky | 0 | Placeholder | ||
Atlasky/turkish-negator-nn | 2021-01-24T09:57:49.000Z | [] | [
".gitattributes"
] | Atlasky | 0 | |||
Aurora/asdawd | 2021-04-06T19:15:11.000Z | [] | [
".gitattributes",
"README.md"
] | Aurora | 0 | https://www.geogebra.org/m/bbuczchu
https://www.geogebra.org/m/xwyasqje
https://www.geogebra.org/m/mx2cqkwr
https://www.geogebra.org/m/tkqqqthm
https://www.geogebra.org/m/asdaf9mj
https://www.geogebra.org/m/ywuaj7p5
https://www.geogebra.org/m/jkfkayj3
https://www.geogebra.org/m/hptnn7ar
https://www.geogebra.org/m/de9cw... | ||
Aurora/community.afpglobal | 2021-04-08T08:34:53.000Z | [] | [
".gitattributes",
"README.md"
] | Aurora | 0 | https://community.afpglobal.org/network/members/profile?UserKey=b0b38adc-86c7-4d30-85c6-ac7d15c5eeb0
https://community.afpglobal.org/network/members/profile?UserKey=f4ddef89-b508-4695-9d1e-3d4d1a583279
https://community.afpglobal.org/network/members/profile?UserKey=36081479-5e7b-41ba-8370-ecf72989107a
https://community... | ||
Aviora/news2vec | 2021-01-29T08:11:40.000Z | [] | [
".gitattributes",
"README.md"
] | Aviora | 0 | # w2v with news | ||
Aviora/phobert-ner | 2021-04-29T06:49:47.000Z | [] | [
".gitattributes"
] | Aviora | 0 | |||
Azura/data | 2021-03-01T08:08:20.000Z | [] | [
".gitattributes",
"README.md"
] | Azura | 0 | |||
BOON/electra-xlnet | 2021-02-11T05:57:07.000Z | [] | [
".gitattributes"
] | BOON | 0 | |||
BOON/electra_qa | 2021-02-11T05:45:36.000Z | [] | [
".gitattributes"
] | BOON | 0 | |||
Bakkes/BakkesModWiki | 2021-04-06T17:04:42.000Z | [] | [
".gitattributes",
"README.md"
] | Bakkes | 0 | |||
BaptisteDoyen/camembert-base-xlni | 2021-04-08T14:11:55.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"dataset:xnli",
"transformers",
"zero-shot-classification",
"xnli",
"nli",
"license:mit",
"pipeline_tag:zero-shot-classification"
] | zero-shot-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | BaptisteDoyen | 3,725 | transformers | ---
language:
- fr
thumbnail:
tags:
- zero-shot-classification
- xnli
- nli
- fr
license: mit
pipeline_tag: zero-shot-classification
datasets:
- xnli
metrics:
- accuracy
---
# camembert-base-xnli
## Model description
Camembert-base model fine-tuned on the French part of the XNLI dataset. <br>
One of the few Zero-Shot c... |
BeIR/query-gen-msmarco-t5-base-v1 | 2021-03-01T15:25:52.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | BeIR | 241 | transformers | # Query Generation
This model is the t5-base model from [docTTTTTquery](https://github.com/castorini/docTTTTTquery).
The T5-base model was trained on the [MS MARCO Passage Dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking), which consists of about 500k real search queries from Bing together with the releva... |
BeIR/query-gen-msmarco-t5-large-v1 | 2021-03-01T15:27:56.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | BeIR | 260 | transformers | # Query Generation
This model is the t5-large model from [docTTTTTquery](https://github.com/castorini/docTTTTTquery).
The T5-large model was trained on the [MS MARCO Passage Dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking), which consists of about 500k real search queries from Bing together with the releva... |
BeIR/sparta-msmarco-distilbert-base-v1 | 2021-04-20T14:54:42.000Z | [
"pytorch",
"distilbert",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_script.py",
"vocab.txt"
] | BeIR | 63 | transformers | ||
Belin/T5-Terms-and-Conditions | 2021-06-10T15:22:15.000Z | [] | [
".gitattributes"
] | Belin | 0 | |||
BenDavis71/GPT-2-Finetuning-AIRaid | 2021-05-21T09:29:22.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | BenDavis71 | 26 | transformers | |
BenQLange/HF_bot | 2021-02-12T17:40:17.000Z | [] | [
".gitattributes"
] | BenQLange | 0 | |||
BigBoy/model | 2021-04-09T13:12:58.000Z | [] | [
".gitattributes"
] | BigBoy | 0 | |||
BigSalmon/BlankSlots | 2021-03-27T18:50:29.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | BigSalmon | 14 | transformers | |
BigSalmon/DaBlank | 2021-03-20T03:53:42.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | BigSalmon | 8 | transformers | |
BigSalmon/Flowberta | 2021-06-12T01:20:12.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin"
] | BigSalmon | 2,048 | transformers | |
BigSalmon/GPT2HardArticleEasyArticle | 2021-05-21T09:31:52.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"events.out.tfevents.1619624233.d987fc993321.71.0",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin",
"1619624233.34817/events.out.tfevents.1619624233.d987fc993321.71.1"
] | BigSalmon | 14 | transformers | |
BigSalmon/Neo | 2021-04-07T15:05:25.000Z | [
"pytorch",
"gpt_neo",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin"
] | BigSalmon | 20 | transformers | |
BigSalmon/Robertsy | 2021-06-10T23:23:33.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin"
] | BigSalmon | 15 | transformers | |
BigSalmon/Rowerta | 2021-06-11T01:07:05.000Z | [
"pytorch",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"training_args.bin"
] | BigSalmon | 9 | transformers | |
BigSalmon/T5Salmon | 2021-03-12T07:18:37.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | BigSalmon | 8 | transformers | |
BigSalmon/T5Salmon2 | 2021-03-15T23:17:03.000Z | [
"pytorch",
"t5",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | BigSalmon | 9 | transformers | |
Binbin/test | 2021-03-19T10:17:22.000Z | [] | [
".gitattributes"
] | Binbin | 0 | |||
BinksSachary/DialoGPT-small-shaxx | 2021-06-03T04:48:29.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | BinksSachary | 40 | transformers | ---
tags:
- conversational
---
# My Awesome Model |
BinksSachary/ShaxxBot | 2021-06-03T04:51:56.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | BinksSachary | 32 | transformers | ---
tags:
- conversational
---
# My Awesome Model |
BinksSachary/ShaxxBot2 | 2021-06-03T04:37:46.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | BinksSachary | 45 | transformers | ---
tags:
- conversational
---
# My Awesome Model
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
for step in ... |
Blazeolmo/Scrabunzi | 2021-06-12T17:05:19.000Z | [] | [
".gitattributes"
] | Blazeolmo | 0 | |||
BonjinKim/dst_kor_bert | 2021-05-19T05:35:57.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | BonjinKim | 23 | transformers | # Korean BERT base model for DST
- This is ConversationBert built on dsksd/bert-ko-small-minimal (base module) + 5 datasets
- Uses the dsksd/bert-ko-small-minimal tokenizer
- 5 datasets
- tweeter_dialogue : xlsx
- speech : trn
- office_dialogue : json
- KETI_dialogue : txt
- WOS_dataset : json
```python
tokenizer = ... | |
Boondong/Wandee | 2021-03-18T11:13:33.000Z | [] | [
".gitattributes"
] | Boondong | 0 | |||
BrianTin/MTBERT | 2021-05-18T17:08:50.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".DS_Store",
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | BrianTin | 27 | transformers | |
CAMeL-Lab/bert-base-camelbert-ca | 2021-05-18T17:09:46.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | CAMeL-Lab | 72 | transformers | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in... |
CAMeL-Lab/bert-base-camelbert-da | 2021-05-18T17:11:39.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | CAMeL-Lab | 131 | transformers | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in... |
CAMeL-Lab/bert-base-camelbert-mix | 2021-05-18T17:14:22.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | CAMeL-Lab | 1,283 | transformers | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in... |
CAMeL-Lab/bert-base-camelbert-msa-eighth | 2021-05-18T17:15:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | CAMeL-Lab | 76 | transformers | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in... |
CAMeL-Lab/bert-base-camelbert-msa-half | 2021-05-18T17:16:22.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | CAMeL-Lab | 18 | transformers | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in... |
CAMeL-Lab/bert-base-camelbert-msa-quarter | 2021-05-18T17:18:06.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | CAMeL-Lab | 13 | transformers | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in... |
CAMeL-Lab/bert-base-camelbert-msa-sixteenth | 2021-05-18T17:19:03.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | CAMeL-Lab | 18 | transformers | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in... |
CAMeL-Lab/bert-base-camelbert-msa | 2021-05-18T17:19:58.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | CAMeL-Lab | 385 | transformers | ---
language:
- ar
license: apache-2.0
widget:
- text: "الهدف من الحياة هو [MASK] ."
---
# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
## Model description
**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in... |
CLEE/CLEE | 2021-05-17T13:29:33.000Z | [] | [
".gitattributes"
] | CLEE | 0 | |||
CTBC/ATS | 2020-12-12T15:10:21.000Z | [] | [
".gitattributes"
] | CTBC | 0 | |||
Callidior/bert2bert-base-arxiv-titlegen | 2021-03-04T09:49:47.000Z | [
"pytorch",
"encoder-decoder",
"seq2seq",
"en",
"dataset:arxiv_dataset",
"transformers",
"summarization",
"license:apache-2.0",
"text2text-generation"
] | summarization | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Callidior | 103 | transformers | ---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- arxiv_dataset
metrics:
- rouge
widget:
- text: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and ... |
CallumRai/HansardGPT2 | 2021-05-21T09:33:25.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
".gitignore",
"README.md",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | CallumRai | 18 | transformers | A PyTorch GPT-2 model trained on Hansard from 2019-01-01 to 2020-06-01.
For more information, see: https://github.com/CallumRai/Hansard/ |
Cameron/BERT-Jigsaw | 2021-05-18T17:21:10.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 17 | transformers | |
Cameron/BERT-SBIC-offensive | 2021-05-18T17:22:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 10 | transformers | |
Cameron/BERT-SBIC-targetcategory | 2021-05-18T17:23:42.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 17 | transformers | |
Cameron/BERT-eec-emotion | 2021-05-18T17:25:51.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 19 | transformers | |
Cameron/BERT-jigsaw-identityhate | 2021-05-18T17:27:44.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 32 | transformers | |
Cameron/BERT-jigsaw-severetoxic | 2021-05-18T17:28:58.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 15 | transformers | |
Cameron/BERT-mdgender-convai-binary | 2021-05-18T17:30:21.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 11 | transformers | |
Cameron/BERT-mdgender-convai-ternary | 2021-05-18T17:31:21.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 7 | transformers | |
Cameron/BERT-mdgender-wizard | 2021-05-18T17:33:48.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 11 | transformers | |
Cameron/BERT-rtgender-opgender-annotations | 2021-05-18T17:34:57.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
] | Cameron | 16 | transformers | |
Capreolus/bert-base-msmarco | 2021-05-18T17:35:58.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"arxiv:2008.09093",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Capreolus | 143 | transformers | # capreolus/bert-base-msmarco
## Model description
BERT-Base model (`google/bert_uncased_L-12_H-768_A-12`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model; see the [Capreolus BERT-MaxP implementation](https://github.com/capreolus-ir/capreolus/blo... |
Capreolus/birch-bert-large-car_mb | 2021-05-18T17:38:06.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Capreolus | 17 | transformers | ||
Capreolus/birch-bert-large-mb | 2021-05-18T17:40:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Capreolus | 14 | transformers | ||
Capreolus/birch-bert-large-msmarco_mb | 2021-05-18T17:43:33.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Capreolus | 75 | transformers | ||
Capreolus/electra-base-msmarco | 2020-09-08T14:53:10.000Z | [
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"transformers"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.txt"
] | Capreolus | 360 | transformers | # capreolus/electra-base-msmarco
## Model description
ELECTRA-Base model (`google/electra-base-discriminator`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model, but requires some modification since it contains a BERT classification head rather tha... |
Cat/Kitty | 2020-12-21T15:44:34.000Z | [] | [
".gitattributes"
] | Cat | 0 | |||
Chaima/TunBerto | 2021-04-01T12:56:56.000Z | [] | [
".gitattributes"
] | Chaima | 0 | |||
ChaitanyaU/FineTuneLM | 2021-01-13T10:27:29.000Z | [] | [
".gitattributes",
"FineTuneLM/config.json",
"FineTuneLM/pytorch_model.bin",
"FineTuneLM/special_tokens_map.json",
"FineTuneLM/tokenizer_config.json",
"FineTuneLM/training_args.bin",
"FineTuneLM/vocab.txt"
] | ChaitanyaU | 0 | |||
Chakita/Friends | 2021-06-04T10:36:40.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | Chakita | 67 | transformers | ---
tags:
- conversational
---
# Model trained on F.R.I.E.N.D.S dialogue |
Charlotte/text2dm_models | 2021-04-28T15:42:33.000Z | [] | [
".gitattributes"
] | Charlotte | 0 | |||
ChristopherA08/IndoELECTRA | 2021-02-04T06:23:59.000Z | [
"pytorch",
"electra",
"pretraining",
"id",
"dataset:oscar",
"transformers"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | ChristopherA08 | 180 | transformers | ---
language: id
datasets:
- oscar
---
# IndoELECTRA (Indonesian ELECTRA Model)
## Model description
ELECTRA is a new method for self-supervised language representation learning. This repository contains the pre-trained ELECTRA Base model (TensorFlow 1.15.0) trained on a large Indonesian corpus (~16GB of raw text | ~2B indo... | |
Cinnamon/electra-small-japanese-discriminator | 2020-12-11T21:26:13.000Z | [
"pytorch",
"electra",
"pretraining",
"ja",
"transformers",
"license:apache-2.0"
] | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Cinnamon | 189 | transformers | ---
language: ja
license: apache-2.0
---
## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [J... | |
Cinnamon/electra-small-japanese-generator | 2020-12-11T21:26:17.000Z | [
"pytorch",
"electra",
"masked-lm",
"ja",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | Cinnamon | 435 | transformers | ---
language: ja
---
## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [Japanese Wikipedia](ht... |
CodeNinja1126/bert-p-encoder | 2021-05-12T01:26:46.000Z | [
"pytorch"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | CodeNinja1126 | 6 | |||
CodeNinja1126/bert-q-encoder | 2021-05-12T01:31:17.000Z | [
"pytorch"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | CodeNinja1126 | 5 | |||
CodeNinja1126/koelectra-model | 2021-04-18T07:34:52.000Z | [] | [
".gitattributes"
] | CodeNinja1126 | 0 | |||
CodeNinja1126/test-model | 2021-05-18T17:45:32.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"trainer_state.json",
"training_args.bin"
] | CodeNinja1126 | 12 | transformers | |
CodeNinja1126/xlm-roberta-large-kor-mrc | 2021-05-19T06:11:31.000Z | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers"
] | question-answering | [
".gitattributes",
"config.json",
"pytorch_model.bin"
] | CodeNinja1126 | 35 | transformers | |
CoderEFE/DialoGPT-marxbot | 2021-06-07T01:24:25.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"conversational",
"text-generation"
] | conversational | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | CoderEFE | 125 | transformers | ---
tags:
- conversational
---
Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
# Let's chat for 4 lines
for step in r... |
CoderEFE/DialoGPT-medium-marx | 2021-06-05T07:08:34.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"TAGS.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
] | CoderEFE | 19 | transformers |
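Each record in the dump above is a pipe-delimited row whose `tags` and `files` cells are JSON lists. A minimal sketch of parsing one such record back into structured fields; the field order and the sample row are assumptions taken from the dump (with a multi-line record already reassembled onto a single line), not an official format:

```python
import json

# One flattened record (assumed layout:
# modelId | lastModified | tags | files | publishedBy | downloads | library)
row = (
    'Anonymous/ReasonBERT-BERT | 2021-05-23T02:33:35.000Z | '
    '["pytorch", "bert", "transformers"] | '
    '[".gitattributes", "README.md", "config.json", "pytorch_model.bin"] | '
    'Anonymous | 13 | transformers'
)

# The JSON-list cells contain ", " but never " | ", so a plain split is safe here.
fields = [f.strip() for f in row.split(" | ")]
record = {
    "modelId": fields[0],
    "lastModified": fields[1],
    "tags": json.loads(fields[2]),
    "files": json.loads(fields[3]),
    "publishedBy": fields[4],
    "downloads_last_month": int(fields[5]),
    "library": fields[6],
}
print(record["modelId"], record["downloads_last_month"])
```

This per-row split would need extension for records whose trailing cells (pipeline_tag, modelCard) are present, since those make the column count variable.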