Instructions for using hagara/biobert-qa with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use hagara/biobert-qa with Transformers (a usage sketch follows this list):
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="hagara/biobert-qa")

# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("hagara/biobert-qa")
model = AutoModelForQuestionAnswering.from_pretrained("hagara/biobert-qa")
```
- Notebooks
- Google Colab
- Kaggle
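The two snippets above load the same checkpoint either through the high-level `question-answering` pipeline or as a raw tokenizer/model pair. A minimal usage sketch for both follows; the question and context strings are illustrative placeholders (not from the model card), and the start/end-logit decoding in the second part is the standard extractive-QA recipe rather than anything specific to this checkpoint.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

# Illustrative placeholders for demonstration only.
question = "Which gene is associated with cystic fibrosis?"
context = "Cystic fibrosis is caused by mutations in the CFTR gene on chromosome 7."

# 1) High-level pipeline: tokenization, inference, and span decoding in one call.
pipe = pipeline("question-answering", model="hagara/biobert-qa")
print(pipe(question=question, context=context))  # e.g. {'score': ..., 'answer': 'CFTR', ...}

# 2) Direct model use: decode the answer span from the start/end logits by hand.
tokenizer = AutoTokenizer.from_pretrained("hagara/biobert-qa")
model = AutoModelForQuestionAnswering.from_pretrained("hagara/biobert-qa")

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0, start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```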
Tokenizer configuration (file size: 394 Bytes, revision 3c1abb3):

```json
{
"clean_up_tokenization_spaces": true,
"cls_token": "[CLS]",
"do_basic_tokenize": true,
"do_lower_case": true,
"mask_token": "[MASK]",
"model_max_length": 1000000000000000019884624838656,
"never_split": null,
"pad_token": "[PAD]",
"sep_token": "[SEP]",
"strip_accents": null,
"tokenize_chinese_chars": true,
"tokenizer_class": "BertTokenizer",
"unk_token": "[UNK]"
}
```
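These settings determine how inputs are normalized before they reach the model. A small sketch, assuming `AutoTokenizer` resolves to the fast BERT tokenizer for this repo (the sample sentences are made up):

```python
from transformers import AutoTokenizer

# Load the tokenizer; the config above supplies the special tokens and lowercasing behaviour.
tokenizer = AutoTokenizer.from_pretrained("hagara/biobert-qa")

print(tokenizer.cls_token, tokenizer.sep_token, tokenizer.pad_token, tokenizer.unk_token)
# Expected per the config: [CLS] [SEP] [PAD] [UNK]

# do_lower_case=true means mixed-case biomedical terms are lowercased before WordPiece splitting.
print(tokenizer.tokenize("BRCA1 mutations"))  # illustrative input; tokens come out lowercased

# Question-answering inputs are packed as: [CLS] question tokens [SEP] context tokens [SEP]
encoded = tokenizer("What is BRCA1?", "BRCA1 is a tumor suppressor gene.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```

Note that the very large `model_max_length` value is the library's placeholder for "unset"; in practice, long contexts should be truncated (or processed with a sliding window) to fit the 512-token position limit typical of BERT-style models.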