Biomedical Language Models are Robust to Sub-optimal Tokenization
Paper: https://arxiv.org/abs/2306.17649
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("osunlp/BioVocabBERT")
model = AutoModelForMaskedLM.from_pretrained("osunlp/BioVocabBERT")
```
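If you prefer to work with the model outputs directly rather than through a pipeline, a minimal sketch looks like the following (the example sentence and its prediction are illustrative, not from the paper):

```python
import torch

# Tokenize a sentence containing a [MASK] token.
inputs = tokenizer("Aspirin inhibits [MASK] synthesis.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and decode the highest-scoring token.
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_id = logits[0, mask_idx].argmax(dim=-1)
print(tokenizer.decode(top_id))
```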
This biomedical language model uses a specialized biomedical tokenizer that is more closely aligned with human morphological judgements than the tokenizers of previous biomedical models such as PubMedBERT.
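As a quick illustration of what this means in practice, you can inspect how the tokenizer segments a biomedical term (the term below is an arbitrary example; the exact subword splits depend on the learned vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("osunlp/BioVocabBERT")

# Print the subword segmentation of a biomedical term;
# the exact pieces depend on the learned vocabulary.
print(tokenizer.tokenize("nephrectomy"))
```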
Details about our tokenizer design, pre-training procedure, and downstream results can be found in our BioNLP @ ACL 2023 paper (linked above).
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="osunlp/BioVocabBERT")
```
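A sample call might look like this (the input sentence is illustrative, and this assumes the standard BERT-style `[MASK]` token):

```python
# Each result contains the predicted token, its score, and the full sequence.
results = pipe("The patient was diagnosed with [MASK] anemia.")
for r in results:
    print(r["token_str"], round(r["score"], 4))
```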