Tags: Fill-Mask · Transformers · PyTorch · Safetensors · English · modernbert · masked-lm · long-context · BioClinical-ModernBERT · clinical · biomedical · clinical encoder · clinical modern bert
Instructions for using thomas-sounack/BioClinical-ModernBERT-base with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use thomas-sounack/BioClinical-ModernBERT-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="thomas-sounack/BioClinical-ModernBERT-base")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("thomas-sounack/BioClinical-ModernBERT-base")
model = AutoModelForMaskedLM.from_pretrained("thomas-sounack/BioClinical-ModernBERT-base")
```
- Inference
- Notebooks
- Google Colab
- Kaggle
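As a quick sanity check of the Transformers snippet above, the fill-mask pipeline can be queried with a masked sentence; the clinical sentence below is illustrative and not from the model card:

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="thomas-sounack/BioClinical-ModernBERT-base")

# Predict the most likely tokens for the [MASK] position
preds = pipe("The patient was prescribed [MASK] for hypertension.")
for p in preds:
    print(p["token_str"], round(p["score"], 3))
```

Each prediction is a dict containing the filled-in token (`token_str`), its probability (`score`), and the completed sequence.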
Pushing Onnx model to Hugging Face Hub (#2)
Opened by louisbrulenaudet
Hello!
This pull request has been automatically generated from the push_to_hub method from the Sentence Transformers library.
Full Model Architecture:
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ORTModelForFeatureExtraction
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
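The Pooling module above is configured for mean pooling (`pooling_mode_mean_tokens: True`): the 768-dimensional token embeddings are averaged into one sentence embedding, with padding positions excluded via the attention mask. A minimal NumPy sketch of that operation (illustrative, not the library's actual implementation):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, counting only real (non-padding) tokens.

    token_embeddings: (seq_len, dim), attention_mask: (seq_len,) of 0/1.
    """
    mask = attention_mask[:, None].astype(token_embeddings.dtype)  # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)                 # sum of real tokens
    count = np.maximum(mask.sum(), 1e-9)                           # avoid divide-by-zero
    return summed / count

# Toy example: the last token is padding and must not affect the result
emb = np.array([[1.0, 2.0], [3.0, 4.0], [99.0, 99.0]])
mask = np.array([1, 1, 0])
print(mean_pool(emb, mask))  # → [2. 3.]
```

Masking before averaging is what keeps padded batches from skewing the sentence embedding.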
Tip:
Consider testing this pull request before merging by loading the model from this PR with the revision argument:
```python
from sentence_transformers import SentenceTransformer

# TODO: Fill in the PR number
pr_number = 2
model = SentenceTransformer(
    "thomas-sounack/BioClinical-ModernBERT-base",
    revision=f"refs/pr/{pr_number}",
    backend="onnx",
)

# Verify that everything works as expected
embeddings = model.encode(["The weather is lovely today.", "It's so sunny outside!", "He drove to the stadium."])
print(embeddings.shape)
similarities = model.similarity(embeddings, embeddings)
print(similarities)
```