Instructions to use nlpie/tiny-biobert with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nlpie/tiny-biobert with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="nlpie/tiny-biobert")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlpie/tiny-biobert")
model = AutoModelForMaskedLM.from_pretrained("nlpie/tiny-biobert")
```

A runnable usage sketch follows the list below.
- Notebooks
- Google Colab
- Kaggle
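As a quick check that the pipeline loads and predicts, it can be run on a masked biomedical sentence. The example sentence below is illustrative only, not taken from the model card:

```python
# Minimal usage sketch: print top predictions for a masked token
from transformers import pipeline

pipe = pipeline("fill-mask", model="nlpie/tiny-biobert")

# [MASK] is the BERT-style mask token used by this tokenizer
for pred in pipe("The patient was treated with [MASK] for the infection."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```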
Commit e5b80b3 (parent: 1638c07) · Update README.md

README.md CHANGED:
```diff
@@ -5,7 +5,7 @@ TinyBioBERT is a distilled version of the [BioBERT](https://huggingface.co/dmis-
 This model uses a unique distillation method called ‘transformer-layer distillation’ which is applied on each layer of the student to align the attention maps and the hidden states of the student with those of the teacher.
 
 # Architecture and Initialisation
 
-This model uses 4 hidden layers with a hidden dimension size and an embedding size of 768 resulting in a total of 15M parameters. Due to the small hidden dimension size
+This model uses 4 hidden layers with a hidden dimension size and an embedding size of 768 resulting in a total of 15M parameters. Due to the model's small hidden dimension size, it uses random initialisation.
 
 # Citation
```
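The ‘transformer-layer distillation’ described in the README aligns each student layer's attention maps and hidden states with those of the teacher. A minimal sketch of such a layer-to-layer objective is below; the layer mapping, the optional projection, and the equal weighting of the two terms are illustrative assumptions, not the authors' published code:

```python
import torch
import torch.nn.functional as F

def layer_distillation_loss(student_hidden, teacher_hidden,
                            student_attn, teacher_attn,
                            layer_map, proj=None):
    """MSE between mapped student/teacher hidden states and attention maps.

    student_hidden / teacher_hidden: lists of [batch, seq, dim] tensors
    student_attn / teacher_attn: lists of [batch, heads, seq, seq] tensors
    layer_map: teacher layer index for each student layer, e.g. [2, 5, 8, 11]
    proj: optional nn.Linear, used when student and teacher dims differ
    """
    loss = torch.tensor(0.0)
    for s, t in enumerate(layer_map):
        h_s = proj(student_hidden[s]) if proj is not None else student_hidden[s]
        loss = loss + F.mse_loss(h_s, teacher_hidden[t])            # hidden-state alignment
        loss = loss + F.mse_loss(student_attn[s], teacher_attn[t])  # attention-map alignment
    return loss
```

Both sets of inputs can be obtained from transformers models by passing output_hidden_states=True and output_attentions=True in the forward call.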
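The architecture figures in the changed line (4 hidden layers, roughly 15M parameters) can be checked directly against the released checkpoint:

```python
# Sanity-check the architecture claims against the released config/weights
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("nlpie/tiny-biobert")
print("hidden layers:", model.config.num_hidden_layers)
print("hidden size:  ", model.config.hidden_size)
print(f"parameters:    {sum(p.numel() for p in model.parameters()) / 1e6:.1f}M")
```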