Instructions for using GleghornLab/cdsBERT-plus with libraries and notebooks. Follow the links below to get started.
- Libraries
  - Transformers
How to use GleghornLab/cdsBERT-plus with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="GleghornLab/cdsBERT-plus")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GleghornLab/cdsBERT-plus")
model = AutoModel.from_pretrained("GleghornLab/cdsBERT-plus")
```

A quick invocation example follows the notebook links below.

- Notebooks
  - Google Colab
  - Kaggle
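As a quick check, the pipeline can be invoked directly on a coding sequence. This is a minimal sketch: the example codon string and its space-separated formatting are assumptions about the expected input, not a documented convention of this model.

```python
# Hypothetical input: a short coding sequence written as space-separated codons
# (check the cdsBERT+ paper or repo for the tokenizer's actual input format).
features = pipe("ATG GCT GGA TTA TAA")

# The feature-extraction pipeline returns nested lists shaped [batch][token][hidden_dim].
print(len(features[0]), len(features[0][0]))
```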
## Model description
[cdsBERT+](https://doi.org/10.1101/2023.09.15.558027) is a protein language model (pLM) with a codon vocabulary that was seeded with [ProtBERT](https://huggingface.co/Rostlab/prot_bert_bfd) and trained with a novel vocabulary extension pipeline called MELD. cdsBERT+ offers a highly biologically relevant latent space, with EC number prediction that surpasses ProtBERT.
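As one illustration of how such a latent space could be probed for EC number prediction, here is a hypothetical linear-probe sketch on pooled embeddings. The random placeholder arrays stand in for real embeddings and labels, and none of this reflects the evaluation protocol from the paper.

```python
# Hypothetical linear probe: predict top-level EC class from per-sequence embeddings.
# `embeddings` and `ec_labels` are random placeholders; in practice they would be
# mean-pooled cdsBERT+ embeddings and curated EC annotations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 1024))  # placeholder (n_sequences, hidden_dim)
ec_labels = rng.integers(0, 7, size=200)   # placeholder: seven top-level EC classes

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, ec_labels, test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", clf.score(X_test, y_test))
```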
This particular checkpoint is the half-precision model obtained after student-teacher knowledge distillation with Ankh-base.
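Because the stored weights are half precision, it may be convenient to load them as float16 directly. A minimal sketch; the dtype choice here is a suggestion, not a requirement stated by the authors:

```python
import torch
from transformers import AutoModel

# Load the checkpoint in float16 to match the stored half-precision weights.
model = AutoModel.from_pretrained("GleghornLab/cdsBERT-plus", torch_dtype=torch.float16)
```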
## How to use

cdsBERT+ loads with the standard Transformers `AutoTokenizer`/`AutoModel` API, as shown in the snippets at the top of this page.
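To turn model outputs into per-sequence embeddings, one option is mean pooling over the token dimension. A minimal sketch, assuming space-separated codons as input (both the input format and the pooling choice are illustrative assumptions, not documented conventions):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GleghornLab/cdsBERT-plus")
model = AutoModel.from_pretrained("GleghornLab/cdsBERT-plus")
model.eval()

# Hypothetical example: a short coding sequence as space-separated codons.
sequence = "ATG GCT GGA TTA TAA"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one vector per sequence,
# using the attention mask to ignore padding positions.
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embedding.shape)  # (1, hidden_dim)
```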