Instructions to use CompBioDSA/MutBERT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use CompBioDSA/MutBERT with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("CompBioDSA/MutBERT", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
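The README notes that the default attention implementation is "sdpa" and that "eager" (basic attention) can be used instead. In recent Transformers versions this can be chosen at load time via the `attn_implementation` keyword; a minimal sketch, where the `mutbert_load_kwargs` helper is our own illustration rather than part of the model card:

```python
def mutbert_load_kwargs(attention: str = "sdpa") -> dict:
    """Build keyword arguments for AutoModel.from_pretrained.

    `attention` selects the attention implementation: "sdpa" (the default,
    PyTorch scaled-dot-product attention) or "eager" (basic attention).
    """
    if attention not in {"sdpa", "eager"}:
        raise ValueError(f"unsupported attention implementation: {attention!r}")
    return {"trust_remote_code": True, "attn_implementation": attention}

# Usage (downloads the checkpoint, so it is left commented here):
# from transformers import AutoModel
# model = AutoModel.from_pretrained("CompBioDSA/MutBERT", **mutbert_load_kwargs("eager"))
```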
Update README.md
README.md CHANGED:

````diff
@@ -35,7 +35,7 @@ tokenizer = AutoTokenizer.from_pretrained(model_name)
 model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
 ```
 
-The default attention is flash attention("sdpa"). If you want use basic attention, you can replace it with "eager". Please refer to [here](https://huggingface.co/
+The default attention is flash attention("sdpa"). If you want use basic attention, you can replace it with "eager". Please refer to [here](https://huggingface.co/CompBioDSA/MutBERT/blob/main/modeling_mutbert.py#L438).
 
 ### Get embeddings
````
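For the "Get embeddings" step referenced above, a minimal sketch, assuming MutBERT follows the standard Transformers output convention (a `last_hidden_state` tensor); the `mean_pool` helper and the example sequence are our own illustration, not from the model card:

```python
import torch


def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over the sequence, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)  # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)  # avoid division by zero
    return summed / counts


# Usage (downloads the checkpoint, so it is left commented here):
# from transformers import AutoModel, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("CompBioDSA/MutBERT", trust_remote_code=True)
# model = AutoModel.from_pretrained("CompBioDSA/MutBERT", trust_remote_code=True)
# inputs = tokenizer("ACGTACGT", return_tensors="pt")
# with torch.no_grad():
#     out = model(**inputs)
# embedding = mean_pool(out.last_hidden_state, inputs["attention_mask"])  # (1, hidden_size)
```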