How to use emxia18/bias-attention with Transformers:
# Load model directly
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("emxia18/bias-attention")
# BiasedDistilBERT is not a class exported by the transformers library; load the
# checkpoint through AutoModel instead. trust_remote_code=True lets any custom
# architecture code shipped in the repo be used (drop it for a stock DistilBERT config).
model = AutoModel.from_pretrained("emxia18/bias-attention", trust_remote_code=True)
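
A minimal inference sketch, assuming the checkpoint loads as a standard encoder; the example text and the output field are illustrative, and a checkpoint with a classification head would expose logits instead of hidden states:

import torch

text = "Example sentence to encode."
inputs = tokenizer(text, return_tensors="pt")

# Run a forward pass without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# For a plain encoder, last_hidden_state holds per-token embeddings
print(outputs.last_hidden_state.shape)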