How to use emxia18/bias-attention with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("emxia18/bias-attention")

# Note: BiasedDistilBERT is a custom model class defined by this repository;
# it is not part of the standard transformers API, so the class must be
# available (e.g. from the repository's own code) before this call will work.
model = BiasedDistilBERT.from_pretrained("emxia18/bias-attention")
```