How to use Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForPreTraining

tokenizer = AutoTokenizer.from_pretrained("Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa")
model = AutoModelForPreTraining.from_pretrained("Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa")
```

This model was created using the Prune OFA method described in Prune Once for All: Sparse Pre-Trained Language Models, presented at the ENLSP NeurIPS Workshop 2021.
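As an aside, the "80-1x4-block" part of the model name refers to roughly 80% weight sparsity arranged in contiguous 1x4 blocks. The sketch below (illustrative only; Prune OFA prunes during pre-training, not as a one-shot post-hoc step, and the function name and shapes here are invented for the example) shows what that sparsity pattern looks like on a weight matrix:

```python
import numpy as np

def prune_1x4_blocks(weights, sparsity=0.8):
    """Zero out the lowest-magnitude 1x4 blocks of a 2-D weight matrix.

    Illustrative sketch of the 1x4 block-sparsity pattern only; not the
    actual Prune OFA training procedure.
    """
    rows, cols = weights.shape
    assert cols % 4 == 0, "columns must be divisible by the block width"
    blocks = weights.reshape(rows, cols // 4, 4)   # group weights into 1x4 blocks
    scores = np.abs(blocks).sum(axis=-1)           # magnitude score per block
    k = int(scores.size * sparsity)                # number of blocks to zero out
    threshold = np.sort(scores.ravel())[k]         # score of the k-th smallest block
    mask = (scores >= threshold)[..., None]        # keep only high-magnitude blocks
    return (blocks * mask).reshape(rows, cols)

# 10x16 matrix -> 40 blocks; 80% sparsity zeroes 32 of them.
rng = np.random.default_rng(0)
w = rng.normal(size=(10, 16))
pruned = prune_1x4_blocks(w, sparsity=0.8)
print(round(float((pruned == 0).mean()), 2))  # fraction of zeroed weights: 0.8
```

Because entire 1x4 blocks are zeroed (rather than scattered individual weights), the resulting pattern is easier for hardware and inference libraries to exploit than unstructured sparsity.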
For further details on the model and its results, see our paper and our implementation, available here.