MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers
Paper: arXiv:2012.15828
This is the mMiniLM-L12xH384 XLM-R model proposed in MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers, fine-tuned on the direct assessment annotations collected in the Workshop on Statistical Machine Translation (WMT) from 2015 to 2020.
This model is much more lightweight than the standard XLM-RoBERTa base and large models.
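As a minimal sketch, a checkpoint like this can be loaded with the Hugging Face transformers library. Note that the repo id below is a placeholder, not the actual path of this model; substitute the id shown on this model card.

```python
from transformers import AutoTokenizer, AutoModel

# Placeholder repo id -- replace with this model card's actual path.
model_id = "your-org/mMiniLM-L12xH384-wmt-da"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a sentence and inspect the contextual embeddings.
inputs = tokenizer("A sample sentence to encode.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 384) for an H384 model
```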