---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-dense-squadv1

This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).

It corresponds to the model presented in `Table 3 - 3 Layers - 0% Sparsity`, and it serves as an upper bound on the performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1`

SQuADv1 dev-set:
```
EM = 76.62
F1 = 84.65
```

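The checkpoint can be loaded for extractive question answering. The snippet below is a minimal sketch, assuming the model is published on the Hugging Face Hub as `neuralmagic/oBERT-3-downstream-dense-squadv1` (inferred from the naming of the pruned variants above) and is compatible with the standard `transformers` question-answering pipeline.

```python
# Minimal usage sketch (Hub id is an assumption inferred from the pruned
# variants listed above): run extractive QA with the transformers pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-3-downstream-dense-squadv1",
)

result = qa(
    question="What kind of pruning does The Optimal BERT Surgeon use?",
    context="The Optimal BERT Surgeon (oBERT) applies scalable and accurate "
            "second-order pruning to large language models such as BERT.",
)
print(result["answer"], result["score"])
```
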
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)

If you find the model useful, please consider citing our work.

## Citation info
```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```