---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-unstructured-80-squadv1

This model was obtained with the method described in [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).

It corresponds to the model presented in `Table 3 - 3 Layers - Sparsity 80% - unstructured` of the paper.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 3
```
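
For a quick functional check, the checkpoint can be run through the Hugging Face Transformers question-answering pipeline, as in the minimal sketch below. The Hub id `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1` is an assumption inferred from the model name, not something stated in this card; adjust it if the checkpoint is hosted under a different path.

```python
# Minimal usage sketch (not from the original card).
# The Hub id is assumed from the model name; adjust if needed.
from transformers import pipeline

model_id = "neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1"

qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

answer = qa(
    question="What is oBERT?",
    context=(
        "The Optimal BERT Surgeon (oBERT) is a scalable second-order "
        "pruning method for large language models."
    ),
)
print(answer)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```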

The dev-set performance of this model:
```
EM = 75.62
F1 = 84.08
```
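
As a rough way to reproduce these numbers (not part of the original card), pipeline predictions can be scored on the SQuAD v1.1 dev set with the `squad` metric from the `evaluate` library. The Hub id is again an assumption, and the small subset in the sketch is only a smoke test; the full split is needed to match the reported EM/F1.

```python
# Minimal evaluation sketch using the `datasets` and `evaluate` libraries.
from datasets import load_dataset
from evaluate import load
from transformers import pipeline

model_id = "neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1"  # assumed Hub id
qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

dev = load_dataset("squad", split="validation")
metric = load("squad")

predictions, references = [], []
for example in dev.select(range(200)):  # use the full split for the reported numbers
    pred = qa(question=example["question"], context=example["context"])
    predictions.append({"id": example["id"], "prediction_text": pred["answer"]})
    references.append({"id": example["id"], "answers": example["answers"]})

print(metric.compute(predictions=predictions, references=references))
# -> {'exact_match': ..., 'f1': ...}
```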

Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
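
Because the 80% sparsity is unstructured (individual weights set to zero), it can be sanity-checked directly from the loaded weights. Which tensors carry the pruning mask is an assumption in the sketch below (encoder weight matrices, with embeddings, biases and LayerNorm kept dense), not a statement from the card.

```python
# Minimal sparsity sanity check: count exact zeros in encoder weight matrices.
# The Hub id and the choice of pruned tensors are assumptions.
from transformers import AutoModelForQuestionAnswering

model_id = "neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1"
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

zeros, total = 0, 0
for name, param in model.named_parameters():
    if "encoder" in name and name.endswith(".weight") and param.dim() == 2:
        zeros += (param == 0).sum().item()
        total += param.numel()

print(f"Encoder weight sparsity: {zeros / total:.2%}")  # expected to be close to 80%
```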

If you find the model useful, please consider citing our work.

## Citation info
```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```