| # BERT-tiny model finetuned with M-FAC |
|
|
This model is finetuned on the QQP dataset with the state-of-the-art second-order optimizer M-FAC.
See the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
|
|
| ## Finetuning setup |
|
|
For a fair comparison against the default Adam baseline, we finetune the model in the same framework as described at [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification), simply swapping the Adam optimizer for M-FAC.
Hyperparameters used by the M-FAC optimizer:
|
|
| ```bash |
| learning rate = 1e-4 |
| number of gradients = 1024 |
| dampening = 1e-6 |
| ``` |
|
|
| ## Results |
|
|
We share the best model out of 5 runs, with the following scores on the QQP validation set:
|
|
| ```bash |
| f1 = 79.84 |
| accuracy = 84.40 |
| ``` |
|
|
Mean and standard deviation over 5 runs on the QQP validation set:
|
|
| Optimizer |      F1      |   Accuracy   |
|:---------:|:------------:|:------------:|
|   Adam    | 77.58 ± 0.08 | 81.09 ± 0.15 |
|   M-FAC   | 79.71 ± 0.13 | 84.29 ± 0.08 |
|
|
Results can be reproduced by adding the M-FAC optimizer code to [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
|
|
| ```bash |
| CUDA_VISIBLE_DEVICES=0 python run_glue.py \ |
| --seed 1234 \ |
| --model_name_or_path prajjwal1/bert-tiny \ |
| --task_name qqp \ |
| --do_train \ |
| --do_eval \ |
| --max_seq_length 128 \ |
| --per_device_train_batch_size 32 \ |
| --learning_rate 1e-4 \ |
| --num_train_epochs 5 \ |
| --output_dir out_dir/ \ |
| --optim MFAC \ |
| --optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}' |
| ``` |
|
|
We believe these results could be improved with modest tuning of the hyperparameters `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads`, and `damp`. For the sake of a fair comparison and a robust default setup, we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
|
|
| Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC). |
| A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials). |
|
|
| ## BibTeX entry and citation info |
|
|
| ```bibtex |
@article{frantar2021m,
  title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
  author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}
```
|
|