---
language: en
license: apache-2.0
library_name: transformers
---

# SQFT Fine-tuned Model: sqft-qa-sparsepeft-phi-3-mini-4k-50-gptq-cs-heu

- Base Model: [IntelLabs/sqft-phi-3-mini-4k-50-base-gptq](https://huggingface.co/IntelLabs/sqft-phi-3-mini-4k-50-base-gptq)
- Sparsity: 50%
- Quantization: INT4 (GPTQ)
- Fine-tuning method: SQFT + QA-SparsePEFT
- Fine-tuning data: the combined training sets of [winogrande](https://huggingface.co/datasets/winogrande), [boolq](https://huggingface.co/datasets/google/boolq), [openbookqa](https://huggingface.co/datasets/allenai/openbookqa), [hellaswag](https://huggingface.co/datasets/Rowan/hellaswag), [piqa](https://huggingface.co/datasets/piqa), and [ai2_arc](https://huggingface.co/datasets/allenai/ai2_arc) (approximately 83k examples)
- Sub-Adapter: Heuristic
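
A minimal sketch for loading the model and generating text, assuming a Python environment with `transformers` and GPTQ support as described in the SQFT repo (the prompt and generation settings are illustrative, not from the original card):

```python
# Sketch only: assumes transformers with GPTQ support (e.g., auto-gptq/optimum)
# and accelerate for device_map="auto", per the SQFT repo environment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IntelLabs/sqft-qa-sparsepeft-phi-3-mini-4k-50-gptq-cs-heu"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # requires `accelerate`
    trust_remote_code=True,  # same flag used in the evaluation command below
)

# Illustrative QA-style prompt; the model was fine-tuned on QA datasets.
prompt = "Question: What do plants need to perform photosynthesis?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```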

## Evaluation

```bash
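# Evaluate the fine-tuned model on the same QA tasks used for fine-tuning.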
MODEL_NAME=IntelLabs/sqft-qa-sparsepeft-phi-3-mini-4k-50-gptq-cs-heu
lm_eval --model hf --model_args pretrained=${MODEL_NAME},add_bos_token=True,trust_remote_code=True --tasks piqa,arc_easy,arc_challenge,hellaswag,openbookqa,boolq,winogrande --batch_size auto:4
```

Refer to our [repo](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT) for instructions on setting up the environment needed to run this command.

## Model Sources

**Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT)

**Papers:**
- [SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models](https://arxiv.org/abs/2410.03750)
- [Low-Rank Adapters Meet Neural Architecture Search for LLM Compression](https://arxiv.org/abs/2501.16372)

## Citation

```bibtex
@inproceedings{munoz-etal-2024-sqft,
    title = "{SQFT}: Low-cost Model Adaptation in Low-precision Sparse Foundation Models",
    author = "Munoz, Juan Pablo  and
      Yuan, Jinjie  and
      Jain, Nilesh",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.749",
    pages = "12817--12832",
}
```

## License

Apache-2.0