Add pipeline tag and improve documentation
#1
opened by nielsr (HF Staff)
README.md CHANGED

@@ -1,13 +1,16 @@
 ---
-tags:
-- sketchtune
-- sketch to adapt
 library_name: transformers
+pipeline_tag: text-generation
+tags:
+- sketchtune
+- sketch to adapt
 ---
 
 # Fine-Tuned Model Checkpoints for *(ICML 2025) Sketch to Adapt: Fine-Tunable Sketches for Efficient LLM Adaptation*
 
-This repository contains the fine-tuned model checkpoints used in
+This repository contains the fine-tuned model checkpoints used in the paper: [Sketch to Adapt: Fine-Tunable Sketches for Efficient LLM Adaptation](https://huggingface.co/papers/2410.06364).
+
+**Authors**: Tianyi Zhang, Junda Su, Aditya Desai, Oscar Wu, Zhaozhuo Xu, Anshumali Shrivastava.
 
 The table below lists the available models along with their fine-tuning datasets, bit widths, groups per row, and training epochs.
 
@@ -40,7 +43,7 @@ SketchTune is a novel method for adapting large language models (LLMs) that focu
 * Even with base models that are **2.6–3.5× smaller**, SketchTune **outperforms LoRA, DoRA, and S2FT** on commonsense and math reasoning benchmarks.
 * On the GSM8K math dataset, SketchTune achieves a **14.48% higher accuracy than LoftQ**, while training **7.3× fewer parameters**.
 
-For a deep dive into how sketching works, including math details and extensive test results, check out
+For a deep dive into how sketching works, including math details and extensive test results, check out the full paper: [https://huggingface.co/papers/2410.06364](https://huggingface.co/papers/2410.06364).
 
 ### Citation
 
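Since the change adds `pipeline_tag: text-generation` alongside the existing `library_name: transformers`, the checkpoints become discoverable under text-generation model listings and loadable through the standard transformers pipeline. A minimal usage sketch follows; the model id is a hypothetical placeholder, not one of the actual checkpoint names from the repository's table:

```python
# Minimal sketch of what the new `pipeline_tag: text-generation` metadata enables.
# The model id below is a hypothetical placeholder; substitute one of the
# checkpoints listed in the repository's table.
from transformers import pipeline

generator = pipeline(
    "text-generation",                       # matches the added pipeline_tag
    model="org-name/sketchtune-checkpoint",  # placeholder repo id
)

result = generator("Q: What is 12 * 7? A:", max_new_tokens=32)
print(result[0]["generated_text"])
```

Declaring the pipeline tag also lets the Hub render the appropriate inference widget on the model page, which is the usual motivation for this kind of metadata-only PR.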