Instructions to use BEncoderRT/Pythia-QLoRA-Instruction-Tuning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
  - PEFT

How to use BEncoderRT/Pythia-QLoRA-Instruction-Tuning with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1B-deduped")
model = PeftModel.from_pretrained(base_model, "BEncoderRT/Pythia-QLoRA-Instruction-Tuning")
```

- Notebooks
  - Google Colab
  - Kaggle
Update README.md
README.md CHANGED

```diff
@@ -13,9 +13,11 @@ tags:
 - peft
 ---
 
-“Predict the next token”
-
-
+## “Predict the next token”
+
+# not
+
+## “Obey the instruction”
 
 
 # QLoRA Instruction Tuning on Pythia-1B
```
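As background on the technique named in the title: QLoRA keeps the quantized base weights frozen and trains only a pair of low-rank matrices per adapted layer. A toy NumPy sketch of that update, with made-up dimensions rather than this repository's actual configuration:

```python
import numpy as np

d, r = 512, 8            # hidden size and LoRA rank (toy values, not this repo's config)
alpha = 16               # LoRA scaling factor (illustrative)

rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))          # frozen base weight (stored 4-bit in real QLoRA)
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

# Effective weight with the adapter attached: W + (alpha / r) * B @ A.
W_eff = W + (alpha / r) * B @ A

# B is zero-initialized, so the adapter is a no-op before any training.
assert np.allclose(W_eff, W)

# Only A and B are trained: 2*d*r parameters instead of d*d.
adapter_params, full_params = 2 * d * r, d * d
print(f"trainable: {adapter_params} vs full fine-tune: {full_params}")
```

Because only the small `A` and `B` matrices are saved, the adapter repository is far lighter than a full checkpoint, which is why it must be attached to the base model at load time as shown above.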