Commit a3de4e4
Parent(s): f94fdeb
Update README.md

README.md CHANGED

@@ -1,3 +1,21 @@
 ---
 license: mit
+tags:
+- code
+size_categories:
+- n<1K
 ---
+
+# Gists
+
+This 🤗 dataset contains some of my GitHub Gists at https://gist.github.com/alvarobartt, ported here so that it's cleaner
+and easier to maintain.
+
+## Available gists
+
+* `causallm-to-hub.py`: uploads any `AutoModelForCausalLM` to the 🤗 Hub from a local path. This is useful after some LLM fine-tuning,
+as `accelerate` sometimes gets stuck while pushing to the Hub, so I tend to do that in a separate process after each epoch has been
+dumped to disk.
+
+* `dpo-qlora-4bit.py`: fine-tunes an `AutoModelForCausalLM` using Q-LoRA in 4-bit. Here the fine-tuning is done with
+🤗 `trl.DPOTrainer`, built on top of `transformers`, useful for intent alignment of LMs on low resources (~80GB of VRAM).