Update link
README.md CHANGED
```diff
@@ -72,7 +72,13 @@ The adapter was trained on a curated blend of English datasets:
 
 ## 🖥️ Training Hardware
 Fine-tuning was performed entirely on a consumer-grade laptop:
-- **Laptop:** Acer Nitro V15
+- **Laptop:** Acer Nitro V15
+- **GPU:** NVIDIA RTX 2050 Mobile (4 GB VRAM)
+- **CPU:** Intel Core i5-13420H
+- **RAM:** 16 GB
+- **Quantization:** 4-bit NF4
+- **Strategy:** Low VRAM setup using gradient accumulation, packing, and LoRA adapters
+
 This demonstrates that Phi-2 can be fine-tuned effectively even on low-VRAM devices.
 
 ---
```

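For readers reproducing the setup in the bullets above: 4-bit NF4 quantization plus LoRA adapters, gradient accumulation, and packing is a standard QLoRA recipe. The sketch below is illustrative only, not this repository's training script; the dataset, LoRA rank, target modules, and all hyperparameters are assumptions, and only the quantization type and the general low-VRAM strategy come from the README.

```python
# Illustrative QLoRA sketch for a 4 GB GPU; hyperparameters, dataset, and
# output path are assumptions, not the settings used to train this adapter.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# 4-bit NF4 quantization shrinks the frozen Phi-2 weights to well under 4 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters: only small low-rank matrices are trained; the base stays frozen.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
)

# Per-device batch of 1 plus gradient accumulation gives an effective batch of
# 16 without exceeding VRAM; packing concatenates short samples into
# full-length sequences so no step is wasted on padding.
args = SFTConfig(
    output_dir="phi2-qlora-sketch",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-4,
    packing=True,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("tatsu-lab/alpaca", split="train"),  # placeholder dataset
    peft_config=peft_config,
)
trainer.train()
```
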
```diff
@@ -110,8 +116,9 @@ The adapter released in this repository is the result of this final, optimized training run.
 |--------|----------|----------|
 | **1** | Fine-tune again from scratch (from the base model) by applying all the insights from previous experiments. | 1d 21h |
 
-📊 **W&B Log (Phase 1F):** [wandb.ai/VoidNova/.../
-
+📊 **W&B Log (Phase 1F):** [wandb.ai/VoidNova/.../](https://wandb.ai/VoidNova/phi-2-2.7B_qlora_alpaca-51.8k_identity-model-232_squadv2-15k/)
+
+📊 **W&B Log (Final):** [wandb.ai/VoidNova/.../runs/rx5fih5v](https://wandb.ai/VoidNova/phi-2_qlora_ZeroChat/)
 
 ---
```
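Both links point to Weights & Biases run dashboards. For anyone wiring up the same kind of tracking, the hookup is small; below is a hedged sketch using the project name taken from the final link above, with a dummy metric stream standing in for real training logs.

```python
# Hedged sketch: logging a fine-tuning run to Weights & Biases.
# Project name comes from the W&B link above; everything else is illustrative.
import wandb

run = wandb.init(project="phi-2_qlora_ZeroChat", name="readme-example")

# With the Hugging Face Trainer/SFTTrainer, setting report_to="wandb" in the
# training arguments streams metrics automatically; the manual log call below
# just shows the underlying mechanism with a dummy loss curve.
for step, loss in enumerate([2.31, 1.87, 1.52], start=1):
    run.log({"train/loss": loss, "step": step})

run.finish()
```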