**Unsloth (for 2x faster training and 70% less memory usage)**
### LoRA Configuration
Parameter-efficient fine-tuning was performed using LoRA, targeting a wide range of attention and feed-forward network layers to ensure comprehensive adaptation.
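The exact hyperparameter values are not shown in this excerpt. As a hypothetical sketch only, a LoRA configuration that covers both the attention and feed-forward (MLP) projection layers of a typical decoder-only transformer might look like the following (all values here are illustrative assumptions, not the project's actual settings):

```python
# Illustrative LoRA configuration sketch -- the real values used by this
# project are not shown in this excerpt; every number below is an assumption.
lora_config = {
    "r": 16,                 # LoRA rank of the low-rank update matrices
    "lora_alpha": 16,        # scaling factor applied to the LoRA update
    "lora_dropout": 0.0,     # dropout on the LoRA layers
    "bias": "none",          # do not train bias terms
    # "Wide range of attention and feed-forward network layers":
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # feed-forward (MLP) projections
    ],
}
```

In practice such a dictionary would be passed to a PEFT wrapper (e.g. unpacked into Hugging Face `peft`'s `LoraConfig`, or into Unsloth's `FastLanguageModel.get_peft_model`); targeting the MLP projections in addition to the attention projections generally improves adaptation quality at a modest cost in trainable parameters.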