print(pred)
```
We used Low-Rank Adaptation (LoRA) as the Parameter-Efficient Fine-Tuning (PEFT) method, fine-tuning with the unsloth framework.
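As a plain-Python sketch of the idea behind LoRA (illustrative only, not the unsloth configuration we actually used): the frozen pretrained weight `W` is adapted as `W' = W + (alpha / r) * B @ A`, where only the small low-rank factors `A` and `B` are trained.

```python
# Minimal LoRA sketch (illustrative; the real fine-tuning used unsloth).
# LoRA adapts a frozen weight W as W' = W + (alpha / r) * B @ A, where the
# low-rank factors A (r x d_in) and B (d_out x r) are the only trained parameters.

def matmul(M, N):
    # Naive matrix product of two nested-list matrices.
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def add_scaled(M, N, s):
    # Element-wise M + s * N.
    return [[M[i][j] + s * N[i][j] for j in range(len(M[0]))] for i in range(len(M))]

d_in, d_out, r, alpha = 4, 4, 2, 8
W = [[float(i == j) for j in range(d_in)] for i in range(d_out)]  # frozen weight (identity here)
A = [[0.1] * d_in for _ in range(r)]   # trainable factor, small init
B = [[0.0] * r for _ in range(d_out)]  # trainable factor, zero init

# With B initialized to zero, the adapted weight equals W exactly, so
# fine-tuning starts from the pretrained model's behaviour.
W_adapted = add_scaled(W, matmul(B, A), alpha / r)
assert W_adapted == W
```

Because `B @ A` has rank at most `r`, the number of trained parameters is `r * (d_in + d_out)` per adapted matrix instead of `d_in * d_out`, which is what makes the method parameter-efficient.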
The hyper-parameters of Llama 3.2-11B are as follows: