Update README.md
README.md CHANGED
@@ -11,4 +11,24 @@ tags:
 
 * Parameter-efficient, instruction-fine-tuned phi-4-mini model.
 * Uses LoRA for fine-tuning.
-* Trained on LinkedIn posts from various themes.
+* Trained on LinkedIn posts from various themes.
+
+* Training details:
+  <ul>
+    <li>Training set size: 2643</li>
+    <li>Quantization: 8-bit</li>
+    <li>Optimizer: AdamW</li>
+    <li>Learning rate: 1e-4</li>
+    <li>Epochs: 1</li>
+    <li>Train batch size: 1</li>
+    <li>Eval batch size: 4</li>
+    <li>Gradient accumulation steps: 8</li>
+    <li>Sequence length: 412</li>
+  </ul>
+
+* LoRA configs:
+  <ul>
+    <li>Rank: 16</li>
+    <li>Alpha: 16</li>
+    <li>Dropout: 0.05</li>
+  </ul>
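
For illustration, the hyperparameters added above map onto a standard transformers + peft + bitsandbytes setup roughly as sketched below. This is a reconstruction, not the author's training script: the base checkpoint name (`microsoft/Phi-4-mini-instruct`), the `target_modules` list, and the output directory are assumptions the card does not state, and dataset loading is omitted.

```python
# Illustrative reconstruction of the fine-tuning setup described in this card.
# Assumptions (not stated in the card): base checkpoint, target_modules, output_dir.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE_MODEL = "microsoft/Phi-4-mini-instruct"  # assumption: card only says "phi-4-mini"

# "Quantization: 8-bit" -- load the base model in 8-bit via bitsandbytes
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
# "Sequence length: 412" would apply at tokenization time, e.g.
# tokenizer(text, truncation=True, max_length=412)

# "LoRA configs": rank 16, alpha 16, dropout 0.05
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["qkv_proj", "o_proj"],  # assumption: card does not name target modules
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only the adapter weights train

# "Training details": AdamW, lr 1e-4, 1 epoch, batch sizes 1/4, grad accumulation 8
training_args = TrainingArguments(
    output_dir="phi4-mini-linkedin-lora",  # assumption: placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=1e-4,
    optim="adamw_torch",
)
```

Two consequences of these values worth noting: with rank and alpha both 16, the LoRA scaling factor (alpha / r) is 1, so adapter updates are applied at their raw magnitude; and with a per-device train batch of 1 and gradient accumulation of 8, the effective batch size on a single device is 8.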