---
language:
- en
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
tags:
- professional
- linkedin
---

* Parameter-efficient, instruction-fine-tuned Phi-4-mini model.
* Fine-tuned with LoRA.
* Trained on LinkedIn posts covering a variety of themes (see the usage sketch below).
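A minimal inference sketch, assuming the LoRA adapter is published as a separate `peft` repo; the adapter repo id below is a placeholder, not the actual id:

```python
# Load the base model, attach the LoRA adapter, and generate with the chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-4-mini-instruct"
adapter_id = "your-username/phi4-mini-linkedin-lora"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach LoRA weights

messages = [{"role": "user", "content": "Write a LinkedIn post about starting a new role."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```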
* Training details (see the training-setup sketch below):
  * Training set size: 2,643 examples
  * Quantization: 8-bit
  * Optimizer: AdamW
  * Learning rate: 1e-4
  * Epochs: 1
  * Train batch size: 1
  * Eval batch size: 4
  * Gradient accumulation steps: 8
  * Sequence length: 412
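These hyperparameters map onto a `transformers` training setup roughly as follows; this is a sketch assuming the standard Hugging Face `Trainer` workflow, with model and dataset loading omitted:

```python
from transformers import BitsAndBytesConfig, TrainingArguments

# 8-bit quantization is applied when loading the base model, e.g.
# AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config).
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

training_args = TrainingArguments(
    output_dir="phi4-mini-linkedin-lora",  # placeholder output path
    optim="adamw_torch",                   # AdamW
    learning_rate=1e-4,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,         # effective train batch size of 1 x 8 = 8
)
# The 412-token sequence length is enforced at tokenization time
# (or via max_seq_length when using trl's SFTTrainer).
```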
* LoRA configs (see the `LoraConfig` sketch below):
  * Rank (r): 16
  * Alpha: 16
  * Dropout: 0.05
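Expressed as a `peft` `LoraConfig`, this corresponds roughly to the sketch below; `target_modules` is an assumption, since the card does not say which projections were adapted:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,              # rank
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    # Assumed target modules: the attention projections below are a common
    # choice for Phi-family models, but the card does not specify them.
    target_modules=["qkv_proj", "o_proj"],
)
```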