---
base_model: wizardoftrap/LFM2.5-1.2B-hi-it
tags:
- text-generation-inference
- transformers
- unsloth
- lfm2
license: apache-2.0
language:
- en
- hi
datasets:
- wizardoftrap/indianHistoryEnhanced
---

# Indian History LLM – LFM2.5-1.2B Fine-tuned

- **Model name:** `wizardoftrap/LFM2.5-1.2B-his`
- **Base model:** `wizardoftrap/LFM2.5-1.2B-hi-it`
- **Fine-tuning method:** LoRA with Unsloth
- **Domain:** Indian History
- **Author:** Shiv Prakash (wizardoftrap)

---

## Overview

- **Indian History LLM** is a domain-adapted version of **wizardoftrap/LFM2.5-1.2B-hi-it**, fine-tuned to answer questions on Indian History.
- The model is optimized to behave like a **history tutor**, producing concise, exam-style answers aligned with Indian curricula.
- The model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
- Fine-tuning was performed on an NVIDIA L4 GPU.

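A minimal usage sketch with the `transformers` text-generation pipeline. The helper names, system prompt, and chat-message format below are illustrative assumptions, not part of the released model card; consult the base model's chat template for the exact layout.

```python
# Hypothetical usage sketch; the system prompt and chat format are
# assumptions, not part of the released model card.
def build_chat(question: str) -> list:
    """Wrap a history question in a chat-message list (assumed format)."""
    return [
        {"role": "system",
         "content": "You are an Indian History tutor. Answer concisely, exam-style."},
        {"role": "user", "content": question},
    ]

def ask(question: str, model_id: str = "wizardoftrap/LFM2.5-1.2B-his") -> str:
    """Generate an answer; downloads the model weights on first call."""
    from transformers import pipeline  # lazy import: requires `transformers`
    generator = pipeline("text-generation", model=model_id)
    out = generator(build_chat(question), max_new_tokens=256)
    return out[0]["generated_text"][-1]["content"]
```

Calling `ask("Who founded the Maurya Empire?")` would load the checkpoint and return a short, tutor-style answer.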
---

## Training Dataset

Fine-tuned on:
[**`wizardoftrap/indianHistoryEnhanced`**](https://huggingface.co/datasets/wizardoftrap/indianHistoryEnhanced)
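The dataset can be pulled with the Hugging Face `datasets` library; a small sketch, assuming the default `train` split (check the dataset card for the actual splits and column names):

```python
# Sketch only: loading the fine-tuning dataset. The split name is an
# assumption; inspect the dataset card for actual splits and columns.
def load_history_dataset(split: str = "train"):
    from datasets import load_dataset  # lazy import: requires `datasets`
    return load_dataset("wizardoftrap/indianHistoryEnhanced", split=split)
```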