---
language: [ta]
base_model: Qwen/Qwen3-1.7B
tags: [fine-tuned, indian-languages, rajasthan, lora, ai4bharat]
license: apache-2.0
---
# AI4Bharat State Expert — Rajasthan

Qwen3-1.7B fine-tuned on the AI4Bharat Indic Languages dataset for Rajasthan. Trained at the SRMIST Vadapalani AI4Bharat Tune-Athon (Feb 26, 2026); final checkpoint saved.
## Training

| Param | Value |
|---|---|
| Base | Qwen3-1.7B (4-bit) |
| LoRA rank | 64 |
| LoRA alpha | 128 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Sequence length | 2048 |
| Dataset splits | indic + conv + cult |
| Hardware | Apple M3 iMac 16GB |
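The LoRA hyperparameters above can be sketched as the keyword arguments that `peft.LoraConfig` accepts (a sketch: `task_type` is an assumption, since the card does not state the exact trainer configuration):

```python
# LoRA hyperparameters from the table above, written as peft.LoraConfig
# keyword arguments. task_type is an assumption not stated in the card.
lora_kwargs = {
    "r": 64,            # LoRA rank
    "lora_alpha": 128,  # scaling numerator: alpha / r = 2.0
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    "task_type": "CAUSAL_LM",
}

# Effective scaling factor applied to the adapter output (alpha / rank).
scaling = lora_kwargs["lora_alpha"] / lora_kwargs["r"]
print(scaling)  # → 2.0
```

With alpha set to twice the rank, the adapter updates are scaled by 2.0 relative to the base weights, a common choice for high-rank LoRA runs.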
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ForgottenAreeb/AI4Bharat-State-Expert-Rajasthan"
model = AutoModelForCausalLM.from_pretrained(repo)
tok = AutoTokenizer.from_pretrained(repo)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "What is the capital of Rajasthan?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```
This is a merged FP16 model — no PEFT adapter loading is required.
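On memory-constrained machines (the card's own training box had 16 GB), the merged weights can be loaded directly in half precision rather than the default fp32 upcast. A minimal sketch using the standard `transformers` loading arguments (`device_map="auto"` additionally requires `accelerate`):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the merged FP16 checkpoint in half precision to roughly halve
# memory use versus an fp32 upcast; device_map places weights automatically.
model = AutoModelForCausalLM.from_pretrained(
    "ForgottenAreeb/AI4Bharat-State-Expert-Rajasthan",
    torch_dtype=torch.float16,
    device_map="auto",
)
```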