# Model Card for sarvam_finetuned_output
This model is a LoRA adapter fine-tuned from sarvamai/sarvam-translate, optimized for English-to-Kashmiri translation. It was trained using Unsloth and TRL.
## Model Details
- Base Model: sarvamai/sarvam-translate (Gemma 3 based)
- Adapter Type: LoRA (Low-Rank Adaptation)
- Language Pair: English to Kashmiri (Perso-Arabic script)
- Frameworks: Unsloth, PEFT, TRL, Transformers
## Quick start
To run inference, load the base model with `transformers` and attach the adapter with `peft`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base_model_id = "sarvamai/sarvam-translate"
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load the LoRA adapter on top of the base model
adapter_model_id = "GAASH-Lab/Sarvam-Kashmiri-finetuned"
model = PeftModel.from_pretrained(model, adapter_model_id)

# Build the chat-formatted prompt
input_text = "Where do you live?"
messages = [
    {"role": "system", "content": "Translate the text below to Kashmiri."},
    {"role": "user", "content": input_text},
]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly generated tokens
with torch.no_grad():
    outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```
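For deployment you can optionally merge the adapter into the base weights so the model loads without `peft`. A minimal sketch continuing from the snippet above (the save directory is illustrative):

```python
# Fold the LoRA weights into the base model and drop the PEFT wrapper.
merged = model.merge_and_unload()

# Save the standalone model and tokenizer (directory name is illustrative).
merged.save_pretrained("sarvam-translate-kashmiri-merged")
tokenizer.save_pretrained("sarvam-translate-kashmiri-merged")
```

The merged checkpoint behaves like a regular `transformers` model, at the cost of storing a full copy of the weights.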
## Training procedure
This model was trained with the following hyperparameters:
- LoRA Rank (r): 16
- LoRA Alpha: 16
- Target Modules: `k_proj`, `o_proj`, `down_proj`, `up_proj`, `gate_proj`, `v_proj`, `q_proj`
- Batch Size: 16 per device × 4 gradient accumulation steps = 64 effective (inferred from defaults)
- Learning Rate: 2e-4
- Epochs: 3
- Optimizer: AdamW 8-bit
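As a rough guide to reproduction, the settings above map to an Unsloth + TRL setup along these lines. This is a hedged sketch, not the exact training script; `max_seq_length`, the toy dataset, and `output_dir` are assumptions.

```python
from unsloth import FastLanguageModel  # import unsloth first so its patches apply
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sarvamai/sarvam-translate",
    max_seq_length=2048,  # assumption, not stated in this card
)

# Attach a LoRA adapter with the hyperparameters listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["k_proj", "o_proj", "down_proj", "up_proj",
                    "gate_proj", "v_proj", "q_proj"],
)

# Toy stand-in for the English-Kashmiri corpus (see the Dataset section).
train_dataset = Dataset.from_list([{
    "messages": [
        {"role": "system", "content": "Translate the text below to Kashmiri."},
        {"role": "user", "content": "Where do you live?"},
        {"role": "assistant", "content": "<Kashmiri translation>"},
    ]
}])

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=train_dataset,
    args=SFTConfig(
        per_device_train_batch_size=16,
        gradient_accumulation_steps=4,  # 16 x 4 = 64 effective batch size
        learning_rate=2e-4,
        num_train_epochs=3,
        optim="adamw_8bit",
        output_dir="sarvam_finetuned_output",
    ),
)
trainer.train()
```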
## Dataset
The model was fine-tuned on a parallel corpus of English-Kashmiri sentence pairs.
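The corpus itself is not described further here. As a sketch of how such pairs could be mapped into the chat format the Quick start uses (the column names `en` and `ks` and the placeholder translation are assumptions):

```python
from datasets import Dataset

# Hypothetical raw parallel data; column names "en" and "ks" are assumptions.
raw_pairs = Dataset.from_list([
    {"en": "Where do you live?", "ks": "<Kashmiri translation>"},
])

def to_messages(example):
    # Mirror the system prompt used at inference time (see Quick start).
    return {"messages": [
        {"role": "system", "content": "Translate the text below to Kashmiri."},
        {"role": "user", "content": example["en"]},
        {"role": "assistant", "content": example["ks"]},
    ]}

train_dataset = raw_pairs.map(to_messages, remove_columns=["en", "ks"])
```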
## Framework versions
- PEFT: 0.18.1
- TRL: 0.24.0
- Transformers: 4.57.3
- PyTorch: 2.9.1
- Datasets: 4.3.0
- Tokenizers: 0.22.2
## Citations
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```