Mistral-Indic-Chat-LoRA-v1

Lightweight multilingual conversational LoRA adapter built on Mistral-7B

Overview

This model supports English ↔ Hindi translation, Hinglish conversational Q&A, and basic chat-style responses. It focuses on language alignment rather than deep reasoning.

Base Model

Base: mistralai/Mistral-7B-v0.1
Method: LoRA (PEFT)

Training Details

  • LoRA Rank (r): 8
  • Target Modules: q_proj, v_proj
  • Trainable Parameters: ~0.2% of total
  • Epochs: 2
  • Batch Size: 2
  • Max Sequence Length: 256
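
A minimal PEFT configuration matching these settings might look like the sketch below. Only the rank, target modules, epochs, batch size, and sequence length are reported above; lora_alpha and lora_dropout in the sketch are illustrative assumptions.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Sketch of a LoRA setup reproducing the reported hyperparameters.
# lora_alpha and lora_dropout are assumed values, not stated in this card.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_config = LoraConfig(
    r=8,                                   # LoRA rank from Training Details
    target_modules=["q_proj", "v_proj"],   # attention projections listed above
    lora_alpha=16,                         # assumption
    lora_dropout=0.05,                     # assumption
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()         # should report roughly 0.2% trainable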

Dataset

  • Bhasha Wiki Indic Context (English → Hindi, Hindi → English)
  • Synthetic Hinglish conversational data

Data Format

Training examples use a simple plain-text chat template:

User: <user message>
Assistant: <model response>
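
For illustration, a Hinglish sample in this format might look like the lines below; the assistant reply is a made-up example, not a real training record.

User: gravity kya hoti hai?
Assistant: Gravity ek natural force hai jo do objects ko ek dusre ki taraf kheenchti hai.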

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_name = "mistralai/Mistral-7B-v0.1"

# Mistral's tokenizer ships without a pad token, so reuse EOS for padding
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Load the frozen base model, then attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(model_name)
model = PeftModel.from_pretrained(base_model, "YOUR_MODEL_PATH")  # e.g. Neural-Hacker/Mistral-7b-Indic-Chat-LoRA-v1

model.eval()

# Prompts follow the User:/Assistant: format used during training
prompt = "User: gravity kya hoti hai?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.2,
        eos_token_id=tokenizer.eos_token_id,
    )

# Drop the prompt and keep only the assistant's reply
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("Assistant:")[-1].strip())
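
Translation can be prompted in the same chat format. The instruction wording below is an assumption; the card does not document a dedicated translation prompt template.

# Assumed translation-style prompt; adjust the instruction wording as needed.
prompt = "User: Translate to Hindi: Where is the nearest railway station?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Assistant:")[-1].strip())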

Capabilities

  • English → Hindi translation
  • Hindi → English translation
  • Hinglish conversational understanding
  • Chat-style response generation
  • Basic multilingual alignment

Limitations

  • Weak reasoning ability
  • Can hallucinate incorrect outputs
  • Inconsistent for long responses
  • Limited generalization
  • Not production-ready

Evaluation

  • Qualitative evaluation only
  • No standard benchmarks used
  • Observed improvements are mainly in multilingual behavior relative to the base model

Architecture

  • Base model is frozen
  • Only LoRA adapters trained
  • Efficient parameter usage (~0.2%)
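
Because the base weights stay frozen, the adapter can also be folded back into the base model for standalone deployment. A minimal sketch using PEFT's merge_and_unload (paths are placeholders):

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Merge the trained LoRA deltas into the frozen base weights
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
adapted = PeftModel.from_pretrained(base, "YOUR_MODEL_PATH")

merged = adapted.merge_and_unload()              # returns a plain transformers model
merged.save_pretrained("mistral-indic-chat-merged")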

Intended Use

  • Learning and experimentation
  • Multilingual fine-tuning research
  • Indic chatbot prototyping

License

This model is released under the CC BY-NC 4.0 license.
