# BharatLLM BTech -- Engineering Education LoRA
A QLoRA adapter for Mistral-7B, fine-tuned on 815,906 engineering Q&A pairs across 11 BTech departments and 552 subjects.
Part of the BharatLLM project, a family of 13 LoRA adapters (12 K-12 language adapters + 1 BTech engineering adapter).
## Model Details
| Property | Value |
|---|---|
| Base Model | mistralai/Mistral-7B-Instruct-v0.3 |
| Method | QLoRA (4-bit quantization + LoRA, r=64) |
| Trainable Parameters | 167,772,160 (2.26% of 7.4B) |
| Training Library | Unsloth |
| Language | English |
| Domain | BTech Engineering (11 departments, 552 subjects) |
| Training Data | 815,906 Q&A pairs |
| Difficulty Levels | Easy (369K), Medium (275K), Hard (172K) |
| License | Apache 2.0 |
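
For reference, r=64 applied to all seven attention and MLP projection matrices of Mistral-7B yields exactly the 167,772,160 trainable parameters listed above (5,242,880 per layer across 32 layers). Below is a minimal sketch of a comparable Unsloth QLoRA setup; the `lora_alpha` and dropout values are assumptions, not taken from this card:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Mistral-7B-Instruct-v0.3",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA: 4-bit quantized base weights
)
# r=64 on all seven projections reproduces the trainable-parameter
# count in the table: 167,772,160 across 32 layers.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,      # assumption: not stated in the card
    lora_dropout=0.0,   # assumption
    bias="none",
    use_gradient_checkpointing="unsloth",
)
```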
## Departments Covered
| Code | Department | Entries | Subjects |
|---|---|---|---|
| CSE | Computer Science & Engineering | 81,301 | 54 |
| ME | Mechanical Engineering | 79,369 | 52 |
| CE | Civil Engineering | 78,373 | 50 |
| ECE | Electronics & Communication | 76,377 | 50 |
| EEE | Electrical & Electronics | 76,306 | 49 |
| IT | Information Technology | 72,739 | 51 |
| CH | Chemical Engineering | 71,265 | 46 |
| CSBS | CS & Business Systems | 71,138 | 48 |
| CSE_DS | CSE (Data Science) | 70,799 | 50 |
| CSE_IOT | CSE (Internet of Things) | 69,237 | 51 |
| CSE_AIML | CSE (AI & Machine Learning) | 69,002 | 51 |
| **Total** | **11 Departments** | **815,906** | **552** |
## Quick Start (Unsloth)
```python
from unsloth import FastLanguageModel

# Loads the 4-bit base model and attaches the adapter in one call.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="FoundryAILabs/bharat-btech-7b-lora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

prompt = "[INST] <<SYS>>\nYou are BharatLLM, an expert engineering tutor.\n<</SYS>>\n\nExplain Dijkstra's shortest path algorithm with time complexity. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# do_sample=True is required for temperature to take effect;
# without it, generation is greedy and temperature is ignored.
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
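
Instead of hand-building the `[INST]` prompt, you can let the tokenizer construct it via `apply_chat_template`. A minimal sketch, assuming the adapter also responds well without the `<<SYS>>` block (the stock Mistral-v0.3 chat template has no system role, so the tutor instruction is folded into the user turn here):

```python
# Build the prompt with the tokenizer's own chat template.
messages = [
    {"role": "user", "content": (
        "You are BharatLLM, an expert engineering tutor.\n\n"
        "Explain Dijkstra's shortest path algorithm with time complexity."
    )},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```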
## Using with Hugging Face Transformers
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit load via BitsAndBytesConfig (the bare load_in_4bit kwarg is deprecated)
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3", quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = PeftModel.from_pretrained(base, "FoundryAILabs/bharat-btech-7b-lora")
```
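
If you need a standalone checkpoint (e.g., for conversion to other serving formats), the adapter can be merged into the base weights. A minimal sketch, assuming the output directory name; merging is most reliable against an unquantized base, so the base is loaded in fp16 here rather than 4-bit:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load an unquantized fp16 base, attach the adapter, and fold it in.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3", torch_dtype=torch.float16, device_map="auto"
)
merged = PeftModel.from_pretrained(base, "FoundryAILabs/bharat-btech-7b-lora").merge_and_unload()
merged.save_pretrained("bharat-btech-7b-merged")  # hypothetical output path
```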
**Website:** [foundryailabs.io](https://foundryailabs.io) | **GitHub:** [github.com/foundryailabs/BharatLLM](https://github.com/foundryailabs/BharatLLM)