# VaidhLLaMA-3.2-3B-Instruct
VaidhLLaMA-3.2-3B-Instruct is a specialized Large Language Model fine-tuned for the domain of Ayurveda. It is built upon the Llama-3.2-3B-Instruct architecture and has been optimized to understand and reason with Ayurvedic concepts, physiology (Sharir Kriya), and clinical applications.
## Model Details
- Model Name: VaidhLLaMA-3.2-3B-Instruct
- Base Model: meta-llama/Llama-3.2-3B-Instruct
- Developed By: Vivekdas
- Languages: English, Hindi, Sanskrit (domain-specific terminology)
- License: Llama 3.2 Community License
- Architecture: Transformer-based Auto-Regressive Language Model
## Performance
On the BhashaBench-Ayur benchmark, VaidhLLaMA outperforms its base model and other similarly sized models on zero-shot, domain-specific question answering.
| Model | Zero-Shot Accuracy (%) | Notes |
|---|---|---|
| VaidhLLaMA-3.2-3B | 41.91 | Fine-tuned Ayurveda specialist |
| Llama-3.2-3B-Instruct | 40.74 | Base model |
| Llama-3.2-1B | 27.58 | Smaller baseline |
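For reference, zero-shot accuracy on a multiple-choice benchmark like BhashaBench-Ayur is simply the fraction of questions answered correctly. A minimal scoring sketch (the function name and sample answers are illustrative, not part of the benchmark harness):

```python
def zero_shot_accuracy(predictions, gold):
    """Percentage of questions where the predicted choice matches the gold answer."""
    if len(predictions) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# Hypothetical run over 5 multiple-choice questions
preds = ["B", "C", "A", "D", "B"]
gold  = ["B", "C", "A", "A", "C"]
print(f"{zero_shot_accuracy(preds, gold):.2f}%")  # 60.00%
```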
## Intended Use
This model is designed for:
- Answering questions related to Ayurvedic medical science.
- Explaining concepts from classical Ayurvedic texts (Samhitas).
- Assisting researchers and students in the field of Ayurveda.
**Disclaimer:** This model is for educational and research purposes only. It should not be used as a substitute for professional medical advice, diagnosis, or treatment.
## Usage
You can run this model with the `transformers` library:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vivekdas/VaidhLLaMA-3.2-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are VaidhLLaMA, an expert AI assistant for Ayurveda."},
    {"role": "user", "content": "Explain the concept of Tridosha in Ayurveda."},
]

# Build the prompt with the model's chat template and move it to the model's device
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
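For multi-turn conversations, the full message history is re-passed through `apply_chat_template` on each turn. A minimal bookkeeping sketch (the `extend_chat` helper is illustrative, not part of the model's API):

```python
def extend_chat(messages, user_text, assistant_text):
    """Return a new history with one user turn and the model's reply appended."""
    return messages + [
        {"role": "user", "content": user_text},
        {"role": "assistant", "content": assistant_text},
    ]

history = [
    {"role": "system", "content": "You are VaidhLLaMA, an expert AI assistant for Ayurveda."}
]
# After generating a reply, fold the exchange back into the history,
# then pass `history` through apply_chat_template for the next turn.
history = extend_chat(
    history,
    "Explain the concept of Tridosha in Ayurveda.",
    "<model reply goes here>",
)
print(len(history))  # 3
```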
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{vaidhllama2024,
  author       = {Vivekdas},
  title        = {VaidhLLaMA: A Fine-Tuned LLM for Ayurveda},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Vivekdas/VaidhLLaMA-3.2-3B-Instruct}},
}
```