---
language: en
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- medical
- healthcare
- lab-reports
- llama
- qlora
- peft
datasets:
- lavita/ChatDoctor-HealthCareMagic-100k
- lavita/ChatDoctor-iCliniq
---

# MedLLaMA-3.2-3B: AI Lab Report Analyzer
## Model Description
This model is a fine-tuned version of `meta-llama/Llama-3.2-3B-Instruct`, trained on medical Q&A data to answer patient questions about lab report values and general health conditions.
## Training Details
- Base Model: LLaMA-3.2-3B-Instruct
- Method: QLoRA (4-bit quantization + LoRA rank 16)
- Dataset: ChatDoctor-HealthCareMagic-100k + ChatDoctor-iCliniq (~10k examples)
- Epochs: 2
- Hardware: NVIDIA T4 (Google Colab)
## Intended Use
- Answering patient questions about lab report values
- Explaining medical terminology in plain language
- Providing general health information
## ⚠️ Limitations & Disclaimer
This model is for educational and informational purposes only. It is NOT a substitute for professional medical advice, diagnosis, or treatment. Always consult a qualified healthcare provider for medical decisions.
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Load the base model in 4-bit to fit on a single consumer GPU
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B-Instruct",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)

# Attach the fine-tuned LoRA adapter and load the matching tokenizer
model = PeftModel.from_pretrained(base_model, "jb10231/MedLLaMA-3.2-3B-LabReport")
tokenizer = AutoTokenizer.from_pretrained("jb10231/MedLLaMA-3.2-3B-LabReport")
```
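With the model and tokenizer loaded, a question can be run through the Llama 3 chat template. The helper below is a minimal sketch: the system prompt wording and generation parameters are assumptions for illustration, not the exact format used during fine-tuning.

```python
def build_messages(question):
    """Wrap a patient question in a chat-template message list.

    The system prompt is an assumption; adjust it to match your use case.
    """
    return [
        {
            "role": "system",
            "content": (
                "You are a medical assistant that explains lab reports in "
                "plain language. You do not provide diagnoses; always advise "
                "consulting a qualified healthcare provider."
            ),
        },
        {"role": "user", "content": question},
    ]


def ask(model, tokenizer, question, max_new_tokens=256):
    """Generate an answer for a single question (sketch; settings assumed)."""
    input_ids = tokenizer.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, not the prompt
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Example usage: `print(ask(model, tokenizer, "My ALT is 65 U/L. What does that mean?"))`.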