MedLLaMA-3.2-3B: AI Lab Report Analyzer
Model Description
This is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct trained on medical Q&A data to answer patient queries about lab reports and health conditions.
Training Details
- Base Model: LLaMA-3.2-3B-Instruct
- Method: QLoRA (4-bit quantization + LoRA rank 16)
- Dataset: MedQuAD + iCliniq (~10k examples)
- Epochs: 2
- Hardware: NVIDIA T4 (Google Colab)
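The QLoRA setup above can be sketched as follows. This is a minimal illustration, not the author's recorded training config: only the rank (16) and 4-bit quantization come from the card; the alpha, dropout, target modules, and compute dtype are assumed common defaults (fp16 is chosen because the T4 lacks bfloat16 support).

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit quantization of the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumption: NF4 is the usual QLoRA choice
    bnb_4bit_compute_dtype=torch.float16,   # fp16 compute; T4 has no bf16 support
)

# Low-rank adapters trained on top of the quantized model
lora_config = LoraConfig(
    r=16,                                   # LoRA rank, as stated in the card
    lora_alpha=32,                          # assumption: common 2*r default
    lora_dropout=0.05,                      # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```

With this setup, only the adapter weights (a few million parameters) are trained, which is what makes fine-tuning a 3B model feasible on a single T4 in Colab.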
Intended Use
- Answering patient questions about lab report values
- Explaining medical terminology in plain language
- Providing general health information
⚠️ Limitations & Disclaimer
This model is for educational and informational purposes only. It is NOT a substitute for professional medical advice, diagnosis, or treatment. Always consult a qualified healthcare provider for medical decisions.
Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Load the base model in 4-bit to fit on a consumer GPU
base_model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-3.2-3B-Instruct',
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map='auto',
)

# Attach the fine-tuned LoRA adapter and load the matching tokenizer
model = PeftModel.from_pretrained(base_model, 'jb10231/MedLLaMA-3.2-3B-LabReport')
tokenizer = AutoTokenizer.from_pretrained('jb10231/MedLLaMA-3.2-3B-LabReport')

# Ask a question using the model's chat template
messages = [{"role": "user", "content": "What does a high ALT value on a blood test mean?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note: the base model is gated; you must accept the Llama 3.2 license on Hugging Face and authenticate before downloading it.