---
language: en
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
  - medical
  - healthcare
  - lab-reports
  - llama
  - qlora
  - peft
datasets:
  - lavita/ChatDoctor-HealthCareMagic-100k
  - lavita/ChatDoctor-iCliniq
---

# MedLLaMA-3.2-3B: AI Lab Report Analyzer

## Model Description
This is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) 
trained on medical Q&A data to answer patient queries about lab reports and health conditions.

## Training Details
- **Base Model:** LLaMA-3.2-3B-Instruct
- **Method:** QLoRA (4-bit quantization + LoRA rank 16)
- **Dataset:** ChatDoctor-HealthCareMagic-100k + ChatDoctor-iCliniq (~10k examples)
- **Epochs:** 2
- **Hardware:** NVIDIA T4 (Google Colab)
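The QLoRA setup above can be sketched as a quantization + adapter config. This is a minimal illustration, not the exact training script: the 4-bit settings and rank 16 are stated above, while `lora_alpha`, `lora_dropout`, and `target_modules` are common defaults assumed here.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization for the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# LoRA adapters at rank 16 as stated above; alpha, dropout, and
# target modules are illustrative defaults, not the exact values used
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

With this pairing, only the small LoRA matrices are trained in higher precision while the 3B base weights stay frozen in 4-bit, which is what makes fine-tuning feasible on a single T4.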

## Intended Use
- Answering patient questions about lab report values
- Explaining medical terminology in plain language
- Providing general health information

## ⚠️ Limitations & Disclaimer
This model is for **educational and informational purposes only**.
It is **NOT a substitute for professional medical advice, diagnosis, or treatment.**
Always consult a qualified healthcare provider for medical decisions.

## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

base_model = AutoModelForCausalLM.from_pretrained(
    'meta-llama/Llama-3.2-3B-Instruct',
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,  # match adapter compute dtype
    ),
    device_map='auto'
)
model = PeftModel.from_pretrained(base_model, 'jb10231/MedLLaMA-3.2-3B-LabReport')
tokenizer = AutoTokenizer.from_pretrained('jb10231/MedLLaMA-3.2-3B-LabReport')
```
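Once the model and tokenizer are loaded as above, inference follows the standard LLaMA chat-template flow. A sketch (the example question and generation parameters are illustrative, not recommendations):

```python
messages = [
    {"role": "user", "content": "My hemoglobin is 10.2 g/dL. What does that mean?"}
]

# Format the conversation with the model's chat template
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors='pt'
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Remember the disclaimer above: outputs are informational only and should never be treated as medical advice.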