# 🩺 TinyLlama Medical Assistant

A fine-tuned TinyLlama 1.1B model specialized in allopathic medicine, trained using LoRA (Low-Rank Adaptation) on a custom medical dataset.

## 🎯 Features

- **Fine-tuned Model**: Specialized knowledge of 10+ common medicines
- **LoRA Adaptation**: Efficient fine-tuning with only 2.2M trainable parameters
- **4-bit Quantization**: Memory-efficient inference
- **User Authentication**: Role-based access (admin, doctor, student)
- **Medical Disclaimer**: Safety warnings on all responses
- **Interactive UI**: Clean Streamlit interface with adjustable parameters

## 📊 Model Details

- **Base Model**: TinyLlama-1.1B-Chat-v1.0
- **Fine-tuning Method**: LoRA (r=8, alpha=16)
- **Training Data**: 500 medical Q&A pairs
- **Training Accuracy**: 97.83%
- **Medicines Covered**: Paracetamol, Ibuprofen, Amoxicillin, Metformin, Atorvastatin, Amlodipine, Omeprazole, Cetirizine, Azithromycin, Losartan
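Putting the details above together, loading the quantized base model and attaching the LoRA adapter might look roughly like this (a minimal sketch assuming the `tinyllama-medical-lora/` adapter directory from this repo and the Transformers/PEFT/BitsAndBytes stack listed below; the actual `app.py` may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
ADAPTER = "tinyllama-medical-lora"  # local adapter directory from this repo

# 4-bit NF4 quantization keeps the 1.1B model within a few GB of memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb_config, device_map="auto"
)
# Attach the LoRA adapter (r=8, alpha=16) on top of the quantized base
model = PeftModel.from_pretrained(base_model, ADAPTER)
```

Because only the ~2.2M adapter parameters differ from the base model, the adapter directory stays small while the quantized base handles the memory budget.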

## 🚀 Quick Start

### Local Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/medical-assistant.git
cd medical-assistant

# Install dependencies
pip install -r requirements.txt

# Run the app
streamlit run app.py
```

### Login Credentials

```
Username: admin     Password: admin123
Username: doctor    Password: doc123
Username: student   Password: student123
```

## πŸ“ Project Structure

```
medical-assistant/
├── app.py                          # Main Streamlit application
├── requirements.txt                # Python dependencies
├── .streamlit/
│   └── config.toml                # Streamlit configuration
├── tinyllama-medical-lora/        # Fine-tuned model weights
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   └── tokenizer files...
└── README.md
```

## 💡 Example Queries

- "What is Paracetamol used for?"
- "Tell me about Ibuprofen"
- "What is Metformin?"
- "Uses of Amoxicillin"
- "What is Atorvastatin for?"
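TinyLlama-1.1B-Chat-v1.0 uses a Zephyr-style chat template, so queries like the ones above are wrapped before generation. A hand-rolled sketch of that wrapping is shown below for illustration; in practice `tokenizer.apply_chat_template` should be used instead:

```python
def build_prompt(question: str,
                 system: str = "You are a helpful medical assistant.") -> str:
    """Wrap a user question in TinyLlama-Chat's Zephyr-style template.
    Prefer tokenizer.apply_chat_template over hand-rolling this."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{question}</s>\n"
        f"<|assistant|>\n"
    )

print(build_prompt("What is Paracetamol used for?"))
```

The trailing `<|assistant|>` marker is what cues the model to produce the answer turn.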

## βš™οΈ Model Parameters

Adjust these in the sidebar:
- **Temperature** (0.1-1.5): Controls randomness
- **Max Tokens** (32-256): Response length
- **Top-p** (0.1-1.0): Nucleus sampling
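Temperature and top-p both act on the model's next-token distribution. A toy, model-independent illustration of what the sidebar knobs do (pure Python, not the actual inference code):

```python
import math

def sample_filter(logits, temperature=0.7, top_p=0.9):
    """Apply temperature scaling, then keep the smallest set of tokens
    whose cumulative probability reaches top_p (nucleus sampling)."""
    # Temperature < 1 sharpens the distribution; > 1 flattens it
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sort tokens by probability (descending) and keep the nucleus
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept  # token indices still eligible for sampling

# One dominant logit plus low temperature: only that token survives top_p=0.9
print(sample_filter([5.0, 1.0, 0.5, 0.1], temperature=0.7, top_p=0.9))  # → [0]
```

Lower temperature and lower top-p both narrow the candidate set, which is usually what you want for factual medical answers; higher values trade accuracy for variety.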

## ⚠️ Medical Disclaimer

This AI assistant is for educational purposes only. Always consult a qualified healthcare provider for medical advice.

## 🔧 Technical Stack

- **Framework**: Streamlit
- **Model**: TinyLlama 1.1B + LoRA
- **Libraries**: Transformers, PEFT, BitsAndBytes, PyTorch
- **Quantization**: 4-bit NF4

## 📄 License

MIT License

## 👥 Authors

Your Name - Medical AI Research

## πŸ™ Acknowledgments

- The TinyLlama team for the base model
- Hugging Face for the Transformers library
- The PEFT library for its LoRA implementation