---
language:
  - en
tags:
  - medical
  - llama
  - heart-disease
  - healthcare
  - instruction-tuned
  - awareness
  - causal-lm
model_name: CardioMed-LLaMA3.2-1B
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
  - custom
library_name: transformers
pipeline_tag: text-generation
license: mit
---

# 🫀 CardioMed-LLaMA3.2-1B

**CardioMed-LLaMA3.2-1B** is a domain-adapted, instruction-tuned language model, fine-tuned with LoRA on heart-disease-related medical prompts on top of `meta-llama/Llama-3.2-1B-Instruct`.

This model is designed to generate structured **medical abstracts and awareness information** about cardiovascular diseases such as stroke, myocardial infarction, and hypertension.

---

## ✨ Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Run in fp16 on GPU when available; fall back to fp32 on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained("rajkumar/CardioMed-LLaMA3.2-1B")
model = AutoModelForCausalLM.from_pretrained(
    "rajkumar/CardioMed-LLaMA3.2-1B", torch_dtype=dtype
).to(device)

# Alpaca-style prompt used during fine-tuning (see "Prompt Format" below).
prompt = """### Instruction:
Provide an abstract and awareness information for the following disease: Myocardial Infarction

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
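For less templated, more varied outputs, sampling parameters can be passed to `generate`; the values below are illustrative starting points, not tuned settings:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.7,   # illustrative values; tune for your use case
    top_p=0.9,
)
```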

---

## 🧠 Use Cases

- Patient education for cardiovascular conditions  
- Early awareness chatbots  
- Clinical NLP augmentation  
- Health-tech research assistants

---

## 🔧 Fine-tuning Details

- **Base model:** `meta-llama/Llama-3.2-1B-Instruct`  
- **Fine-tuning method:** PEFT (LoRA)  
- **LoRA target modules:** `q_proj`, `v_proj` (see the sketch after this list)  
- **Dataset size:** 3,209 instruction-response pairs (custom medical JSONL)  
- **Instruction format:** Alpaca-style (`### Instruction` / `### Response`)  
- **Max sequence length:** 512 tokens  
- **Framework:** Hugging Face Transformers + PEFT
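
A minimal sketch of the corresponding PEFT setup; `r`, `lora_alpha`, and `lora_dropout` are illustrative assumptions, not the exact training hyperparameters:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

# Adapters on the attention query/value projections, as listed above.
config = LoraConfig(
    r=16,                  # illustrative rank
    lora_alpha=32,         # illustrative scaling factor
    lora_dropout=0.05,     # illustrative dropout
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```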

---

## 🧪 Prompt Format

```text
### Instruction:
Provide an abstract and awareness information for the following disease: Stroke

### Response:
```

The model will generate:
- ✅ Abstract  
- ✅ Awareness & prevention guidelines  
- ✅ Structured medical info
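
For programmatic use, a minimal helper that fills this template (the function name is an illustrative assumption, not part of the model's API):

```python
def build_prompt(disease: str) -> str:
    """Fill the Alpaca-style template the model was fine-tuned on."""
    return (
        "### Instruction:\n"
        "Provide an abstract and awareness information for the "
        f"following disease: {disease}\n\n"
        "### Response:\n"
    )

print(build_prompt("Stroke"))
```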

---

## 📄 License

This model is licensed under the **MIT License** and intended for **educational and research purposes only**.