---
license: apache-2.0
tags:
- pharmacy
- clinical-decision-support
- medication-review
- mistral
- fine-tuned
base_model: mistralai/Mistral-7B-Instruct-v0.1
model_name: markoste-clinical-mistral
language: en
datasets:
- Markolo96/Mistral_DB
---

# 💊 Markoste Clinical Mistral (7B)
_A pharmacist-optimized large language model fine-tuned on Australian Product Information (PI) Q&A data for clinical medication review support._

## 🧠 Model Purpose
Markoste Clinical Mistral is fine-tuned to answer medication-specific clinical questions such as:
- Dosing adjustments in renal impairment
- Contraindications
- IV administration instructions
- Interactions and precautions
- Pharmacokinetic properties
- Pregnancy & breastfeeding guidance

## 🧪 Training
Fine-tuned using **LoRA** on top of `mistralai/Mistral-7B-Instruct-v0.1` with Q&A pairs derived from structured TGA PI documents.

## ⚙️ Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Markolo96/markoste-clinical-mistral", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Markolo96/markoste-clinical-mistral")

prompt = "<|user|>\nWhat are the indications for erythromycin lactobionate?\n<|assistant|>"
# Move inputs to the model's device rather than hard-coding "cuda",
# so the example also works with device_map="auto" placement.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```