Upload README.md with huggingface_hub
README.md
CHANGED
@@ -7,6 +7,7 @@ tags:
 - lora
 - peft
 - medgemma
+- gguf
 language:
 - en
 library_name: peft
@@ -24,20 +25,45 @@ Trained models for clinical note simplification - translating medical documents
 | **gemma-2b-dpo** | gemma-2-2b-it | DPO comparison | **73%** | **82%** | 61% |
 | **gemma-9b-dpo** | gemma-2-9b-it | Teacher model | 79% | 91% | 70% |
 
-##
+## GGUF for Mobile/Local Inference
+
+Pre-quantized GGUF models (Q4_K_M, ~1.6GB each) for llama.cpp, Ollama, LM Studio:
+
+| File | Description | Download |
+|------|-------------|----------|
+| `gguf/gemma-2b-distilled-q4_k_m.gguf` | Distilled model (better patient communication) | [Download](https://huggingface.co/dejori/note-explain/resolve/main/gguf/gemma-2b-distilled-q4_k_m.gguf) |
+| `gguf/gemma-2b-dpo-q4_k_m.gguf` | DPO model (higher accuracy) | [Download](https://huggingface.co/dejori/note-explain/resolve/main/gguf/gemma-2b-dpo-q4_k_m.gguf) |
+
+### Quick Start with Ollama
+
+```bash
+# Download and run
+ollama run hf.co/dejori/note-explain:gemma-2b-distilled-q4_k_m.gguf
+```
+
+### Quick Start with llama.cpp
+
+```bash
+# Download
+wget https://huggingface.co/dejori/note-explain/resolve/main/gguf/gemma-2b-distilled-q4_k_m.gguf
+
+# Run
+./llama-cli -m gemma-2b-distilled-q4_k_m.gguf -p "Simplify this clinical note for a patient: [your note]"
+```
+
+## LoRA Adapters
+
+For fine-tuning or full-precision inference:
 
 ```python
 from peft import PeftModel
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-# Load the distilled model
+# Load the distilled model
 base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")
 tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
 model = PeftModel.from_pretrained(base_model, "dejori/note-explain", subfolder="gemma-2b-distilled")
 
-# Or load the DPO model (higher accuracy)
-model = PeftModel.from_pretrained(base_model, "dejori/note-explain", subfolder="gemma-2b-dpo")
-
 # Generate
 prompt = "Simplify this clinical note for a patient:\n\n[clinical note]\n\nSimplified version:"
 inputs = tokenizer(prompt, return_tensors="pt")
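Aside on the snippet above: it hard-codes the prompt template inline. If you are calling the model repeatedly, the template can be factored into a small helper; a minimal sketch (the `build_prompt` name and the sample note are illustrative, not part of the repo):

```python
def build_prompt(note: str) -> str:
    # Same template as in the README snippet: instruction, note, answer cue.
    return (
        "Simplify this clinical note for a patient:\n\n"
        f"{note}\n\n"
        "Simplified version:"
    )

# Hypothetical note text:
prompt = build_prompt("Pt w/ HTN presents with acute exacerbation of COPD.")
```

The resulting `prompt` string can be passed to `tokenizer(...)` exactly as in the snippet above.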
@@ -54,17 +80,6 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 Training data: [dejori/note-explain](https://huggingface.co/datasets/dejori/note-explain)
 
-## Citation
-
-```bibtex
-@misc{noteexplain2026,
-  title={NoteExplain: Privacy-First Clinical Note Simplification},
-  author={Dejori, Mathaeus},
-  year={2026},
-  publisher={HuggingFace}
-}
-```
-
 ## License
 
 Apache 2.0
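Not covered by the diff: instead of pulling through the `hf.co/...` syntax shown in the Ollama quick start, you can register a local copy of the GGUF file under a short name with a Modelfile. A minimal sketch, assuming the file was downloaded as in the llama.cpp section (the model name `note-explain` and the system prompt are my own choices, not from the repo):

```
FROM ./gemma-2b-distilled-q4_k_m.gguf
SYSTEM """Simplify clinical notes into plain language a patient can understand."""
```

Then `ollama create note-explain -f Modelfile` builds the local model and `ollama run note-explain` uses it like any other installed model.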