Update README.md
## Model Card: QwenMedic-v1
### Overview
QwenMedic-v1 is a medical-specialty adaptation of the Qwen3-1.7B causal language model, fine-tuned for clinical reasoning and instruction-following tasks. It was trained for **1 epoch** on two curated medical datasets to improve diagnostic Q&A and clinical summarization.
- May produce **hallucinations** or plausible-sounding but incorrect advice
- **Biases** due to training-data coverage
- **Not FDA-approved**—should not replace professional medical judgment
- Avoid feeding **patient-identifiable** data without proper de-identification
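On the de-identification point, here is a minimal illustrative scrub of a few obvious identifier patterns. The regexes and placeholder tokens below are hypothetical examples, not part of this model card; real de-identification should use a vetted tool and human review, not ad-hoc regexes.

```python
import re

# Illustrative only: scrubs a few obvious identifier patterns.
# Real de-identification requires a dedicated, validated tool.
PATTERNS = [
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US-style phone
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # SSN-shaped number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
]

def scrub(text: str) -> str:
    """Replace each matched identifier pattern with a placeholder token."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(scrub("Call 555-123-4567 or email jane.doe@example.com"))
```

This catches only well-formed patterns; free-text names, addresses, and dates pass through untouched, which is exactly why a regex pass alone is not sufficient.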
### Summary of Final Training Metrics
|
| 61 |
+
|
| 62 |
+
| Metric | Step | Smoothed | Raw Value |
|
| 63 |
+
|------------------:|-----:|---------:|----------:|
|
| 64 |
+
| **Epoch** | 1539 | 0.9979 | 0.9997 |
|
| 65 |
+
| **Gradient Norm** | 1539 | 0.3882 | 0.3974 |
|
| 66 |
+
| **Learning Rate** | 1539 | — | 0 |
|
| 67 |
+
| **Training Loss** | 1539 | 1.5216 | 1.4703 |
|
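The Smoothed column follows the usual TensorBoard convention of exponential-moving-average smoothing over logged steps. A minimal sketch of one common variant (the weight of 0.9 and seeding from the first value are assumptions for illustration, not taken from this training run):

```python
def ema_smooth(values, weight=0.9):
    """Exponential moving average, TensorBoard-style: each smoothed point
    blends the previous smoothed value with the new raw value."""
    smoothed, last = [], values[0]  # seed from the first raw value
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed

losses = [2.1, 1.9, 1.8, 1.6, 1.5]
print(ema_smooth(losses))  # trails the raw losses, changing more slowly
```

With a weight near 1 the smoothed curve lags the raw one, which is why the smoothed training loss (1.5216) sits slightly above the final raw value (1.4703).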
### Inference Example