In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach.

We first fine-tune the model on a corpus of medical textbooks. Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model.

We show the benefits of our training strategy on a medical question answering dataset.

### Using the model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("raidium/MQG")
model = AutoModelForCausalLM.from_pretrained("raidium/MQG")
```
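
Once loaded, the model can be used for standard causal generation via the `transformers` `generate` API. A minimal sketch; the prompt format and generation parameters below are illustrative, not the official evaluation setup:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("raidium/MQG")
model = AutoModelForCausalLM.from_pretrained("raidium/MQG")

# Illustrative medical prompt; adapt the formatting to your downstream task.
prompt = "Question: What is the first-line treatment for hypertension?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of up to 64 new tokens.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```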

- **Developed by:** Raidium