This is a distilled GPT-2 model fine-tuned for pharmaceutical autocomplete. It suggests drug names and medical terminology from the surrounding clinical context.
Model Metrics:
| Metric | Value |
|---|---|
| Parameters | 81,912,576 |
| Perplexity | 44.07 |
| Inference Speed | 347.9ms |
| Quality Retained | 53.6% |
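Perplexity here is the standard exponentiated mean per-token negative log-likelihood. As a quick sanity check (a minimal sketch, not part of the model's code; the loss value below is back-derived from the table), the reported perplexity of 44.07 corresponds to a mean token loss of roughly 3.79 nats:

```python
import math

def perplexity_from_loss(mean_nll: float) -> float:
    """Perplexity is exp(mean negative log-likelihood per token)."""
    return math.exp(mean_nll)

# A mean token loss of ~3.7858 nats yields the reported perplexity of ~44.07
print(round(perplexity_from_loss(3.7858), 2))
```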
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned model and tokenizer
model = GPT2LMHeadModel.from_pretrained("codehance/distilgpt2-medical-pharma")
tokenizer = GPT2Tokenizer.from_pretrained("codehance/distilgpt2-medical-pharma")

# Generate pharmaceutical suggestions
prompt = "The patient should take"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,            # sampling is needed for multiple distinct sequences
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```
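Each decoded sequence includes the prompt itself. For an autocomplete UI, a small post-processing helper can strip the prompt and trim the continuation to its first short suggestion (a sketch of one possible approach; the function name and stop characters are my own, not part of this model's API):

```python
def extract_suggestion(decoded: str, prompt: str) -> str:
    """Strip the prompt prefix and cut the continuation at the first sentence break."""
    continuation = decoded[len(prompt):] if decoded.startswith(prompt) else decoded
    # Keep only the first clause so the autocomplete suggestion stays short
    for stop in (".", "\n", ";"):
        idx = continuation.find(stop)
        if idx != -1:
            continuation = continuation[:idx]
    return continuation.strip()

print(extract_suggestion(
    "The patient should take 500 mg of amoxicillin. Repeat",
    "The patient should take",
))
# -> 500 mg of amoxicillin
```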
Primary Use Cases:
- Autocomplete for drug names in clinical documentation tools
- Medical terminology suggestions based on clinical context
Limitations:
- Distillation retains only 53.6% of the teacher model's quality (see metrics above), so suggestions may be incomplete or incorrect.
⚠️ Important: This model is for autocomplete assistance only. It should NOT be used as the sole basis for medical decisions. Always verify suggestions with qualified healthcare professionals.
Created as part of a pharmaceutical autocomplete system tutorial demonstrating transfer learning, fine-tuning, and knowledge distillation.
Citation:

```bibtex
@misc{distilgpt2-medical-pharma,
  author       = {codehance},
  title        = {DistilGPT-2 Medical Pharmaceutical Autocomplete},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/codehance/distilgpt2-medical-pharma}}
}
```