How to use Radhe09/tinyllama-medical-dapt with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model first, then attach the LoRA adapter on top of it
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base_model, "Radhe09/tinyllama-medical-dapt")
```
TinyLlama Medical DAPT
Model Description
TinyLlama-1.1B-Chat-v1.0 fine-tuned on PubMed medical abstracts using Domain-Adaptive Pre-Training (DAPT) with LoRA adapters.
Training Data
- Dataset: PubMedQA (500 samples)
- Task: Causal Language Modeling
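As a rough sketch of what this setup implies, DAPT with a causal LM objective amounts to flattening each record into plain text and training on next-token prediction. The field names below (`question`, `context`, `long_answer`) follow the public PubMedQA schema and are an assumption; the card does not show its preprocessing:

```python
def to_dapt_text(record):
    """Flatten a PubMedQA-style record into one plain-text training string.

    Field names follow the public PubMedQA schema (an assumption);
    adjust if your copy of the dataset differs.
    """
    contexts = " ".join(record["context"]["contexts"])
    return f"{record['question']}\n{contexts}\n{record['long_answer']}"

# Toy record illustrating the expected shape
sample = {
    "question": "Does drug X reduce inflammation?",
    "context": {"contexts": ["Abstract sentence one.", "Abstract sentence two."]},
    "long_answer": "Yes, drug X showed a significant effect.",
}
text = to_dapt_text(sample)
```

The resulting strings would then be tokenized and fed to a standard causal-LM trainer.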
Training Config
- Rank: 16
- LoRA Alpha: 32
- Target Modules: attention + FFN layers
- Epochs: 2
- Learning Rate: 2e-4
Limitations
Trained on only 500 samples, so the domain adaptation is shallow. Use this adapter for experimentation only, not for medical advice or clinical decision-making.
Base Model
- TinyLlama/TinyLlama-1.1B-Chat-v1.0