TinyLlama Medical DAPT

Model Description

TinyLlama 1.1B fine-tuned on PubMed medical abstracts using Domain Adaptive Pre-Training (DAPT) with LoRA.
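A minimal usage sketch follows: it loads the base model and attaches this repo's LoRA adapter with PEFT. The base checkpoint id is an assumption; check adapter_config.json in this repo for the exact base model.

```python
# Minimal usage sketch. The base checkpoint id is an assumption;
# check adapter_config.json in this repo for the exact base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # assumed base checkpoint
adapter_id = "Radhe09/tinyllama-medical-dapt"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("BACKGROUND: Aspirin is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```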

Training Data

  • Dataset: PubMedQA (500 samples; see the preparation sketch below)
  • Task: Causal Language Modeling
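
The sketch below shows one plausible way the training text could have been prepared. It assumes the 500 samples were drawn from the pqa_labeled config of PubMedQA on the Hub and that the abstract contexts were joined into plain text for causal language modeling; the seed is hypothetical.

```python
# Data-preparation sketch: 500-sample subset of PubMedQA (pqa_labeled),
# abstracts flattened to plain text for causal LM. Seed is hypothetical.
from datasets import load_dataset

ds = load_dataset("qiaojin/PubMedQA", "pqa_labeled", split="train")
ds = ds.shuffle(seed=42).select(range(500))

def to_text(example):
    # Each PubMedQA record stores its abstract as a list of context passages.
    return {"text": " ".join(example["context"]["contexts"])}

ds = ds.map(to_text, remove_columns=ds.column_names)
```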

Training Config

  • LoRA Rank: 16
  • LoRA Alpha: 32
  • Target Modules: attention + FFN layers (see the configuration sketch below)
  • Epochs: 2
  • Learning Rate: 2e-4
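
The sketch below maps the hyperparameters above onto a PEFT LoraConfig. The concrete module names are an assumption based on TinyLlama's Llama-style layer layout, and the base checkpoint id is likewise assumed.

```python
# Configuration sketch matching the hyperparameters above. Module names
# are an assumption based on TinyLlama's Llama-style layer layout.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

lora_config = LoraConfig(
    r=16,                  # LoRA rank
    lora_alpha=32,         # LoRA scaling factor
    target_modules=[       # attention + FFN projections (assumed names)
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

args = TrainingArguments(
    output_dir="tinyllama-medical-dapt",
    num_train_epochs=2,
    learning_rate=2e-4,
)
```

Targeting the FFN projections in addition to the attention projections increases the number of trainable parameters, but it tends to help domain adaptation compared with attention-only LoRA.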

Limitations

The adapter was trained on a small dataset (500 samples), so the model is suitable for experimentation only, not for any clinical use.
