This model is Llama-3.2-1B-Instruct with domain-adaptive pretraining (DAPT), also called continued pre-training (CPT), on a generic Dutch medical corpus.

Training was done on the Dutch medical corpus with a batch size of 256, a maximum sequence length of 1024 tokens, and a linear-cosine schedule (100 cycles per 250M steps) with LR_max = 1e-4, 100K warmup steps, and AdamW as the optimizer.
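
For reference, a minimal sketch of how these hyperparameters could map onto a Hugging Face `TrainingArguments` configuration; the per-device batch size, accumulation steps, output path and precision below are illustrative assumptions, not the exact training script:

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters; the actual training
# script, dataset loading and total step count are not part of this card.
training_args = TrainingArguments(
    output_dir="medllama-nl-cpt",               # hypothetical output path
    per_device_train_batch_size=8,              # assumed per-device size
    gradient_accumulation_steps=32,             # 8 * 32 = effective batch size 256
    learning_rate=1e-4,                         # LR_max
    warmup_steps=100_000,                       # 100K warmup steps
    lr_scheduler_type="cosine_with_restarts",   # approximates the cyclic cosine schedule
    optim="adamw_torch",                        # AdamW optimizer
    bf16=True,                                  # assumed mixed precision
)
```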

The model is currently at a perplexity of 5.5 and could still benefit from further training.
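
As a reference, a minimal sketch of how perplexity can be computed for a causal LM with `transformers`; the sample text below is a placeholder, not the corpus used for the reported score:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UMCU/MedLlama.nl")
model = AutoModelForCausalLM.from_pretrained("UMCU/MedLlama.nl")
model.eval()

text = "De patiënt werd opgenomen met koorts en hoesten."  # placeholder sample
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # For causal LMs, passing labels computes the mean cross-entropy loss;
    # perplexity is exp(loss).
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"Perplexity: {torch.exp(loss).item():.2f}")
```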

Planned: on-premise continuous pre-training on Dutch clinical texts.

To use for text generation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model in half precision
tokenizer = AutoTokenizer.from_pretrained("UMCU/MedLlama.nl")
model = AutoModelForCausalLM.from_pretrained("UMCU/MedLlama.nl", torch_dtype=torch.float16)
```
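
For example, a short generation call with the loaded model; the prompt and generation parameters here are illustrative:

```python
# Illustrative Dutch prompt; adjust generation parameters to your use case
prompt = "De patiënt presenteerde zich met"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```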