Use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="xdai/mimic_longformer_base")
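# The pipeline can then fill a masked token. The sentence below is a
# hypothetical clinical-style input, not drawn from MIMIC-III; the tokenizer
# uses RoBERTa-style "<mask>" as its mask token.
predictions = pipe("The patient was started on <mask> for atrial fibrillation.")
print(predictions[0]["token_str"], predictions[0]["score"])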
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("xdai/mimic_longformer_base")
model = AutoModelForMaskedLM.from_pretrained("xdai/mimic_longformer_base")
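# The model and tokenizer can also be used directly for masked-token
# prediction. This is a minimal sketch with a hypothetical input text; the
# 4096-token truncation limit matches the model's maximum sequence length.
import torch

text = "The patient presented with <mask> and shortness of breath."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
with torch.no_grad():
    logits = model(**inputs).logits
# Locate the masked position and print the five most likely fillers.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
top_ids = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))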
Quick Links
  • Continued pre-training of RoBERTa-base using discharge summaries from the MIMIC-III dataset.

  • Details can be found in the following paper:

Xiang Dai, Ilias Chalkidis, Sune Darkner, and Desmond Elliott. 2022. Revisiting Transformer-based Models for Long Document Classification. https://arxiv.org/abs/2204.06683

  • Important hyper-parameters (see the training-configuration sketch after this list)

Max sequence length    4096
Batch size             8
Learning rate          5e-5
Training epochs        6
Training time          130 GPU-hours
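As a rough illustration, these values can be mapped onto a Hugging Face training configuration. The snippet below is a sketch only: the output directory is an assumption, and the actual pre-training script and data pipeline are not part of this card.

from transformers import TrainingArguments

# Hypothetical mapping of the reported hyper-parameters; not the authors'
# released training code. The 4096 max sequence length is applied at
# tokenization time rather than here.
training_args = TrainingArguments(
    output_dir="mimic_longformer_base",  # assumed output path
    learning_rate=5e-5,
    num_train_epochs=6,
    per_device_train_batch_size=8,
)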