---
library_name: transformers
tags:
- multilabel
- multilabel-token-classification
base_model:
- google-bert/bert-large-cased
---
## Overview
- This is an extension of the `bert-large-cased` model to enable multi-label token classification.
- The training objective is `BCELoss`.
- Labels are one-hot encoded.
- Model output logits can be normalized using a sigmoid activation.
- This model uses the same weights as `bert-large-cased` and thus needs to be fine-tuned for downstream tasks.
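Because each label is scored independently, the sigmoid probabilities are thresholded per label rather than taking an argmax. A minimal sketch of this post-processing, using illustrative logit values (not actual model output):

```python
import torch

# Hypothetical logits for 4 tokens and 3 labels (illustrative values only).
logits = torch.tensor([
    [ 2.0, -1.0,  0.5],
    [-3.0,  1.5,  2.5],
    [ 0.0,  0.0,  0.0],
    [ 4.0, -2.0, -4.0],
])

# Sigmoid maps each logit to an independent per-label probability.
probs = torch.sigmoid(logits)

# Unlike softmax + argmax, each label is thresholded on its own,
# so a token can carry zero, one, or several labels.
predictions = (probs > 0.5).long()
print(predictions.tolist())  # e.g. first token activates labels 0 and 2
```

The 0.5 threshold is a common default; it can be tuned per label on a validation set.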
## Usage
To initialize the model for fine-tuning, simply provide `id2label` and `label2id`, as in standard token classification fine-tuning:
```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    'jvaquet/multilabel-classification-bert',
    id2label=id2label,
    label2id=label2id,
    trust_remote_code=True,
)
```
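The `id2label`/`label2id` mappings are plain dicts over your task's label set, and since the training objective is `BCELoss`, each token's target is a one-hot (multi-hot) float vector. A minimal sketch, using a hypothetical NER-style label set for illustration:

```python
# Hypothetical label set; substitute your own task's labels.
labels = ["PER", "LOC", "ORG"]
id2label = {i: label for i, label in enumerate(labels)}
label2id = {label: i for i, label in enumerate(labels)}

# BCELoss expects one float target per label, so each token's labels
# are multi-hot encoded: a token may activate any number of labels.
def encode(token_labels, label2id, num_labels):
    vec = [0.0] * num_labels
    for label in token_labels:
        vec[label2id[label]] = 1.0
    return vec

# A token tagged as both a person and an organization:
target = encode(["PER", "ORG"], label2id, len(labels))
print(target)  # [1.0, 0.0, 1.0]
```

This multi-hot target shape is what distinguishes the setup from standard single-label token classification, where each token maps to exactly one class index.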