---
library_name: transformers
tags:
  - multilabel
  - multilabel-token-classification
base_model:
  - google-bert/bert-large-cased
---

## Overview

- This model extends bert-large-cased to support multi-label token classification, where a single token may carry several labels at once.
- The training objective is BCELoss (binary cross-entropy).
- Labels are encoded as multi-hot binary vectors, one indicator per label, rather than a single class index per token.
- Output logits can be converted to per-label probabilities with a sigmoid activation (see the sketch after this list).
- The model ships with the same weights as bert-large-cased and therefore needs to be fine-tuned for downstream tasks.
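
A minimal sketch of the sigmoid step, assuming the model returns standard token-classification logits of shape `(batch, seq_len, num_labels)` (the tensor values below are random placeholders):

```python
import torch

# Dummy logits: batch of 1 sequence, 4 tokens, 3 labels.
logits = torch.randn(1, 4, 3)

# Sigmoid maps each logit to a probability in (0, 1) independently;
# unlike softmax, the probabilities across labels need not sum to 1,
# so a single token can be assigned several labels at once.
probs = torch.sigmoid(logits)

# Each label is predicted independently via a per-label threshold.
predictions = (probs > 0.5).long()  # multi-hot output, shape (1, 4, 3)
```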

## Usage

To initialize the model for fine-tuning, simply provide `id2label` and `label2id`, just as in standard token-classification fine-tuning:

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "jvaquet/multilabel-classification-bert",
    id2label=id2label,
    label2id=label2id,
    trust_remote_code=True,  # required: the model class is defined in this repo
)
```
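
Since the repository ships custom modeling code (hence `trust_remote_code=True`), the exact training interface is defined there; the following is only a hedged sketch of a single fine-tuning step, assuming the model accepts multi-hot float labels of shape `(batch, seq_len, num_labels)` to match the BCE objective described above. The label set and input text are illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative label set; replace with your own.
id2label = {0: "PER", 1: "LOC", 2: "ORG"}
label2id = {label: i for i, label in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-large-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "jvaquet/multilabel-classification-bert",
    id2label=id2label,
    label2id=label2id,
    trust_remote_code=True,
)

enc = tokenizer("Alice visited Berlin", return_tensors="pt")
seq_len = enc["input_ids"].shape[1]

# Multi-hot float targets: one binary indicator per token and label
# (all zeros here as a placeholder; fill in real annotations).
labels = torch.zeros(1, seq_len, len(id2label))

outputs = model(**enc, labels=labels)  # assumed to return a BCE loss
outputs.loss.backward()
```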