Overview

  • This is an extension of the bert-large-cased model that enables multi-label token classification, i.e. each token can carry several labels at once.
  • The training objective is binary cross-entropy (BCELoss), computed independently per label.
  • Labels are one-hot (multi-hot) encoded, with one entry per label.
  • Model output logits can be converted to per-label probabilities with a sigmoid activation.
  • This model uses the same weights as bert-large-cased and therefore must be fine-tuned for downstream tasks before use.
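
The bullets above can be made concrete: with multi-hot label vectors, every label is scored independently, and binary cross-entropy is accumulated per label after a sigmoid. The following is a minimal pure-Python sketch of that objective for a single token; the logits and targets are toy values, not real model outputs:

```python
import math

def sigmoid(x):
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def bce_loss(logits, targets):
    """Mean binary cross-entropy over one token's label logits.

    `targets` is a multi-hot vector with one 0/1 entry per label,
    so a token may be positive for several labels at once.
    """
    total = 0.0
    for z, t in zip(logits, targets):
        p = sigmoid(z)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(logits)

# Toy example with a 3-label head: the token carries labels 0 and 2.
logits = [2.0, -1.5, 0.5]
targets = [1.0, 0.0, 1.0]
loss = bce_loss(logits, targets)
```

Because each label is an independent binary decision, the loss does not force the per-token probabilities to sum to one, which is what distinguishes this setup from the usual softmax token classifier.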

Usage

To initialize the model for fine-tuning, simply provide id2label and label2id, just as in standard token-classification fine-tuning:

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    'jvaquet/multilabel-classification-bert',
    id2label=id2label,
    label2id=label2id,
    trust_remote_code=True,
)
```
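
At inference time, each token's logits go through a sigmoid and are thresholded independently, so a token may receive zero, one, or several labels. A hedged sketch of that decoding step follows; the id2label mapping, the 0.5 threshold, and the per-token logits are illustrative assumptions, not outputs of this model:

```python
import math

# Hypothetical label mapping; supply your own id2label when fine-tuning.
id2label = {0: "PER", 1: "ORG", 2: "LOC"}

def decode_token(logits, threshold=0.5):
    """Return every label whose sigmoid probability clears the threshold."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [id2label[i] for i, p in enumerate(probs) if p > threshold]

# Toy per-token logits for a 3-label head.
print(decode_token([2.0, -1.5, 0.5]))    # both "PER" and "LOC" clear 0.5
print(decode_token([-3.0, -2.0, -1.0]))  # no label predicted for this token
```

The threshold is a free parameter: lowering it trades precision for recall, and it can be tuned per label on a validation set.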