---
library_name: transformers
tags:
- multilabel
- multilabel-token-classification
base_model:
- google-bert/bert-large-cased
---
# Overview

- This is an extension of the `bert-large-cased` model that enables **multi-label token classification**, i.e. each token can be assigned several labels at once.
- The training objective is binary cross-entropy (BCELoss).
- Labels are one-hot encoded (multi-hot for tokens carrying several labels).
- Model output logits can be normalized with a sigmoid activation to obtain independent per-label probabilities.
- This model uses the same weights as `bert-large-cased` and thus needs to be fine-tuned for downstream tasks.
# Usage

To initialize the model for fine-tuning, simply provide `id2label` and `label2id`, just as in standard token-classification fine-tuning:

```python
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "jvaquet/multilabel-classification-bert",
    id2label=id2label,
    label2id=label2id,
    trust_remote_code=True,
)
```
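After fine-tuning, predictions are typically recovered by applying a sigmoid to the logits and keeping every label whose probability clears a threshold. The sketch below assumes PyTorch; the label names, dummy logits, and the 0.5 threshold are illustrative placeholders, with the logits standing in for `model(**inputs).logits` of a fine-tuned checkpoint.

```python
import torch

# Hypothetical label mapping (illustrative, not shipped with the model).
id2label = {0: "PER", 1: "ORG", 2: "LOC"}

# Dummy logits for 2 tokens over 3 labels, standing in for model output.
logits = torch.tensor([[3.0, -2.0, 1.5],
                       [-1.0, -3.0, -0.5]])

probs = torch.sigmoid(logits)
threshold = 0.5                # common default; tune per task
active = probs > threshold     # boolean mask; several labels may fire per token

labels_per_token = [
    [id2label[i] for i in range(probs.shape[1]) if active[t, i]]
    for t in range(probs.shape[0])
]
# token 0 → ["PER", "LOC"], token 1 → [] (no label clears the threshold)
```

Because the per-label probabilities are independent, a token can end up with zero, one, or several labels, which is the intended behavior for multi-label token classification.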