<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Token classification

[[open-in-colab]]

<Youtube id="wVHdVlPScxA"/>

Token classification assigns a label to individual tokens in a sentence. One of the most common token classification tasks is Named Entity Recognition (NER). NER attempts to find a label for each entity in a sentence, such as a person, location, or organization.

This guide will show you how to:

1. Finetune [DistilBERT](https://huggingface.co/distilbert-base-uncased) on the [WNUT 17](https://huggingface.co/datasets/wnut_17) dataset to detect new entities.
2. Use your finetuned model for inference.

<Tip>
The task illustrated in this tutorial is supported by the following model architectures:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->

[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate seqeval
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

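If you're working in a terminal rather than a notebook, you can log in with the Hugging Face CLI instead:

```bash
huggingface-cli login
```
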
## Load WNUT 17 dataset

Start by loading the WNUT 17 dataset from the 🤗 Datasets library:

```py
>>> from datasets import load_dataset

>>> wnut = load_dataset("wnut_17")
```

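It's worth a quick look at the splits you just loaded. The structure below is what `load_dataset` returns; the `num_rows` shown are the published WNUT 17 split sizes, so treat this as a sketch and check your own output:

```py
>>> wnut
DatasetDict({
    train: Dataset({
        features: ['id', 'tokens', 'ner_tags'],
        num_rows: 3394
    })
    validation: Dataset({
        features: ['id', 'tokens', 'ner_tags'],
        num_rows: 1009
    })
    test: Dataset({
        features: ['id', 'tokens', 'ner_tags'],
        num_rows: 1287
    })
})
```
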
Then take a look at an example:

```py
>>> wnut["train"][0]
{'id': '0',
 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0],
 'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.']
}
```

Each number in `ner_tags` represents an entity. Convert the numbers to their label names to find out what the entities are:

```py
>>> label_list = wnut["train"].features["ner_tags"].feature.names
>>> label_list
[
    "O",
    "B-corporation",
    "I-corporation",
    "B-creative-work",
    "I-creative-work",
    "B-group",
    "I-group",
    "B-location",
    "I-location",
    "B-person",
    "I-person",
    "B-product",
    "I-product",
]
```

The letter that prefixes each `ner_tag` indicates the token position of the entity (a concrete pairing of words and labels follows this list):

- `B-` indicates the beginning of an entity.
- `I-` indicates a token is contained inside the same entity (for example, the `State` token is a part of an entity like
  `Empire State Building`).
- `O` indicates the token doesn't correspond to any entity.

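To make the scheme concrete, pair each word in the example above with its label name (the output below follows from the `ner_tags` shown earlier):

```py
>>> example = wnut["train"][0]
>>> # Pair each word with its label name to see the B-/I-/O scheme in context
>>> list(zip(example["tokens"], [label_list[tag] for tag in example["ner_tags"]]))[14:19]
[('Empire', 'B-location'), ('State', 'I-location'), ('Building', 'I-location'), ('=', 'O'), ('ESB', 'B-location')]
```
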
## Preprocess

<Youtube id="iY2AZYdZAr0"/>

The next step is to load a DistilBERT tokenizer to preprocess the `tokens` field:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```

As you saw in the example `tokens` field above, it looks like the input has already been tokenized. But the input actually hasn't been tokenized yet and you'll need to set `is_split_into_words=True` to tokenize the words into subwords. For example:

```py
>>> example = wnut["train"][0]
>>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
>>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
>>> tokens
['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]']
```

However, this adds some special tokens `[CLS]` and `[SEP]` and the subword tokenization creates a mismatch between the input and labels. A single word corresponding to a single label may now be split into two subwords. You'll need to realign the tokens and labels by:

1. Mapping all tokens to their corresponding word with the [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) method (see the sketch after this list).
2. Assigning the label `-100` to the special tokens `[CLS]` and `[SEP]` so they're ignored by the PyTorch loss function.
3. Only labeling the first token of a given word. Assign `-100` to other subtokens from the same word.

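Here is what `word_ids` returns for the tokenized example above (assuming the fast DistilBERT tokenizer loaded earlier): `None` for the special tokens, and a repeated index wherever a word was split into subwords, such as `@paulwalk` and `es`/`##b`:

```py
>>> tokenized_input.word_ids()
[None, 0, 0, 0, 1, 2, 2, 3, 4, 5, 6, 7, 8, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 18, 19, 20, 21, 22, 23, 24, 25, 26, None]
```
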
Here is how you can create a function to realign the tokens and labels, and truncate sequences to be no longer than DistilBERT's maximum input length:

```py
>>> def tokenize_and_align_labels(examples):
...     tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

...     labels = []
...     for i, label in enumerate(examples["ner_tags"]):
...         word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
...         previous_word_idx = None
...         label_ids = []
...         for word_idx in word_ids:  # Set the special tokens to -100.
...             if word_idx is None:
...                 label_ids.append(-100)
...             elif word_idx != previous_word_idx:  # Only label the first token of a given word.
...                 label_ids.append(label[word_idx])
...             else:
...                 label_ids.append(-100)
...             previous_word_idx = word_idx
...         labels.append(label_ids)

...     tokenized_inputs["labels"] = labels
...     return tokenized_inputs
```

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```py
>>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)
```

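As a quick sanity check on the alignment, inspect the `labels` of the first processed example. Special tokens and continuation subwords carry `-100`, so only the first subword of each word keeps a real tag; the expected output below follows from the `word_ids` mapping shown earlier:

```py
>>> tokenized_wnut["train"][0]["labels"]
[-100, 0, -100, -100, 0, 0, -100, 0, 0, 0, 0, 0, 0, -100, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, -100, 0, 0, 0, 0, 0, 0, 0, 0, -100]
```
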
Now create a batch of examples using [`DataCollatorForTokenClassification`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForTokenClassification

>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
```
</pt>
<tf>
```py
>>> from transformers import DataCollatorForTokenClassification

>>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf")
```
</tf>
</frameworkcontent>

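If you're curious what dynamic padding looks like, here is a minimal sketch using the PyTorch collator above: labels are padded with `-100` up to the longest sequence in the batch, so padded positions never contribute to the loss:

```py
>>> # Keep only the model inputs; the raw text columns aren't meant for the collator
>>> features = [
...     {k: tokenized_wnut["train"][i][k] for k in ["input_ids", "attention_mask", "labels"]}
...     for i in range(2)
... ]
>>> batch = data_collator(features)
>>> batch["labels"].shape  # both examples padded to this batch's longest length
```
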
## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) framework, which produces several scores: precision, recall, F1, and accuracy.

```py
>>> import evaluate

>>> seqeval = evaluate.load("seqeval")
```

Get the NER labels first, and then create a function that passes your true predictions and true labels to [`~evaluate.EvaluationModule.compute`] to calculate the scores:

```py
>>> import numpy as np

>>> labels = [label_list[i] for i in example["ner_tags"]]


>>> def compute_metrics(p):
...     predictions, labels = p
...     predictions = np.argmax(predictions, axis=2)

...     true_predictions = [
...         [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
...         for prediction, label in zip(predictions, labels)
...     ]
...     true_labels = [
...         [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
...         for prediction, label in zip(predictions, labels)
...     ]

...     results = seqeval.compute(predictions=true_predictions, references=true_labels)
...     return {
...         "precision": results["overall_precision"],
...         "recall": results["overall_recall"],
...         "f1": results["overall_f1"],
...         "accuracy": results["overall_accuracy"],
...     }
```

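If you'd like to see what seqeval returns before training, here is a quick sanity check on a toy pair of label sequences (a perfect match, so every score is 1.0):

```py
>>> seqeval.compute(
...     predictions=[["O", "B-location", "I-location", "O"]],
...     references=[["O", "B-location", "I-location", "O"]],
... )
{'location': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1},
 'overall_precision': 1.0,
 'overall_recall': 1.0,
 'overall_f1': 1.0,
 'overall_accuracy': 1.0}
```
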
Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train

Before you start training your model, create a map of the expected ids to their labels with `id2label` and `label2id`:

```py
>>> id2label = {
...     0: "O",
...     1: "B-corporation",
...     2: "I-corporation",
...     3: "B-creative-work",
...     4: "I-creative-work",
...     5: "B-group",
...     6: "I-group",
...     7: "B-location",
...     8: "I-location",
...     9: "B-person",
...     10: "I-person",
...     11: "B-product",
...     12: "I-product",
... }
>>> label2id = {
...     "O": 0,
...     "B-corporation": 1,
...     "I-corporation": 2,
...     "B-creative-work": 3,
...     "I-creative-work": 4,
...     "B-group": 5,
...     "I-group": 6,
...     "B-location": 7,
...     "I-location": 8,
...     "B-person": 9,
...     "I-person": 10,
...     "B-product": 11,
...     "I-product": 12,
... }
```

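Since these are exactly the names in `label_list`, you can also build both mappings programmatically instead of typing them out:

```py
>>> id2label = {i: label for i, label in enumerate(label_list)}
>>> label2id = {label: i for i, label in enumerate(label_list)}
```
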
<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load DistilBERT with [`AutoModelForTokenClassification`] along with the number of expected labels, and the label mappings:

```py
>>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

>>> model = AutoModelForTokenClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the seqeval scores and save the training checkpoint.
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_wnut_model",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=2,
...     weight_decay=0.01,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_wnut["train"],
...     eval_dataset=tokenized_wnut["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

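Since you passed `compute_metrics` and an `eval_dataset` to the [`Trainer`], you can also pull the final seqeval scores once training finishes:

```py
>>> trainer.evaluate()
```
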
Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

</Tip>

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_train_epochs = 3
>>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs
>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=2e-5,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=0.01,
...     num_warmup_steps=0,
... )
```

Then you can load DistilBERT with [`TFAutoModelForTokenClassification`] along with the number of expected labels, and the label mappings:

```py
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id
... )
```

Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_wnut["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_wnut["validation"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

The last two things to set up before you start training are to compute the seqeval scores from the predictions, and provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_wnut_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>

<Tip>

For a more in-depth example of how to finetune a model for token classification, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb).

</Tip>

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Grab some text you'd like to run inference on:

```py
>>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco."
```

The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for NER with your model, and pass your text to it:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model")
>>> classifier(text)
[{'entity': 'B-location',
  'score': 0.42658573,
  'index': 2,
  'word': 'golden',
  'start': 4,
  'end': 10},
 {'entity': 'I-location',
  'score': 0.35856336,
  'index': 3,
  'word': 'state',
  'start': 11,
  'end': 16},
 {'entity': 'B-group',
  'score': 0.3064001,
  'index': 4,
  'word': 'warriors',
  'start': 17,
  'end': 25},
 {'entity': 'B-location',
  'score': 0.65523505,
  'index': 13,
  'word': 'san',
  'start': 80,
  'end': 83},
 {'entity': 'B-location',
  'score': 0.4668663,
  'index': 14,
  'word': 'francisco',
  'start': 84,
  'end': 93}]
```

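The pipeline returns one entry per subword token. If you'd rather get whole entities back, you can optionally pass an `aggregation_strategy`; the grouping shown in the comment is only indicative, since the exact spans depend on your finetuned model:

```py
>>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model", aggregation_strategy="simple")
>>> classifier(text)  # entities arrive grouped, e.g. {'entity_group': 'location', 'word': 'golden state', ...}
```
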
You can also manually replicate the results of the `pipeline` if you'd like:

<frameworkcontent>
<pt>
Tokenize the text and return PyTorch tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="pt")
```

Pass your inputs to the model and return the `logits`:

```py
>>> import torch
>>> from transformers import AutoModelForTokenClassification

>>> model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:

```py
>>> predictions = torch.argmax(logits, dim=2)
>>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]]
>>> predicted_token_class
['O',
 'O',
 'B-location',
 'I-location',
 'B-group',
 'O',
 'O',
 'O',
 'O',
 'O',
 'O',
 'O',
 'O',
 'B-location',
 'B-location',
 'O',
 'O']
```

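To view just the predicted entities, zip the tokens with their predicted labels and drop the `O` tags; the expected output below follows from the predictions shown above:

```py
>>> tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
>>> [(token, label) for token, label in zip(tokens, predicted_token_class) if label != "O"]
[('golden', 'B-location'), ('state', 'I-location'), ('warriors', 'B-group'), ('san', 'B-location'), ('francisco', 'B-location')]
```
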
</pt>
<tf>
Tokenize the text and return TensorFlow tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> inputs = tokenizer(text, return_tensors="tf")
```

Pass your inputs to the model and return the `logits`:

```py
>>> from transformers import TFAutoModelForTokenClassification

>>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model")
>>> logits = model(**inputs).logits
```

Get the class with the highest probability, and use the model's `id2label` mapping to convert it to a text label:

```py
>>> import tensorflow as tf

>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)
>>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> predicted_token_class
['O',
 'O',
 'B-location',
 'I-location',
 'B-group',
 'O',
 'O',
 'O',
 'O',
 'O',
 'O',
 'O',
 'O',
 'B-location',
 'B-location',
 'O',
 'O']
```
</tf>
</frameworkcontent>