{ "cells": [ { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'label': 'POSITIVE', 'score': 0.9433633089065552},\n", " {'label': 'NEGATIVE', 'score': 0.9994558691978455}]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from transformers import pipeline\n", "\n", "sentiment_analyser = pipeline('sentiment-analysis', model='distilbert-base-uncased-finetuned-sst-2-english')\n", "sentiment_analyser(['I have been waiting for a HuggingFace course my whole life.',\n", " 'I hate this so much!'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What happens under the hood?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tokenizer -> Model -> PostProcessing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tokenizer takes in the raw data in this case the text and converts it into numerical representation for the model.\n", "It does so using the following steps:\n", "- Take input\n", "- Break input down into tokens depending on spaces or punctuation\n", "- Provide the sequence of tokens with a start token and a stop token, the start token for the BERT model is CLS which stands for Classification Tasks and the stop token for said model is SEP which stands for Seperation Tasks.\n", "- Convert all the tokens in the sequence into their numerical representation for the model to ingest." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Pytorch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tokenizer" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "# initialize tokenizer and model from checkpoint name\n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "tokenizer = AutoTokenizer.from_pretrained(checkpoint)\n", "\n", "# use tokenizer to preprocess inputs:\n", "raw_inputs = [\n", " 'I have been waiting for a HuggingFace course my whole life.',\n", " 'I hate this so much!'\n", "]\n", "inputs_pytorch = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors='pt')" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'input_ids': tensor([[ 101, 1045, 2031, 2042, 3403, 2005, 1037, 17662, 12172, 2607,\n", " 2026, 2878, 2166, 1012, 102],\n", " [ 101, 1045, 5223, 2023, 2061, 2172, 999, 102, 0, 0,\n", " 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n", " [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]])}\n" ] } ], "source": [ "print(inputs_pytorch)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "torch.Size([2, 15, 768])\n" ] } ], "source": [ "from transformers import AutoModel\n", "\n", "# initialize model from checkpoint name\n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "model = AutoModel.from_pretrained(checkpoint)\n", "\n", "# forward pass\n", "outputs_pytorch = model(**inputs_pytorch)\n", "\n", "# print last hidden states of the first batch\n", "print(outputs_pytorch.last_hidden_state.shape)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, the automodel api will 
instantiate the model with its pre-training head removed. It outputs a high-dimensional tensor that represents the input sentences but is not directly useful for classification." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use the AutoModelForSequenceClassification API instead." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[-1.3782, 1.4346],\n", " [ 4.1692, -3.3464]], grad_fn=)\n" ] } ], "source": [ "from transformers import AutoModelForSequenceClassification\n", "\n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "model = AutoModelForSequenceClassification.from_pretrained(checkpoint)\n", "\n", "outputs_pytorch = model(**inputs_pytorch)\n", "print(outputs_pytorch.logits)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The outputs are not probabilities yet, as the values are not constrained between 0 and 1.\n", "This is because every model in the transformers library returns raw logits.\n", "The logits are converted into probabilities in the third and last step of the pipeline:\n", "### Postprocessing " ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tensor([[5.6636e-02, 9.4336e-01],\n", " [9.9946e-01, 5.4418e-04]], grad_fn=)\n" ] }, { "data": { "text/plain": [ "{0: 'NEGATIVE', 1: 'POSITIVE'}" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import torch\n", "\n", "'''to convert logits into probabilities we apply a softmax'''\n", "predictions = torch.nn.functional.softmax(outputs_pytorch.logits, dim=-1)\n", "print(predictions)\n", "\n", "'''the last step is to see which label each prediction \n", "corresponds to. 
\n", "this is given by id2label field of the model config'''\n", "model.config.id2label" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Tensorflow:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tokenizer:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "from transformers import AutoTokenizer\n", "\n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "tokenizer = AutoTokenizer.from_pretrained(checkpoint)\n", "\n", "raw_inputs = [\n", " '''I've been waiting for a HuggingFace course my whole life.''',\n", " 'I hate this so much!'\n", "]\n", "\n", "inputs_tensorflow = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors='tf')" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'input_ids': , 'attention_mask': }\n" ] } ], "source": [ "print(inputs_tensorflow)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### AutoModel API" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFDistilBertModel: ['pre_classifier.weight', 'classifier.weight', 'pre_classifier.bias', 'classifier.bias']\n", "- This IS expected if you are initializing TFDistilBertModel from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing TFDistilBertModel from a PyTorch model that you expect to be exactly identical (e.g. 
initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).\n", "All the weights of TFDistilBertModel were initialized from the PyTorch model.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertModel for predictions without further training.\n" ] }, { "data": { "text/plain": [ "TensorShape([2, 16, 768])" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from transformers import TFAutoModel\n", "\n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "model = TFAutoModel.from_pretrained(checkpoint)\n", "\n", "outputs_tensorflow = model(inputs_tensorflow)\n", "outputs_tensorflow.last_hidden_state.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### TFAutoModelForSequenceClassification Class" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "All PyTorch model weights were used when initializing TFDistilBertForSequenceClassification.\n", "\n", "All the weights of TFDistilBertForSequenceClassification were initialized from the PyTorch model.\n", "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertForSequenceClassification for predictions without further training.\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from transformers import TFAutoModelForSequenceClassification\n", "\n", "checkpoint = 'distilbert-base-uncased-finetuned-sst-2-english'\n", "model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)\n", "\n", "outputs_tensorflow = model(inputs_tensorflow)\n", "outputs_tensorflow.logits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Postprocessing" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, 
"outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[4.0195428e-02 9.5980465e-01]\n", " [9.9945587e-01 5.4418371e-04]], shape=(2, 2), dtype=float32)\n" ] } ], "source": [ "import tensorflow as tf\n", "\n", "predictions = tf.math.softmax(outputs_tensorflow.logits, axis=-1)\n", "print(predictions)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{0: 'NEGATIVE', 1: 'POSITIVE'}" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.config.id2label" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.6" } }, "nbformat": 4, "nbformat_minor": 2 }