```bash
pip install -q transformers datasets
```

We encourage you to share your model with the community. Log in to your Hugging Face account to upload it to the 🤗 Hub. When prompted, enter your token to log in:

```python
from huggingface_hub import notebook_login

notebook_login()
```

Let's define the model checkpoint as a global variable.

```python
model_checkpoint = "dandelin/vilt-b32-mlm"
```
# Visual Question Answering

[[open-in-colab]]

Visual Question Answering (VQA) is the task of answering open-ended questions based on an image. The input to models supporting this task is typically a combination of an image and a question, and the output is an answer expressed in natural language.

Some noteworthy use case examples for VQA include:

* Accessibility applications for visually impaired individuals.
* Education: posing questions about visual materials presented in lectures or textbooks. VQA can also be utilized in interactive museum exhibits or historical sites.
* Customer service and e-commerce: VQA can enhance user experience by letting users ask questions about products.
* Image retrieval: VQA models can be used to retrieve images with specific characteristics. For example, the user can ask "Is there a dog?" to find all images with dogs from a set of images.

In this guide you'll learn how to:
## Load the data

For illustration purposes, in this guide we use a very small sample of the annotated visual question answering Graphcore/vqa dataset. You can find the full dataset on the 🤗 Hub. As an alternative to the Graphcore/vqa dataset, you can download the same data manually from the official VQA dataset page. If you prefer to follow the tutorial with your custom data, check out the Create an image dataset guide in the 🤗 Datasets documentation.

Let's load the first 200 examples from the validation split and explore the dataset's features:
* Fine-tune a classification VQA model, specifically ViLT, on the Graphcore/vqa dataset.
* Use your fine-tuned ViLT for inference.
* Run zero-shot VQA inference with a generative model, like BLIP-2.
```python
dataset[0]
{'question': 'Where is he looking?',
 'question_type': 'none of the above',
 'question_id': 262148000,
 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg',
 'answer_type': 'other',
 'label': {'ids': ['at table', 'down', 'skateboard', 'table'],
           'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}}
```
```python
from datasets import load_dataset

dataset = load_dataset("Graphcore/vqa", split="validation[:200]")
dataset
Dataset({
    features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'],
    num_rows: 200
})
```

Let's take a look at an example to understand the dataset's features:
The features relevant to the task include:

* `question`: the question to be answered from the image
* `image_id`: the path to the image the question refers to
* `label`: the annotations

We can remove the rest of the features as they won't be necessary:

```python
dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type'])
```
As you can see, the `label` feature contains several answers to the same question (called `ids` here) collected by different human annotators. This is because the answer to a question can be subjective. In this case, the question is "Where is he looking?". Some people annotated this with "down", others with "at table", another one with "skateboard", etc. Take a look at the image and consider which answer you would give:
```python
from PIL import Image

image = Image.open(dataset[0]['image_id'])
image
```
Due to the questions' and answers' ambiguity, datasets like this are treated as a multi-label classification problem (as multiple answers may be valid). Moreover, rather than just creating a one-hot encoded vector, one creates a soft encoding, based on the number of times a certain answer appeared in the annotations. For instance, in the example above, because the answer "down" was selected far more often than the other answers, it has a score (called `weight` in the dataset) of 1.0, and the rest of the answers have scores < 1.0.

To later instantiate the model with an appropriate classification head, let's create two dictionaries: one that maps the label name to an integer, and one that maps the integer back to the label name:
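The soft encoding can be sketched in plain Python. The annotator answers and the `min(0.3 * count, 1.0)` scoring rule below are illustrative assumptions — the dataset ships the weights precomputed:

```python
from collections import Counter

# Hypothetical annotator answers for one question; the scoring rule
# min(0.3 * count, 1.0) is an assumption for illustration only.
annotations = ["down", "down", "down", "down", "at table", "skateboard", "table"]
label2id = {"at table": 0, "down": 1, "skateboard": 2, "table": 3}

target = [0.0] * len(label2id)
for answer, count in Counter(annotations).items():
    target[label2id[answer]] = min(0.3 * count, 1.0)

print(target)  # "down" saturates at 1.0; the single-vote answers get 0.3
```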
```python
import itertools

labels = [item['ids'] for item in dataset['label']]
flattened_labels = list(itertools.chain(*labels))
unique_labels = list(set(flattened_labels))

label2id = {label: idx for idx, label in enumerate(unique_labels)}
id2label = {idx: label for label, idx in label2id.items()}
```

Now that we have the mappings, we can replace the string answers with their ids, and flatten the dataset for more convenient further preprocessing.
```python
def replace_ids(inputs):
    inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]]
    return inputs

dataset = dataset.map(replace_ids)
flat_dataset = dataset.flatten()
flat_dataset.features
{'question': Value(dtype='string', id=None),
 'image_id': Value(dtype='string', id=None),
 'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None),
 'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)}
```
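What flattening does to the nested `label` column can be sketched with plain dictionaries — a simplified illustration with made-up values, not the 🤗 Datasets implementation:

```python
# Simplified sketch of flattening one example: the nested 'label' dict
# becomes top-level 'label.ids' / 'label.weights' columns.
example = {
    "question": "Where is he looking?",
    "image_id": "val2014/COCO_val2014_000000262148.jpg",
    "label": {"ids": [1, 2], "weights": [1.0, 0.3]},
}

flat = {k: v for k, v in example.items() if k != "label"}
flat.update({f"label.{k}": v for k, v in example["label"].items()})

print(sorted(flat))  # ['image_id', 'label.ids', 'label.weights', 'question']
```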
```python
import torch

def preprocess_data(examples):
    image_paths = examples['image_id']
    images = [Image.open(image_path) for image_path in image_paths]
    texts = examples['question']

    encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt")

    for k, v in encoding.items():
        encoding[k] = v.squeeze()

    targets = []
    for labels, scores in zip(examples['label.ids'], examples['label.weights']):
        target = torch.zeros(len(id2label))
        for label, score in zip(labels, scores):
            target[label] = score
        targets.append(target)

    encoding["labels"] = targets
    return encoding
```
## Preprocessing data

The next step is to load a ViLT processor to prepare the image and text data for the model. [ViltProcessor] wraps a BERT tokenizer and ViLT image processor into a convenient single processor:

```python
from transformers import ViltProcessor

processor = ViltProcessor.from_pretrained(model_checkpoint)
```
To preprocess the data, we need to encode the images and questions using the [ViltProcessor]. The processor will use the [BertTokenizerFast] to tokenize the text and create `input_ids`, `attention_mask` and `token_type_ids` for the text data. As for images, the processor will leverage [ViltImageProcessor] to resize and normalize the image, and create `pixel_values` and `pixel_mask`.

All these preprocessing steps are done under the hood; we only need to call the processor. However, we still need to prepare the target labels. The targets are vectors in which each element corresponds to a possible answer (label). For correct answers, the element holds their respective score (weight), while the remaining elements are set to zero.

The `preprocess_data` function defined above applies the processor to the images and questions and formats the labels as described.
```python
processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question', 'image_id', 'label.ids', 'label.weights'])
processed_dataset
Dataset({
    features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'],
    num_rows: 200
})
```

As a final step, create a batch of examples using [DefaultDataCollator]:

```python
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator()
```
## Train the model

You’re ready to start training your model now! Load ViLT with [ViltForQuestionAnswering]. Specify the number of labels along with the label mappings:

```python
from transformers import ViltForQuestionAnswering

model = ViltForQuestionAnswering.from_pretrained(
    model_checkpoint,
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
)
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.map] function. You can speed up `map` by setting `batched=True` to process multiple elements of the dataset at once. At this point, feel free to remove the columns you don't need.
At this point, only three steps remain. Define your training hyperparameters in [TrainingArguments]:

```python
from transformers import TrainingArguments

repo_id = "MariaK/vilt_finetuned_200"

training_args = TrainingArguments(
    output_dir=repo_id,
    per_device_train_batch_size=4,
    num_train_epochs=20,
    save_steps=200,
    logging_steps=50,
    learning_rate=5e-5,
    save_total_limit=2,
    remove_unused_columns=False,
    push_to_hub=True,
)
```
```python
example = dataset[0]
image = Image.open(example['image_id'])
question = example['question']
print(question)
"Where is he looking?"

pipe(image, question, top_k=1)
[{'score': 0.5498199462890625, 'answer': 'down'}]
```
Pass the training arguments to [Trainer] along with the model, dataset, processor, and data collator:

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=processed_dataset,
    tokenizer=processor,
)
```

Call [~Trainer.train] to finetune your model:

```python
trainer.train()
```

Once training is completed, share your final model to the 🤗 Hub with the [~Trainer.push_to_hub] method:
```python
trainer.push_to_hub()
```

## Inference

Now that you have fine-tuned a ViLT model and uploaded it to the 🤗 Hub, you can use it for inference. The simplest way to try out your fine-tuned model for inference is to use it in a [Pipeline].

```python
from transformers import pipeline

pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200")
```
Even though not very confident, the model has indeed learned something. With more examples and longer training, you'll get far better results!

You can also manually replicate the results of the pipeline if you'd like:

1. Take an image and a question, and prepare them for the model using the processor from your model.
2. Forward the result of preprocessing through the model.
3. From the logits, get the most likely answer's id, and find the actual answer in `id2label`.
```python
processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200")

image = Image.open(example['image_id'])
question = example['question']

# prepare inputs
inputs = processor(image, question, return_tensors="pt")

model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200")

# forward pass
with torch.no_grad():
    outputs = model(**inputs)

logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
Predicted answer: down
```
The model in this guide has only been trained on 200 examples, so don't expect a lot from it. Let's see if it at least learned something from the data, and take the first example from the dataset to illustrate inference:
## Zero-shot VQA

The previous model treated VQA as a classification task. Some recent models, such as BLIP, BLIP-2, and InstructBLIP, approach VQA as a generative task. Let's take BLIP-2 as an example. It introduced a new visual-language pre-training paradigm in which any combination of pre-trained vision encoder and LLM can be used (learn more in the BLIP-2 blog post). This enables achieving state-of-the-art results on multiple visual-language tasks, including visual question answering.

Let's illustrate how you can use this model for VQA. First, let's load the model. Here we'll explicitly send the model to a GPU, if available, which we didn't need to do earlier when training, as [Trainer] handles this automatically:
The model takes image and text as input, so let's use the exact same image/question pair from the first example in the VQA dataset:

```python
example = dataset[0]
image = Image.open(example['image_id'])
question = example['question']
```

To use BLIP-2 for the visual question answering task, the textual prompt has to follow a specific format: `Question: {} Answer:`.

```python
prompt = f"Question: {question} Answer:"
```
```python
from transformers import AutoProcessor, Blip2ForConditionalGeneration
import torch

processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
```
Now we need to preprocess the image/prompt with the model's processor, pass the processed input through the model, and decode the output:
As you can see, the model recognized the crowd and the direction of the face (looking down); however, it seems to miss the fact that the crowd is behind the skater. Still, in cases where acquiring human-annotated datasets is not feasible, this approach can quickly produce useful results.
Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account to upload and share your model with the community. When prompted, enter your token to log in:

```python
from huggingface_hub import notebook_login

notebook_login()
```
Split the dataset's `train` split into a train and test set with the [~datasets.Dataset.train_test_split] method:

```python
food = food.train_test_split(test_size=0.2)
```

Then take a look at an example:

```python
food["train"][0]
{'image': ,
 'label': 79}
```

Each example in the dataset has two fields:

* `image`: a PIL image of the food item
* `label`: the label class of the food item

To make it easier for the model to get the label name from the label id, create a dictionary that maps the label name to an integer and vice versa:
```python
inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16)

generated_ids = model.generate(**inputs, max_new_tokens=10)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
"He is looking at the crowd"
```
```python
from transformers import AutoImageProcessor

checkpoint = "google/vit-base-patch16-224-in21k"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
```

Apply some image transformations to the images to make the model more robust against overfitting. Here you'll use torchvision's `transforms` module, but you can also use any image library you like.

Crop a random part of the image, resize it, and normalize it with the image mean and standard deviation:
```python
from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor

normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
size = (
    image_processor.size["shortest_edge"]
    if "shortest_edge" in image_processor.size
    else (image_processor.size["height"], image_processor.size["width"])
)
_transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])
```
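The conditional in the size computation handles the two shapes `image_processor.size` can take, depending on the checkpoint. A minimal sketch with hypothetical size dicts:

```python
# Hypothetical size dicts: some checkpoints expose {"shortest_edge": n},
# others {"height": h, "width": w}; this mirrors the conditional above.
def pick_size(size_dict):
    if "shortest_edge" in size_dict:
        return size_dict["shortest_edge"]
    return (size_dict["height"], size_dict["width"])

print(pick_size({"shortest_edge": 224}))         # 224
print(pick_size({"height": 224, "width": 224}))  # (224, 224)
```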
## Load Food-101 dataset

Start by loading a smaller subset of the Food-101 dataset from the 🤗 Datasets library. This will give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```python
from datasets import load_dataset

food = load_dataset("food101", split="train[:5000]")
```
```python
labels = food["train"].features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
    label2id[label] = str(i)
    id2label[str(i)] = label
```

Now you can convert the label id to a label name:

```python
id2label[str(79)]
'prime_rib'
```

## Preprocess

The next step is to load a ViT image processor to process the image into a tensor:
Then create a preprocessing function to apply the transforms and return `pixel_values` - the inputs to the model - of the image:

```python
def transforms(examples):
    examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]]
    del examples["image"]
    return examples
```
To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [~datasets.Dataset.with_transform] method. The transforms are applied on the fly when you load an element of the dataset:

```python
food = food.with_transform(transforms)
```

Now create a batch of examples using [DefaultDataCollator]. Unlike other data collators in 🤗 Transformers, the `DefaultDataCollator` does not apply additional preprocessing such as padding.

```python
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator()
```
```python
from tensorflow import keras
from tensorflow.keras import layers

size = (image_processor.size["height"], image_processor.size["width"])

train_data_augmentation = keras.Sequential(
    [
        layers.RandomCrop(size[0], size[1]),
        layers.Rescaling(scale=1.0 / 127.5, offset=-1),
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(factor=0.02),
        layers.RandomZoom(height_factor=0.2, width_factor=0.2),
    ],
    name="train_data_augmentation",
)

val_data_augmentation = keras.Sequential(
    [
        layers.CenterCrop(size[0], size[1]),
        layers.Rescaling(scale=1.0 / 127.5, offset=-1),
    ],
    name="val_data_augmentation",
)
```
```python
import numpy as np
import tensorflow as tf
from PIL import Image

def convert_to_tf_tensor(image: Image):
    np_image = np.array(image)
    tf_image = tf.convert_to_tensor(np_image)
    # expand_dims() is used to add a batch dimension since
    # the TF augmentation layers operate on batched inputs.
    return tf.expand_dims(tf_image, 0)

def preprocess_train(example_batch):
    """Apply train_transforms across a batch."""
    images = [
        train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
    ]
    example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
    return example_batch
```
from transformers import DefaultDataCollator data_collator = DefaultDataCollator() To avoid overfitting and to make the model more robust, add some data augmentation to the training part of the dataset. Here we use Keras preprocessing layers to define the transformations for the training data (includes data augmentation), and transformations for the validation data (only center cropping, resizing and normalizing). You can use tf.imageor any other library you prefer.
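As a sketch of the `tf.image` alternative mentioned above (illustrative only; the specific ops and factors here are assumptions, not the guide's recommended values):

```python
import tensorflow as tf


def augment_train_image(image: tf.Tensor, size=(224, 224)) -> tf.Tensor:
    """Roughly mirror the Keras training pipeline with tf.image ops."""
    image = tf.image.resize(image, size)            # fixed resize instead of RandomCrop
    image = tf.image.random_flip_left_right(image)  # like layers.RandomFlip("horizontal")
    image = tf.image.random_brightness(image, 0.1)  # extra jitter, not in the Keras version
    image = image / 127.5 - 1.0                     # like layers.Rescaling(1/127.5, offset=-1)
    return image
```

Unlike the Keras `Sequential` pipelines, these ops are plain functions, so they are typically applied inside a `tf.data` `map` rather than as model layers.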
Use 🤗 Datasets [~datasets.Dataset.set_transform] to apply the transformations on the fly:

```py
food["train"].set_transform(preprocess_train)
food["test"].set_transform(preprocess_val)
```

As a final preprocessing step, create a batch of examples using DefaultDataCollator. Unlike other data collators in 🤗 Transformers, the DefaultDataCollator does not apply additional preprocessing, such as padding.

```py
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator(return_tensors="tf")
```
```py
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator(return_tensors="tf")
```

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the accuracy metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):

```py
import evaluate

accuracy = evaluate.load("accuracy")
```
Next, create functions to apply appropriate transformations to a batch of images, instead of one image at a time.
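The batched pattern itself is simple: each preprocessing function receives a dict whose values are lists, and maps a transform over the list. A minimal plain-Python stand-in (the `transform` here is hypothetical, not a real image op):

```python
def make_batch_transform(transform):
    """Wrap a single-example transform so it runs across a whole batch dict."""
    def apply(example_batch):
        example_batch["pixel_values"] = [transform(img) for img in example_batch["image"]]
        return example_batch
    return apply


# Stand-in transform: pretend "images" are lists of pixel values.
double = make_batch_transform(lambda img: [2 * p for p in img])
batch = double({"image": [[1, 2], [3, 4]]})
```

The real `preprocess_train` and `preprocess_val` below follow exactly this shape, with the Keras augmentation pipelines as the per-example transform.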
```py
import evaluate

accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the accuracy:

```py
import numpy as np


def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
```

Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.

## Train
```py
def preprocess_val(example_batch):
    """Apply val_transforms across a batch."""
    images = [
        val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"]
    ]
    example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images]
    return example_batch
```
[ 0.006638639, -0.0198328, 0.0009585732, -0.0009802562, -0.04160262, 0.0028856576, 0.020700121, 0.024342882, -0.022290215, 0.026915941, 0.019399136, -0.0064543327, 0.03394126, -0.021379525, 0.016363503, 0.050998624, -0.013884405, -0.020280916, -0.08615414, -0.006638639, 0.00597...
Your compute_metrics function is ready to go now, and you'll return to it when you set up your training. Train If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here! You're ready to start training your model now! Load ViT with [AutoModelForImageClassification]. Specify the number of labels along with the number of expected labels, and the label mappings:
[ 0.010798644, -0.0090143345, 0.0080579445, -0.026136573, -0.019969998, -0.002480191, 0.026022378, 0.0042573637, -0.011790721, 0.036142983, 0.0017521925, -0.012540131, 0.045678336, -0.023781285, 0.023410147, 0.059838623, -0.0072657103, -0.00262472, -0.060238305, -0.013139659, -...
You're ready to start training your model now! Load ViT with [AutoModelForImageClassification]. Specify the number of labels along with the number of expected labels, and the label mappings: from transformers import AutoModelForImageClassification, TrainingArguments, Trainer model = AutoModelForImageClassification.from_pretrained( checkpoint, num_labels=len(labels), id2label=id2label, label2id=label2id, ) At this point, only three steps remain:
[ 0.026533004, 0.0024018162, 0.0014693463, -0.01658666, -0.01955361, -0.009275248, -0.0060257325, 0.031534433, -0.025049528, 0.026321078, 0.0013351272, -0.021941297, 0.04899705, -0.040887386, 0.016939867, 0.04230022, 0.005644268, -0.021008827, -0.071150266, 0.0146228215, -0.001...
from transformers import create_optimizer batch_size = 16 num_epochs = 5 num_train_steps = len(food["train"]) * num_epochs learning_rate = 3e-5 weight_decay_rate = 0.01 optimizer, lr_schedule = create_optimizer( init_lr=learning_rate, num_train_steps=num_train_steps, weight_decay_rate=weight_decay_rate, num_warmup_steps=0, ) Then, load ViT with [TFAutoModelForImageClassification] along with the label mappings:
[ 0.04550561, 0.014733018, 0.007449522, 0.019937579, -0.038315956, -0.039557543, -0.008236342, 0.0028783886, 0.0032176594, 0.02818114, 0.027401537, 0.00036972406, 0.0028747793, -0.02808008, -0.015346593, 0.015635334, 0.0066482658, -0.02940829, -0.07328251, 0.018248443, 0.009405...
Define your training hyperparameters in [TrainingArguments]. It is important you don't remove unused columns because that'll drop the image column. Without the image column, you can't create pixel_values. Set remove_unused_columns=False to prevent this behavior! The only other required parameter is output_dir which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the accuracy and save the training checkpoint. Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function. Call [~Trainer.train] to finetune your model.
```py
training_args = TrainingArguments(
    output_dir="my_awesome_food_model",
    remove_unused_columns=False,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    warmup_ratio=0.1,
    logging_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=food["train"],
    eval_dataset=food["test"],
    tokenizer=image_processor,
    compute_metrics=compute_metrics,
)

trainer.train()
```
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:

```py
trainer.push_to_hub()
```

If you are unfamiliar with fine-tuning a model with Keras, check out the basic tutorial first!
Then, load ViT with [TFAutoModelForImageClassification] along with the label mappings:

```py
from transformers import TFAutoModelForImageClassification

model = TFAutoModelForImageClassification.from_pretrained(
    checkpoint,
    id2label=id2label,
    label2id=label2id,
)
```

Convert your datasets to the tf.data.Dataset format using the [~datasets.Dataset.to_tf_dataset] and your data_collator:
To fine-tune a model in TensorFlow, follow these steps:

1. Define the training hyperparameters, and set up an optimizer and a learning rate schedule.
2. Instantiate a pre-trained model.
3. Convert a 🤗 Dataset to a tf.data.Dataset.
4. Compile your model.
5. Add callbacks and use the fit() method to run the training.
6. Upload your model to 🤗 Hub to share with the community.

Start by defining the hyperparameters, optimizer and learning rate schedule:
```py
# converting our train dataset to tf.data.Dataset
tf_train_dataset = food["train"].to_tf_dataset(
    columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
)

# converting our test dataset to tf.data.Dataset
tf_eval_dataset = food["test"].to_tf_dataset(
    columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
)
```

Configure the model for training with compile():
Configure the model for training with compile():

```py
from tensorflow.keras.losses import SparseCategoricalCrossentropy

loss = SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
```

To compute the accuracy from the predictions and push your model to the 🤗 Hub, use Keras callbacks. Pass your compute_metrics function to KerasMetricCallback, and use the PushToHubCallback to upload the model:
```py
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)
push_to_hub_callback = PushToHubCallback(
    output_dir="food_classifier",
    tokenizer=image_processor,
    save_strategy="no",
)
callbacks = [metric_callback, push_to_hub_callback]
```
```py
model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks)
Epoch 1/5
250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290
Epoch 2/5
250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690
Epoch 3/5
250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820
Epoch 4/5
250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900
Epoch 5/5
250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890
```
Congratulations! You have fine-tuned your model and shared it on the 🤗 Hub. You can now use it for inference! For a more in-depth example of how to finetune a model for image classification, take a look at the corresponding PyTorch notebook.

## Inference

Great, now that you've fine-tuned a model, you can use it for inference! Load an image you'd like to run inference on:

```py
ds = load_dataset("food101", split="validation[:10]")
image = ds["image"][0]
```
```py
ds = load_dataset("food101", split="validation[:10]")
image = ds["image"][0]
```

The simplest way to try out your finetuned model for inference is to use it in a [pipeline]. Instantiate a pipeline for image classification with your model, and pass your image to it:
Finally, you are ready to train your model! Call fit() with your training and validation datasets, the number of epochs, and your callbacks to fine-tune the model:
```py
from transformers import pipeline

classifier = pipeline("image-classification", model="my_awesome_food_model")
classifier(image)
[{'score': 0.31856709718704224, 'label': 'beignets'},
 {'score': 0.015232225880026817, 'label': 'bruschetta'},
 {'score': 0.01519392803311348, 'label': 'chicken_wings'},
 {'score': 0.013022331520915031, 'label': 'pork_chop'},
 {'score': 0.012728818692266941, 'label': 'prime_rib'}]
```

You can also manually replicate the results of the pipeline if you'd like:
You can also manually replicate the results of the pipeline if you'd like:

Load an image processor to preprocess the image and return the input as PyTorch tensors:

```py
from transformers import AutoImageProcessor
import torch

image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model")
inputs = image_processor(image, return_tensors="pt")
```

Pass your inputs to the model and return the logits:
```py
predicted_label = logits.argmax(-1).item()
model.config.id2label[predicted_label]
'beignets'
```

Load an image processor to preprocess the image and return the input as TensorFlow tensors:

```py
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier")
inputs = image_processor(image, return_tensors="tf")
```

Pass your inputs to the model and return the logits:
In this guide you'll learn how to:

* create a depth estimation pipeline
* run depth estimation inference by hand

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install -q transformers
```

## Depth estimation pipeline

The simplest way to try out inference with a model supporting depth estimation is to use the corresponding [pipeline]. Instantiate a pipeline from a checkpoint on the Hugging Face Hub:
Pass your inputs to the model and return the logits:

```py
from transformers import TFAutoModelForImageClassification

model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier")
logits = model(**inputs).logits
```

Get the predicted label with the highest probability, and use the model's id2label mapping to convert it to a label:

```py
predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
model.config.id2label[predicted_class_id]
'beignets'
```
Pass your inputs to the model and return the logits:

```py
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model")
with torch.no_grad():
    logits = model(**inputs).logits
```

Get the predicted label with the highest probability, and use the model's id2label mapping to convert it to a label:

```py
predicted_label = logits.argmax(-1).item()
model.config.id2label[predicted_label]
'beignets'
```
Pass the image to the pipeline.

```py
predictions = depth_estimator(image)
```

The pipeline returns a dictionary with two entries. The first one, called predicted_depth, is a tensor with the values being the depth expressed in meters for each pixel. The second one, depth, is a PIL image that visualizes the depth estimation result.

Let's take a look at the visualized result:

```py
predictions["depth"]
```
```py
predictions["depth"]
```

## Depth estimation inference by hand

Now that you've seen how to use the depth estimation pipeline, let's see how we can replicate the same result by hand. Start by loading the model and associated processor from a checkpoint on the Hugging Face Hub. Here we'll use the same checkpoint as before:
```py
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

checkpoint = "vinvino02/glpn-nyu"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForDepthEstimation.from_pretrained(checkpoint)
```

Prepare the image input for the model using the image_processor that will take care of the necessary image transformations such as resizing and normalization:

```py
pixel_values = image_processor(image, return_tensors="pt").pixel_values
```
```py
from transformers import pipeline

checkpoint = "vinvino02/glpn-nyu"
depth_estimator = pipeline("depth-estimation", model=checkpoint)
```

Next, choose an image to analyze:

```py
from PIL import Image
import requests

url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"
image = Image.open(requests.get(url, stream=True).raw)
image
```

Pass the image to the pipeline.

```py
predictions = depth_estimator(image)
```
```py
import torch

with torch.no_grad():
    outputs = model(pixel_values)
    predicted_depth = outputs.predicted_depth
```

Visualize the results:

```py
import numpy as np

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
).squeeze()
output = prediction.numpy()

formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
depth
```
Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to login to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to login:

```py
from huggingface_hub import notebook_login

notebook_login()
```

## Load SWAG dataset

Start by loading the regular configuration of the SWAG dataset from the 🤗 Datasets library:
```py
swag["train"][0]
{'ending0': 'passes by walking down the street playing their instruments.',
 'ending1': 'has heard approaching them.',
 'ending2': "arrives and they're outside dancing and asleep.",
 'ending3': 'turns the lead singer watches the performance.',
 'fold-ind': '3416',
 'gold-source': 'gold',
 'label': 0,
 'sent1': 'Members of the procession walk down the street holding small horn brass instruments.',
 'sent2': 'A drum line',
 'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line',
 'video-id': 'anetv_jkn6uvmqwh4'}
```
```py
pixel_values = image_processor(image, return_tensors="pt").pixel_values
```

Pass the prepared inputs through the model:

```py
import torch

with torch.no_grad():
    outputs = model(pixel_values)
    predicted_depth = outputs.predicted_depth
```

Visualize the results:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
```

The preprocessing function you want to create needs to:

1. Make four copies of the sent1 field and combine each of them with sent2 to recreate how a sentence starts.
2. Combine sent2 with each of the four possible sentence endings.
3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding input_ids, attention_mask, and labels field.
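The flatten/unflatten step can be sketched in plain Python, with a stand-in for the real BERT tokenizer (`fake_tokenize` is hypothetical, used only to show the reshaping):

```python
def fake_tokenize(firsts, seconds):
    """Stand-in for tokenizer(first_sentences, second_sentences): one id list per pair."""
    return {"input_ids": [[len(a), len(b)] for a, b in zip(firsts, seconds)]}


# Two examples, four candidate endings each.
first_sentences = [["ctx1"] * 4, ["ctx2"] * 4]
second_sentences = [["a", "bb", "ccc", "dddd"], ["e", "ff", "ggg", "hhhh"]]

# Flatten: 2 examples x 4 choices -> 8 pairs the tokenizer can process in one call.
flat_first = sum(first_sentences, [])
flat_second = sum(second_sentences, [])
tokenized = fake_tokenize(flat_first, flat_second)

# Unflatten: regroup into lists of 4 per example, as preprocess_function does.
unflattened = {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized.items()}
```

The slicing comprehension at the end is the same one the real preprocess_function returns.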
```py
from huggingface_hub import notebook_login

notebook_login()
```

## Load SWAG dataset

Start by loading the regular configuration of the SWAG dataset from the 🤗 Datasets library:

```py
from datasets import load_dataset

swag = load_dataset("swag", "regular")
```

Then take a look at an example:
```py
ending_names = ["ending0", "ending1", "ending2", "ending3"]


def preprocess_function(examples):
    first_sentences = [[context] * 4 for context in examples["sent1"]]
    question_headers = examples["sent2"]
    second_sentences = [
        [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
    ]
```
While it looks like there are a lot of fields here, it is actually pretty straightforward:

* sent1 and sent2: these fields show how a sentence starts, and if you put the two together, you get the startphrase field.
* ending: suggests a possible ending for how a sentence can end, but only one of them is correct.
* label: identifies the correct sentence ending.

## Preprocess

The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:
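Using the sample record shown earlier, the field relationships can be checked in plain Python (illustrative only; the real preprocessing also tokenizes each pair):

```python
example = {
    "sent1": "Members of the procession walk down the street holding small horn brass instruments.",
    "sent2": "A drum line",
    "ending0": "passes by walking down the street playing their instruments.",
    "ending1": "has heard approaching them.",
    "ending2": "arrives and they're outside dancing and asleep.",
    "ending3": "turns the lead singer watches the performance.",
    "label": 0,
}

# sent1 + sent2 reproduces startphrase; each ending completes it differently.
startphrase = f"{example['sent1']} {example['sent2']}"
candidates = [f"{example['sent2']} {example[f'ending{i}']}" for i in range(4)]
correct = candidates[example["label"]]
```

The model will score all four candidates and is trained so the one at index `label` scores highest.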
```py
    first_sentences = sum(first_sentences, [])
    second_sentences = sum(second_sentences, [])

    tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
    return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```
## Preprocess

The next step is to load a BERT tokenizer to process the sentence starts and the four possible endings:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
```

The preprocessing function you want to create needs to:
```py
from dataclasses import dataclass
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from typing import Optional, Union
import torch


@dataclass
class DataCollatorForMultipleChoice:
    """
    Data collator that will dynamically pad the inputs for multiple choice received.
    """
```
```py
    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0].keys() else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])
        flattened_features = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
        ]
        flattened_features = sum(flattened_features, [])

        batch = self.tokenizer.pad(
            flattened_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="pt",
        )

        batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
        batch["labels"] = torch.tensor(labels, dtype=torch.int64)
        return batch
```
```py
from dataclasses import dataclass
from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
from typing import Optional, Union
import tensorflow as tf


@dataclass
class DataCollatorForMultipleChoice:
    """
    Data collator that will dynamically pad the inputs for multiple choice received.
    """
```
```py
    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0].keys() else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])
        flattened_features = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
        ]
        flattened_features = sum(flattened_features, [])

        batch = self.tokenizer.pad(
            flattened_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="tf",
        )

        batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()}
        batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64)
        return batch
```
[ 0.046453107, -0.008191507, 0.01490173, 0.025063347, -0.0026792749, -0.029270198, 0.018338311, 0.0032791954, -0.023330243, 0.03709138, 0.01693109, 0.0041253795, 0.016471893, -0.025211476, -0.042453635, 0.007450865, -0.047134496, -0.049149048, -0.06553206, -0.0065213586, -0.007...
To apply the preprocessing function over the entire dataset, use 🤗 Datasets [~datasets.Dataset.map] method. You can speed up the map function by setting batched=True to process multiple elements of the dataset at once:

```py
tokenized_swag = swag.map(preprocess_function, batched=True)
```

🤗 Transformers doesn't have a data collator for multiple choice, so you'll need to adapt the [DataCollatorWithPadding] to create a batch of examples. It's more efficient to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

DataCollatorForMultipleChoice flattens all the model inputs, applies padding, and then unflattens the results, as implemented above.
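That flatten → pad → unflatten flow can be sketched with toy data. This is a dependency-light illustration with hypothetical token IDs and no real tokenizer, using NumPy in place of TensorFlow; padding happens to be a no-op because the toy sequences already share a length:

```python
import numpy as np

# Hypothetical shapes: 2 examples, each with 4 candidate endings of 2 tokens
batch_size, num_choices = 2, 4
features = [
    {"input_ids": [[1, 2], [3, 4], [5, 6], [7, 8]]},          # example 0
    {"input_ids": [[9, 10], [11, 12], [13, 14], [15, 16]]},   # example 1
]

# Flatten: one row per (example, choice) pair -> 8 rows total
flattened = [
    {k: v[i] for k, v in feature.items()} for feature in features for i in range(num_choices)
]
padded = np.array([f["input_ids"] for f in flattened])  # shape (8, 2); padding is a no-op here

# Unflatten back to (batch_size, num_choices, seq_len)
unflattened = padded.reshape(batch_size, num_choices, -1)
print(unflattened.shape)  # (2, 4, 2)
```

The model sees one scored sequence per (example, choice) pair, while the loss needs the choices grouped per example, which is why the collator reshapes at the end.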
Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 Evaluate library. For this task, load the accuracy metric (see the 🤗 Evaluate quick tour to learn more about how to load and compute a metric):

```py
import evaluate

accuracy = evaluate.load("accuracy")
```

Then create a function that passes your predictions and labels to [~evaluate.EvaluationModule.compute] to calculate the accuracy:
```py
from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer

model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```

At this point, only three steps remain:
```py
import numpy as np


def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
```

Your compute_metrics function is ready to go now, and you'll return to it when you set up your training.

Train
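Before wiring up the training loop, you can sanity-check the metric logic with made-up logits. This sketch computes plain accuracy with NumPy instead of calling 🤗 Evaluate, so it stays dependency-free; the logit values and labels are hypothetical:

```python
import numpy as np

# Made-up logits for 4 examples with 4 answer choices each
logits = np.array([
    [0.1, 0.9, 0.0, 0.0],
    [0.8, 0.1, 0.05, 0.05],
    [0.2, 0.2, 0.5, 0.1],
    [0.25, 0.25, 0.25, 0.25],
])
labels = np.array([1, 0, 2, 3])

predictions = np.argmax(logits, axis=1)      # [1, 0, 2, 0] (ties resolve to the first index)
acc = float((predictions == labels).mean())  # 3 of 4 correct
print(acc)  # 0.75
```

This mirrors what accuracy.compute returns for the same inputs: the fraction of examples whose highest-scoring choice matches the label.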
1. Define your training hyperparameters in [TrainingArguments]. The only required parameter is output_dir, which specifies where to save your model. You'll push this model to the Hub by setting push_to_hub=True (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [Trainer] will evaluate the accuracy and save the training checkpoint.
2. Pass the training arguments to [Trainer] along with the model, dataset, tokenizer, data collator, and compute_metrics function.
3. Call [~Trainer.train] to finetune your model.
Once training is completed, share your model to the Hub with the [~transformers.Trainer.push_to_hub] method so everyone can use your model:

```py
trainer.push_to_hub()
```

If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial here!

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
training_args = TrainingArguments(
    output_dir="my_awesome_swag_model",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_swag["train"],
    eval_dataset=tokenized_swag["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
    compute_metrics=compute_metrics,
)

trainer.train()
```
If you aren't familiar with finetuning a model with the [Trainer], take a look at the basic tutorial here!

You're ready to start training your model now! Load BERT with [AutoModelForMultipleChoice]:

```py
from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer

model = AutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```
Then you can load BERT with [TFAutoModelForMultipleChoice]:

```py
from transformers import TFAutoModelForMultipleChoice

model = TFAutoModelForMultipleChoice.from_pretrained("google-bert/bert-base-uncased")
```

Convert your datasets to the tf.data.Dataset format with [~transformers.TFPreTrainedModel.prepare_tf_dataset]:
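The conversion can look like the following sketch, which assumes the tokenized_swag dataset, the batch_size variable, and the DataCollatorForMultipleChoice defined earlier in this guide:

```py
data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)

tf_train_set = model.prepare_tf_dataset(
    tokenized_swag["train"],
    shuffle=True,
    batch_size=batch_size,
    collate_fn=data_collator,
)

tf_validation_set = model.prepare_tf_dataset(
    tokenized_swag["validation"],
    shuffle=False,
    batch_size=batch_size,
    collate_fn=data_collator,
)
```

Note that the validation set is not shuffled, so evaluation metrics stay comparable across epochs.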
Configure the model for training with compile. Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```py
model.compile(optimizer=optimizer)  # No loss argument!
```

The last two things to set up before you start training are computing the accuracy from the predictions and providing a way to push your model to the Hub. Both are done using Keras callbacks.

Pass your compute_metrics function to [~transformers.KerasMetricCallback]:
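A sketch of the callback setup and training call, assuming tf_train_set and tf_validation_set were created with prepare_tf_dataset; the output_dir name is an example:

```py
from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)

push_to_hub_callback = PushToHubCallback(
    output_dir="my_awesome_swag_model",
    tokenizer=tokenizer,
)

model.fit(
    x=tf_train_set,
    validation_data=tf_validation_set,
    epochs=2,
    callbacks=[metric_callback, push_to_hub_callback],
)
```

[~transformers.KerasMetricCallback] runs compute_metrics on the evaluation set at the end of each epoch, and [~transformers.PushToHubCallback] uploads checkpoints to the Hub as training progresses.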
```py
from transformers import create_optimizer

batch_size = 16
num_train_epochs = 2
total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs
optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```