Dataset schema:
- category: string (107 classes)
- title: string (length 15–179)
- question_link: string (length 59–147)
- question_body: string (length 53–33.8k)
- answer_html: string (length 0–28.8k)
- __index_level_0__: int64 (0–1.58k)
T5 model
Flan T5 - How to give the correct prompt/question?
https://stackoverflow.com/questions/75203036/flan-t5-how-to-give-the-correct-prompt-question
<p>I am trying to give the right kind of prompt to the Flan-T5 language model in order to get correct/accurate responses for a chatbot/option-matching use case.</p> <p>I am trying to use a Flan-T5 model for the following task. Given a chatbot that presents the user with a list of options, the model has to do semantic option matching. For instance, if the options are &quot;Barbeque Chicken, Smoked Salmon&quot; and the user says &quot;I want fish&quot;, the model should select Smoked Salmon. Another use case could be &quot;The first one&quot;, in which case the model should select Barbeque Chicken. A third use case could be &quot;The BBQ one&quot;, in which case the model should select Barbeque Chicken.</p> <p>I am using some code from the Hugging Face docs to play around with flan-t5, but I did not get the correct output.</p> <pre><code>from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained(&quot;google/flan-t5-small&quot;)
tokenizer = AutoTokenizer.from_pretrained(&quot;google/flan-t5-small&quot;)

inputs = tokenizer('''Q:Select from the following options
(a) Quinoa Salad
(b) Kale Smoothie
A:Select the first one
''', return_tensors=&quot;pt&quot;)
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
</code></pre> <p>The output is</p> <pre><code>['(b) Kale Smoothie']
</code></pre> <p>How should I give the correct prompt/question to elicit the correct response from Flan-T5?</p>
<p>A recent paper goes into detail about how the Flan Collection was created (&quot;<a href="https://arxiv.org/abs/2301.13688" rel="noreferrer">The Flan Collection: Designing Data and Methods for Effective Instruction Tuning</a>&quot;) and points to a GitHub repo with <a href="https://github.com/google-research/FLAN/blob/main/flan/v2/flan_templates_branched.py" rel="noreferrer">the templates used for creating the training data</a> for it.</p> <p>Some examples:</p> <blockquote> <p><code>&quot;Write a short summary for this text: {text}&quot;</code></p> </blockquote> <blockquote> <p><code>&quot;Context: {context}\n\nQuestion: {question}\n\nAnswer:&quot;</code></p> </blockquote> <blockquote> <p><code>&quot;Who is {pronoun} in the following sentence?\n\n{sentence}\n\n{options_}&quot;</code></p> </blockquote> <p>For picking from a list of options, it looks like the code <a href="https://github.com/google-research/FLAN/blob/2c79a315c9855fde5e4dc966448bbe56b13a97f7/flan/v2/preprocessors.py#L160-L172" rel="noreferrer">generates a newline/hyphen-separated list or prefaces the answers with capitalized letters in parentheses</a>:</p> <blockquote> <pre><code>OPTIONS:
- first thing
- second thing
- third thing
</code></pre> </blockquote> <p>or</p> <blockquote> <pre><code>OPTIONS:
(A) first thing
(B) second thing
(C) third thing
</code></pre> </blockquote>
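Based on those two template variants, a small helper (my own sketch, not from the FLAN repo; the function name and signature are illustrative) that formats a question and its options into a FLAN-style prompt:

```python
def format_flan_options_prompt(question, options, lettered=False):
    """Format a prompt using a FLAN-style OPTIONS block.

    lettered=True uses the "(A) ..." variant; otherwise the
    newline/hyphen-separated variant is used.
    """
    if lettered:
        lines = [f"({chr(ord('A') + i)}) {o}" for i, o in enumerate(options)]
    else:
        lines = [f"- {o}" for o in options]
    return f"{question}\nOPTIONS:\n" + "\n".join(lines)

prompt = format_flan_options_prompt(
    "The customer said 'I want fish'. Which dish do they want?",
    ["Barbeque Chicken", "Smoked Salmon"],
)
print(prompt)
```

Feeding a prompt in this shape to the tokenizer in place of the ad-hoc `Q:`/`A:` format should put the input closer to what the model saw during instruction tuning.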
34
T5 model
How to use huggingface T5 model to test translation task?
https://stackoverflow.com/questions/60513592/how-to-use-huggingface-t5-model-to-test-translation-task
<p>I see there exist two configs of the T5 model - <strong>T5Model</strong> and <strong>TFT5WithLMHeadModel</strong>. I want to test this for translation tasks (e.g. en-de) as shown in Google's original repo. Is there a way I can use this model from Hugging Face to test translation tasks? I did not see any examples related to this in the documentation and was wondering how to provide the input and get the results.</p> <p>Any help appreciated.</p>
<p>You can use <em>T5ForConditionalGeneration</em> to translate your text...</p> <pre><code>!pip install transformers

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', return_dict=True)

input = &quot;My name is Azeem and I live in India&quot;

# You can also use &quot;translate English to French&quot; and &quot;translate English to Romanian&quot;
input_ids = tokenizer(&quot;translate English to German: &quot;+input, return_tensors=&quot;pt&quot;).input_ids  # Batch size 1

outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(decoded)
</code></pre> <p>As of today, <em>T5WithLMHeadModel</em> is not supported by Transformers.</p>
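Since the only model-visible difference between the translation directions is the task prefix, a small hypothetical helper (names are my own, not from the answer) can build the prefixed inputs for the directions t5-small was trained on:

```python
# Task prefixes for the translation directions mentioned in the answer.
T5_TRANSLATION_PREFIXES = {
    "de": "translate English to German: ",
    "fr": "translate English to French: ",
    "ro": "translate English to Romanian: ",
}

def build_translation_input(text, target_lang):
    """Prepend the T5 task prefix for the requested target language."""
    try:
        prefix = T5_TRANSLATION_PREFIXES[target_lang]
    except KeyError:
        raise ValueError(f"t5-small has no 'translate English to {target_lang}' task")
    return prefix + text

print(build_translation_input("My name is Azeem", "de"))
# translate English to German: My name is Azeem
```

The resulting string is what you would pass to the tokenizer before calling `model.generate`.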
35
T5 model
How to run inference for T5 tensorrt model deployed on nvidia triton?
https://stackoverflow.com/questions/71911630/how-to-run-inference-for-t5-tensorrt-model-deployed-on-nvidia-triton
<p>I have deployed a T5 TensorRT model on the NVIDIA Triton server, and below is the config.pbtxt file, but I am facing problems while running inference on the model using the Triton client.</p> <p>As per the config.pbtxt file there should be 4 inputs to the TensorRT model, including the decoder ids. But how can we send the decoder ids as input to the model? I think the decoder ids have to be generated from the model's output.</p> <pre><code>name: &quot;tensorrt_model&quot;
platform: &quot;tensorrt_plan&quot;
max_batch_size: 0
input [
  {
    name: &quot;input_ids&quot;
    data_type: TYPE_INT32
    dims: [ -1, -1 ]
  },
  {
    name: &quot;attention_mask&quot;
    data_type: TYPE_INT32
    dims: [ -1, -1 ]
  },
  {
    name: &quot;decoder_input_ids&quot;
    data_type: TYPE_INT32
    dims: [ -1, -1 ]
  },
  {
    name: &quot;decoder_attention_mask&quot;
    data_type: TYPE_INT32
    dims: [ -1, -1 ]
  }
]
output [
  {
    name: &quot;last_hidden_state&quot;
    data_type: TYPE_FP32
    dims: [ -1, -1, 768 ]
  },
  {
    name: &quot;input.151&quot;
    data_type: TYPE_FP32
    dims: [ -1, -1, -1 ]
  }
]
instance_group [
  {
    count: 1
    kind: KIND_GPU
  }
]
</code></pre>
<p>You have several examples in the <a href="https://github.com/triton-inference-server/client/tree/main" rel="nofollow noreferrer">NVIDIA Triton Client</a> repository. However, if your use case is too complex, you might need the Python backend instead of the Torch one.</p> <p>You initialize the client as follows:</p> <pre class="lang-py prettyprint-override"><code>import tritonclient.http as httpclient

triton_url = None  # your triton url
triton_client = httpclient.InferenceServerClient(url=triton_url)
</code></pre> <p>With the client initialized, in Python you will need a function to generate the requests, such as the following:</p> <pre class="lang-py prettyprint-override"><code>inputs_dtype = []   # list with input dtypes
inputs_name = []    # list with input names
outputs_name = []   # list with output names

def request_generator(data):
    client = httpclient
    inputs = [
        client.InferInput(input_name, data[i].shape, inputs_dtype[i])
        for i, input_name in enumerate(inputs_name)
    ]
    for i, _input in enumerate(inputs):
        _input.set_data_from_numpy(data[i])
    outputs = [
        client.InferRequestedOutput(output_name)
        for output_name in outputs_name
    ]
    yield inputs, outputs
</code></pre> <p>Then, you can use this <code>request_generator</code> in your loop to run inferences:</p> <pre class="lang-py prettyprint-override"><code>from tritonclient.utils import InferenceServerException

# assuming your data comes in a variable named data
# assuming your triton client is triton_client
data = preprocess(data)  # your preprocess function
model_name = None     # your model name
model_version = None  # your model version

responses = []
sent_count = 0
try:
    for inputs, outputs in request_generator(data):
        sent_count += 1
        responses.append(
            triton_client.infer(model_name,
                                inputs,
                                request_id=str(sent_count),
                                model_version=model_version,
                                outputs=outputs))
except InferenceServerException as exception:
    print(&quot;Caught an exception:&quot;, exception)
</code></pre> <p>This is just a simple illustration of how you could do it and it omits many implementation details; the <a href="https://github.com/triton-inference-server/client/tree/main/src/python/examples" rel="nofollow noreferrer">examples</a> in the repo cover them.</p>
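The question's underlying issue, where `decoder_input_ids` come from, is resolved by the usual autoregressive loop: start from the decoder start token and repeatedly feed the model's own output back in as decoder input. A minimal sketch of that loop (the `run_model` callable is a hypothetical stand-in for one Triton inference call, and the token ids are illustrative, not real T5 vocabulary ids):

```python
def greedy_decode(run_model, input_ids, start_token_id=0, eos_token_id=1, max_len=10):
    """Grow decoder_input_ids step by step from the model's own outputs.

    run_model(input_ids, decoder_input_ids) stands in for one inference
    call and must return the vocabulary scores for the next token.
    """
    decoder_input_ids = [start_token_id]
    for _ in range(max_len):
        scores = run_model(input_ids, decoder_input_ids)
        next_id = max(range(len(scores)), key=scores.__getitem__)  # argmax
        decoder_input_ids.append(next_id)
        if next_id == eos_token_id:
            break
    return decoder_input_ids

def toy_model(input_ids, decoder_input_ids):
    # Pretend model: prefers token 3 until three tokens exist, then EOS (id 1).
    if len(decoder_input_ids) >= 4:
        return [0.0, 1.0, 0.0, 0.0]
    return [0.0, 0.1, 0.0, 0.9]

print(greedy_decode(toy_model, [5, 6, 7]))  # [0, 3, 3, 3, 1]
```

With a real deployment, each `run_model` call would build the four Triton inputs (including the growing `decoder_input_ids` and its attention mask), call `triton_client.infer`, and read the scores for the last decoder position from the response.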
36
T5 model
I am trying to convert a flan-t5 pytorch model to GGUF format
https://stackoverflow.com/questions/77200600/i-am-trying-to-convert-a-flan-t5-pytorch-model-to-gguf-format
<p>I have tried to convert the model using the llama.cpp convert.py following the colab note <a href="https://colab.research.google.com/github/TrelisResearch/gguf-quantization/blob/main/HuggingFace_to_GGUF.ipynb#scrollTo=CmT3xkz0GHJN" rel="nofollow noreferrer">HERE</a>.</p> <pre><code>model = AutoModelForSeq2SeqLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map='cpu',
    offload_folder='offload',
    cache_dir=cache_dir
)
</code></pre> <p>Facing an error at <code>!python convert.py models/</code> Error:</p> <pre><code>Loading model file models/pytorch_model.bin
Traceback (most recent call last):
  File &quot;/content/llama.cpp/convert.py&quot;, line 1208, in &lt;module&gt;
    main()
  File &quot;/content/llama.cpp/convert.py&quot;, line 1157, in main
    params = Params.load(model_plus)
  File &quot;/content/llama.cpp/convert.py&quot;, line 288, in load
    params = Params.loadHFTransformerJson(model_plus.model, hf_config_path)
  File &quot;/content/llama.cpp/convert.py&quot;, line 203, in loadHFTransformerJson
    n_embd = config[&quot;hidden_size&quot;]
KeyError: 'hidden_size'
</code></pre> <p>Kindly help me convert the <a href="https://huggingface.co/google/flan-t5-large" rel="nofollow noreferrer">google/flan-t5-large</a> model to GGUF format.</p>
<p>llama.cpp's <code>convert.py</code> expects a LLaMA-style decoder-only config, which is why it fails with <code>KeyError: 'hidden_size'</code> on T5's config. For encoder-decoder models, the conversion can instead be achieved using the candle framework's resources; <a href="https://huggingface.co/lmz/candle-quantized-t5/tree/main" rel="nofollow noreferrer">here</a> are the GGUF conversions of T5 models.</p>
37
T5 model
Can we convert dynamic DNN model to TorchScript?
https://stackoverflow.com/questions/76473823/can-we-convert-dynamic-dnn-model-to-torchscript
<p>Hi all.</p> <p>I'm trying to convert a SwitchTransformer model to TorchScript. (SwitchTransformer is an MoE DNN based on Google's T5 model.)</p> <p>When converting both T5 and SwitchTransformer, there's no error for T5, but I get the following error for SwitchTransformer.</p> <pre><code>/root/HuggingFace/.HF/lib/python3.8/site-packages/transformers/modeling_utils.py:776: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_mask.shape[1] &lt; attention_mask.shape[1]:
Traceback (most recent call last):
  File &quot;example.py&quot;, line 423, in &lt;module&gt;
    traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
  File &quot;/root/HuggingFace/.HF/lib/python3.8/site-packages/torch/jit/_trace.py&quot;, line 794, in trace
    return trace_module(
  File &quot;/root/HuggingFace/.HF/lib/python3.8/site-packages/torch/jit/_trace.py&quot;, line 1056, in trace_module
    module._c._create_method_from_trace(
RuntimeError: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions
</code></pre> <p>I think it is because of the dynamic characteristics of SwitchTransformer.</p> <p>This is the code for T5.</p> <pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch

tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)

input_ids = tokenizer('The &lt;extra_id_0&gt; walks in &lt;extra_id_1&gt; park', return_tensors='pt').input_ids
attention_mask = input_ids.ne(model.config.pad_token_id).long()
decoder_input_ids = tokenizer('&lt;pad&gt; &lt;extra_id_0&gt; cute dog &lt;extra_id_1&gt; the &lt;extra_id_2&gt;', return_tensors='pt').input_ids

traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
torch.jit.save(traced_model, &quot;traced_t5.pt&quot;)
</code></pre> <p>And this is the code for SwitchTransformer.</p> <pre><code>from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
from transformers import AutoTokenizer, SwitchTransformersConfig
import torch

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(&quot;google/switch-base-8&quot;, resume_download=True)

model = SwitchTransformersForConditionalGeneration.from_pretrained(
    &quot;google/switch-base-8&quot;,
    resume_download=True,
    torch_dtype=torch.bfloat16,
    torchscript=True,
)

input_text = &quot;A &lt;extra_id_0&gt; walks into a bar a orders a &lt;extra_id_1&gt; with &lt;extra_id_2&gt; pinch of &lt;extra_id_3&gt;.&quot;
output_text = &quot;&lt;pad&gt; &lt;extra_id_0&gt; man&lt;extra_id_1&gt; beer&lt;extra_id_2&gt; a&lt;extra_id_3&gt; salt&lt;extra_id_4&gt;.&lt;/s&gt;&quot;

input_ids = tokenizer(input_text, return_tensors=&quot;pt&quot;).input_ids
decoder_input_ids = tokenizer(output_text, return_tensors=&quot;pt&quot;, padding=True).input_ids
attention_mask = input_ids.ne(model.config.pad_token_id).long()

# model.eval()
traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
</code></pre>
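The RuntimeError arises because the traced forward returns extra non-tensor outputs (for SwitchTransformers, router-related values), and `torch.jit.trace` only accepts tensors (and nested lists/tuples/dicts of tensors) as outputs. One common workaround, my own sketch rather than anything from the question, is to wrap the model so the traced function returns only tensors. The pattern is shown here with a toy module so it runs standalone:

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Stand-in for a model whose forward returns a non-tensor extra value,
    the way SwitchTransformers returns router outputs."""
    def forward(self, x):
        router_stats = 0.5  # plain Python float: not a valid traced output
        return x * 2, router_stats

class TensorOnlyWrapper(nn.Module):
    """Expose only the tensor output so torch.jit.trace can record it."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x)[0]  # drop the non-tensor part

example = torch.ones(2, 3)
traced = torch.jit.trace(TensorOnlyWrapper(ToyMoE()), example)
print(traced(example))  # a (2, 3) tensor of twos
```

For the real model, the wrapper's `forward` would take `(input_ids, attention_mask, decoder_input_ids)` and return only the logits tensor. Note that the data-dependent expert routing may still be baked in as constants during tracing, which is what the TracerWarning is hinting at.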
38
T5 model
Fine-tune T5 pre-trained model on a specific domain for question answering
https://stackoverflow.com/questions/75459693/fine-tune-t5-pre-trained-model-on-a-specific-domain-for-question-answering
<p>I need to build a question-answering system for a specific domain, finance. I have document data containing all the information about the field.</p> <p>Can I fine-tune a pre-trained T5 model (large) with unsupervised training on the documents so it can answer related questions based on my document corpus?<br /> The document corpus I have is quite large, so I cannot just pass it as context in a standard T5 QA setup.</p> <p>I am open to your suggestions!</p>
<p>What I found is that it is not really feasible to fine-tune T5's word embeddings in an unsupervised way: you can only use context or fine-tune the model on a QA dataset, but not retrain it on a specific domain like finance, which was my case.<br /> I ended up building the QA system using Haystack, an open-source library offering a project architecture for building transformer-based NLP QA systems with the components you specify:<br /> <a href="https://github.com/deepset-ai/haystack" rel="nofollow noreferrer">https://github.com/deepset-ai/haystack</a></p>
39
T5 model
Sentence embedding using T5
https://stackoverflow.com/questions/64579258/sentence-embedding-using-t5
<p>I would like to use the state-of-the-art LM T5 to get a sentence embedding vector. I found this repository: <a href="https://github.com/UKPLab/sentence-transformers" rel="noreferrer">https://github.com/UKPLab/sentence-transformers</a>. As far as I know, in BERT I should take the first token, the [CLS] token, as the sentence embedding. In this repository I see the same behaviour for the T5 model:</p> <pre><code>cls_tokens = output_tokens[:, 0, :]  # CLS token is first token
</code></pre> <p>Is this behaviour correct? I took the encoder from T5 and encoded two phrases with it:</p> <pre><code>&quot;I live in the kindergarden&quot;
&quot;Yes, I live in the kindergarden&quot;
</code></pre> <p>The cosine similarity between them was only 0.2420.</p> <p>I just need to understand how sentence embedding works here - should I train the network to find similarity to reach correct results? Or is the base pretrained language model enough?</p>
<p>In order to obtain the sentence embedding from T5, you need to take the <code>last_hidden_state</code> from the T5 encoder output:</p> <pre><code>output = model.encoder(input_ids=s, attention_mask=attn, return_dict=True)
pooled_sentence = output.last_hidden_state  # shape is [batch_size, seq_len, hidden_size]
# pooled_sentence holds the embeddings for each token in the sentence,
# so you need to sum/average over the sequence dimension
pooled_sentence = torch.mean(pooled_sentence, dim=1)
</code></pre> <p>You now have a sentence embedding from T5.</p>
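A plain `torch.mean` over the sequence dimension also averages in padding positions. A masked variant (my own sketch, not part of the answer) averages only the real tokens, and cosine similarity can then be computed between the pooled vectors; the tiny fabricated hidden states below stand in for a real `last_hidden_state`:

```python
import torch
import torch.nn.functional as F

def masked_mean_pool(last_hidden_state, attention_mask):
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

# Fabricated "hidden states": batch 1, seq_len 3, hidden 2.
# The last position is padding and must not affect the mean.
hidden = torch.tensor([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
mask = torch.tensor([[1, 1, 0]])

emb = masked_mean_pool(hidden, mask)         # tensor([[2., 3.]])
sim = F.cosine_similarity(emb, emb, dim=-1)  # 1.0 for identical vectors
print(emb, sim)
```

With real T5 outputs, `hidden` would be `output.last_hidden_state` and `mask` the tokenizer's `attention_mask`.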
40
T5 model
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
https://stackoverflow.com/questions/65140400/valueerror-you-have-to-specify-either-decoder-input-ids-or-decoder-inputs-embed
<p>I am trying to convert a <code>question-generation</code> t5 <a href="https://huggingface.co/transformers/torchscript.html#torchscript" rel="noreferrer">model</a> to <code>torchscript</code>; while doing that I am running into this error:</p> <p><strong>ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds</strong></p> <p>Here's the code that I ran on Colab.</p> <pre><code>!pip install -U transformers==3.0.0
!python -m nltk.downloader punkt

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

model = AutoModelForSeq2SeqLM.from_pretrained('valhalla/t5-base-qg-hl')
t_input = 'Python is a programming language. It is developed by &lt;hl&gt; Guido Van Rossum &lt;hl&gt;. &lt;/s&gt;'
tokenizer = AutoTokenizer.from_pretrained('valhalla/t5-base-qg-hl', return_tensors = 'pt')

def _tokenize(
    inputs,
    padding=True,
    truncation=True,
    add_special_tokens=True,
    max_length=64
):
    inputs = tokenizer.batch_encode_plus(
        inputs,
        max_length=max_length,
        add_special_tokens=add_special_tokens,
        truncation=truncation,
        padding=&quot;max_length&quot; if padding else False,
        pad_to_max_length=padding,
        return_tensors=&quot;pt&quot;
    )
    return inputs

token = _tokenize(t_input, padding=True, truncation=True)

traced_model = torch.jit.trace(model, [token['input_ids'], token['attention_mask']])
torch.jit.save(traced_model, &quot;traced_t5.pt&quot;)
</code></pre> <p>got this error</p> <pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-1-f9b449524ef1&gt; in &lt;module&gt;()
     32
     33
---&gt; 34 traced_model = torch.jit.trace(model, [token['input_ids'], token['attention_mask']] )
     35 torch.jit.save(traced_model, &quot;traced_t5.pt&quot;)

7 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_value_states, use_cache, output_attentions, output_hidden_states)
    682         else:
    683             if self.is_decoder:
--&gt; 684                 raise ValueError(&quot;You have to specify either decoder_input_ids or decoder_inputs_embeds&quot;)
    685             else:
    686                 raise ValueError(&quot;You have to specify either input_ids or inputs_embeds&quot;)

ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
</code></pre> <p>How do I resolve this issue? Or is there a better way of converting the t5 model to <code>torchscript</code>?</p> <p>Thank you.</p>
<p><strong>Update</strong>: refer to <a href="https://stackoverflow.com/a/66117248/13273054">this</a> answer, and if you are exporting <code>t5</code> to <code>onnx</code>, it can be done easily using the <a href="https://github.com/Ki6an/fastT5" rel="nofollow noreferrer"><code>fastT5</code></a> library.</p> <p>I figured out what was causing the issue. Since the above model is a <strong><a href="https://huggingface.co/transformers/model_summary.html#seq-to-seq-models" rel="nofollow noreferrer">sequence-to-sequence model</a></strong>, it has both an encoder and a decoder. We need to pass the features into the encoder and the labels (targets) into the decoder.</p> <pre><code>traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids, decoder_attention_mask))
torch.jit.save(traced_model, &quot;qg_model.pt&quot;)
</code></pre> <p>Here <code>decoder_input_ids</code> holds the tokenized ids of the question (the question acts as the label).</p> <p>Note that even though the <code>torchscript</code> model is created, it does not have the <code>generate()</code> method that <a href="https://huggingface.co/transformers/_modules/transformers/models/t5/modeling_t5.html#T5Model" rel="nofollow noreferrer">Hugging Face's</a> T5 does.</p>
41
T5 model
How to finetune the huggingface T5 model on custom data?
https://stackoverflow.com/questions/64818504/how-to-finetune-the-huggingface-t5-model-on-custom-data
<p>I have a small text dataset for translation which I want to fine-tune with <code>t5-small</code>. Here is the code I am trying to use for fine-tuning:</p> <pre><code>import numpy as np
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration, T5Tokenizer

model = TFT5ForConditionalGeneration.from_pretrained('t5-small')
tokenizer = T5Tokenizer.from_pretrained('t5-small')

def data_gen():
    for _ in range(256):
        x = np.random.randint(1, tokenizer.vocab_size, model.config.n_positions)
        attention = np.ones_like(x)
        yield ((x, attention), (x, attention))

output_type = ((tf.int32, tf.int32), (tf.int32, tf.int32))
ds = tf.data.Dataset.from_generator(data_gen, output_type).batch(2)

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)

model.fit(ds, epochs=3, steps_per_epoch=128)
</code></pre> <p>but while running this code I am getting the following error:</p> <pre><code>All model checkpoint layers were used when initializing TFT5ForConditionalGeneration.

All the layers of TFT5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.
Epoch 1/3
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-12-12c0ab7ab337&gt; in &lt;module&gt;()
     19 model.compile(optimizer=optimizer, loss=loss)
     20
---&gt; 21 model.fit(ds, epochs=3, steps_per_epoch=128)

10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    971           except Exception as e:  # pylint:disable=broad-except
    972             if hasattr(e, &quot;ag_error_metadata&quot;):
--&gt; 973               raise e.ag_error_metadata.to_exception(e)
    974             else:
    975               raise

ValueError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_t5.py:1285 call  *
        encoder_outputs = self.encoder(
    /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_t5.py:618 call  *
        input_shape = shape_list(input_ids)
    /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py:1026 shape_list  *
        static = x.shape.as_list()
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py:1190 as_list  **
        raise ValueError(&quot;as_list() is not defined on an unknown TensorShape.&quot;)

    ValueError: as_list() is not defined on an unknown TensorShape
</code></pre> <pre><code>Python 3.6.9
tensorflow==2.3.0
tensorflow-addons==0.8.3
tensorflow-datasets==4.0.1
tensorflow-estimator==2.3.0
tensorflow-gcs-config==2.3.0
tensorflow-hub==0.10.0
tensorflow-metadata==0.24.0
tensorflow-privacy==0.2.2
tensorflow-probability==0.11.0
transformers==3.5.0
</code></pre>
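A common cause of "as_list() is not defined on an unknown TensorShape" is that `tf.data.Dataset.from_generator` was given only `output_types`, so every element has a fully unknown shape when the model's `call` inspects it. A sketch of the fix (assumption on my part, not a confirmed answer from the thread): also pass `output_shapes` so the dataset elements carry static shapes. Shown with a small stand-in sequence length so it runs without a model:

```python
import numpy as np
import tensorflow as tf

seq_len = 8  # stand-in for model.config.n_positions

def data_gen():
    for _ in range(4):
        x = np.random.randint(1, 100, seq_len).astype(np.int32)
        attention = np.ones_like(x)
        yield ((x, attention), (x, attention))

output_types = ((tf.int32, tf.int32), (tf.int32, tf.int32))
# The missing piece: static per-element shapes, so Keras never sees
# an unknown TensorShape during tracing.
output_shapes = (((seq_len,), (seq_len,)), ((seq_len,), (seq_len,)))

ds = tf.data.Dataset.from_generator(
    data_gen, output_types=output_types, output_shapes=output_shapes
).batch(2)

(x, att), _ = next(iter(ds))
print(x.shape)  # (2, 8)
```

On newer TensorFlow versions the same thing is expressed with a single `output_signature` of `tf.TensorSpec`s, but the `output_types`/`output_shapes` form matches the TF 2.3 used in the question.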
42
T5 model
Is it valid to evaluate a flan-t5 model on sequences longer than its max_length of 2048 tokens (assuming I have enough memory)?
https://stackoverflow.com/questions/76495456/is-it-valid-to-evaluate-a-flan-t5-model-on-sequences-longer-than-its-max-length
<p>I am evaluating the different flan-t5 models with few-shot chain-of-thought prompts which can go over the 2048 maximum token length. I am under the impression that, because T5 uses relative position encoding, it would be valid (make sense) to do zero-shot on sequences longer than 2048, provided that I can handle the quadratic memory scaling, but I wanted to double-check that this is indeed the case.</p> <p>The way I see it, the linear mappings only learn relative dependencies from the relative positional encodings, so evaluation should still be valid on longer sequences even if the model was not actually trained on sequences of that length. The only issue I can think of is that it would not have learned a pattern for relative dependencies longer than 2048.</p>
43
T5 model
Why does my trained t5-small model generate a mess after I save and load the checkpoint?
https://stackoverflow.com/questions/79205866/why-my-trained-t5-small-model-generate-a-mess-after-i-saved-and-loaded-the-check
<p>I was distilling my student model (base model t5-small) based on a fine-tuned T5-xxl. Here is the config:</p> <pre><code>student_model = AutoModelForSeq2SeqLM.from_pretrained(
    args.student_model_name_or_path,
    torch_dtype=torch.float32,
    device_map=&quot;auto&quot;,
    cache_dir=args.cache_dir,
    quantization_config=quantization_config,
)
</code></pre> <p>I saved the trained model using</p> <pre><code>output_dir = f&quot;checkpoint&quot;
student_model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
</code></pre> <p>But when I try to load the checkpoint using</p> <pre><code>tokenizer = AutoTokenizer.from_pretrained(args.model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(
    &quot;checkpoint&quot;,
    torch_dtype=torch.float32,
    device_map=&quot;auto&quot;,
    cache_dir=args.cache_dir,
    quantization_config=quantization_config
)
</code></pre> <p>it says &quot;Some weights of the model checkpoint at checkpoints were not used when initializing T5ForConditionalGeneration: ...&quot; and the outputs of the model are really a mess. I have been trying to figure it out but have no clues so far.</p> <p>I thought it was a problem with the quant_method, since I trained the model with &quot;load_8_bits&quot;, but adding quantization_config didn't help.</p>
44
T5 model
T5 Encoder model output all zeros?
https://stackoverflow.com/questions/67455305/t5-encoder-model-output-all-zeros
<p>I am trying out a project where I use the T5EncoderModel from HuggingFace in order to obtain hidden representations of my input sentences. I have 100K sentences which I tokenize and pad as follows:</p> <pre><code> for sentence in dataset[original]: sentence = tokenizer(sentence, max_length=40, padding='max_length', return_tensors='tf', truncation= True) original_sentences.append(sentence.input_ids) org_mask.append(sentence.attention_mask) </code></pre> <p>This gives me the right outputs and tokenizes everything decently. The problem I achieve is when I am trying to actually train the model. The setup is a bit complex and is taken from <a href="https://keras.io/examples/vision/semantic_image_clustering/" rel="nofollow noreferrer">https://keras.io/examples/vision/semantic_image_clustering/</a> which I am trying to apply to text.</p> <p>The set-up for training is as follows:</p> <pre><code>def create_encoder(rep_dim): encoder = TFT5EncoderModel.from_pretrained('t5-small', output_hidden_states=True) encoder.trainable = True original_input = Input(shape=(max_length), name = 'originalIn', dtype=tf.int32) augmented_input = Input(shape=(max_length), name = 'originalIn', dtype=tf.int32) concat = keras.layers.Concatenate(axis=1)([original_input, augmented_input]) #Take 0-index because it returns a TFBERTmodel type, and 0 returns a tensor encoded = encoder(input_ids=concat)[0] #This outputs shape: [sentences, max_length, encoded_dims] output = Dense(rep_dim, activation='relu')(encoded) return encoder </code></pre> <p>This function is fed into the ReprensentationLearner class from the above link as such:</p> <pre><code>class RepresentationLearner(keras.Model): def __init__( self, encoder, projection_units, temperature=0.8, dropout_rate=0.1, l2_normalize=False, **kwargs ): super(RepresentationLearner, self).__init__(**kwargs) self.encoder = encoder # Create projection head. 
self.projector = keras.Sequential( [ layers.Dropout(dropout_rate), layers.Dense(units=projection_units, use_bias=False), layers.BatchNormalization(), layers.ReLU(), ] ) self.temperature = temperature self.l2_normalize = l2_normalize self.loss_tracker = keras.metrics.Mean(name=&quot;loss&quot;) @property def metrics(self): return [self.loss_tracker] def compute_contrastive_loss(self, feature_vectors, batch_size): num_augmentations = tf.shape(feature_vectors)[0] // batch_size if self.l2_normalize: feature_vectors = tf.math.l2_normalize(feature_vectors, -1) # The logits shape is [num_augmentations * batch_size, num_augmentations * batch_size]. logits = ( tf.linalg.matmul(feature_vectors, feature_vectors, transpose_b=True) / self.temperature ) # Apply log-max trick for numerical stability. logits_max = tf.math.reduce_max(logits, axis=1) logits = logits - logits_max # The shape of targets is [num_augmentations * batch_size, num_augmentations * batch_size]. # targets is a matrix consits of num_augmentations submatrices of shape [batch_size * batch_size]. # Each [batch_size * batch_size] submatrix is an identity matrix (diagonal entries are ones). targets = tf.tile(tf.eye(batch_size), [num_augmentations, num_augmentations]) # Compute cross entropy loss return keras.losses.categorical_crossentropy( y_true=targets, y_pred=logits, from_logits=True ) def call(self, inputs): features = self.encoder(inputs[0])[0] # Apply projection head. 
return self.projector(features[0]) def train_step(self, inputs): batch_size = tf.shape(inputs)[0] # Run the forward pass and compute the contrastive loss with tf.GradientTape() as tape: feature_vectors = self(inputs, training=True) loss = self.compute_contrastive_loss(feature_vectors, batch_size) # Compute gradients trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Update loss tracker metric self.loss_tracker.update_state(loss) # Return a dict mapping metric names to current value return {m.name: m.result() for m in self.metrics} def test_step(self, inputs): batch_size = tf.shape(inputs)[0] feature_vectors = self(inputs, training=False) loss = self.compute_contrastive_loss(feature_vectors, batch_size) self.loss_tracker.update_state(loss) return {&quot;loss&quot;: self.loss_tracker.result()} </code></pre> <p>In order to train it, I use the Colab TPU and train it as such:</p> <pre><code>with strategy.scope(): encoder = create_encoder(rep_dim) training_model = RepresentationLearner(encoder=encoder, projection_units=128, temperature=0.1) lr_scheduler = keras.experimental.CosineDecay(initial_learning_rate=0.001, decay_steps=500, alpha=0.1) training_model.compile(optimizer=tfa.optimizers.AdamW(learning_rate=lr_scheduler, weight_decay=0.0001)) history = training_model.fit(x = [original_train, augmented_train], batch_size=32*8, epocs = 10) training_model.save_weights('representation_learner.h5', overwrite=True) </code></pre> <p>Note that I am giving my model two inputs. When I predict on my input data, I get all zeros, and I can not seem to understand why. 
I predict as follows:</p> <pre><code>training_model.load_weights('representation_learner.h5') feature_vectors= training_model.predict([[original_train, augmented_train]], verbose = 1) </code></pre> <p>And the output is:</p> <pre><code>array([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], dtype=float32) </code></pre> <p>With a way too large shape of (1000000, 128)</p>
45
T5 model
How to denoise text using T5?
https://stackoverflow.com/questions/76186015/how-to-denoise-text-using-t5
<p>I'm trying to denoise text using a T5 model following <a href="https://huggingface.co/docs/transformers/v4.28.1/en/model_doc/t5#transformers.T5ForConditionalGeneration" rel="nofollow noreferrer">the Huggingface doc:</a></p> <pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained(&quot;t5-small&quot;)
model = T5ForConditionalGeneration.from_pretrained(&quot;t5-small&quot;)

input_ids = tokenizer(&quot;The &lt;extra_id_0&gt; walks in &lt;extra_id_1&gt; park&quot;, return_tensors=&quot;pt&quot;).input_ids
labels = tokenizer(&quot;&lt;extra_id_0&gt; cute dog &lt;extra_id_1&gt; the &lt;extra_id_2&gt;&quot;, return_tensors=&quot;pt&quot;).input_ids

# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.item()
</code></pre> <p>But I can't figure out how to get the actual text that corresponds to the masked input. They only show how to get the loss and mention</p> <blockquote> <p>the forward function automatically creates the correct decoder_input_ids</p> </blockquote> <p>I tried the following:</p> <pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained(&quot;t5-small&quot;)
model = T5ForConditionalGeneration.from_pretrained(&quot;t5-small&quot;)

input_ids = tokenizer(&quot;The &lt;extra_id_0&gt; walks in &lt;extra_id_1&gt; park&quot;, return_tensors=&quot;pt&quot;).input_ids
labels = tokenizer(&quot;&lt;extra_id_0&gt; cute dog &lt;extra_id_1&gt; the &lt;extra_id_2&gt;&quot;, return_tensors=&quot;pt&quot;).input_ids

outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
tokenizer.batch_decode(logits.argmax(-1))
</code></pre> <p>But the output doesn't make sense:</p> <pre><code>['&lt;extra_id_0&gt; park park&lt;extra_id_1&gt; the&lt;extra_id_2&gt; park']
</code></pre> <p><strong>I don't care about the loss, nor do I have labels in my setting. I just have text with masked tokens that I need to fill:</strong></p> <pre><code>my_masked_text = [
    &quot;The kid went to the [MASK]&quot;,
    &quot;The dog likes [MASK] and also [MASK]&quot;
]
</code></pre>
<p>In the <a href="https://huggingface.co/docs/transformers/v4.29.1/model_doc/t5#inference" rel="nofollow noreferrer">docs for T5 (§ Inference)</a> there is an example of what you're looking for.</p> <pre class="lang-py prettyprint-override"><code>from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained(&quot;t5-small&quot;) model = T5ForConditionalGeneration.from_pretrained(&quot;t5-small&quot;) input_ids = tokenizer(&quot;The &lt;extra_id_0&gt; walks in &lt;extra_id_1&gt; park&quot;, return_tensors=&quot;pt&quot;).input_ids sequence_ids = model.generate(input_ids) sequences = tokenizer.batch_decode(sequence_ids) </code></pre> <p>The resulting <code>sequences</code> is the following:</p> <pre class="lang-py prettyprint-override"><code>['&lt;pad&gt;&lt;extra_id_0&gt; park offers&lt;extra_id_1&gt; the&lt;extra_id_2&gt; park.&lt;/s&gt;'] </code></pre> <p>This is to be interpreted as follows:</p> <pre class="lang-py prettyprint-override"><code>'&lt;pad&gt;' # Marks beginning of output sequence '&lt;extra_id_0&gt; park offers' # &lt;- model prediction for first blank '&lt;extra_id_1&gt; the' # &lt;- model prediction for second blank '&lt;extra_id_2&gt; park.&lt;/s&gt;' # ignore (there was no third blank) </code></pre> <p>So the model filled in the blanks as</p> <p>&quot;The <em><strong>park offers</strong></em> walks in <em><strong>the</strong></em> park&quot;</p> <hr /> <p>For your examples, that means you'd do something like the following (haven't tested this, but it should work modulo typos):</p> <pre class="lang-py prettyprint-override"><code>my_masked_text = [ &quot;The kid went to the &lt;extra_id_0&gt;.&quot;, &quot;The dog likes &lt;extra_id_0&gt; and also &lt;extra_id_1&gt;.&quot; ] inputs = tokenizer( my_masked_text, # tokenizer will encode each string in your list padding=&quot;longest&quot;, # need to pad if encoded strings are of different lengths return_tensors=&quot;pt&quot;, ) sequence_ids = 
model.generate( input_ids=inputs[&quot;input_ids&quot;], attention_mask=inputs[&quot;attention_mask&quot;] ) sequences = tokenizer.batch_decode(sequence_ids) </code></pre> <p>Then you should have a decoded predictions list <code>sequences</code> like the example above.</p>
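<p>If you also want the filled-in sentence rather than the raw decoded string, the span predictions can be spliced back into the masked input. The helper below is my own sketch (not part of the Transformers API), assuming the default <code>&lt;extra_id_N&gt;</code> sentinel format shown above:</p>

```python
import re

def fill_blanks(masked_text, decoded):
    """Splice T5 span predictions back into the masked input.

    `decoded` is one entry of `tokenizer.batch_decode(sequence_ids)`,
    e.g. '<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>'.
    """
    body = decoded.replace("<pad>", "").replace("</s>", "")
    # re.split with a capture group alternates text and sentinel ids:
    # [prefix, '0', text0, '1', text1, ...]
    pieces = re.split(r"<extra_id_(\d+)>", body)
    fills = {int(pieces[i]): pieces[i + 1].strip()
             for i in range(1, len(pieces) - 1, 2)}
    # Replace each sentinel in the input with its predicted span;
    # sentinels the model did not fill are left untouched.
    return re.sub(r"<extra_id_(\d+)>",
                  lambda m: fills.get(int(m.group(1)), m.group(0)),
                  masked_text)
```

<p>For the example above, <code>fill_blanks(&quot;The &lt;extra_id_0&gt; walks in &lt;extra_id_1&gt; park&quot;, sequences[0])</code> returns <code>&quot;The park offers walks in the park&quot;</code> (the spurious <code>&lt;extra_id_2&gt;</code> prediction is simply ignored because that sentinel never appears in the input).</p>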
46
T5 model
use_cuda is set True even though it was specified as False T5
https://stackoverflow.com/questions/74138756/use-cuda-is-set-true-even-though-it-was-specified-as-false-t5
<p>I am trying to train a T5 model using <code>simpletransformers</code>. Here is my code:</p> <pre><code>from simpletransformers.t5 import T5Model model_args = { &quot;max_seq_length&quot;: MAX_LEN, &quot;train_batch_size&quot;: 8, &quot;eval_batch_size&quot;: 8, &quot;num_train_epochs&quot;: 1, &quot;evaluate_during_training&quot;: True, &quot;evaluate_during_training_steps&quot;: 15000, &quot;evaluate_during_training_verbose&quot;: True, &quot;learning_rate&quot;: 1e-4, &quot;evaluate_generated_text&quot;: True, &quot;use_multiprocessing&quot;: False, &quot;fp16&quot;: False, &quot;use_cuda&quot;:False, &quot;save_steps&quot;: -1, &quot;save_eval_checkpoints&quot;: False, &quot;save_model_every_epoch&quot;: False, &quot;reprocess_input_data&quot;: True, &quot;overwrite_output_dir&quot;: True, &quot;wandb_project&quot;: None } model = T5Model('t5', 't5-base', args=model_args) </code></pre> <p>But I am getting this error:</p> <pre><code>ValueError: 'use_cuda' set to True when cuda is unavailable.Make sure CUDA is available or set `use_cuda=False`. </code></pre> <p>I have specified both <code>use_cuda=False</code> and <code>fp16 =False</code>, not sure why I am getting this error. I am running my code on Jupyter and I tried restarting the kernel and re-running the code but with no hope.</p>
<p>You need to pass in the arg <code>use_cuda</code> to the call to the <code>T5Model</code> constructor, not in your <code>model_args</code> dict.</p> <pre class="lang-py prettyprint-override"><code>from simpletransformers.t5 import T5Model model_args = {...} model = T5Model('t5', 't5-base', args=model_args, use_cuda=False) </code></pre>
47
T5 model
How to get the logits for the T5 model when using the `generate` method for inference?
https://stackoverflow.com/questions/73781510/how-to-get-the-logits-for-the-t5-model-when-using-the-generate-method-for-infe
<p>I'm currently using HuggingFace's T5 implementation for text generation purposes. More specifically, I'm using the <code>T5ForConditionalGeneration</code> to solve a text classification problem as generation.</p> <p>The model's performance is overall very satisfactory after training, but what I am wondering is how I can get the logits for generation?</p> <p>I'm currently performing inference as is suggested in the documentation via <code>model.generate(**tokenizer_outputs)</code>, but this simply outputs the IDs themselves without anything else.</p> <p>The reason why I want the logits is because I want to measure the model's confidence of generation. I'm not 100% certain if my approach is correct, but I'm thinking that if I can get the logit values of each generated token and average them, I could get the overall confidence score of the generated sequence.</p> <p>Would anybody know how I could do this? Thanks.</p>
<p>I was struggling with this because I wasn't familiar with how the Transformers library works, but after looking at the source code it turns out all you have to do is set the arguments <code>output_scores</code> and <code>return_dict_in_generate</code> to <code>True</code> when calling <code>generate</code>.</p> <p>For more information, take a look at the method <a href="https://github.com/huggingface/transformers/blob/ba9da49aa298345022f35a0b7be44ce4c72b85c2/src/transformers/generation/utils.py#L999" rel="nofollow noreferrer"><code>transformers.generation.utils.GenerationMixin.generate</code></a>.</p>
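<p>With those two flags set, <code>outputs.scores</code> is a tuple with one logits tensor per generated token. To turn that into a single confidence number, one option (echoing the averaging idea from the question — the library itself prescribes no particular scheme) is to average the softmax probability of each step's top token. A minimal sketch, using plain Python lists in place of the logit tensors so the arithmetic is explicit:</p>

```python
import math

def mean_max_prob(scores):
    """Average, over generation steps, the softmax probability of the
    token the model actually picked (the argmax).

    `scores` plays the role of `outputs.scores` from
    `model.generate(..., output_scores=True, return_dict_in_generate=True)`:
    one vector of vocabulary logits per generated token.
    """
    step_probs = []
    for logits in scores:
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]  # numerically stable softmax
        step_probs.append(max(exps) / sum(exps))
    return sum(step_probs) / len(step_probs)
```

<p>With real tensors the same computation is <code>torch.softmax</code> over the last dimension of each entry in <code>outputs.scores</code>, taking the probability of the chosen token at each step and averaging.</p>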
48
T5 model
How to use forward() method instead of model.generate() for T5 model
https://stackoverflow.com/questions/67328345/how-to-use-forward-method-instead-of-model-generate-for-t5-model
<p>For my use case, I need to use the model.forward() instead of the model.generate() method i.e instead of the below code</p> <pre><code>outs = model.model.generate(input_ids=batch['source_ids'], attention_mask=batch['source_mask'], output_scores=True, max_length=model.model_arguments.max_output_seq_length) preds_cleaned = [model.tokenizer.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) for ids in outs] </code></pre> <p>I need to use</p> <pre><code>model_outputs = model.model( input_ids=batch[&quot;source_ids&quot;], attention_mask=batch[&quot;source_mask&quot;], labels=lm_labels.to(device), decoder_attention_mask=batch['target_mask'] ) logits = model_outputs.logits softmax_logits = m(logits) max_logits = torch.max(softmax_logits, dim=2) </code></pre> <p>decoding these logits gives unprocessed text that has many issues like repetition of words at the end etc. What do I need to do to get the same result as model.generate() ?</p>
<p>The two methods do something completely different.</p> <p>Calling the model (which means the <code>forward</code> method) uses the <code>labels</code> for teacher forcing. This means the inputs to the decoder are the <code>labels</code> shifted by one (see <a href="https://huggingface.co/transformers/model_doc/t5.html#training" rel="noreferrer">documentation</a>). With teacher forcing, the decoder always gets the ground-truth token as input in the next step, no matter what the prediction was. Teacher forcing is used for model training; all steps are fully differentiable.</p> <p>When you call the <code>generate</code> method, the model is used in an autoregressive fashion: any token it generates is fed back as the input at the next step. However, selecting the token is a &quot;hard&quot; decision, and the gradient cannot be propagated through this decision, so the <code>generate</code> method cannot be used for training. The output is coherent because the decoder reacts to what was previously generated.</p> <p>With teacher forcing, the model might prefer to generate a different token and continue consistently with it. However, it cannot, because it is forced to continue as if it had generated the token that is actually in the <code>labels</code> argument. This is why you observe the incoherent output (which was never intended to be final output, only to be used for computing the training loss).</p>
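<p>The distinction can be illustrated with a toy next-token predictor — purely illustrative, with a lookup table standing in for the decoder, nothing T5-specific:</p>

```python
def toy_next_token(prefix):
    # A deterministic stand-in for the decoder: maps a prefix of tokens
    # to the next token.
    table = {
        ("<s>",): "the",
        ("<s>", "the"): "cat",
        ("<s>", "the", "cat"): "sat",
    }
    return table.get(tuple(prefix), "<unk>")

def autoregressive_decode(max_len=3):
    # What `generate` does: each prediction is fed back as input,
    # so the continuation is always consistent with what was produced.
    seq = ["<s>"]
    for _ in range(max_len):
        seq.append(toy_next_token(seq))
    return seq[1:]

def teacher_forced_inputs(labels):
    # What `forward(labels=...)` does: decoder inputs are the labels
    # shifted right, regardless of what the model predicted before.
    return ["<s>"] + labels[:-1]
```

<p>In training, the decoder runs once on <code>teacher_forced_inputs(labels)</code> in a single differentiable pass; at inference, <code>generate</code> runs the feedback loop above, which is why only its output reads coherently.</p>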
49
T5 model
Pytorch T5 training loss not changing
https://stackoverflow.com/questions/76337204/pytorch-t5-training-loss-not-changing
<p>I am trying to fine tune a T5 model for more accurate summarization, but my loss is very high and does not change with each epoch. I have tried increasing the learning rate, but the model still does not train. It seems like there is some issue with the code since the loss doesn't change at all. My input texts are very large, but I thought this would be fine given that T5 can already be used for summarization.</p> <p>training code:</p> <pre><code>import torch from torchtext.models import T5_BASE_GENERATION, T5Transform from torchtext.prototype.generate import GenerationUtils from torch.utils.data import DataLoader, Dataset import torch.nn.functional as F import json import os padding_idx = 0 eos_idx = 0 max_seq_len = 65536 #16384 t5_sp_model_path = &quot;/t5_tokenizer_base.model&quot; # Define your custom dataset class class CustomDataset(Dataset): def __init__(self, data): self.data = data def __getitem__(self, index): example = self.data[index] input_text = example[&quot;input_text&quot;] target_text = example[&quot;target_text&quot;] return input_text, target_text def __len__(self): return len(self.data) # Load labeled dataset train_data = [] # List of labeled training examples valid_data = [] # List of labeled validation examples labeled = json.loads(open(&quot;data.json&quot;, 'r').read()) datalen = len(labeled) valper = 0.2 train_ind = range(datalen - int(datalen * valper)) valid_ind = range(datalen - int(datalen * valper), datalen) for x in train_ind: train_data.append(labeled[str(x)]) for x in valid_ind: valid_data.append(labeled[str(x)]) # Create instances of the T5 model and transformation if os.path.exists('model.pt'): t5_base = torch.load(&quot;model.pt&quot;) else: t5_base = T5_BASE_GENERATION.get_model() transform = T5Transform(sp_model_path=t5_sp_model_path, max_seq_len=max_seq_len, eos_idx=eos_idx, padding_idx=padding_idx) # Define the training parameters device = torch.device(&quot;cuda&quot;) # if torch.cuda.is_available() else &quot;cpu&quot;) 
batch_size = 1 num_epochs = 5 # Convert your training and validation data into tensors train_dataset = CustomDataset(train_data) valid_dataset = CustomDataset(valid_data) train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) valid_dataloader = DataLoader(valid_dataset, batch_size=batch_size) # Define the loss function and optimizer criterion = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(t5_base.parameters(), lr=0.001) # Training loop t5_base.train() t5_base.to(device) for epoch in range(num_epochs): total_loss = 0 total_batches = 0 for batch in train_dataloader: input_batch = [] target_batch = [] for input_text, target_text in zip(batch[0], batch[1]): input_batch.append('summarize: ' + input_text[:5000]) target_batch.append(target_text) #print(input_batch) #print(target_batch) #print('\n\n\n') input_batch = transform(input_batch) target_batch = transform(target_batch) input_batch = input_batch.to(device) target_batch = target_batch.to(device) optimizer.zero_grad() sequence_generator = GenerationUtils(t5_base) sequence_generator.device = device beam_size = 1 output = sequence_generator.generate(input_batch, eos_idx=eos_idx, num_beams=beam_size, max_length=30) #print(' ') #print(output) #print('OUTPUT:', transform.decode(output.tolist())) #print('TARGET:', target_batch) #print('\n') logits = output.float().squeeze() log_size = int(logits.numel()) #print(logits) #print(logits.shape) #print('\n\n') target = target_batch.view(-1) tar_size = int(target.numel()) #print(target) #print(target.shape) if log_size &gt; tar_size: #print('Target Shorter') target = torch.nn.functional.pad(target, (1, log_size - tar_size - 1)).float() else: #print('Input Shorter') logits = torch.nn.functional.pad(logits, (0, tar_size - log_size)).float() #print('\n') #print(logits) #print(logits.shape) #print(target) #print(target.shape) #print('\n\n') loss = criterion(logits, target) #print('LOSS:', loss) loss.requires_grad = True loss.backward() 
optimizer.step() #print(f&quot;Completed Batch #{total_batches}&quot;) #print('\n') total_loss += loss.item() total_batches += 1 #print(total_loss, total_batches) average_loss = total_loss / total_batches print(f&quot;Epoch {epoch + 1}/{num_epochs}, Loss: {average_loss}&quot;) print('------------------------------------------------------') print('\n') torch.save(t5_base, &quot;model.pt&quot;) </code></pre> <p>result:</p> <pre><code>Epoch 1/5, Loss: 1139705410.5365853 ------------------------------------------------------ Epoch 2/5, Loss: 1139705410.5365853 ------------------------------------------------------ Epoch 3/5, Loss: 1139705410.5365853 ------------------------------------------------------ Epoch 4/5, Loss: 1139705410.5365853 ------------------------------------------------------ Epoch 5/5, Loss: 1139705410.5365853 ------------------------------------------------------ </code></pre> <p>sample data.json (news article summary):</p> <pre><code>{ &quot;0&quot;: { &quot;input_text&quot;: &quot;Ukrainian President Volodymyr Zelenskiy attended a summit of the Arab League in Saudi Arabia on Friday to canvas support for his people, while Saudi Crown Prince Mohammed bin Salman expressed his readiness to mediate in the war between Moscow and Kyiv. Also at the Jeddah gathering, Arab leaders warmly welcomed back into their fold Syria\u2019s President Bashar al-Assad \u2014 who has received heavy support from Russia in his country\u2019s civil war \u2014 following a decade of isolation. \u201cWe reaffirm the kingdom\u2019s readiness to continue mediating efforts between Russia and Ukraine, and to support all international efforts aimed at resolving the crisis politically in a way that contributes to achieving security,\u201d the Saudi Crown Prince said in his opening speech. Prince Mohammed has mediated in the conflict before. 
Zelenskiy, who was also due to attend a summit of the G7 leaders in the Japanese city of Hiroshima this weekend, thanked Saudi Arabia for its past help and said delegates would each receive the text of his 10-point peace plan. He asked them to work with Ukraine directly without intermediaries. Gulf states have tried to remain neutral in the Ukraine conflict despite Western pressure on Gulf oil producers to help isolate Russia, a fellow OPEC+ member. Saving people In his address to the summit, Zelenskiy said some countries including members of the Arab League preferred to \u201cturn a blind eye\u201d to Russia\u2019s illegal annexation of Ukrainian land and to its jailing of some Ukrainians during the 15-month war. \u201cI am sure we can all be united in saving people from the cages of Russian prisons,\u201d he said, speaking in English. Last year, in a diplomatic coup, Crown Prince Mohammed secured the release of 10 foreigners captured by Russia in Ukraine. The move was apparently made possible by his close ties with Russian President Vladimir Putin. \u201cThe Kingdom of Saudi Arabia plays a significant role and we are ready to take our cooperation to a new level,\u201d Zelenskiy said wrote on Twitter shortly after arriving in Jeddah. Saudi Arabia faced heavy criticism from the United States over an OPEC+ decision to cut oil production, seen as helping Russia to refill its coffers by boosting prices. Even though the October decision initially drew the ire of the United States and other Western countries, market dynamics since then have shown the cuts to be prudent. At a time when Russia\u2019s war on Ukraine has roiled global energy markets, the role the kingdom plays as the world\u2019s largest oil exporter has grown in importance to both Washington and Moscow. 
KYIV, Ukraine (AP) \u2014 Ukrainian President Volodymyr Zelenskyy addressed a summit of Arab leaders in Saudi Arabia on Friday before what a senior official said would be a trip to Japan for a meeting with the leaders of the world\u2019s most powerful democracies. Zelenskyy has in recent months made foreign trips to shore up diplomatic support for Ukraine\u2019s fight against Russia\u2019s full-scale invasion almost 15 months ago and solicit more military support. He earlier this week returned from a three-day trip to Italy, the Vatican, Germany, France and the United Kingdom. Ukraine and Russia are squaring up for a major and potentially decisive phase of the war as Kyiv prepares an expected counteroffensive. The conflict has been bogged down in a war of attrition in recent months amid bad weather. Zelenskyy\u2019s office said he was invited to attend the Arab League summit in Jeddah, where he met with Saudi Crown Prince Mohammed bin Salman before holding other bilateral meetings. They discussed Zelenskyy\u2019s peace plan, the security situation in Ukraine and possible investments in the reconstruction of the country, a presidential statement said. Zelenskyy also invited Prince Mohammed to visit Ukraine. Zelenskyy urged leaders at the summit to resist Moscow\u2019s influence and consider his peace proposals, which include the withdrawal of the Kremlin\u2019s forces from occupied areas of Ukraine. \u201cI\u2019m more than sure that none of you will agree to surrender a third of your country to the invaders,\u201d Zelenskyy said in English. \u201cAnother priority is the protection of the Muslim community of Ukraine,\u201d Zelenskyy said. \u201cCrimea was the first to suffer from the Russian occupation, and most of those who suffer repression in occupied Crimea are Muslims.\u201d Crimean Tatar leader Mustafa Dzhemilev accompanied Zelenskyy on the visit. 
Zelenskyy will later travel to a Group of Seven summit in Japan, where leaders of the world\u2019s most powerful democracies aim to step up punishment on Russia for its full-scale invasion of Ukraine, according to Oleksiy Danilov, the secretary of Ukraine\u2019s National Security and Defense Council. However, Danilov\u2019s office later posted a statement backtracking on his announcement and saying Zelenskyy would appear at the G-7 summit via video. Zelenskyy\u2019s movements are kept secret for security reasons. Meanwhile, Russian forces kept up their long-range bombardment of Ukrainian targets while drones reportedly damaged train lines behind their front line. About 130 meters (430 feet) of railway track were damaged and trains were halted for hours after an explosion derailed eight cars of a freight train carrying grain in Russia-occupied Crimea, Russian state media reported Friday. Thursday\u2019s blast prompted renewed suspicions about possible Ukrainian saboteur activity behind Russian lines. Train traffic was also halted in northern Crimea on Thursday night after a drone hit a railway track near the town of Dzhankoi, Russia\u2019s Baza Telegram channel reported. Sergei Aksyonov, the Kremlin-appointed head of Crimea, said in a separate post that four Ukrainian drones were shot down overnight in the peninsula\u2019s north. Aksyonov claimed there was no damage or casualties. Russia overnight fired cruise missiles, drones and artillery at targets across Ukraine, killing two civilians, officials said Friday. The attacks included an air assault on Kyiv for the second straight day and the 10th time in three weeks. The Kremlin\u2019s forces also took aim at central, eastern and southern Ukraine, and the western Lviv region near the border with Poland. Russia launched 22 Iranian-made Shahed drones and six Kalibr cruise missiles during the night, the Ukrainian Air Force said. It said air defenses downed 16 drones and three missiles. 
The Russian shelling killed two civilians and wounded nine others in Ukraine\u2019s eastern Donetsk region, said its governor, Pavlo Kyrylenko. The missile attacks that have intensified recently aim to \u201cdisrupt Ukraine\u2019s plans and preparations for active military operations during the spring-summer campaign,\u201d according to a statement from Ukraine\u2019s intelligence agency, published on Telegram. The targets are Ukraine\u2019s military control points and barracks, supply routes and the places where ammunition, equipment, fuel are stored, it said. On Friday, the United Nations said operations to ship Ukrainian grain were \u201cpartially restarting,\u201d two days after Russia gave a green light to extend the deal for two months. The U.N. also urged a swift return to the previous tempo of ship arrivals and departures from all three Black Sea ports and inspections of their cargo. U.N. associate spokesperson Stephanie Tremblay said the Joint Coordination Center, which includes representatives from the four parties involved in the deal \u2013 Russia, Ukraine, Turkey and the United Nations -- approved the registration Friday of six new vessels to participate in the grain shipments. Nine applications to participate remain pending, she said. No ships are currently loading at any of the three ports, Tremblay said, but inspection teams from the center checked and cleared three new vessels Friday to proceed to the ports of Odesa and Chornomorsk. ___ Hanna Arhirova in Kyiv and Edith M. Lederer at the United Nations contributed to this report. ___ Follow AP\u2019s coverage of the war in Ukraine at https://apnews.com/hub/russia-ukraine President Zelenskyy makes a stop in Saudi Arabia on his way to Japan to meet with the G7. At the G7 increased sanctions against Russia are on the table, but negotiations still continue. A look at the battle in Bakhmut plus Crimean Tartars; expelled by the Soviets decades ago, now looking to return. 
TALLINN, Estonia (AP) \u2014 While the world awaits Ukraine\u2019s spring battlefield offensive, its leader, Volodymyr Zelenskyy, has launched a diplomatic one. In the span of a week, he's dashed to Italy, the Vatican, Germany, France and Britain to shore up support for defending his country. On Friday, he was in Saudi Arabia to meet with Arab leaders, some of whom are allies with Moscow. President Vladimir Putin, meanwhile, was in the southern Russian city of Pyatigorsk, chairing a meeting with local officials, sitting at a large table at a distance from the other attendees. The Russian president has faced unprecedented international isolation, with an International Criminal Court arrest warrant hanging over his head and clouding the prospects of traveling to many destinations, including those viewed as Moscow's allies. With his invasion of Ukraine, \u201cPutin took a gamble and lost really, really big time,\u201d said Theresa Fallon, director of the Brussels-based Centre for Russia Europe Asia Studies. \u201cHe is an international pariah, really.\u201d It was only 10 years ago when Putin stood proudly among his peers at the time -\u2013 Barack Obama, Angela Merkel and Shinzo Abe \u2013 at a Group of Eight summit in Northern Ireland. Russia has since been kicked out of the group, which consists of Canada, France, Germany, Italy, Japan, Britain and the United States, for illegally annexing Crimea in 2014. Now it appears to be Ukraine\u2019s turn in the spotlight. There were conflicting messages from Kyiv whether Zelenskyy would attend the G7 in Japan on Sunday. The secretary of Ukraine\u2019s National Security and Defense Council said on national television the president would be there, but the council later walked back those remarks, saying Zelenskyy would join via video link. The president\u2019s office would not confirm either way for security reasons. But whether in person or via video, it would be of great symbolic and geopolitical significance. 
\u201cIt conveys the fact that the G7 continues to strongly support Ukraine,\u201d said Nigel Gould-Davies, senior fellow for Russia and Eurasia at the International Institute for Strategic Studies. \u201cIt\u2019s a visible marker of the continued commitment of the most highly industrialized and highly developed countries in the world.\u201d Story continues It also comes at a time when the optics are just not in the Kremlin\u2019s favor. There\u2019s uncertainty over whether Putin can travel to South Africa in August for a summit of the BRICS nations of Brazil, Russia, India, China and South Africa. Moscow has long showcased the alliance as an alternative to the West\u2019s global dominance, but this year it is already proving awkward for the Kremlin. South Africa, the host of the summit, is a signatory to the ICC and is obligated to comply with the arrest warrant on war crimes charges. South Africa has not announced that Putin will definitely come to the summit but has been planning for his possible arrival. South African President Cyril Ramaphosa has appointed an inter-ministerial committee, led by Deputy President Paul Mashatile, to consider South Africa\u2019s options with regard to its ICC commitment over Putin\u2019s possible trip. While it is highly unlikely the Russian president would be arrested there if he decides to go, the public debate about whether he can is in itself \u201can unwelcome development whose impact should not be underestimated,\u201d according to Gould-Davies. Then there are Moscow\u2019s complicated relations with its own neighbors. Ten days ago, Putin projected the image of solidarity, with leaders of Armenia, Belarus and Central Asian states standing beside him at a Victory Day military parade on Red Square. 
This week, however, the leaders of Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan and Uzbekistan flocked to China and met with leader Xi Jinping at a summit that highlighted the erosion of Russia\u2019s influence in the region as Beijing seeks to make economic inroads into Central Asia. Xi is using the opportunity \u201cof a weakened Russia, a distracted Russia, almost a pariah-state Russia to increase (China\u2019s) influence in the region,\u201d Fallon said. Putin\u2019s effort this month to shore up more friends in the South Caucasus by scrapping visa requirements for Georgian nationals and lifting a four-year ban on direct flights to the country also didn\u2019t appear to go as smoothly as the Kremlin may have hoped. The first flight that landed Friday in Georgia was met with protests, and the country\u2019s pro-Western president has decried the move as a provocation. Zelenskyy\u2019s ongoing world tour can be seen as a success on many levels. Invitations from other world leaders is a sign they think Ukraine is \&quot;going to come out of the war in good shape,\u201d said Phillips P. O\u2019Brien, professor of strategic studies at the University of St. Andrews in Scotland. Otherwise, \u201cit simply wouldn\u2019t be happening,\u201d he said. \&quot;No one would want to be around a leader they think is going to be defeated and a country that\u2019s going to collapse.\u201d By contrast, the ICC warrant might make it harder for leaders even to visit Putin in Moscow because \u201cit\u2019s not a good look to visit an indicted war criminal,\u201d Gould-Davies said. European leaders promised him an arsenal of missiles, tanks and drones, and even though no commitment has been made on fighter jets \u2013 something Kyiv has wanted for months \u2013 a conversation about finding ways to do it has begun. 
His appearance Friday at the Arab League summit in Jeddah, a Saudi Arabian port on the Red Sea, highlighted Kyiv\u2019s effort to spread its plight for support far and wide, including in some countries whose sympathies are with Russia. In addition to Zelenskyy, Saudi Crown Prince Mohammed bin Salman also welcomed Syrian President Bashar Assad at the summit after a 12-year suspension \u2013 something analysts say aligns with Moscow\u2019s interests. Anna Borshchevskaya, a senior fellow at the Washington Institute who focuses on Russia\u2019s policy in the Middle East, called it \u201canother testament to the fact that Russia is not isolated globally for its invasion of Ukraine, that the Middle East is one part of the world where Russia is able to find avenues to avoid global isolation \u2013 both ideological isolation but also economic isolation.\u201d She added that Zelenskyy and his government deserve credit for \u201cin recognizing that they need to reach out more to improve their diplomatic efforts in this part of the world and other parts of the world where the Russian narrative resonates.\u201d Kyiv could expect that \u201cthis is the beginning of a larger shift in perception that could eventually translate into potential support,\u201d Borshchevskaya said. Similarly, the Ukrainian president\u2019s participation in the G7 summit is \u201ca message to the rest of the world, to Russia and beyond, and the so-called Global South,\u201d Gould-Davies believes. There is a concern in the West over the extent to which some major developing economies \u2013 Brazil, South Africa and, to a degree, India \u2013 \u201care not criticizing, not condemning Russia and indeed in various ways are helping to mitigate the impact of sanctions on Russia,\u201d he said. \u201cCollectively, economically, they matter. 
So there is, I think, this felt need for a renewed diplomatic campaign to bring some of these most important states into the kind of the Western way of looking at these things,\u201d Gould-Davies said. ___ Associated Press writers Danica Kirka in London and Gerald Imray in Cape Town, South Africa, contributed. ___ Follow AP's coverage of the war in Ukraine at https://apnews.com/hub/russia-ukraine Syrian President Bashar al-Assad (L) is welcomed in Jeddah on the eve of the Arab League summit Ukrainian President Volodymyr Zelensky said he landed Friday in Saudi Arabia, host of an Arab League summit attended by long isolated Syrian President Bashar al-Assad, a close Russian ally. The previously unannounced visit is Zelensky's first to the Middle East since Moscow's invasion in February 2022, giving the Ukrainian leader an opportunity to address leaders in the region that has been far less united in its support of Kyiv than staunch Western allies. \&quot;Arrived in Saudi Arabia. I will speak at the Arab League summit,\&quot; Zelensky said on Twitter, adding he plans to meet with Saudi Crown Prince Mohammed bin Salman and other leaders. He arrived in the Red Sea coastal city of Jeddah one day after Assad, whose government is being readmitted to the Arab League after its suspension in 2011 over the brutal crackdown on pro-democracy demonstrators that led to civil war. The summit in Saudi Arabia comes at a time when the world's biggest oil exporter is flexing its diplomatic muscle across the Middle East and beyond. An Arab League official told AFP Zelenky's invitation came from Saudi Arabia, not the bloc.&quot;, &quot;target_text&quot;: &quot;Ukranian President Zelenskyy attends Arab League summit in Saudi Arabia&quot; } } </code></pre> <p>Any help would be greatly appreciated!</p>
<p>Based on your <a href="https://stackoverflow.com/questions/76320189/pytorch-calculate-loss-with-smaller-target-tensor">other question on this problem</a>, where you mentioned that you're new to <code>PyTorch</code>, my answer reflects the general approach I personally use when I need to work with a new machine-learning library/algorithm I am unfamiliar with.</p> <p>Since T5 is a popular algorithm, I tried to find a &quot;template&quot; showing how to fine-tune it for summarization, where my code represents an <strong>adaptation</strong> of this <a href="https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb#scrollTo=932p8NhxeNw4" rel="nofollow noreferrer">source</a>, which I found by reading the <a href="https://huggingface.co/docs/transformers/model_doc/t5#resources" rel="nofollow noreferrer">documentation on T5</a> of the <a href="https://huggingface.co/docs/transformers/index" rel="nofollow noreferrer"><code>transformers</code></a> library. Most importantly, I modified the <code>CustomDataset</code> class so it's compatible with the data you're working with; also, I modified some pieces of the code which returned warnings. Please note that the approach uses <code>PyTorch</code> for training but loads T5 using <code>transformers</code>.</p> <p>As shown below, the example code trains T5 &quot;successfully&quot; in terms of obtaining a lower loss over time. 
Of course you still need to adapt parts of the code (such as increasing <code>max_sequence_size</code> to your requirement, moving the computations to <code>GPU</code>, add the validation code, etc); nevertheless, I'm sure you will be able to take it from here using the example below, the aforementioned source, as well as the official user guides of <code>PyTorch</code> and <code>Transformers</code>!</p> <pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration import torch from torch.utils.data import Dataset, DataLoader input_data ={ &quot;0&quot;: { &quot;input_text&quot;: &quot;Ukrainian President Volodymyr Zelenskiy attended a summit of the Arab League in Saudi Arabia on Friday to canvas support for his people, while Saudi Crown Prince Mohammed bin Salman expressed his readiness to mediate in the war between Moscow and Kyiv. Also at the Jeddah gathering, Arab leaders warmly welcomed back into their fold Syria\u2019s President Bashar al-Assad \u2014 who has received heavy support from Russia in his country\u2019s civil war \u2014 following a decade of isolation. \u201cWe reaffirm the kingdom\u2019s readiness to continue mediating efforts between Russia and Ukraine, and to support all international efforts aimed at resolving the crisis politically in a way that contributes to achieving security,\u201d the Saudi Crown Prince said in his opening speech. Prince Mohammed has mediated in the conflict before. Zelenskiy, who was also due to attend a summit of the G7 leaders in the Japanese city of Hiroshima this weekend, thanked Saudi Arabia for its past help and said delegates would each receive the text of his 10-point peace plan. He asked them to work with Ukraine directly without intermediaries. Gulf states have tried to remain neutral in the Ukraine conflict despite Western pressure on Gulf oil producers to help isolate Russia, a fellow OPEC+ member. 
Saving people In his address to the summit, Zelenskiy said some countries including members of the Arab League preferred to \u201cturn a blind eye\u201d to Russia\u2019s illegal annexation of Ukrainian land and to its jailing of some Ukrainians during the 15-month war. \u201cI am sure we can all be united in saving people from the cages of Russian prisons,\u201d he said, speaking in English. Last year, in a diplomatic coup, Crown Prince Mohammed secured the release of 10 foreigners captured by Russia in Ukraine. The move was apparently made possible by his close ties with Russian President Vladimir Putin. \u201cThe Kingdom of Saudi Arabia plays a significant role and we are ready to take our cooperation to a new level,\u201d Zelenskiy said wrote on Twitter shortly after arriving in Jeddah. Saudi Arabia faced heavy criticism from the United States over an OPEC+ decision to cut oil production, seen as helping Russia to refill its coffers by boosting prices. Even though the October decision initially drew the ire of the United States and other Western countries, market dynamics since then have shown the cuts to be prudent. At a time when Russia\u2019s war on Ukraine has roiled global energy markets, the role the kingdom plays as the world\u2019s largest oil exporter has grown in importance to both Washington and Moscow. KYIV, Ukraine (AP) \u2014 Ukrainian President Volodymyr Zelenskyy addressed a summit of Arab leaders in Saudi Arabia on Friday before what a senior official said would be a trip to Japan for a meeting with the leaders of the world\u2019s most powerful democracies. Zelenskyy has in recent months made foreign trips to shore up diplomatic support for Ukraine\u2019s fight against Russia\u2019s full-scale invasion almost 15 months ago and solicit more military support. He earlier this week returned from a three-day trip to Italy, the Vatican, Germany, France and the United Kingdom. 
Ukraine and Russia are squaring up for a major and potentially decisive phase of the war as Kyiv prepares an expected counteroffensive. The conflict has been bogged down in a war of attrition in recent months amid bad weather. Zelenskyy\u2019s office said he was invited to attend the Arab League summit in Jeddah, where he met with Saudi Crown Prince Mohammed bin Salman before holding other bilateral meetings. They discussed Zelenskyy\u2019s peace plan, the security situation in Ukraine and possible investments in the reconstruction of the country, a presidential statement said. Zelenskyy also invited Prince Mohammed to visit Ukraine. Zelenskyy urged leaders at the summit to resist Moscow\u2019s influence and consider his peace proposals, which include the withdrawal of the Kremlin\u2019s forces from occupied areas of Ukraine. \u201cI\u2019m more than sure that none of you will agree to surrender a third of your country to the invaders,\u201d Zelenskyy said in English. \u201cAnother priority is the protection of the Muslim community of Ukraine,\u201d Zelenskyy said. \u201cCrimea was the first to suffer from the Russian occupation, and most of those who suffer repression in occupied Crimea are Muslims.\u201d Crimean Tatar leader Mustafa Dzhemilev accompanied Zelenskyy on the visit. Zelenskyy will later travel to a Group of Seven summit in Japan, where leaders of the world\u2019s most powerful democracies aim to step up punishment on Russia for its full-scale invasion of Ukraine, according to Oleksiy Danilov, the secretary of Ukraine\u2019s National Security and Defense Council. However, Danilov\u2019s office later posted a statement backtracking on his announcement and saying Zelenskyy would appear at the G-7 summit via video. Zelenskyy\u2019s movements are kept secret for security reasons. Meanwhile, Russian forces kept up their long-range bombardment of Ukrainian targets while drones reportedly damaged train lines behind their front line. 
About 130 meters (430 feet) of railway track were damaged and trains were halted for hours after an explosion derailed eight cars of a freight train carrying grain in Russia-occupied Crimea, Russian state media reported Friday. Thursday\u2019s blast prompted renewed suspicions about possible Ukrainian saboteur activity behind Russian lines. Train traffic was also halted in northern Crimea on Thursday night after a drone hit a railway track near the town of Dzhankoi, Russia\u2019s Baza Telegram channel reported. Sergei Aksyonov, the Kremlin-appointed head of Crimea, said in a separate post that four Ukrainian drones were shot down overnight in the peninsula\u2019s north. Aksyonov claimed there was no damage or casualties. Russia overnight fired cruise missiles, drones and artillery at targets across Ukraine, killing two civilians, officials said Friday. The attacks included an air assault on Kyiv for the second straight day and the 10th time in three weeks. The Kremlin\u2019s forces also took aim at central, eastern and southern Ukraine, and the western Lviv region near the border with Poland. Russia launched 22 Iranian-made Shahed drones and six Kalibr cruise missiles during the night, the Ukrainian Air Force said. It said air defenses downed 16 drones and three missiles. The Russian shelling killed two civilians and wounded nine others in Ukraine\u2019s eastern Donetsk region, said its governor, Pavlo Kyrylenko. The missile attacks that have intensified recently aim to \u201cdisrupt Ukraine\u2019s plans and preparations for active military operations during the spring-summer campaign,\u201d according to a statement from Ukraine\u2019s intelligence agency, published on Telegram. The targets are Ukraine\u2019s military control points and barracks, supply routes and the places where ammunition, equipment, fuel are stored, it said. 
On Friday, the United Nations said operations to ship Ukrainian grain were \u201cpartially restarting,\u201d two days after Russia gave a green light to extend the deal for two months. The U.N. also urged a swift return to the previous tempo of ship arrivals and departures from all three Black Sea ports and inspections of their cargo. U.N. associate spokesperson Stephanie Tremblay said the Joint Coordination Center, which includes representatives from the four parties involved in the deal \u2013 Russia, Ukraine, Turkey and the United Nations -- approved the registration Friday of six new vessels to participate in the grain shipments. Nine applications to participate remain pending, she said. No ships are currently loading at any of the three ports, Tremblay said, but inspection teams from the center checked and cleared three new vessels Friday to proceed to the ports of Odesa and Chornomorsk. ___ Hanna Arhirova in Kyiv and Edith M. Lederer at the United Nations contributed to this report. ___ Follow AP\u2019s coverage of the war in Ukraine at https://apnews.com/hub/russia-ukraine President Zelenskyy makes a stop in Saudi Arabia on his way to Japan to meet with the G7. At the G7 increased sanctions against Russia are on the table, but negotiations still continue. A look at the battle in Bakhmut plus Crimean Tartars; expelled by the Soviets decades ago, now looking to return. TALLINN, Estonia (AP) \u2014 While the world awaits Ukraine\u2019s spring battlefield offensive, its leader, Volodymyr Zelenskyy, has launched a diplomatic one. In the span of a week, he's dashed to Italy, the Vatican, Germany, France and Britain to shore up support for defending his country. On Friday, he was in Saudi Arabia to meet with Arab leaders, some of whom are allies with Moscow. President Vladimir Putin, meanwhile, was in the southern Russian city of Pyatigorsk, chairing a meeting with local officials, sitting at a large table at a distance from the other attendees. 
The Russian president has faced unprecedented international isolation, with an International Criminal Court arrest warrant hanging over his head and clouding the prospects of traveling to many destinations, including those viewed as Moscow's allies. With his invasion of Ukraine, \u201cPutin took a gamble and lost really, really big time,\u201d said Theresa Fallon, director of the Brussels-based Centre for Russia Europe Asia Studies. \u201cHe is an international pariah, really.\u201d It was only 10 years ago when Putin stood proudly among his peers at the time -\u2013 Barack Obama, Angela Merkel and Shinzo Abe \u2013 at a Group of Eight summit in Northern Ireland. Russia has since been kicked out of the group, which consists of Canada, France, Germany, Italy, Japan, Britain and the United States, for illegally annexing Crimea in 2014. Now it appears to be Ukraine\u2019s turn in the spotlight. There were conflicting messages from Kyiv whether Zelenskyy would attend the G7 in Japan on Sunday. The secretary of Ukraine\u2019s National Security and Defense Council said on national television the president would be there, but the council later walked back those remarks, saying Zelenskyy would join via video link. The president\u2019s office would not confirm either way for security reasons. But whether in person or via video, it would be of great symbolic and geopolitical significance. \u201cIt conveys the fact that the G7 continues to strongly support Ukraine,\u201d said Nigel Gould-Davies, senior fellow for Russia and Eurasia at the International Institute for Strategic Studies. \u201cIt\u2019s a visible marker of the continued commitment of the most highly industrialized and highly developed countries in the world.\u201d Story continues It also comes at a time when the optics are just not in the Kremlin\u2019s favor. 
There\u2019s uncertainty over whether Putin can travel to South Africa in August for a summit of the BRICS nations of Brazil, Russia, India, China and South Africa. Moscow has long showcased the alliance as an alternative to the West\u2019s global dominance, but this year it is already proving awkward for the Kremlin. South Africa, the host of the summit, is a signatory to the ICC and is obligated to comply with the arrest warrant on war crimes charges. South Africa has not announced that Putin will definitely come to the summit but has been planning for his possible arrival. South African President Cyril Ramaphosa has appointed an inter-ministerial committee, led by Deputy President Paul Mashatile, to consider South Africa\u2019s options with regard to its ICC commitment over Putin\u2019s possible trip. While it is highly unlikely the Russian president would be arrested there if he decides to go, the public debate about whether he can is in itself \u201can unwelcome development whose impact should not be underestimated,\u201d according to Gould-Davies. Then there are Moscow\u2019s complicated relations with its own neighbors. Ten days ago, Putin projected the image of solidarity, with leaders of Armenia, Belarus and Central Asian states standing beside him at a Victory Day military parade on Red Square. This week, however, the leaders of Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan and Uzbekistan flocked to China and met with leader Xi Jinping at a summit that highlighted the erosion of Russia\u2019s influence in the region as Beijing seeks to make economic inroads into Central Asia. Xi is using the opportunity \u201cof a weakened Russia, a distracted Russia, almost a pariah-state Russia to increase (China\u2019s) influence in the region,\u201d Fallon said. 
Putin\u2019s effort this month to shore up more friends in the South Caucasus by scrapping visa requirements for Georgian nationals and lifting a four-year ban on direct flights to the country also didn\u2019t appear to go as smoothly as the Kremlin may have hoped. The first flight that landed Friday in Georgia was met with protests, and the country\u2019s pro-Western president has decried the move as a provocation. Zelenskyy\u2019s ongoing world tour can be seen as a success on many levels. Invitations from other world leaders is a sign they think Ukraine is \&quot;going to come out of the war in good shape,\u201d said Phillips P. O\u2019Brien, professor of strategic studies at the University of St. Andrews in Scotland. Otherwise, \u201cit simply wouldn\u2019t be happening,\u201d he said. \&quot;No one would want to be around a leader they think is going to be defeated and a country that\u2019s going to collapse.\u201d By contrast, the ICC warrant might make it harder for leaders even to visit Putin in Moscow because \u201cit\u2019s not a good look to visit an indicted war criminal,\u201d Gould-Davies said. European leaders promised him an arsenal of missiles, tanks and drones, and even though no commitment has been made on fighter jets \u2013 something Kyiv has wanted for months \u2013 a conversation about finding ways to do it has begun. His appearance Friday at the Arab League summit in Jeddah, a Saudi Arabian port on the Red Sea, highlighted Kyiv\u2019s effort to spread its plight for support far and wide, including in some countries whose sympathies are with Russia. In addition to Zelenskyy, Saudi Crown Prince Mohammed bin Salman also welcomed Syrian President Bashar Assad at the summit after a 12-year suspension \u2013 something analysts say aligns with Moscow\u2019s interests. 
Anna Borshchevskaya, a senior fellow at the Washington Institute who focuses on Russia\u2019s policy in the Middle East, called it \u201canother testament to the fact that Russia is not isolated globally for its invasion of Ukraine, that the Middle East is one part of the world where Russia is able to find avenues to avoid global isolation \u2013 both ideological isolation but also economic isolation.\u201d She added that Zelenskyy and his government deserve credit for \u201cin recognizing that they need to reach out more to improve their diplomatic efforts in this part of the world and other parts of the world where the Russian narrative resonates.\u201d Kyiv could expect that \u201cthis is the beginning of a larger shift in perception that could eventually translate into potential support,\u201d Borshchevskaya said. Similarly, the Ukrainian president\u2019s participation in the G7 summit is \u201ca message to the rest of the world, to Russia and beyond, and the so-called Global South,\u201d Gould-Davies believes. There is a concern in the West over the extent to which some major developing economies \u2013 Brazil, South Africa and, to a degree, India \u2013 \u201care not criticizing, not condemning Russia and indeed in various ways are helping to mitigate the impact of sanctions on Russia,\u201d he said. \u201cCollectively, economically, they matter. So there is, I think, this felt need for a renewed diplomatic campaign to bring some of these most important states into the kind of the Western way of looking at these things,\u201d Gould-Davies said. ___ Associated Press writers Danica Kirka in London and Gerald Imray in Cape Town, South Africa, contributed. 
___ Follow AP's coverage of the war in Ukraine at https://apnews.com/hub/russia-ukraine Syrian President Bashar al-Assad (L) is welcomed in Jeddah on the eve of the Arab League summit Ukrainian President Volodymyr Zelensky said he landed Friday in Saudi Arabia, host of an Arab League summit attended by long isolated Syrian President Bashar al-Assad, a close Russian ally. The previously unannounced visit is Zelensky's first to the Middle East since Moscow's invasion in February 2022, giving the Ukrainian leader an opportunity to address leaders in the region that has been far less united in its support of Kyiv than staunch Western allies. \&quot;Arrived in Saudi Arabia. I will speak at the Arab League summit,\&quot; Zelensky said on Twitter, adding he plans to meet with Saudi Crown Prince Mohammed bin Salman and other leaders. He arrived in the Red Sea coastal city of Jeddah one day after Assad, whose government is being readmitted to the Arab League after its suspension in 2011 over the brutal crackdown on pro-democracy demonstrators that led to civil war. The summit in Saudi Arabia comes at a time when the world's biggest oil exporter is flexing its diplomatic muscle across the Middle East and beyond. 
An Arab League official told AFP Zelenky's invitation came from Saudi Arabia, not the bloc.&quot;, &quot;target_text&quot;: &quot;Ukranian President Zelenskyy attends Arab League summit in Saudi Arabia&quot; } } # duplicate single entry to get some additional &quot;examples&quot; for i in range(1, 5): input_data[str(i)] = dict() input_data[str(i)]['input_text'] = input_data['0']['input_text'] input_data[str(i)]['target_text'] = input_data['0']['target_text'] class CustomDataset(Dataset): def __init__(self, data_dict, tokenizer, max_sequence_size, max_summary_len): self.tokenizer = tokenizer self.data_dict = data_dict self.max_sequence_size = max_sequence_size self.max_summary_len = max_summary_len self.input_text = [ &quot;summarize: &quot; + self.data_dict[key]['input_text'] for key in self.data_dict.keys() ] self.target_text = [ self.data_dict[key]['target_text'] for key in self.data_dict.keys() ] def __len__(self): return len(self.input_text) def __getitem__(self, index): input_text_at_index = self.input_text[index] target_text_at_index = self.target_text[index] source = self.tokenizer.batch_encode_plus( batch_text_or_text_pairs=[input_text_at_index], max_length= self.max_sequence_size, padding='max_length', return_tensors='pt', truncation=True ) target = self.tokenizer.batch_encode_plus( batch_text_or_text_pairs=[target_text_at_index], max_length= self.max_summary_len, padding='max_length', return_tensors='pt', truncation=True ) source_ids = source['input_ids'].squeeze() source_mask = source['attention_mask'].squeeze() target_ids = target['input_ids'].squeeze() target_mask = target['attention_mask'].squeeze() return { 'source_ids': source_ids.to(dtype=torch.long), 'source_mask': source_mask.to(dtype=torch.long), 'target_ids': target_ids.to(dtype=torch.long), 'target_ids_y': target_ids.to(dtype=torch.long) } def train(epoch, tokenizer, model, loader, optimizer): model.train() for i, data in enumerate(loader, 0): y = data['target_ids'] #.to(device, dtype = 
torch.long) y_ids = y[:, :-1].contiguous() lm_labels = y[:, 1:].clone().detach() lm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100 ids = data['source_ids'] #.to(device, dtype = torch.long) mask = data['source_mask'] #.to(device, dtype = torch.long) outputs = model(input_ids = ids, attention_mask = mask, decoder_input_ids=y_ids, labels=lm_labels) loss = outputs[0] print(f'Epoch: {epoch}, Loss: {loss.item()}') optimizer.zero_grad() loss.backward() optimizer.step() MAX_SEQUENCE_SIZE = 600 tokenizer = T5Tokenizer.from_pretrained( &quot;t5-base&quot;, model_max_length=MAX_SEQUENCE_SIZE ) model = T5ForConditionalGeneration.from_pretrained(&quot;t5-base&quot;) dataset = CustomDataset( data_dict = input_data, tokenizer = tokenizer, max_sequence_size=MAX_SEQUENCE_SIZE, max_summary_len=128 ) # Defining the parameters for creation of dataloaders train_params = { 'batch_size': 2, 'shuffle': True, 'num_workers': 0 } training_loader = DataLoader(dataset, **train_params) optimizer = torch.optim.Adam( params=model.parameters(), lr=0.001 ) for epoch in range(2): train( epoch=epoch, tokenizer=tokenizer, model=model, loader=training_loader, optimizer=optimizer ) # Epoch: 0, Loss: 6.379357814788818 # Epoch: 0, Loss: 2.551377534866333 # Epoch: 0, Loss: 2.4488468170166016 # Epoch: 1, Loss: 1.555290937423706 # Epoch: 1, Loss: 0.6974927186965942 # Epoch: 1, Loss: 0.1120450347661972 </code></pre>
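One detail in the training loop above that is easy to miss is how the decoder inputs and labels are derived from the tokenized target: the target is shifted right to form `decoder_input_ids`, shifted left to form the labels, and padded label positions are set to `-100` so the cross-entropy loss ignores them. A minimal sketch with made-up token ids (pad id 0, as in T5):

```python
import torch
import torch.nn.functional as F

PAD_ID = 0  # T5's pad token id

# A hypothetical tokenized target summary, padded to length 5
y = torch.tensor([[5, 9, 3, PAD_ID, PAD_ID]])

# Decoder input: target shifted right (drop the last position)
decoder_input_ids = y[:, :-1].contiguous()

# Labels: target shifted left; pad positions become -100
labels = y[:, 1:].clone()
labels[y[:, 1:] == PAD_ID] = -100

# Cross-entropy with ignore_index=-100 skips the masked positions,
# which is what the model does internally when given `labels`
vocab_size = 11
logits = torch.randn(1, decoder_input_ids.size(1), vocab_size)
loss = F.cross_entropy(logits.view(-1, vocab_size), labels.view(-1),
                       ignore_index=-100)

print(labels.tolist())  # [[9, 3, -100, -100]]
```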
50
T5 model
Speeding-up inference of T5-like model
https://stackoverflow.com/questions/71210238/speeding-up-inference-of-t5-like-model
<p>I am currently using a model called T0pp (<a href="https://huggingface.co/bigscience/T0pp" rel="nofollow noreferrer">https://huggingface.co/bigscience/T0pp</a>) in production and would like to speed up inference.</p> <p>I am running the following code on an on-demand EC2 g4dn.12xlarge instance (4 Nvidia T4 GPUs):</p> <pre><code>import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(&quot;bigscience/T0pp&quot;) model = AutoModelForSeq2SeqLM.from_pretrained(&quot;bigscience/T0pp&quot;) model.parallelize() input_dict = tokenizer(generation_input.inputs, return_tensors=&quot;pt&quot;, padding=True) inputs = input_dict.input_ids.to(&quot;cuda:0&quot;) attention_mask = input_dict.attention_mask.to(&quot;cuda:0&quot;) with torch.no_grad(): outputs = model.generate(inputs, attention_mask=attention_mask) tokenizer.batch_decode(outputs, skip_special_tokens=True) </code></pre> <p>I wanted to know which alternative you would try in order to speed up inference, and whether you know of good tutorials for doing so. The main alternatives I see for speeding up inference would be to use the underlying PyTorch models with:</p> <ul> <li>ONNX</li> <li>DeepSpeed</li> <li>or using fp16 instead of fp32 parameters (with the main drawback of losing some quality)</li> </ul> <p>Does anyone have experience using these tools and know which is the best/simplest option?</p> <p>All this is quite new to me, and I must admit I've been a bit lost in the ONNX and DeepSpeed tutorials.</p> <p>PS:</p> <ul> <li>I already tried SageMaker, but it doesn't work for huge models like T0pp (40 GB).</li> <li>Batching speeds things up, allowing me to go from 1-2 seconds for batch size 1 to 16 seconds for batch size 32. In an ideal world, even batch size 32 would take under 1 or 2 seconds.</li> </ul>
<p>Maybe you could try <a href="https://docs.openvino.ai/latest/openvino_docs_install_guides_overview.html" rel="nofollow noreferrer">OpenVINO</a>? It allows you to convert your model into Intermediate Representation (IR) and then run it on the CPU with FP16 support. OpenVINO is optimized for Intel hardware, but it should work with any processor. I cannot guarantee your model will be faster on a CPU than on an Nvidia GPU, but it's worth a try. Some NLP models are fast enough (like this <a href="https://docs.openvino.ai/latest/openvino_docs_performance_benchmarks_openvino.html#bert-large-uncased-whole-word-masking-squad-int8-0001-384" rel="nofollow noreferrer">BERT</a>).</p> <p>You can find a full tutorial on how to convert a PyTorch model <a href="https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/102-pytorch-onnx-to-openvino" rel="nofollow noreferrer">here</a> (FastSeg) and <a href="https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/105-language-quantize-bert" rel="nofollow noreferrer">here</a> (BERT). Some snippets are below.</p> <p><strong>Install OpenVINO</strong></p> <p>The easiest way is to use pip. Alternatively, you can use <a href="https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/105-language-quantize-bert" rel="nofollow noreferrer">this tool</a> to find the best way in your case.</p> <pre><code>pip install openvino-dev[pytorch,onnx] </code></pre> <p><strong>Save your model to ONNX</strong></p> <p>OpenVINO cannot convert a PyTorch model directly for now, but it can work with an ONNX model. 
This sample code assumes the model is for computer vision.</p> <pre><code>dummy_input = torch.randn(1, 3, IMAGE_HEIGHT, IMAGE_WIDTH) torch.onnx.export(model, dummy_input, &quot;model.onnx&quot;, opset_version=11) </code></pre> <p><strong>Use Model Optimizer to convert ONNX model</strong></p> <p>The Model Optimizer is a command line tool that comes from OpenVINO Development Package so be sure you have installed it. It converts the ONNX model to OV format (aka IR), which is a default format for OpenVINO. It also changes the precision to FP16 (to further increase performance). The accuracy drop, in most cases, is insignificant. Run in command line:</p> <pre><code>mo --input_model &quot;model.onnx&quot; --input_shape &quot;[1, 3, 224, 224]&quot; --mean_values=&quot;[123.675, 116.28 , 103.53]&quot; --scale_values=&quot;[58.395, 57.12 , 57.375]&quot; --data_type FP16 --output_dir &quot;model_ir&quot; </code></pre> <p><strong>Run the inference on the CPU</strong></p> <p>The converted model can be loaded by the runtime and compiled for a specific device e.g. CPU or GPU (integrated into your CPU like Intel HD Graphics). If you don't know what is the best choice for you, just use AUTO.</p> <pre><code># Load the network ie = Core() model_ir = ie.read_model(model=&quot;model_ir/model.xml&quot;) compiled_model_ir = ie.compile_model(model=model_ir, device_name=&quot;CPU&quot;) # Get output layer output_layer_ir = compiled_model_ir.output(0) # Run inference on the input image result = compiled_model_ir([input_image])[output_layer_ir] </code></pre> <p>It's worth mentioning that Runtime can process the ONNX model directly. In that case, just skip the conversion (Model Optimizer) step and give onnx path to the <code>read_model</code> function.</p> <p>Disclaimer: I work on OpenVINO.</p>
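Separately from OpenVINO, the fp16 option raised in the question is mostly a dtype conversion in plain PyTorch. The sketch below only demonstrates the conversion on a toy module (a stand-in, not the real model); in practice fp16 is paired with a GPU, since many half-precision ops are slow or unsupported on CPU:

```python
import torch

# Toy stand-in for a large seq2seq model
model = torch.nn.Linear(8, 4)

# Convert all floating-point parameters to fp16
model = model.half()

param_dtype = next(model.parameters()).dtype
print(param_dtype)  # torch.float16

# On a GPU you would also move the model and inputs:
#   model = model.cuda()
#   x = x.half().cuda()
```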
51
T5 model
What do the `&lt;extra_idx&gt;` tokens in T5 mean?
https://stackoverflow.com/questions/73740232/what-do-the-extra-idx-tokens-in-t5-mean
<p>I'm using the mT5 model from HuggingFace Transformers to perform some conditional generation and apply it to text classification.</p> <p>I noticed that whatever the T5 model generates (and before it's trained sufficiently) it usually generates special tokens that look like <code>&lt;extra_id0&gt;</code> or <code>&lt;extra_id1&gt;</code>. These tokens aren't contained in <code>T5Tokenizer.special_tokens</code> either and I'm wondering what they mean.</p> <p>I looked at the <a href="https://huggingface.co/docs/transformers/model_doc/t5#training" rel="nofollow noreferrer">HuggingFace page for T5</a> and it seems like these are special tokens that are used during MLM training? Is that correct?</p> <p>I couldn't find any details about these tokens in the original paper or anywhere else online strangely enough. Does anybody know what the meaning of these tokens are? Thanks.</p>
<p>The &lt;extra_id_0&gt; and &lt;extra_id_1&gt; tokens are part of the T5 tokenizer's extra_ids vocabulary, used as sentinel tokens that mark the parts of the input the model has to predict. These tokens are indexed from the end of the vocabulary towards the beginning. You can specify the number of sentinel tokens when creating the tokenizer:</p> <pre><code>extra_ids (:obj:`int`, `optional`, defaults to 100): Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are accessible as &quot;&lt;extra_id_{%d}&gt;&quot; where &quot;{%d}&quot; is a number between 0 and extra_ids-1. Extra tokens are indexed from the end of the vocabulary up to beginning (&quot;&lt;extra_id_0&gt;&quot; is the last token in the vocabulary like in T5 preprocessing see `here &lt;https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117&gt;`__). </code></pre> <p>In the output, the text between &lt;extra_id_0&gt; and &lt;extra_id_1&gt; is the prediction for the first mask, and so on. Note that T5 might not be the best choice for the masked-language-modeling task, as discussed <a href="https://github.com/huggingface/transformers/issues/3985" rel="nofollow noreferrer">here</a>. You can also visit <a href="https://stackoverflow.com/questions/75977316/how-to-use-output-from-t5-model-to-replace-masked-tokens-in-input-sequence">this</a> page on Stack Overflow for a similar question.</p>
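The indexing convention can be sketched in plain Python. Assuming the default setup of a 32,100-token vocabulary with `extra_ids=100` (these numbers hold for the standard T5 tokenizers; other checkpoints may differ), `<extra_id_0>` takes the last id and the sentinels count downwards from there. This is only the convention written out, not a call into the library:

```python
def sentinel_token_id(k: int, vocab_size: int = 32100) -> int:
    """Id of <extra_id_k>: sentinels occupy the *end* of the vocabulary,
    so <extra_id_0> is the very last id and they count downwards."""
    return vocab_size - 1 - k

print(sentinel_token_id(0))   # 32099 -- the last id in the vocabulary
print(sentinel_token_id(99))  # 32000 -- the first of the 100 sentinel slots
```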
52
T5 model
Getting logits from T5 Hugging Face model using forward() method without labels
https://stackoverflow.com/questions/73411215/getting-logits-from-t5-hugging-face-model-using-forward-method-without-labels
<p>For my use case, I need to obtain the logits from T5's forward() method without inputting labels. I know that forward() and .generate() are different (<a href="https://stackoverflow.com/questions/67328345/how-to-use-forward-method-instead-of-model-generate-for-t5-model">see here</a>). I have also seen <a href="https://stackoverflow.com/questions/72177055/forward-outputs-on-multiple-sequences-is-wrong?noredirect=1#comment127536983_72177055">this</a> post in which the logits were obtained but labels had to be generated first. Is it possible to obtain the logits from the forward() method without inputting the labels?</p>
<p>Yes, you can do it. The Hugging Face library does not strictly require labels as input to the forward method. See, for example, here for BERT: <a href="https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bert/modeling_bert.py#L1079" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bert/modeling_bert.py#L1079</a></p> <p><code>labels</code> is an optional parameter. Later in the forward method, the code checks whether labels are None; if they are, it does not compute the loss and just returns the output.</p> <p>See here: <a href="https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bert/modeling_bert.py#L1135" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bert/modeling_bert.py#L1135</a></p> <p>The forward method returns a dictionary that also contains the logits:</p> <p><a href="https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bert/modeling_bert.py#L1147" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/v4.21.1/src/transformers/models/bert/modeling_bert.py#L1147</a></p> <p>But please do look at the code of the specific model you are using.</p>
53
T5 model
why the output of T5 model contains &lt;extra_id_0&gt; ,.... when the input is not mask
https://stackoverflow.com/questions/76731327/why-the-output-of-t5-model-contains-extra-id-0-when-the-input-is-not-mas
<p>I fine-tuned mT5 on a new dataset for a summarization task.<br /> In the inference phase, mT5 generates outputs containing &lt;extra_id_1&gt;, ... even though the input is not masked.</p> <p>I use the code below to encode the input:</p> <pre><code>tokenized_inputs = self.tokenizer.batch_encode_plus( [line], max_length=self.max_len, padding=&quot;max_length&quot;, return_tensors=&quot;pt&quot; ).to(self.args.device) </code></pre> <p>and the code below for generating the output:</p> <pre><code>Summary_input_ids = model.generate( input_ids=input_ids, attention_mask=input_mask, do_sample=True, temperature=0.8, top_k=45, top_p=0.9, max_length=_max_length, min_length=_min_length, num_beams=_num_beams, repetition_penalty=2.5, no_repeat_ngram_size=_no_repeat_ngram_size, length_penalty=2.5, early_stopping=False, use_cache=True, num_return_sequences=1) Summary = tokenizer.batch_decode(Summary_input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False) </code></pre> <p>I want to generate a summary for each input because the model has been fine-tuned in a supervised manner.</p>
54
T5 model
How to load model after saving only with torch.save(model)?
https://stackoverflow.com/questions/73034501/how-to-load-model-after-saving-only-with-torch-savemodel
<p>I just trained a model based on the T5 network, but I managed to save it only with</p> <pre class="lang-py prettyprint-override"><code>torch.save(model, 'trained_model') </code></pre> <p>Which saved the model in a single <code>trained_model</code> file.</p> <p>When I now try to load it with</p> <pre class="lang-py prettyprint-override"><code>model = torch.load(&quot;trained_model&quot;) </code></pre> <p>I get an error of <code>No module named 'transformers.modeling_t5'</code></p> <p>Or with this:</p> <pre class="lang-py prettyprint-override"><code>model = T5ForConditionalGeneration.from_pretrained(&quot;trained_model&quot;) </code></pre> <p>I get an error of <code>It looks like the config file at 'trained_model' is not a valid JSON file.</code></p> <p>Is there any way to recover the model without retraining it?</p> <p>EDIT</p> <p>to train the model I used a script in which:</p> <ol> <li>I loaded the raw T5 model</li> </ol> <pre class="lang-py prettyprint-override"><code>raw_model = 'rut5-base-multitask' model = T5ForConditionalGeneration.from_pretrained(raw_model).cuda() tokenizer = T5Tokenizer.from_pretrained(raw_model) optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) </code></pre> <ol start="2"> <li><p>Trained the model</p> </li> <li><p>Saved the model</p> </li> </ol> <pre class="lang-py prettyprint-override"><code>torch.save(model, 'trained_model') </code></pre> <ol start="4"> <li>Tested the model with <code>model.eval()</code></li> </ol>
<p>Try this to save the model:</p> <pre><code>torch.save(model.state_dict(), PATH)
</code></pre> <p>and this to load the model again:</p> <pre><code>model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
</code></pre> <p>There are other ways to do this, which you can find in <a href="https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-state-dict-recommended" rel="nofollow noreferrer">this article</a>.</p>
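A hedged aside on why `torch.save(model)` broke in the first place: pickling a whole model object stores the class's import path (e.g. `transformers.modeling_t5`), which breaks when the library later reorganizes its modules, whereas a state dict is plain data. A stdlib-only sketch of the state-dict round trip — `ToyModel` is a hypothetical stand-in for your network, not a real API:

```python
import pickle

class ToyModel:
    def __init__(self):
        self.weights = [0.0, 0.0, 0.0]   # stand-in for real parameters

    def state_dict(self):
        # Only plain data leaves the object -- no class reference is stored.
        return {"weights": list(self.weights)}

    def load_state_dict(self, d):
        self.weights = list(d["weights"])

trained = ToyModel()
trained.weights = [1.0, 2.0, 3.0]

# Serialize only the data, which is robust to module renames.
blob = pickle.dumps(trained.state_dict())

restored = ToyModel()                    # you rebuild the class yourself
restored.load_state_dict(pickle.loads(blob))
print(restored.weights)                  # [1.0, 2.0, 3.0]
```

This is why the state-dict route is the recommended one: loading never needs to import the exact module the model class lived in when it was saved.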
55
T5 model
How to freeze a huggingface model?
https://stackoverflow.com/questions/70352825/how-to-freeze-a-huggingface-model
<p>I use</p> <pre><code> for p in model.parameters(): p.requires_grad = False </code></pre> <p>to freeze a T5 model (t5-small), but when I print parameters that require grad, there is still one parameter with the size <code>32121x512</code>. What is this? Is it the embeddings matrix? Should I freeze it too? It seems backward gradients affect this one remaining parameter</p>
<p>It turned out I had called <code>model.resize_token_embeddings(len(tokenizer))</code> <em>after</em> freezing the parameters; resizing allocates a new embedding matrix, which resets its <code>requires_grad</code> back to <code>True</code>. Freezing after the resize fixes it.</p>
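To illustrate why the order matters, here is a toy stand-in (no torch required, and `ToyT5` is purely hypothetical): resizing replaces the embedding with a freshly created parameter, and fresh parameters default to being trainable.

```python
class Param:
    def __init__(self):
        self.requires_grad = True     # new parameters are trainable by default

class ToyT5:
    def __init__(self):
        self.embedding = Param()

    def parameters(self):
        return [self.embedding]

    def resize_token_embeddings(self, new_size):
        # Mimics the Hugging Face behavior: a *new* embedding matrix
        # is allocated, discarding the old one (and its frozen flag).
        self.embedding = Param()

model = ToyT5()
for p in model.parameters():           # freeze first...
    p.requires_grad = False
model.resize_token_embeddings(32100)   # ...then resize: undoes the freeze
print(model.embedding.requires_grad)   # True -- the bug from the question
```

Swapping the two steps (resize first, then freeze) leaves the embedding frozen, matching the accepted explanation.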
56
T5 model
How to push NLP model trained with pytorch in hugging face?
https://stackoverflow.com/questions/69585470/how-to-push-nlp-model-trained-with-pytorch-in-hugging-face
<p>I trained fine tune T5 model with my dataset.</p> <p>Here is my model class</p> <pre><code>class QA_model(pl.LightningModule): def __init__(self): super().__init__() self.model=T5ForConditionalGeneration.from_pretrained(model_t5,return_dict=True) def forward(self,input_ids, attention_mask,labels=None): output=self.model(input_ids=input_ids, attention_mask=attention_mask,labels=labels) return output.loss, output.logits def training_step(self,batch,batch_idx): input_ids=batch['input_ids'] attention_mask=batch['attention_mask'] labels=batch['labels'] loss,outputs=self.forward(input_ids, attention_mask,labels) self.log('train_loss',loss,prog_bar=True,logger=True) return loss def validation_step(self,batch,batch_idx): input_ids=batch['input_ids'] attention_mask=batch['attention_mask'] labels=batch['labels'] loss,outputs=self.forward(input_ids, attention_mask,labels) self.log('val_loss',loss,prog_bar=True,logger=True) return loss def test_step(self,batch,batch_idx): input_ids=batch['input_ids'] attention_mask=batch['attention_mask'] labels=batch['labels'] loss,outputs=self.forward(input_ids, attention_mask,labels) self.log('test_loss',loss,prog_bar=True,logger=True) return loss def configure_optimizers(self): return AdamW(self.parameters(),lr=0.0001) </code></pre> <p>I initialize my model:</p> <pre><code>my_model=QA_model() from pytorch_lightning.callbacks import ModelCheckpoint checkpoint_callback=ModelCheckpoint( dirpath='checkpoints', filename='best-checkpoints', save_top_k=1, verbose=True, monitor='val_loss', mode='min' ) </code></pre> <p>from pytorch_lightning.loggers import TensorBoardLogger</p> <pre><code>logger=TensorBoardLogger('training-logs',name='QA_model') trainer=pl.Trainer( logger=logger, checkpoint_callback=checkpoint_callback, max_epochs=N_EPOCHS, gpus=1, progress_bar_refresh_rate=30 ) </code></pre> <p>and after I trained:</p> <pre><code>trainer.fit(my_model,data_module) </code></pre> <p>As I fine-tuned model, I want to put it in hugging face. 
I am using the following command:</p> <pre><code>trainer.save_pretrained(&quot;my_account/t5-base-finetuned-legal_data&quot;) trainer.push_to_hub(&quot;my_account/t5-base-finetuned-legal_data&quot;) </code></pre> <p>but it gives the error:</p> <pre><code>QA_model model does not have save_pretrained/push_to_hub attribute. </code></pre> <p>I am using the following versions:</p> <pre><code>!pip install --quiet transformers==4.1.1 !pip install --quiet pytorch-lightning==1.1.3 !pip install --quiet tokenizers==0.9.4 !pip install --quiet sentencepiece==0.1.94 </code></pre>
57
T5 model
T5 fine tuned model outputs &lt;unk&gt; instead of curly braces and other special characters
https://stackoverflow.com/questions/75851029/t5-fine-tuned-model-outputs-unk-instead-of-curly-braces-and-other-special-char
<p>First off, I'll start by saying that I'm a beginner when it comes to machine learning as a whole and to transformers, so my apologies if it's a dumb question. I've been fine-tuning T5 for the task of generating MongoDB queries, but I was met with this strange output that doesn't look like the intended one.</p> <pre><code>inputs = tokenizer(inputs, max_length=max_input_length, truncation=True, return_tensors=&quot;pt&quot;)
output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10, max_length=64)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=False)[0]
print(decoded_output)
predicted_Query = nltk.sent_tokenize(decoded_output.strip())[0]
print(predicted_Query)
</code></pre> <p>This gives the following output:</p> <pre><code>&lt;pad&gt; db.movies.find(&lt;unk&gt;&quot;title&quot;: &quot;The Poor Little Rich Girl&quot;&lt;unk&gt;, &lt;unk&gt;&quot;writers&quot;: 1&lt;unk&gt;)&lt;/s&gt;
&lt;pad&gt; db.movies.find(&lt;unk&gt;&quot;title&quot;: &quot;The Poor Little Rich Girl&quot;&lt;unk&gt;, &lt;unk&gt;&quot;writers&quot;: 1&lt;unk&gt;)&lt;/s&gt;
</code></pre> <p>The query is correct for the most part. I assume that the <code>&lt;unk&gt;</code> token is supposed to be curly braces but the model wasn't able to understand them (as in the OOV case). Note that the dataset that was used to fine-tune it contains curly braces in the output, so I'm confused about how it couldn't recognize them during testing. Would it be a problem with the tokenizer? If so, could I expand the vocab by adding some new tokens? I'm not asking for an answer (although one is welcome) but some guidance would be appreciated. Thank you for your time.</p> <p>I tested whether the tokenizer can handle curly braces and it showed it can. Again, I'm new to this, so I'm not really sure I understand the problem well.</p>
<p>After some research I found a solution. The T5 tokenizer vocab was missing a few characters, like curly braces, so I used the following to add them (note that both the tokenizer and the model need to be loaded and updated):</p> <pre><code>from transformers import AutoModel, AutoTokenizer

new_words = ['{', '}']

tokenizer = AutoTokenizer.from_pretrained(&quot;t5-base&quot;)
model = AutoModel.from_pretrained(&quot;t5-base&quot;)
tokenizer.add_tokens(new_words)
model.resize_token_embeddings(len(tokenizer))
</code></pre>
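To see why missing vocabulary entries surface as `<unk>`, here is a character-level toy tokenizer. This is purely illustrative — the real T5 tokenizer is SentencePiece-based — but the fallback mechanism is the same: anything absent from the vocabulary collapses to the unknown token.

```python
def encode(text, vocab, unk="<unk>"):
    # Any symbol missing from the vocabulary degrades to the unknown token.
    return [ch if ch in vocab else unk for ch in text]

vocab = set('abcdefghijklmnopqrstuvwxyz".: ')
print(encode('{"a"}', vocab))   # ['<unk>', '"', 'a', '"', '<unk>']

vocab |= {"{", "}"}             # analogous to tokenizer.add_tokens(['{', '}'])
print(encode('{"a"}', vocab))   # ['{', '"', 'a', '"', '}']
```

This also shows why the round trip is lossy: once a character has been mapped to `<unk>`, decoding cannot recover the original brace, so the only fix is extending the vocabulary before training/inference.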
58
T5 model
How to load a saved model for a Hugging Face T5 model where the tokenizer was extended in the training phase?
https://stackoverflow.com/questions/75075829/how-to-load-a-saved-model-for-a-hoggingface-t5-model-where-the-tokenizer-was-ext
<p>I use the following code to load the saved model:</p> <pre><code>config = T5Config.from_pretrained(
    model_name_or_path,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    use_auth_token=True if model_args.use_auth_token else None,
)
config.train_task_adapters = adapter_args.train_task_adapters
# Set tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    model_name_or_path,
    cache_dir=model_args.cache_dir,
    use_fast=model_args.use_fast_tokenizer,
    revision=model_args.model_revision,
    use_auth_token=True if model_args.use_auth_token else None,
)
# Initialize the model
model = T5ForConditionalGeneration.from_pretrained(
    model_name_or_path,
    from_tf=bool(&quot;.ckpt&quot; in model_name_or_path),
    config=config,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    use_auth_token=True if model_args.use_auth_token else None,
    adapter_config=adapter_config
)
</code></pre> <p>However, I receive the following error:</p> <pre><code>RuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:
    size mismatch for encoder.model_embeddings.weight: copying a param with shape torch.Size([32128, 768]) from checkpoint, the shape in current model is torch.Size([32138, 768]).
    size mismatch for decoder.model_embeddings.weight: copying a param with shape torch.Size([32128, 768]) from checkpoint, the shape in current model is torch.Size([32138, 768]).
exit 1
</code></pre>
<p>Usually you don't encounter any problems when loading the model for which you've added some extra tokens during the training. In my case, it was the <code>pad_to_multiple_of</code> parameter that caused the trouble. It is claimed to do some Nvidia magic for a more efficient utilization of modern GPUs, so I used it when I created the model for training and then happily forgot about it:</p> <pre><code>model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=16) </code></pre> <p>But as it seems, the current API (4.33.0.dev0) struggles to load such models. The workaround would be:</p> <pre><code>MODEL_CHECKPOINT = '' # your directory here config_path = path.join(MODEL_CHECKPOINT, 'config.json') weights_path = path.join(MODEL_CHECKPOINT, 'pytorch_model.bin') tokenizer = GPT2Tokenizer.from_pretrained(MODEL_CHECKPOINT) config = AutoConfig.from_pretrained(config_path) model = T5ForConditionalGeneration(config) model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=16) model.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu'))) </code></pre> <p>Which outputs: &lt;All keys matched successfully&gt;</p>
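The size mismatch itself is just rounding arithmetic: `pad_to_multiple_of` rounds the embedding-row count up to the next multiple. A small stdlib sketch of that rounding — the numbers here are illustrative, chosen to mirror the shapes in the error message (a stock t5-base has 32128 rows; 32138 is what you get after adding 10 tokens without padding):

```python
def rounded_vocab(n_tokens, multiple):
    # Round n_tokens up to the next multiple, as pad_to_multiple_of does.
    # -(-a // b) is ceiling division using only integer arithmetic.
    return -(-n_tokens // multiple) * multiple

print(rounded_vocab(32128, 16))   # 32128 -- already a multiple of 16
print(rounded_vocab(32138, 16))   # 32144 -- 10 extra tokens, padded up
```

If the checkpoint was saved with the padded row count but the model is rebuilt without it (or vice versa), the two counts disagree and `load_state_dict` reports exactly this kind of size mismatch.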
59
T5 model
flan-t5-xxl: ValueError: Need either a `state_dict` or a `save_folder` containing offloaded weights
https://stackoverflow.com/questions/76617863/flan-t5-xxl-valueerror-need-either-a-state-dict-or-a-save-folder-containin
<p>I tried to run flan-t5-xxx model from Hugging Face both in my Mac M1 and Google Colab, both have the same error:</p> <p><code>ValueError: Need either a state_dict or a save_folder containing offloaded weights.</code></p> <p>The code from Model Card:</p> <pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained(&quot;google/flan-t5-xxl&quot;) model = T5ForConditionalGeneration.from_pretrained(&quot;google/flan-t5-xxl&quot;, device_map=&quot;auto&quot;) input_text = &quot;translate English to German: How old are you?&quot; input_ids = tokenizer(input_text, return_tensors=&quot;pt&quot;).input_ids.to(&quot;cuda&quot;) outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) </code></pre>
<p>For whoever needs this:</p> <p>Create a folder (named <code>save_folder</code>, for example). Then update:</p> <pre><code>model = T5ForConditionalGeneration.from_pretrained(&quot;google/flan-t5-xxl&quot;, device_map=&quot;auto&quot;)
</code></pre> <p>to</p> <pre><code>model = T5ForConditionalGeneration.from_pretrained(&quot;google/flan-t5-xxl&quot;, device_map=&quot;auto&quot;, offload_folder=&quot;save_folder&quot;)
</code></pre>
60
T5 model
Can I add configuration of &#39;dropout_rate&#39; to Seq2SeqTrainer?
https://stackoverflow.com/questions/76281106/can-i-add-configuration-of-dropout-rate-to-seq2seqtrainer
<p>I'am trying to train T5 model using Seq2SeqTrainer. I found out that the Config of T5 model is like below.</p> <pre><code>T5Config { &quot;_name_or_path&quot;: &quot;allenai/tk-instruct-base-def-pos&quot;, &quot;architectures&quot;: [ &quot;T5ForConditionalGeneration&quot; ], &quot;attention_probs_dropout_prob&quot;: 0.5, &quot;d_ff&quot;: 2048, &quot;d_kv&quot;: 64, &quot;d_model&quot;: 768, &quot;decoder_start_token_id&quot;: 0, &quot;dense_act_fn&quot;: &quot;gelu_new&quot;, &quot;dropout_rate&quot;: 0.1, &quot;eos_token_id&quot;: 1, &quot;feed_forward_proj&quot;: &quot;gated-gelu&quot;, &quot;hidden_dropout_prob&quot;: 0.5, &quot;initializer_factor&quot;: 1.0, &quot;is_encoder_decoder&quot;: true, &quot;is_gated_act&quot;: true, &quot;layer_norm_epsilon&quot;: 1e-06, &quot;model_type&quot;: &quot;t5&quot;, &quot;num_decoder_layers&quot;: 12, &quot;num_heads&quot;: 12, &quot;num_layers&quot;: 12, &quot;output_past&quot;: true, &quot;pad_token_id&quot;: 0, &quot;relative_attention_max_distance&quot;: 128, &quot;relative_attention_num_buckets&quot;: 32, &quot;tie_word_embeddings&quot;: false, &quot;torch_dtype&quot;: &quot;bfloat16&quot;, &quot;transformers_version&quot;: &quot;4.28.0&quot;, &quot;use_cache&quot;: true, &quot;vocab_size&quot;: 32100 } </code></pre> <p>But there is no configuration 'dropout_rate' in the Seq2SeqTrainer. I wrote codes of training T5Generator like below.</p> <pre><code>def train(self, tokenized_datasets, **kwargs): &quot;&quot;&quot; Train the generative model. &quot;&quot;&quot; #Set training arguments args = Seq2SeqTrainingArguments( **kwargs ) # Define trainer object trainer = Seq2SeqTrainer( self.model, args, ...) </code></pre> <p>Is there a way to change configuration about dropout? 
Or could you tell me about settings that would have a similar effect?</p> <p>Here's what I've tried:</p> <pre><code>training_args = {
    'output_dir': model_out_path,
    'evaluation_strategy': &quot;epoch&quot;,
    'learning_rate': 5e-5,
    'lr_scheduler_type': 'cosine',
    'per_device_train_batch_size': 8,
    'per_device_eval_batch_size': 16,
    'num_train_epochs': 4,
    'weight_decay': 0.01,
    'warmup_ratio': 0.1,
    'save_strategy': 'no',
    'load_best_model_at_end': False,
    'push_to_hub': False,
    'eval_accumulation_steps': 1,
    'predict_with_generate': True,
    'use_mps_device': use_mps_,
    &quot;dropout_rate&quot;: 0.3
}
</code></pre> <p>I also tried to use this solution, <a href="https://stackoverflow.com/questions/64947064/transformers-pretrained-model-with-dropout-setting">Transformers pretrained model with dropout setting</a>, but I couldn't figure out where I should put that code.</p>
61
T5 model
Size Mismatch in MultiModal Feedback Model Using T5 + Audio/Visual Features - The size of tensor a (48) must match the size of tensor b (4) with T5
https://stackoverflow.com/questions/79588847/size-mismatch-in-multimodal-feedback-model-using-t5-audio-visual-features-th
<p>I’m working on a multimodal model that combines audio and visual features with a T5-based encoder for a feedback generation task. However, I’m facing an issue with batch size mismatch between the projected audio/visual features and the encoder outputs, which leads to the error:</p> <p>❌ Error in batch 1: The size of tensor a (48) must match the size of tensor b (4) at non-singleton dimension 0</p> <pre><code>import torch import torch.nn as nn from transformers import T5ForConditionalGeneration class MultiModalFeedbackModel(nn.Module): def __init__(self, t5_model_name=&quot;t5-base&quot;, audio_dim=13, visual_dim=3): super().__init__() self.audio_proj = nn.Linear(audio_dim, 768) self.visual_proj = nn.Linear(visual_dim, 768) self.t5 = T5ForConditionalGeneration.from_pretrained(t5_model_name) self.score_head = nn.Sequential( nn.Linear(self.t5.config.d_model, 64), nn.ReLU(), nn.Linear(64, 1) ) def forward(self, input_ids, attention_mask, audio_features, visual_features, labels=None, return_score=False): device = input_ids.device # Ensure device compatibility audio_embed = self.audio_proj(audio_features).to(device) visual_embed = self.visual_proj(visual_features).to(device) # Debug prints print(f&quot;Audio batch shape: {audio_embed.shape}&quot;, flush=True) print(f&quot;Visual batch shape: {visual_embed.shape}&quot;, flush=True) # Get encoder outputs from T5 encoder_outputs = self.t5.encoder(input_ids=input_ids, attention_mask=attention_mask) encoder_hidden = encoder_outputs.last_hidden_state # Combine encoder output with projected audio and visual features combined_hidden = encoder_hidden.clone() # Expand audio and visual features across sequence length audio_embed = audio_embed.unsqueeze(1).expand(-1, combined_hidden.size(1), -1) visual_embed = visual_embed.unsqueeze(1).expand(-1, combined_hidden.size(1), -1) # Add features to encoder hidden states combined_hidden[:, 0] += audio_embed[:, 0] # Add audio to first token combined_hidden[:, 1] += visual_embed[:, 1] # 
Add visual to second token if return_score: pooled = combined_hidden.mean(dim=1) score = torch.sigmoid(self.score_head(pooled)) * 100 return score if labels is not None: decoder_input_ids = labels[:, :-1] decoder_labels = labels[:, 1:].clone() outputs = self.t5( inputs_embeds=combined_hidden, decoder_input_ids=decoder_input_ids, labels=decoder_labels ) return outputs else: return self.t5.generate(inputs_embeds=combined_hidden, max_length=64, attention_mask=attention_mask) </code></pre> <p><strong>What I’ve Tried:</strong></p> <ul> <li>I tried reshaping the encoder outputs and the feature embeddings to match dimensions before addition, but the error still persists.</li> <li>I’ve tried expanding the embeddings across the sequence length, but the batch size still doesn’t align.</li> <li>I also used expand and repeat to align the batch dimensions, but the error still occurs when adding the tensors.</li> </ul> <p><strong>What I Need Help With:</strong></p> <ul> <li>Why is the batch size of the encoder outputs (48) not matching the batch size of the audio and visual features (4)?</li> <li>How can I properly align the encoder outputs with the audio/visual features for addition?</li> <li>What changes should I make to fix the batch size mismatch and properly combine the audio/visual features with the encoder output?</li> </ul> <p>Any guidance on this would be highly appreciated. Thank you!</p>
62
T5 model
AttributeError: Adam object has no attribute &#39;_decayed_lr&#39; when fine-tuning T5
https://stackoverflow.com/questions/76987203/attributeerror-adam-object-has-no-attribute-decayed-lr-when-fine-tuning-t5
<p>I am fine-tuning T5 LLM, with the model based on this Colab notebook from Google: <a href="https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb#scrollTo=dEutWnhiWRAq" rel="nofollow noreferrer">TF-T5- Training.ipynb</a>. My current model was defined like this:</p> <pre><code>class SnapthatT5(TFT5ForConditionalGeneration): def __init__(self, *args, log_dir=None, cache_dir= None, **kwargs): super().__init__(*args, **kwargs) self.loss_tracker= tf.keras.metrics.Mean(name='loss') @tf.function def train_step(self, data): x = data y = x[&quot;labels&quot;] y = tf.reshape(y, [-1, 1]) with tf.GradientTape() as tape: outputs = self(x, training=True) loss = outputs[0] logits = outputs[1] loss = tf.reduce_mean(loss) grads = tape.gradient(loss, self.trainable_variables) self.optimizer.apply_gradients(zip(grads, self.trainable_variables)) lr = self.optimizer._decayed_lr(tf.float32) self.loss_tracker.update_state(loss) self.compiled_metrics.update_state(y, logits) metrics = {m.name: m.result() for m in self.metrics} metrics.update({'lr': lr}) return metrics def test_step(self, data): x = data y = x[&quot;labels&quot;] y = tf.reshape(y, [-1, 1]) output = self(x, training=False) loss = output[0] loss = tf.reduce_mean(loss) logits = output[1] self.loss_tracker.update_state(loss) self.compiled_metrics.update_state(y, logits) return {m.name: m.result() for m in self.metrics} class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule): def __init__(self, warmup_steps=1e4): super().__init__() self.warmup_steps = tf.cast(warmup_steps, tf.float32) def __call__(self, step): step = tf.cast(step, tf.float32) m = tf.maximum(self.warmup_steps, step) m = tf.cast(m, tf.float32) lr = tf.math.rsqrt(m) return lr </code></pre> <p>My training process looks like this though (I'm doing it with Google Colab):</p> <pre><code>learning_rate = CustomSchedule() # learning_rate = 0.001 # Instead set a static 
learning rate optimizer = tf.keras.optimizers.Adam(learning_rate) model = SnapthatT5.from_pretrained(&quot;t5-base&quot;) model.compile(optimizer=optimizer, metrics=metrics) epochs_done = 0 model.fit(tf_train_ds, epochs=5, steps_per_epoch=steps, callbacks=callbacks, validation_data=tf_valid_ds, validation_steps=valid_steps, initial_epoch=epochs_done) </code></pre> <p>However, when I tried to train the model (with TensorFlow 2.12.0), I got this error from Colab: <code>AttributeError: 'Adam' object has no attribute '_decayed_lr'</code>. I tried to change <code>_decayed_lr</code> to <code>lr</code>, but this was not recognized by Colab.</p> <p>So, what could I do to get the decayed learning rate to the training step, and get the above problem fixed?</p>
63
T5 model
Low score and wrong answer for Flan-T5-XXL &quot;question-answering&quot; task
https://stackoverflow.com/questions/76963864/low-score-and-wrong-answer-for-flan-t5-xxl-question-answering-task
<p>I'm trying to run Flan-T5-XXL model for a &quot;question-answering&quot; task. Here's how I loaded and executed the model:</p> <pre><code>model_id = &quot;~/Downloads/test_LLM/flan-t5-xxl&quot; tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForQuestionAnswering.from_pretrained(model_id, return_dict=False).to(DEVICE) qa_T5XXL = pipeline(&quot;question-answering&quot;, model=model, tokenizer=tokenizer) question = &quot;What is 42?&quot; context = &quot;42 is the answer to life, the universe and everything&quot; result = qa_T5XXL({ &quot;question&quot;: question, &quot;context&quot;: context }) </code></pre> <p>However, I get a low score and a wrong answer:</p> <pre><code>{'score': 0.03840925544500351, 'start': 0, 'end': 2, 'answer': '42'} </code></pre> <p>Could you please help me make changes to achieve the correct answer? Thanks in advance.</p>
<p><strong>Pre/Script:</strong> This is more of a science experiment design or product development question than a programming question, so most probably someone will flag to close this question on Stackoverflow eventually. But here's an attempt to answer.</p> <h1>In Short</h1> <p>There are a couple of things to consider before someone can help answer the question.</p> <ul> <li>What is the ultimate goal of getting the answers right?</li> <li>How do you determine what the right answers are?</li> <li>How do you measure what is right? Is there a metric you use? Or a fixed test dataset of question, context, answer triplets?</li> </ul> <h1>In Long</h1> <p>Here are a few Q&amp;As to clarify some things about LLMs and QnA.</p> <h2>Q: What does the &quot;start&quot; and &quot;end&quot; mean in the results?</h2> <p>A: Given these,</p> <p>[in]:</p> <pre><code>question = &quot;What is 42?&quot; context = &quot;42 is the answer to life, the universe and everything&quot; </code></pre> <p>[out]:</p> <pre><code>result = {'score': 0.03840925544500351, 'start': 0, 'end': 2, 'answer': '42'} </code></pre> <p>We see the &quot;start&quot; and &quot;end&quot; indices. That indicates that the model you are using is an extractive QnA system, i.e. given a question and context, find the answer inside the context string.</p> <p>So, the &quot;answer&quot; in the result dictionary is:</p> <pre><code>context[result['start']:result['end']]  # i.e.
&quot;42&quot; </code></pre> <h2>Q: Then, what does the &quot;score&quot; mean in the results?</h2> <p>A: From <a href="https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L46" rel="nofollow noreferrer">https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L46</a>, we see:</p> <pre class="lang-py prettyprint-override"><code>def decode_spans( start: np.ndarray, end: np.ndarray, topk: int, max_answer_len: int, undesired_tokens: np.ndarray ) -&gt; Tuple: &quot;&quot;&quot; Take the output of any `ModelForQuestionAnswering` and will generate probabilities for each span to be the actual answer. In addition, it filters out some unwanted/impossible cases like answer len being greater than max_answer_len or answer end position being before the starting position. The method supports output the k-best answer through the topk argument. Args: start (`np.ndarray`): Individual start probabilities for each token. end (`np.ndarray`): Individual end probabilities for each token. topk (`int`): Indicates how many possible answer span(s) to extract from the model output. max_answer_len (`int`): Maximum size of the answer to extract from the model's output. undesired_tokens (`np.ndarray`): Mask determining tokens that can be part of the answer &quot;&quot;&quot; # Ensure we have batch axis if start.ndim == 1: start = start[None] if end.ndim == 1: end = end[None] # Compute the score of each tuple(start, end) to be the real answer outer = np.matmul(np.expand_dims(start, -1), np.expand_dims(end, 1)) # Remove candidate with end &lt; start and end - start &gt; max_answer_len candidates = np.tril(np.triu(outer), max_answer_len - 1) # Inspired by Chen &amp; al. 
(https://github.com/facebookresearch/DrQA) scores_flat = candidates.flatten() if topk == 1: idx_sort = [np.argmax(scores_flat)] elif len(scores_flat) &lt; topk: idx_sort = np.argsort(-scores_flat) else: idx = np.argpartition(-scores_flat, topk)[0:topk] idx_sort = idx[np.argsort(-scores_flat[idx])] starts, ends = np.unravel_index(idx_sort, candidates.shape)[1:] desired_spans = np.isin(starts, undesired_tokens.nonzero()) &amp; np.isin(ends, undesired_tokens.nonzero()) starts = starts[desired_spans] ends = ends[desired_spans] scores = candidates[0, starts, ends] return starts, ends, scores </code></pre> <h3>Q: Showing <code>np.matmul(np.expand_dims(start, -1), np.expand_dims(end, 1))</code> doesn't tell me anything about what the scores mean... What happens when we allow more than 1 answers?</h3> <p><code>flan-t5-xxl</code> is too big demonstrate the results, so lets try tiny,</p> <pre><code>from transformers import pipeline pipe = pipeline(&quot;question-answering&quot;, model=&quot;google/flan-t5-small&quot;) question = &quot;What is 42?&quot; context = &quot;42 is the answer to life, the universe and everything&quot; result = pipe({ &quot;question&quot;: question, &quot;context&quot;: context }, top_k=5) </code></pre> <p>[out]:</p> <pre><code>[{'score': 0.011151721701025963, 'start': 25, 'end': 29, 'answer': ' the'}, {'score': 0.01089030597358942, 'start': 5, 'end': 29, 'answer': ' the answer to life, the'}, {'score': 0.0108568724244833, 'start': 0, 'end': 29, 'answer': '42 is the answer to life, the'}, {'score': 0.010814748704433441, 'start': 16, 'end': 29, 'answer': ' to life, the'}, {'score': 0.01060018502175808, 'start': 19, 'end': 29, 'answer': ' life, the'}] </code></pre> <h3>Q: Linda umzuzu! (Wait a minute!), does that mean it will compute possibilities of any possible span within the context?</h3> <p>Yes, kinda! That's the goal of extractive QnA. The answer is in the context, the brute force way is to score all possible spans to get the best answer. 
The <code>top_k</code> argument controls how many candidates to consider.</p> <pre><code>question = &quot;What is 42?&quot; context = &quot;42 is the answer to life, the universe and everything&quot; result = pipe({ &quot;question&quot;: question, &quot;context&quot;: context }, top_k=30) </code></pre> <p>[out]:</p> <pre><code>[{'score': 0.011151721701025963, 'start': 25, 'end': 29, 'answer': ' the'}, {'score': 0.01089030597358942, 'start': 5, 'end': 29, 'answer': ' the answer to life, the'}, {'score': 0.0108568724244833, 'start': 0, 'end': 29, 'answer': '42 is the answer to life, the'}, {'score': 0.010814748704433441, 'start': 16, 'end': 29, 'answer': ' to life, the'}, {'score': 0.01060018502175808, 'start': 19, 'end': 29, 'answer': ' life, the'}, {'score': 0.010392689146101475, 'start': 19, 'end': 29, 'answer': ' life, the'}, {'score': 0.010242749936878681, 'start': 9, 'end': 29, 'answer': ' answer to life, the'}, {'score': 0.009692603722214699, 'start': 5, 'end': 16, 'answer': ' the answer'}, {'score': 0.00966284703463316, 'start': 0, 'end': 16, 'answer': '42 is the answer'}, {'score': 0.009410168044269085, 'start': 2, 'end': 29, 'answer': ' is the answer to life, the'}, {'score': 0.00911626499146223, 'start': 9, 'end': 16, 'answer': ' answer'}, {'score': 0.00905834324657917, 'start': 25, 'end': 38, 'answer': ' the universe'}, {'score': 0.008912604302167892, 'start': 29, 'end': 38, 'answer': ' universe'}, {'score': 0.008845999836921692, 'start': 5, 'end': 38, 'answer': ' the answer to life, the universe'}, {'score': 0.008818842470645905, 'start': 0, 'end': 38, 'answer': '42 is the answer to life, the universe'}, {'score': 0.008786008693277836, 'start': 38, 'end': 53, 'answer': ' and everything'}, {'score': 0.008784625679254532, 'start': 16, 'end': 38, 'answer': ' to life, the universe'}, {'score': 0.00861033983528614, 'start': 19, 'end': 38, 'answer': ' life, the universe'}, {'score': 0.008441794663667679, 'start': 19, 'end': 38, 'answer': ' life, the 
universe'}, {'score': 0.008398951031267643, 'start': 0, 'end': 5, 'answer': '42 is'}, {'score': 0.008392956107854843, 'start': 5, 'end': 25, 'answer': ' the answer to life,'}, {'score': 0.008388272486627102, 'start': 5, 'end': 9, 'answer': ' the'}, {'score': 0.008375249803066254, 'start': 2, 'end': 16, 'answer': ' is the answer'}, {'score': 0.008367189206182957, 'start': 0, 'end': 25, 'answer': '42 is the answer to life,'}, {'score': 0.008362519554793835, 'start': 0, 'end': 9, 'answer': '42 is the'}, {'score': 0.00833472516387701, 'start': 16, 'end': 25, 'answer': ' to life,'}, {'score': 0.00832000095397234, 'start': 9, 'end': 38, 'answer': ' answer to life, the universe'}, {'score': 0.008312774822115898, 'start': 25, 'end': 53, 'answer': ' the universe and everything'}, {'score': 0.00820864923298359, 'start': 42, 'end': 53, 'answer': ' everything'}, {'score': 0.008179032243788242, 'start': 29, 'end': 53, 'answer': ' universe and everything'}] </code></pre> <h3>Q: Does that mean that I'll not always get the right answer?</h3> <p>A: Yes, correct. LLM or pre-trained models are tuned to whatever the data is.</p> <p>First, we have to ask:</p> <ul> <li><strong>What exactly is the desired answer?</strong> <ul> <li>In the context of your question, I guess you are expecting &quot;the answer to life, the universe and everything&quot;</li> <li>But consider the valid statement of a tautology, &quot;X is X&quot;, so the answer &quot;42 is 42&quot; is a valid for the &quot;What is 42?&quot; question.</li> </ul> </li> </ul> <p>Next, we need to ask:</p> <ul> <li>Does the training data use to fine-tune or train the model contains the expected behavior?</li> <li>Or does it contain tautology such that the model learns to emulate that during inference?</li> </ul> <p>Given that for most models you'll see in the wild, it's hard to determine the above questions, your next practical question would be:</p> <h2>Q: Okay, so no model is perfect. 
How do I make the model output what I need?</h2> <p>A: First, you'll need to consider,</p> <ul> <li>Is the pre-trained model you chosen tuned our your task or domain? Question and Answer is a very wide task, knowing how to answer on stackoverflow don't make you an expert on answering legal questions or pop-culture questions.</li> <li>Have you fine-tuned the model to your task or domain? Did it perform better than the original pre-trained model?</li> </ul> <p>Then you need to consider:</p> <ul> <li>Does one failed inference instance affect the system you want to build? What is the &quot;fidelity&quot; of the system you are building? E.g. if it's a medical domain, would you kill someone if the model answer wrongly?</li> <li>How exactly is the model going to be deployed and how you want to measure the success? <ul> <li>Is it just based on (i) random anecdotal examples, or (ii) is there a specific metric that the model should improve towards? <ul> <li>if (i), do you have a large enough sample of anecdotes examples to test on?</li> <li>if (ii), would you be able to overlook the fact that the score outperforms the original pre-trained model but fail on some anecdotal examples.</li> </ul> </li> </ul> </li> </ul> <h2>Q: Yeah, yeah, I know all you are insinuating but I just want to %^&amp;*-ing solve this question to give the right answer?</h2> <p>A: Consider, some business logic. Many a times, big tech handles common Q with manually edited A. 
You can layer a caching or rule-based mechanism on top of the model to achieve the desired result if you don't want to spend time/effort re-training or fine-tuning the model.</p> <p>i.e.</p> <pre><code>if question == &quot;What is 42?&quot;:
    result = {&quot;answer&quot;: &quot;answer to life, the universe and everything&quot;}
else:
    result = qa_T5XXL({&quot;question&quot;: question, &quot;context&quot;: context})
</code></pre> <p>For reference, see:</p> <ul> <li><a href="https://stackoverflow.com/questions/59956670/parsing-city-of-origin-destination-city-from-a-string">Parsing city of origin / destination city from a string</a></li> <li><a href="https://hackernoon.com/tmnt-translation-memory-and-neural-translation" rel="nofollow noreferrer">https://hackernoon.com/tmnt-translation-memory-and-neural-translation</a></li> </ul> <h2>Q: I don't want any rule-based logic, just tell me how to hack the code to get the right answer already.</h2> <p>A: You will most probably not get the right answer as the top-1 candidate easily. You have to manage the noise in the top_k outputs and try to extend the candidate possibilities to everything possible in the context, e.g.</p> <pre><code>from transformers import pipeline

pipe = pipeline(&quot;question-answering&quot;, model=&quot;google/flan-t5-small&quot;)

question = &quot;What is 42?&quot;
context = &quot;42 is the answer to life, the universe and everything&quot;

result = pipe({&quot;question&quot;: question, &quot;context&quot;: context},
              top_k=500, max_answer_len=len(context))

which_rank = [(i, r) for i, r in enumerate(result)
              if r['answer'].strip() == &quot;the answer to life, the universe and everything&quot;]
</code></pre> <p>[out]:</p> <pre><code>[(34, {'score': 0.008117909543216228, 'start': 5, 'end': 53, 'answer': ' the answer to life, the universe and everything'})]
</code></pre> <p>Voila, the right answer is ranked 34 out of 500!</p> <h3>Q: But that is meaningless, I want it to be the top answer. 
If not, how do I evaluate how good the model is?</h3> <p>A: Now, that is a good question. If you have a test set with the right answers and you are not getting them in the top-1 or even top-10 candidates, you might need to reconsider how you evaluate the model.</p> <p>Depending on the ultimate goal:</p> <ul> <li><p>If the goal is to improve the model until the correct answer rises to the top-1, then consider evaluating the model in terms of reciprocal rank, e.g. MRR or NDCG; see <a href="https://scikit-learn.org/stable/modules/model_evaluation.html#label-ranking-average-precision" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/model_evaluation.html#label-ranking-average-precision</a></p> </li> <li><p>If the goal is just to make sure that the model works off-the-shelf, then consider more customized solutions, e.g.</p> <ul> <li>pay more for better closed APIs that have a service-level agreement you can negotiate to get the right answers you want, or</li> <li>get a vendor or service to train/fine-tune the model for you, or</li> <li>try prompt engineering for &quot;one-shot&quot; or &quot;few-shot&quot; learning; if you search around these days, there is quite a lot of advice on how to go about that, e.g. <a href="https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/" rel="nofollow noreferrer">https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/</a></li> </ul> </li> <li><p>If the goal is ultimately a model that correctly answers any question you have in mind, and you have the compute, human and time resources, consider fine-tuning the model on relevant domain data, then re-evaluate the accuracy with something like <code>seqeval</code> or ROUGE scores.</p> </li> </ul>
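The rank-based metrics mentioned above can be sketched in a few lines of plain Python. This is an illustrative mean reciprocal rank (MRR) helper, not scikit-learn's implementation, and the function and variable names are my own:

```python
def reciprocal_rank(ranked_candidates, gold):
    """Return 1/rank of the first candidate equal to the gold answer, 0.0 if absent."""
    for rank, candidate in enumerate(ranked_candidates, start=1):
        if candidate.strip() == gold:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(all_rankings, golds):
    """Average the reciprocal ranks over (ranking, gold) pairs of a test set."""
    scores = [reciprocal_rank(r, g) for r, g in zip(all_rankings, golds)]
    return sum(scores) / len(scores)

# Two test items: gold at rank 1 (RR = 1.0) and gold at rank 2 (RR = 0.5).
print(mean_reciprocal_rank([["gold"], ["x", "gold"]], ["gold", "gold"]))  # 0.75
```

A model that consistently pushes the correct answer toward the top drives MRR toward 1.0, which makes it a natural target metric for the re-ranking scenario described above.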
64
T5 model
Cannot use SparkNLP pre-trained T5Transformer, executor fails with error &quot;No Operation named [encoder_input_ids] in the Graph&quot;
https://stackoverflow.com/questions/66207669/cannot-use-sparknlp-pre-trained-t5transformer-executor-fails-with-error-no-ope
<p>Downloaded T5-small model from SparkNLP website, and using this code (almost entirely from the examples):</p> <pre><code> import com.johnsnowlabs.nlp.SparkNLP import com.johnsnowlabs.nlp.annotators.seq2seq.T5Transformer import org.apache.spark.sql.SparkSession val spark = SparkSession.builder() .config(&quot;spark.serializer&quot;, &quot;org.apache.spark.serializer.KryoSerializer&quot;) .config(&quot;spark.kryoserializer.buffer.max&quot;, &quot;500M&quot;) .master(&quot;local&quot;).getOrCreate() SparkNLP.start() val testData = spark.createDataFrame(Seq( (1, &quot;Google has announced the release of a beta version of the popular TensorFlow machine learning library&quot;), (2, &quot;The Paris metro will soon enter the 21st century, ditching single-use paper tickets for rechargeable electronic cards.&quot;) )).toDF(&quot;id&quot;, &quot;text&quot;) val documentAssembler = new DocumentAssembler() .setInputCol(&quot;text&quot;) .setOutputCol(&quot;documents&quot;) val t5 = T5Transformer.load(&quot;/tmp/t5-small&quot;) .setTask(&quot;summarize:&quot;) .setInputCols(Array(&quot;documents&quot;)) .setOutputCol(&quot;summaries&quot;) new Pipeline().setStages(Array(documentAssembler, t5)) .fit(testData) .transform(testData) .select(&quot;summaries.result&quot;).show(truncate = false) </code></pre> <p>I get this error from the executor:</p> <pre><code>Caused by: java.lang.IllegalArgumentException: No Operation named [encoder_input_ids] in the Graph at org.tensorflow.Session$Runner.operationByName(Session.java:384) at org.tensorflow.Session$Runner.parseOutput(Session.java:398) at org.tensorflow.Session$Runner.feed(Session.java:132) at com.johnsnowlabs.ml.tensorflow.TensorflowT5.process(TensorflowT5.scala:76) </code></pre> <p>Initially run with Spark-2.3.0, but the issue also reproduced with spark-2.4.4. Other SparkNLP features work well, only this T5 model fails. 
The model on disk:</p> <pre><code>$ ll /tmp/t5-small
drwxr-xr-x@ 6 XXX XXX       192 Dec 25 12:36 metadata
-rw-r--r--@ 1 XXX XXX    791656 Dec 22 18:32 t5_spp
-rw-r--r--@ 1 XXX XXX 175686374 Dec 22 18:32 t5_tensorflow

$ cat /tmp/t5-small/metadata/part-00000
{&quot;class&quot;:&quot;com.johnsnowlabs.nlp.annotators.seq2seq.T5Transformer&quot;,&quot;timestamp&quot;:1608475002145,
 &quot;sparkVersion&quot;:&quot;2.4.4&quot;,&quot;uid&quot;:&quot;T5Transformer_1e0a16435680&quot;,&quot;paramMap&quot;:{},
 &quot;defaultParamMap&quot;:{&quot;task&quot;:&quot;&quot;,&quot;lazyAnnotator&quot;:false,&quot;maxOutputLength&quot;:200}}
</code></pre> <p>I'm new to SparkNLP, so I'm not sure if this is an actual issue or whether I'm doing something wrong. I will appreciate any help.</p>
<p>The offline model of T5 - <code>t5_base_en_2.7.1_2.4_1610133506835</code> - was trained on SparkNLP 2.7.1, and there was a <a href="https://github.com/JohnSnowLabs/spark-nlp/commit/3fead96ec9274ddf3b79fb853afa0d03da4ed393#diff-761d0a1c998acc94c5759dee89ee674e82f4b6e02f7a9b8c2f64dfd272482d2f" rel="nofollow noreferrer">breaking change</a> in 2.7.2.</p> <p>Solved by downloading the model again and re-saving it with the new library version:</p> <pre><code># dev: T5Transformer().pretrained(&quot;t5_small&quot;).save(...)
# prod: T5Transformer.load(path)
</code></pre>
65
T5 model
git push error:fatal: unable to access .....Port number ended with &#39;a&#39;
https://stackoverflow.com/questions/69604479/git-push-errorfatal-unable-to-access-port-number-ended-with-a
<p>I finetuned the t5 model and I want to upload it on my hugging face library. I have my directory, where I save tokenizer and model.</p> <pre><code>tokenizer.save_pretrained('my-t5-qa-legal') trained_model.model.save_pretrained('my-t5-qa-legal') </code></pre> <p>Here are files in my directory:</p> <pre><code>!ls config.json special_tokens_map.json tokenizer_config.json pytorch_model.bin spiece.model </code></pre> <p>I logged in my account with:</p> <pre><code>from huggingface_hub import notebook_login notebook_login() !sudo apt-get install git-lfs !transformers-cli repo create my-t5-qa-legal !git config --global user.email my email !git config --global user.name &quot;my_user_name&quot; !git init !git add . !git commit -m &quot;Initial commit&quot; !git remote add origin https://my_user_name:password@huggingface.co/my_user_name/my-t5-qa-legal !git push --set-upstream origin master </code></pre> <p>but this gives me the error:</p> <pre><code>fatal: unable to access .....Port number ended with 'a' </code></pre>
66
T5 model
TypeError: _get_dataset_for_single_task() got an unexpected keyword argument &#39;sequence_length&#39; #790
https://stackoverflow.com/questions/67027987/typeerror-get-dataset-for-single-task-got-an-unexpected-keyword-argument-se
<p>I got the following error in the evaluation of a <a href="https://github.com/google-research/text-to-text-transfer-transformer" rel="nofollow noreferrer">t5</a> model:</p> <pre><code> model.batch_size = train_batch_size * 4 model.eval( mixture_or_task_name=&quot;trivia_all&quot;, checkpoint_steps=-1 #&quot;all&quot; ) </code></pre> <pre><code> Traceback (most recent call last): File &quot;train.py&quot;, line 140, in &lt;module&gt; checkpoint_steps=-1 #&quot;all&quot; File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/t5/models/mtf_model.py&quot;, line 267, in eval self._model_dir, dataset_fn, summary_dir, checkpoint_steps) File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/mesh_tensorflow/transformer/utils.py&quot;, line 2025, in eval_model for d in decode(estimator, input_fn, vocabulary, checkpoint_path) File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/mesh_tensorflow/transformer/utils.py&quot;, line 2024, in &lt;listcomp&gt; d.decode(&quot;utf-8&quot;) if isinstance(d, bytes) else d File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/mesh_tensorflow/transformer/utils.py&quot;, line 1114, in decode for i, result in enumerate(result_iter): File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py&quot;, line 3132, in predict rendezvous.raise_errors() File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/error_handling.py&quot;, line 150, in raise_errors six.reraise(typ, value, traceback) File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/six.py&quot;, line 703, in reraise raise value File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py&quot;, line 3126, in predict yield_single_examples=yield_single_examples): File 
&quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py&quot;, line 611, in predict input_fn, ModeKeys.PREDICT) File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py&quot;, line 1007, in _get_features_from_input_fn result = self._call_input_fn(input_fn, mode) File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py&quot;, line 3041, in _call_input_fn return input_fn(**kwargs) File &quot;/home/pouramini/miniconda3/lib/python3.7/site-packages/mesh_tensorflow/transformer/utils.py&quot;, line 1182, in input_fn ds = dataset.dataset_fn(sequence_length=sequence_length) TypeError: _get_dataset_for_single_task() got an unexpected keyword argument 'sequence_length' </code></pre> <p>There is a similar issue but I didn't get the solution which is one line.</p> <p><a href="https://github.com/google-research/text-to-text-transfer-transformer/issues/631" rel="nofollow noreferrer">https://github.com/google-research/text-to-text-transfer-transformer/issues/631</a></p>
<p>I had installed <code>t5 v0.6.0</code>, which wasn't the newest version. When I installed v0.9.0, the problem was resolved.</p> <p><code>pip install t5==0.9.0</code></p>
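To catch this class of version mismatch up front, a script can assert the installed version at startup. A small sketch using only the standard library; the helper names are my own, and the naive numeric comparison assumes plain `X.Y.Z` version strings:

```python
from importlib.metadata import version, PackageNotFoundError

def version_tuple(v):
    """Turn a version string like '0.9.0' into (0, 9, 0) for comparison."""
    return tuple(int(part) for part in v.split(".")[:3])

def is_at_least(package, minimum):
    """True if `package` is installed at or above `minimum`; False if it is missing."""
    try:
        return version_tuple(version(package)) >= version_tuple(minimum)
    except PackageNotFoundError:
        return False

# e.g. guard the training script with:
# assert is_at_least("t5", "0.9.0"), "please `pip install t5==0.9.0` or newer"
```

This turns an obscure `TypeError` deep inside the library into an immediate, readable failure message.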
67
T5 model
why does huggingface t5 tokenizer ignore some of the whitespaces?
https://stackoverflow.com/questions/72214408/why-does-huggingface-t5-tokenizer-ignore-some-of-the-whitespaces
<p>I am using the T5 model and tokenizer for a downstream task. I want to add certain whitespace tokens to the tokenizer, like line ending <code>(\n)</code> and tab <code>(\t)</code>. Adding these tokens works, but somehow the tokenizer always ignores the second whitespace. So, it tokenizes the sequence <code>&quot;\n\n&quot;</code> as a single line ending, the sequence <code>&quot;\n\n\n\n&quot;</code> as two line endings, and so on. See below to reproduce.</p> <pre><code>from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained(&quot;t5-large&quot;)
tokenizer.add_tokens([&quot;\n&quot;])

tokenizer.encode(&quot;\n&quot;)       # returns [32100, 1] as expected
tokenizer.encode(&quot;\n\n&quot;)     # returns [32100, 1] but expected would be [32100, 32100, 1]
tokenizer.encode(&quot;\n\n\n\n&quot;) # returns [32100, 32100, 1] but expected would be [32100, 32100, 32100, 32100, 1]
</code></pre> <p>What is the reasoning behind this behaviour? Is it a bug or something related to how the tokenizer works? I noticed that this only happens for added whitespace tokens but not for other characters.</p> <p>Is there a way to prevent the tokenizer from ignoring the repeated whitespaces?</p>
<p>The behaviour is explained by how the <code>tokenize</code> method in <code>T5Tokenizer</code> strips tokens by default. What one can do is add the token '<code>\n</code>' as a special token to the tokenizer. Because special tokens are never separated, it works as expected.</p> <p>It is a bit hacky but seems to work.</p> <pre><code>from tokenizers import AddedToken

tokenizer.add_special_tokens({&quot;additional_special_tokens&quot;: [AddedToken(&quot;\n&quot;)]})
print(tokenizer.special_tokens_map)
</code></pre> <p>Then it tokenizes '<code>\n</code>' without skipping any occurrences. Note that <code>AddedToken</code> is important, because somehow the following does <strong>NOT</strong> work.</p> <pre><code>tokenizer.add_special_tokens({&quot;additional_special_tokens&quot;: [&quot;\n&quot;]})
</code></pre> <h1>Edit</h1> <p>After spending more time on it, I actually found a way to add it as a normal token without using special tokens. The main reason for the issue is the normalization process that happens behind the scenes even before the tokenization. When you add a new token, you can specify whether it should be normalized or not. By setting <code>normalized</code> to <code>False</code>, you prevent the tokenizer from stripping consecutive occurrences of the added token.</p> <pre><code>from tokenizers import AddedToken

tokenizer.add_tokens(AddedToken(&quot;\n&quot;, normalized=False))
</code></pre> <p>You can find more information at this link: <a href="https://huggingface.co/course/en/chapter6/4?fw=pt" rel="nofollow noreferrer">https://huggingface.co/course/en/chapter6/4?fw=pt</a></p>
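To build intuition for why the duplicates disappear, here is a rough, pure-Python stand-in for the whitespace-collapsing normalization step. This is not the actual SentencePiece/T5 normalizer, only an approximation of its effect on runs of whitespace:

```python
import re

def rough_whitespace_normalize(text):
    # Approximation: the normalizer that runs before tokenization collapses
    # runs of whitespace, so "\n\n\n\n" reaches the tokenizer as a single
    # separator unless the added token is registered with normalized=False
    # (or as a special token, which bypasses normalization entirely).
    return re.sub(r"\s+", " ", text)

print(repr(rough_whitespace_normalize("\n\n\n\n")))  # ' ' (a single space survives)
```

Once you see that the duplicates are gone before tokenization even starts, it is clear why no tokenizer-side setting other than disabling normalization for that token can bring them back.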
68
T5 model
How to use Flan-T5 with LoRA on multi gpu system?
https://stackoverflow.com/questions/79135659/how-to-use-flan-t5-with-lora-on-multi-gpu-system
<p>When using Hugging Face with the 'google/flan-t5-base' model, I get the error:</p> <pre><code>RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument index in method wrapper__index_select)
</code></pre> <p>I've found that in the transformers.py file in _wrap_model, the model becomes <code>model = nn.DataParallel(model)</code>, which seems OK because I have 3 GPUs on the machine. However, it seems like the data is not set up to work with this (hence the CUDA device error mentioned above).</p> <p>Is this a known issue?</p> <p>I'm using transformers library version 4.44.0.</p> <p>Here is a snippet of the code:</p> <pre><code>model_name = 'google/flan-t5-base'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
dataset = load_dataset(&quot;knkarthick/dialogsum&quot;)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize_function(example):
    start_prompt = 'Summarize the following conversation.\n\n'
    end_prompt = '\n\nSummary: '
    prompt = [start_prompt + dialogue + end_prompt for dialogue in example[&quot;dialogue&quot;]]
    example['input_ids'] = torch.tensor(tokenizer(prompt, padding=&quot;max_length&quot;, truncation=True, return_tensors=&quot;pt&quot;).input_ids)
    example['labels'] = torch.tensor(tokenizer(example[&quot;summary&quot;], padding=&quot;max_length&quot;, truncation=True, return_tensors=&quot;pt&quot;).input_ids)
    return example

tokenized_datasets = dataset.map(tokenize_function, batched=True)
tokenized_datasets = tokenized_datasets.remove_columns(['id', 'topic', 'dialogue', 'summary'])
tokenized_datasets = tokenized_datasets.filter(lambda example, index: index % 100 == 0, with_indices=True)

lora_config = LoraConfig(r=32, lora_alpha=32, target_modules=[&quot;q&quot;, &quot;v&quot;], lora_dropout=0.05, bias=&quot;none&quot;, task_type=TaskType.SEQ_2_SEQ_LM)  # FLAN-T5
peft_model = get_peft_model(model, lora_config)

peft_training_args = TrainingArguments(output_dir=&quot;./output&quot;, learning_rate=1e-3, num_train_epochs=1, logging_steps=1, max_steps=1)
peft_trainer = Trainer(model=peft_model, args=peft_training_args, train_dataset=tokenized_datasets[&quot;train&quot;])
peft_trainer.train()
</code></pre>
69
T5 model
Why does my fine-tuned T5-base model for a sequence-to-sequence task generate short, incomplete outputs?
https://stackoverflow.com/questions/78448914/why-did-my-fine-tuning-t5-base-model-for-a-sequence-to-sequence-task-has-short-i
<p>I am trying to fine-tune a <code>t5-base</code> model to create an appropriate question for a compliance item. Compliance items are paragraphs of text, and my questions are past-tense forms of them. I have trained the model, saved it, and loaded it back for future use cases.</p> <p>The problem is that when I try to use the model to create new questions for unknown statements, the response comes back incomplete.</p> <p>Code:</p> <pre><code>import pandas as pd
import torch
from datasets import Dataset
import transformers
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer, T5Tokenizer

df = pd.read_csv(r'/content/questionsgenerator.csv', encoding='unicode_escape')
df.head()

# Load pre-trained model and tokenizer
model_name = &quot;t5-base&quot;
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Define the training arguments
training_args = Seq2SeqTrainingArguments(
    output_dir=&quot;./output_dir&quot;,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    predict_with_generate=True,
    logging_steps=100,
    save_steps=5000,
    eval_steps=5000,
    num_train_epochs=3,
    learning_rate=1e-4,
    warmup_steps=1000,
    save_total_limit=3,
)

# Define the training dataset
train_dataset = Dataset.from_pandas(df.rename(columns={&quot;Compliance Item&quot;: &quot;input_text&quot;, &quot;Question&quot;: &quot;target_text&quot;}))

# Define the function to preprocess the dataset
def preprocess_function(examples):
    inputs = [f&quot;compliance item: {ci}&quot; for ci in examples[&quot;input_text&quot;]]
    targets = [f&quot;{question} &lt;/s&gt;&quot; for question in examples[&quot;target_text&quot;]]
    model_inputs = tokenizer(inputs, max_length=512, padding=&quot;max_length&quot;, truncation=True)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=32, padding=&quot;max_length&quot;, truncation=True)
    model_inputs[&quot;labels&quot;] = labels[&quot;input_ids&quot;]
    return model_inputs

# Preprocess the dataset
train_dataset = train_dataset.map(preprocess_function, batched=True)

# Define the trainer
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)

# Fine-tune the model on the dataset
trainer.train()

model.save_pretrained(&quot;./fine_tuned_model_question_generation&quot;)

tokenizer = T5Tokenizer.from_pretrained(&quot;t5-large&quot;)
model = transformers.AutoModelForSeq2SeqLM.from_pretrained(&quot;./fine_tuned_model_question_generation&quot;)

context = 'When the Installment Due Date falls on a non-business day, the Mortgagee must consider a Borrower’s Notice of Intent to Prepay or the receipt of the prepayment amount for a Mortgage closed before January 21, 2015 timely if received on the next business day.'
encoding = tokenizer.encode_plus(context, return_tensors=&quot;pt&quot;)
input_ids = encoding[&quot;input_ids&quot;]
attention_mask = encoding[&quot;attention_mask&quot;]
output = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=1000)
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
decoded_output
</code></pre> <p>Here the response is: <code>When the Installment Due Date fell on a non-business day, was the Borrower’s Notice of Intent to Prepay or the receipt of the prepayment amount for</code>, which is obviously incomplete.</p> <p>So my question is: what do I need to do to increase the output length?</p> <ol> <li>Should I increase the epochs?</li> <li>Or is there a better model for this task?</li> </ol> <p>Please help with this.</p>
<p>Because of:</p> <pre><code>labels = tokenizer(targets, max_length=32, padding=&quot;max_length&quot;, truncation=True)
</code></pre> <p>your model has most probably learnt to generate outputs that are only ~32 tokens long.</p> <p>Try:</p> <pre><code>labels = tokenizer(targets, max_length=512, padding=&quot;max_length&quot;, truncation=True)
</code></pre>
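Rather than guessing a value, you can also derive the label `max_length` from the length distribution of your targets. A rough sketch with a helper of my own; whitespace-split word counts are only a cheap proxy, real lengths come from the tokenizer:

```python
def pick_label_max_length(targets, percentile=0.95, headroom=1.3):
    """Suggest a max_length that covers `percentile` of the targets,
    with multiplicative headroom for subword splitting."""
    lengths = sorted(len(t.split()) for t in targets)
    index = min(int(len(lengths) * percentile), len(lengths) - 1)
    return int(lengths[index] * headroom)

# Targets of 5, 10 and 100 words: the 95th-percentile length is 100,
# and 1.3x headroom suggests max_length=130.
targets = [" ".join(["w"] * n) for n in (5, 10, 100)]
print(pick_label_max_length(targets))  # 130
```

Choosing the cutoff from data this way avoids silently teaching the model to stop after 32 tokens while still keeping padding waste bounded.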
70
T5 model
Formatting a numbered list into a cohesive prose paragraph using Hugging Face Inference API
https://stackoverflow.com/questions/79111263/formatting-a-numbered-list-into-a-cohesive-prose-paragraph-using-hugging-face-in
<p>I am playing with Hugging Face Inference API and I am trying to convert a numbered list into a cohesive prose paragraph. I have tried multiple models but I am not able to get things working.</p> <p>I have tried GPT-2, BLOOM and T5 models, but in each case I am failing to get the prose paragraph output that I am seeking.</p> <p><strong>GPT-2:</strong></p> <pre><code>curl -X POST https://api-inference.huggingface.co/models/gpt2 -H &quot;Authorization: Bearer hf_mykey&quot; -H &quot;Content-Type: application/json&quot; -d '{&quot;inputs&quot;: &quot;Please rewrite the following numbered list into a cohesive prose paragraph: 1. Wake up early 2. Eat a healthy breakfast 3. Exercise regularly 4. Stay organized 5. Get enough sleep&quot;}' [{&quot;generated_text&quot;:&quot;Please rewrite the following numbered list into a cohesive prose paragraph: 1. Wake up early 2. Eat a healthy breakfast 3. Exercise regularly 4. Stay organized 5. Get enough sleep to last five hours 26 30. Shoot for the top of their list 27 30. Go to a movies and events 32 32. Plan to watch another movie 33 35. Get a good apartment and a positive attitude much like living on the street 37 37. Find stuff to write about, and then set it up using whatever tools you have available 20 40. Don't listen to opinions of outliers (even just average folk)\n) 20 40. Expect endings often but will always leave wits about&quot;}] </code></pre> <p><strong>BLOOM:</strong></p> <pre><code>curl -X POST https://api-inference.huggingface.co/models/bigscience/bloom -H &quot;Authorization: Bearer hf_mykey&quot; -H &quot;Content-Type: application/json&quot; -d '{&quot;inputs&quot;: &quot;Please rewrite the following numbered list into a cohesive prose paragraph: 1. Wake up early 2. Eat a healthy breakfast 3. Exercise regularly 4. Stay organized 5. Get enough sleep&quot;}' [{&quot;generated_text&quot;:&quot;Please rewrite the following numbered list into a cohesive prose paragraph: 1. Wake up early 2. 
Eat a healthy breakfast 3. Exercise regularly 4. Stay organized 5. Get enough sleep 6. Take breaks 7. Take time to relax 8. Take time to socialize 9. Take&quot;}] </code></pre> <p><strong>T5:</strong></p> <pre><code>curl -X POST https://api-inference.huggingface.co/models/t5-small -H &quot;Authorization: Bearer hf_mykey&quot; -H &quot;Content-Type: application/json&quot; -d '{&quot;inputs&quot;: &quot;Please rewrite the following numbered list into a cohesive prose paragraph: 1. Wake up early 2. Eat a healthy breakfast 3. Exercise regularly 4. Stay organized 5. Get enough sleep&quot;}' {&quot;error&quot;:&quot;Model google-t5/t5-small is currently loading&quot;,&quot;estimated_time&quot;:20.0} </code></pre>
71
T5 model
Problem changing the batch size in my model
https://stackoverflow.com/questions/74652127/problem-for-change-batch-size-in-my-model
<p>When I train my model (a transformer whose input is features extracted from the T5 and ViT models), I have a problem setting the batch size to more than 2.</p> <pre><code>number of images for training: 25000
GPU: GTX 3090 (24 GB GPU RAM)
CPU: 24-core, multithreading
number of total parameters = 363M
seq_len = 512
max-step = 100000/2
iter = 100000
img: torch.Size([3, 384, 500])
tokens: torch.Size([512])
</code></pre> <p>I want to increase the batch size from 2 to 3, 4, ... but I can't. For example, when I set batch_size=4, I get a &quot;CUDA out of memory. Tried to allocate ...&quot; error (see the attached image). But when I decrease it back to 2, I don't get this error. What am I doing wrong? <a href="https://i.sstatic.net/uBKcH.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>The problem is as stated: you are running out of GPU memory. If you want to increase the batch size and you are using PyTorch Lightning, try half-precision training to consume less memory: <a href="https://pytorch-lightning.readthedocs.io/en/latest/common/precision_basic.html" rel="nofollow noreferrer">https://pytorch-lightning.readthedocs.io/en/latest/common/precision_basic.html</a></p>
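Besides half precision, gradient accumulation is another common option: keep the per-step micro-batch at the size that fits (here 2) and accumulate gradients over several steps before each optimizer update, so the optimizer sees a larger effective batch without the memory cost. The arithmetic is simple; the helper below is my own sketch, and in PyTorch Lightning this corresponds to the `accumulate_grad_batches` argument of `Trainer`:

```python
def accumulation_steps(effective_batch_size, micro_batch_size):
    """Number of gradient-accumulation steps so that
    micro_batch_size * steps == effective_batch_size."""
    if effective_batch_size % micro_batch_size != 0:
        raise ValueError("effective batch size must be a multiple of the micro-batch size")
    return effective_batch_size // micro_batch_size

# A micro-batch of 2 (which fits in memory) accumulated over 4 steps
# behaves like a batch of 8 for each optimizer update.
print(accumulation_steps(8, 2))  # 4
```

The memory footprint stays at the micro-batch level, while the gradient statistics approximate those of the larger batch.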
72
T5 model
How to run .py directly in a python package?
https://stackoverflow.com/questions/70421506/how-to-run-py-directly-in-a-python-package
<p>I am working on transformers, and I am interested in one of the models, so I would like to run its source code.</p> <p>For example, in the T5 model file, the following imports appear:</p> <pre><code>from ...activations import ACT2FN
from ...file_utils import (
    DUMMY_INPUTS,
    DUMMY_MASK,
    add_start_docstrings,
    add_start_docstrings_to_model_forward,
    is_torch_fx_proxy,
    replace_return_docstrings,
)
</code></pre> <p>activations.py and file_utils.py are two Python files in this transformers folder.</p> <p>But it always gives:</p> <pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
73
T5 model
fastchat-t5-3b-v1.0 gives truncated /incomplete answers
https://stackoverflow.com/questions/76692329/fastchat-t5-3b-v1-0-gives-truncated-incomplete-answers
<p>I have used following embeddings:</p> <ol> <li>sentence-transformers/all-mpnet-base-v2</li> <li>hkunlp/instructor-xl</li> </ol> <p>to get embedding</p> <pre><code>def getEmbedding(): device = &quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot; return HuggingFaceEmbeddings(model_name=&quot;sentence-transformers/all-mpnet-base-v2&quot;, model_kwargs={&quot;device&quot;: device}) </code></pre> <p>and tried with following LLMs:</p> <ol> <li>lmsys/fastchat-t5-3b-v1.0</li> <li>google/flan-t5-base</li> </ol> <p>to get LLM</p> <pre><code>def getLLM(): return pipeline( task=&quot;text2text-generation&quot;, model = &quot;lmsys/fastchat-t5-3b-v1.0&quot;, min_new_tokens=100, max_new_tokens=256, model_kwargs={&quot;device_map&quot;: &quot;auto&quot;, &quot;load_in_8bit&quot;: False, &quot;max_length&quot;: 512, &quot;temperature&quot;: 0.} ) # to get the text def get_pdf_text(pdf_path): text = &quot;&quot; documents = [] for pdf in pdf_path: with NamedTemporaryFile(delete=False, suffix='.pdf') as tmp: shutil.copyfileobj(pdf, tmp) tmp_path = Path(tmp.name) #print(tmp_path) loader = PyPDFLoader(str(tmp_path)) documents.extend(loader.load()) return documents # to split the document which we have gotten from the pdfs into tokens def get_text_chunks(documents): text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0) texts = text_splitter.split_documents(documents) text_splitter = TokenTextSplitter(chunk_size=100, chunk_overlap=10) # This the encoding for text-embedding-ada-002 texts = text_splitter.split_documents(texts) return texts # Creating Chroma vector DB and persisting it def vector_db_pdf(pdf_path): #if PDF is not present then load from persist directory else condition otherwise use pdf to generate persist vector DB if len(pdf_path)&gt;0: documents=get_pdf_text(pdf_path) texts =get_text_chunks(documents) vector_db=Chroma.from_documents(documents=texts, embedding=getEmbedding(), persist_directory=&quot;storage&quot;) vector_db.persist() else: 
#Use from persist vector_db=Chroma(persist_directory=&quot;storage&quot;, embedding_function=getEmbedding()) return vector_db def retreival_qa_chain(): llm=getLLM() vectordb=vector_db_pdf([]) hf_llm = HuggingFacePipeline(pipeline=llm,model_id=&quot;lmsys/fastchat-t5-3b-v1.0&quot;) retriever = vectordb.as_retriever(search_kwargs={&quot;k&quot;:3}) qa = RetrievalQA.from_chain_type(llm=hf_llm, chain_type=&quot;stuff&quot;,retriever=retriever) </code></pre> <p><a href="https://replit.com/join/lxaofshjga-kvmukilan" rel="nofollow noreferrer">full code here</a></p> <p>Some extra info:</p> <p>Input: a legal document of 8-10 pages. transformers==4.29.2, sentence-transformers==2.2.2, langchain==0.0.189, huggingface-hub==0.14.1.</p> <p>I have run the LLM over my PDF file and now I am asking questions related to it. But the output being generated is always truncated and stops in the middle; the model gives incomplete sentences.</p> <p>In the LLM pipeline I have tried parameters like <code>early_stopping=False</code>, setting <code>min_new_tokens</code>, and increasing <code>max_new_tokens</code>, but nothing seems to work. How do these parameters affect the length of the output?</p>
74
T5 model
Get accelerate package to log test results with huggingface Trainer
https://stackoverflow.com/questions/77758645/get-accelerate-package-to-log-test-results-with-huggingface-trainer
<p>I am fine-tuning a T5 model on a specific dataset and my code looks like this:</p> <pre class="lang-py prettyprint-override"><code>accelerator = Accelerator(log_with='wandb') tokenizer = T5Tokenizer.from_pretrained('t5-base') model = T5ForConditionalGeneration.from_pretrained('t5-base') accelerator.init_trackers( project_name='myProject', config={ # My configs } ) # Then I do some preparations towards the fine-tuning trainer_arguments = transformers.Seq2SeqTrainingArguments( # Here I pass many arguments ) trainer = transformers.Seq2SeqTrainer( # Here I pass the arguments along side other needed arguments ) # THEN FINALLY I TRAIN, EVALUATE AND TEST LIKE SO: trainer.train() trainer.evaluate( #evaluation parameters# ) trainer.predict( #test arguments# ) </code></pre> <p>Now my main issue, when I check the <code>wandb</code> site for my project, I only see logging for the <code>trainer.train()</code> phase but not the <code>trainer.evaluate()</code> or <code>trainer.predict()</code> phases.<br><br></p> <p>I've scoured the web trying to find a solution but could not find any.<br></p> <p>How do I get wandb/accelerate to log all of my phases? <br> Thanks!</p> <p>For the full code, you can see it here: <a href="https://github.com/zbambergerNLP/principled-pre-training/blob/master/fine_tune_t5.py" rel="nofollow noreferrer">https://github.com/zbambergerNLP/principled-pre-training/blob/master/fine_tune_t5.py</a></p>
<p>Unfortunately, evaluation and prediction metrics are not logged automatically to <code>wandb</code> the way training metrics are. But there are ways to push them to <code>wandb</code>.</p> <h4>Solution 01</h4> <p>You can log evaluation and prediction metrics manually after each phase:</p> <pre><code>import wandb

# After evaluation
eval_metrics = trainer.evaluate()
wandb.log({&quot;evaluation&quot;: eval_metrics})

# After prediction
predictions = trainer.predict(test_dataset)
wandb.log({&quot;predictions&quot;: predictions.metrics})
</code></pre> <h4>Solution 02</h4> <p>You can also set up a callback that will log your metrics automatically after evaluation and prediction:</p> <pre><code>import wandb
from transformers import TrainerCallback

class WandbLoggingCallback(TrainerCallback):
    def on_evaluate(self, args, state, control, **kwargs):
        # Log evaluation metrics
        metrics = kwargs.get(&quot;metrics&quot;, {})
        wandb.log({&quot;eval&quot;: metrics})

    def on_predict(self, args, state, control, **kwargs):
        # Log prediction metrics
        metrics = kwargs.get(&quot;metrics&quot;, {})
        wandb.log({&quot;predictions&quot;: metrics})

# Use this callback in your trainer
trainer = transformers.Seq2SeqTrainer(
    # Your arguments
    callbacks=[WandbLoggingCallback],
    # Other needed arguments
)
</code></pre> <p>Here's a <a href="https://colab.research.google.com/drive/1kYKcKjzZ47bUQP0mXkSurQCrva5SiiHU?usp=sharing" rel="nofollow noreferrer">simple Colab notebook</a> that I borrowed from <code>HuggingFace</code> and modified with ways to push evaluation and prediction metrics after training. You will find both the manual and the automatic approach there.</p>
75
T5 model
What is the issue when I tried to add a pre-tuned T5 model to the project from GitHub?
https://stackoverflow.com/questions/78298468/what-is-the-issue-when-i-tried-to-add-a-pretuned-t5-model-to-the-project-from-gi
<p>I git-cloned a <a href="https://github.com/farazkh80/SearchEngine" rel="nofollow noreferrer">project</a> and did all the installation, and the website launched. But I think the two files (<code>t5-base-full-seeded.ckpt</code> and <code>t5-small-full-seeded.ckpt</code>) are not being loaded, because the project is not showing any results when I try to summarize or search.<br /> Should I add the local path of the files in one of the source files (<code>app.py</code> or <code>finetuner.py</code>)? How do I resolve this, connect the ckpt files, and make the app function properly? I have downloaded the two <strong>ckpt</strong> files and added them to the root of the repository.</p> <p>I expect someone to rectify the above problem so that it works properly. It's kind of an emergency.</p>
76
T5 model
How many and which layers to Freeze while using Transfer Learning?
https://stackoverflow.com/questions/76827306/how-many-and-which-layers-to-freeze-while-using-transfer-learning
<p>I am fine-tuning the T5 model for a QA (NLP) task.</p> <p>I would like to use transfer learning to see if I can get better results.</p> <p>Initially, I used the model like:</p> <pre><code>class QAModel(pl.LightningModule): def __init__(self): super().__init__() self.model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True) </code></pre> <p>Now, I want to change it and freeze some layers.</p> <p>What is the best way to do it? Which layers should I freeze, and how many?</p> <p>Thanks a lot!</p>
77
T5 model
Conversational Bot with Flan-T5
https://stackoverflow.com/questions/75913490/conversational-bot-with-flan-t5
<p>I am building a chat bot using the flan-T5 model. The bot has a text window where one can give instructions like:</p> <ul> <li>Summarize this for me &quot;big text goes here&quot;</li> </ul> <p>Or, one might dump the text first in the chat window and then say</p> <ul> <li>Summarize the above text (or something similar to that)</li> </ul> <p>Or, one might dump a bunch of domain-specific facts in the chat window and then ask questions about those.</p> <p><strong>Question:</strong></p> <ol> <li>How can I form the context data for the bot so it has knowledge of whatever info was passed to it before, if it is asked to summarize something from earlier or answer questions about text that was passed before?</li> <li>How can I create a prompt which detects whether the intent is to <code>ASK QUESTION</code> or <code>CREATE SUMMARY</code> or just <code>INFO ADDITION</code> (in case we are just feeding info to use for asking questions or creating a summary later)?</li> </ol>
78
T5 model
ModuleNotFoundError: No module named &#39;gin.tf&#39;; pip install gin-config==0.1.1 does not work
https://stackoverflow.com/questions/75971192/modulenotfounderror-no-module-named-gin-tf-pip-install-gin-config-0-1-1-not
<p>I downloaded the T5 model .py files (text-to-text-transfer-transformer) from the official website.</p> <p>When I run <code>mtf_model.py</code>, I found that the line:</p> <pre><code>import gin.tf </code></pre> <p>introduces the error: ModuleNotFoundError: No module named 'gin.tf'</p> <p>but the line:</p> <pre><code>import gin </code></pre> <p>doesn't.</p> <p>My computer is macOS. When I press command+click, I can enter the gin.tf file; when I click, I enter the <code>tf/__init__</code> file.</p> <p>I thought it might be related to how I have gin installed, but I can't import gin.tf. It is strange, and I tried &quot;pip install gin-config==0.1.1&quot; (<a href="https://stackoverflow.com/questions/56298451/modulenotfounderror-no-module-named-gin">ModuleNotFoundError: No module named &#39;gin&#39;</a>), but it was useless.</p> <p>Does anyone have a solution? Thanks in advance.</p> <p>Additional info: when I run mtf_model.py (which is inside 'text-to-text/t5/models/mtf_model.py'), the error 'ModuleNotFoundError: No module named 'gin.tf'' appears. But when I run main.py (which is inside 'text-to-text/main.py', and which I created myself), there is no error. The two files both contain the line &quot;import gin.tf&quot;.</p> <p>The source file: <a href="https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/mtf_model.py" rel="nofollow noreferrer">https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/mtf_model.py</a></p> <p>It is part of a package, but there is no requirements.txt file.</p>
<p><a href="https://pypi.org/project/gin-config/" rel="nofollow noreferrer"><code>gin-config</code></a> is now at version 0.5.0. At the command line, enter</p> <pre class="lang-none prettyprint-override"><code>pip install -U gin-config </code></pre> <p>to update it to the most recent version, and you should be all set.</p> <hr /> <p>The file you are running, <a href="https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/mtf_model.py" rel="nofollow noreferrer"><code>mtf_model.py</code></a>, is intended to be imported from <code>t5.models</code> <em>after</em> <code>t5</code> has been installed via <code>pip</code>. It is not intended to be run directly. The problem is that there is a <code>gin</code> directory inside <code>t5/models</code> with an <code>__init__.py</code> in it, but it does not export a module called <code>tf</code>, nor is there a <code>tf.py</code> file within that directory. When you try to run <code>mtf_model.py</code> directly from that folder, it is trying to import <code>gin.tf</code> locally, not from the installed <code>gin-config</code> module.</p> <p>Once you install <code>t5</code> properly, you'll be able to write your own program (<strong>not</strong> in the installed <code>t5</code> directory) where you can have</p> <pre><code>import gin import gin.tf from t5.models.mtf_model import MtfModel </code></pre> <p>and it <em>should</em> all work.</p>
79
T5 model
A prepended *Paraphrase* is showing all the time after running the code instead of the actual paraphrased sentence
https://stackoverflow.com/questions/78673465/a-prepand-paraphrase-is-showig-all-the-time-after-runnig-the-code-instead-of-a
<p>I am creating an URDU TEXT PARAPHRASING tool for my semester.</p> <p>I have used the T5 model and fine-tuned it.</p> <p>Now when I'm running this code:</p> <pre><code>import torch from transformers import T5ForConditionalGeneration, T5Tokenizer # Define your model and tokenizer model_name = &quot;t5-small&quot; model = T5ForConditionalGeneration.from_pretrained(model_name) tokenizer = T5Tokenizer.from_pretrained(model_name) def paraphrase_urdu_sentence(sentence, model, tokenizer): try: # Prepend &quot;paraphrase: &quot; to sentence input_text = &quot;paraphrase: &quot; + sentence # Tokenize input text input_ids = tokenizer.encode(input_text, return_tensors=&quot;pt&quot;, max_length=128, truncation=True) # Generate paraphrased text generated_ids = model.generate(input_ids, max_length=128, num_beams=4, early_stopping=True) # Decode the generated paraphrased text paraphrased_sentence = tokenizer.decode(generated_ids[0], skip_special_tokens=True) return paraphrased_sentence except Exception as e: print(f&quot;Error occurred during paraphrasing: {e}&quot;) return None def main(): input_sentence = input(&quot;Insert Your Text in Urdu: &quot;) paraphrased_sentence = paraphrase_urdu_sentence(input_sentence, model, tokenizer) if paraphrased_sentence: print(&quot;Original sentence:&quot;, input_sentence) print(&quot;Paraphrased sentence:&quot;, paraphrased_sentence) else: print(&quot;Failed to paraphrase the sentence.&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The output should be the paraphrased version of whatever Urdu sentence I insert. But it's giving me this result:</p> <pre class="lang-none prettyprint-override"><code>Insert Your Text in Urdu: آپ کو لگتا ہے کہ آپ کہاں جا رہے ہیں۔ Original sentence: آپ کو لگتا ہے کہ آپ کہاں جا رہے ہیں۔ Paraphrased sentence: Paraphrase: </code></pre> <p>How do I get the paraphrased sentence? Please help, it's urgent.</p> <p>I have tried all possible solutions suggested by ChatGPT; nothing works.</p>
80
T5 model
Are there any potential issues training a T5-small from scratch on a task with very limited vocabulary?
https://stackoverflow.com/questions/78949531/are-there-any-potential-issues-training-a-t5-small-from-scratch-on-a-task-with-v
<p>Suppose you would like to train a sequence-to-sequence model like T5-small <strong>from scratch</strong> on a task where the vocabulary is quite limited compared to the tokenizer of T5, which was trained on a much larger vocabulary.</p> <p>For instance, the data has the following format:</p> <pre><code>Can you please add A and B? e.g. Can you please add 45 and 56? Can you please add 87 and 34? </code></pre> <p><code>A</code> and <code>B</code> are just placeholders for integer numbers.</p> <p>In contrast, the tokenizer of T5 was trained to represent a vocabulary of roughly 32-50K tokens.</p> <p>What considerations and issues should be taken into account, given that only a few tokens in the data change every time?</p> <p>Basically, only the tokens <code>A</code> and <code>B</code> change every time.</p> <p>Is that still possible?</p>
81
T5 model
ImportError: attempted relative import with no known parent package in ONNX Library
https://stackoverflow.com/questions/66238009/importerror-attempted-relative-import-with-no-known-parent-package-in-onnx-libr
<p>I am converting the T5 model from PyTorch to ONNX using <a href="https://github.com/onnx/models/blob/master/text/machine_comprehension/t5/dependencies/T5-export.py" rel="nofollow noreferrer">this</a> script, but I am running into an ImportError. I don't think this is a structural problem, as this script has been used before by other people with no issues. Any ideas as to why I might be getting this error?</p> <p>Here is the full error:</p> <pre><code>Traceback (most recent call last): File &quot;c:GitHub\models\text\machine_comprehension\t5\dependencies\T5-export.py&quot;, line 2, in &lt;module&gt; from .models import CombinedDecoder, SimplifiedT5Encoder ImportError: attempted relative import with no known parent package </code></pre>
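For context (this note is an addition, not part of the original post): `from .models import ...` is a relative import, and relative imports only resolve when the file is imported as a module inside a package — executing the same file directly as a script gives it no parent package, which produces exactly the error above. A minimal stdlib-only sketch, with made-up `mypkg`/`models` names:

```python
import os
import sys
import tempfile

# Build a throwaway package on disk:
#   mypkg/__init__.py, mypkg/models.py, and mypkg/export.py,
# where export.py uses a relative import, just like T5-export.py does.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "models.py"), "w") as f:
    f.write("ANSWER = 42\n")
with open(os.path.join(pkg, "export.py"), "w") as f:
    f.write("from .models import ANSWER\n")

sys.path.insert(0, root)

# Imported as part of its package, the relative import resolves fine:
from mypkg.export import ANSWER
print(ANSWER)  # 42
```

Running `python mypkg/export.py` directly would instead raise `ImportError: attempted relative import with no known parent package`, so the usual fixes are to run the script as a module from the repository root (with `python -m`) or to change the relative import to an absolute one.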
82
T5 model
Your fast tokenizer does not have the necessary information to save the vocabulary for a slow tokenizer
https://stackoverflow.com/questions/74529986/your-fast-tokenizer-does-not-have-the-necessary-information-to-save-the-vocabula
<p>I'm trying to fine tune a t5 model for paraphrasing Farsi sentences. I'm using <a href="https://huggingface.co/erfan226/persian-t5-paraphraser" rel="nofollow noreferrer">this</a> model as my base. My dataset is a paired sentence dataset which each row is a pair of paraphrased sentences. I want to fine tune the model on this dataset. The problem is after each epoch I want to save the vocabulary and save the pretrained in order to use them later. However I get this error:</p> <pre><code>ValueError: Your fast tokenizer does not have the necessary information to save the vocabulary for a slow tokenizer. </code></pre> <p>When I tried my code on the t5-base model, it worked fine. But this model didn't work.</p> <p>I have searched in google for this problem, but I haven't got any related answers. Here is my code:</p> <pre><code>!pip install pytorch_lightning==1.7.7 !pip install transformers !pip install sentencepiece </code></pre> <pre><code>import os import pytorch_lightning as pl from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import json from tqdm import tqdm import torch from torch.utils.data import TensorDataset, random_split from transformers.optimization import AdamW from pytorch_lightning.callbacks import Callback </code></pre> <pre><code>save_path = './Models/paraphrase' !mkdir -p $save_path </code></pre> <pre><code>class ParaphraseGenerator(pl.LightningModule): def __init__(self): super().__init__() model_name = 'erfan226/persian-t5-paraphraser' self.tokenizer = AutoTokenizer.from_pretrained(model_name) self.model = AutoModelForSeq2SeqLM.from_pretrained(model_name) self.batch_size = 16 self.lr = 4e-5 def encode_text(self, data_path): with open(data_path, 'r', encoding='utf-8') as r: data = json.load(r) for item in tqdm(data): # tokenizing original and paraphrase: source = self.tokenizer( item['sentence_1'], max_length=80, truncation=True, padding='max_length', return_tensors='pt') target = self.tokenizer( item['sentence_2'], max_length=200, 
truncation=True, padding='max_length', return_tensors='pt') yield source['input_ids'], target['input_ids'] def to_tensor(self, source_ids, target_ids): source_ids = torch.cat(source_ids, dim=0) target_ids = torch.cat(target_ids, dim=0) data = TensorDataset(source_ids, target_ids) return random_split(data, [len(data), 0])[0] def prepare_data(self): train_path = &quot;./train_dataset.json&quot; test_path = &quot;./test_dataset.json&quot; source_ids, target_ids = list( zip(*tuple(self.encode_text(train_path)))) self.train_ds = self.to_tensor(source_ids, target_ids) source_ids, target_ids = list( zip(*tuple(self.encode_text(test_path)))) self.test_ds = self.to_tensor(source_ids, target_ids) def forward(self, batch, batch_idx): source_ids, target_ids = batch[:2] return self.model(input_ids=source_ids, labels=target_ids) def training_step(self, batch, batch_idx): loss = self(batch, batch_idx)[0] self.log('train_loss', loss) return loss def validation_step(self, batch, batch_idx): loss = self(batch, batch_idx)[0] self.log('val_loss', loss) def train_dataloader(self): return torch.utils.data.DataLoader(self.train_ds, batch_size=self.batch_size, drop_last=True, shuffle=True, num_workers=0) def val_dataloader(self): return torch.utils.data.DataLoader(self.test_ds, batch_size=self.batch_size, drop_last=False, shuffle=False, num_workers=0) def configure_optimizers(self): return AdamW(self.parameters(), lr=self.lr, weight_decay=0.01) </code></pre> <pre><code>class SaveCallback(Callback): def on_epoch_start(self, trainer, pl_module): if pl_module.current_epoch &gt; 0: current_epoch = str(pl_module.current_epoch) fn = f'epoch_{current_epoch}' new_path = f&quot;{save_path}/{fn}/&quot; if fn not in os.listdir(save_path): os.mkdir(new_path) pl_module.tokenizer.save_vocabulary(new_path) pl_module.model.save_pretrained(new_path) </code></pre> <pre><code>trainer = pl.Trainer( default_root_dir='logs', min_epochs=4, gpus=-1, max_epochs=5, val_check_interval=0.5, 
callbacks=[SaveCallback()], logger=pl.loggers.TensorBoardLogger('logs/', name='paraphrase', version=0) ) para_model = ParaphraseGenerator() trainer.fit(para_model) </code></pre> <p>The error occurs in the SaveCallback function, when I call save_vocabulary. Also, I'm using Google Colab to run this code.</p>
83
T5 model
Using Google&#39;s T5 for translation from German to English
https://stackoverflow.com/questions/66797042/using-googles-t5-for-translation-from-german-to-english
<p>I am trying to use Google's T5 for language translation. However, it is not working for German to English.</p> <p>English to German works fine:</p> <pre><code>self.tokenizer = AutoTokenizer.from_pretrained(&quot;t5-small&quot;) self.model = AutoModelForSeq2SeqLM.from_pretrained(&quot;t5-small&quot;) inputs = self.tokenizer.encode(&quot;translate English to German: &quot; + text, return_tensors=&quot;pt&quot;, max_length=512, truncation=True) summary_ids = self.model.generate(inputs, max_length=512, min_length=5, length_penalty=5., num_beams=2) summary = self.tokenizer.decode(summary_ids[0]) </code></pre> <p>However, changing encoding to &quot;German to English&quot; is not working.</p> <p>Is this model not intended to be able to translate German to English, or am I using it wrong?</p>
<p>Probably very late to the party, but I just printed out model.config as @acivic2nv suggested. It seems like the Google T5 model only supports translations from English to other languages, specifically: French, German, and Romanian.</p> <p>In case you are still looking for a model that is able to translate from German to English, try using the Helsinki-NLP/opus-mt-de-en! I have had some encouraging results with it.</p> <p>Here is a quick working example in case it helps:</p> <pre><code>from transformers import pipeline translator = pipeline(&quot;translation&quot;, model=&quot;Helsinki-NLP/opus-mt-de-en&quot;) text1 = &quot;Dies ist ein Beispieltext.&quot; text2 = &quot;Das ist ein weiterer Beispieltext.&quot; text = [text1, text2] translated = translator(text) print(translated) </code></pre>
84
T5 model
Undefined symbol error when trying to load Huggingface&#39;s T5
https://stackoverflow.com/questions/75597709/undefined-symbol-error-when-trying-to-load-huggingfaces-t5
<h2>Issue</h2> <p>I tried to load T5 models from the Huggingface <code>transformers</code> library in python as follows</p> <pre><code>import pytorch import transformers from transformers import AutoModelForSeq2SeqLM plm = AutoModelForSeq2SeqLM.from_pretrained('t5-small') </code></pre> <p>The <code>AutoModel</code> line results in an error:</p> <pre><code>File &quot;main.py&quot;, line 64, in main plm = AutoModelForSeq2SeqLM.from_pretrained(args.checkpoint) File &quot;/home/abr247/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py&quot;, line 463, in from_pretrained return model_class.from_pretrained( File &quot;/home/abr247/.local/lib/python3.8/site-packages/transformers/modeling_utils.py&quot;, line 2351, in from_pretrained model = cls(config, *model_args, **model_kwargs) File &quot;/home/abr247/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py&quot;, line 1499, in __init__ self.encoder = T5Stack(encoder_config, self.shared) File &quot;/home/abr247/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py&quot;, line 861, in __init__ [T5Block(config, has_relative_attention_bias=bool(i == 0)) for i in range(config.num_layers)] File &quot;/home/abr247/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py&quot;, line 861, in &lt;listcomp&gt; [T5Block(config, has_relative_attention_bias=bool(i == 0)) for i in range(config.num_layers)] File &quot;/home/abr247/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py&quot;, line 646, in __init__ self.layer.append(T5LayerSelfAttention(config, has_relative_attention_bias=has_relative_attention_bias)) File &quot;/home/abr247/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py&quot;, line 577, in __init__ self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon) File &quot;/home/abr247/.local/lib/python3.8/site-packages/apex/normalization/fused_layer_norm.py&quot;, line 364, in __init__ 
fused_layer_norm_cuda = importlib.import_module(&quot;fused_layer_norm_cuda&quot;) File &quot;/usr/lib/python3.8/importlib/__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1014, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 991, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 975, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 657, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 556, in module_from_spec File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1166, in create_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 219, in _call_with_frames_removed ImportError: /usr/local/lib/python3.8/dist-packages/fused_layer_norm_cuda.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN8pybind116detail11type_casterIN3c108ArrayRefIlEEvE4loadENS_6handleEb </code></pre> <p>I am able to minimally reproduce this error with <code>import fused_layer_norm_cuda</code>, which yields the error</p> <pre><code>Traceback (most recent call last): File &quot;main.py&quot;, line 3, in &lt;module&gt; import fused_layer_norm_cuda ImportError: /usr/local/lib/python3.8/dist-packages/fused_layer_norm_cuda.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN8pybind116detail11type_casterIN3c108ArrayRefIlEEvE4loadENS_6handleEb </code></pre> <h2>Some details</h2> <ul> <li>OS: Debian (on a cluster I don't have admin privileges on)</li> <li>I'm using a Singularity <ul> <li>provided by NVIDIA (<a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-12.html#rel-22-12" rel="nofollow noreferrer">https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-12.html#rel-22-12</a>)</li> <li>bootstrapped from docker container</li> <li>python 3.8</li> <li>CUDA 11.8</li> <li>pytorch 
1.12.1+cu102</li> </ul> </li> </ul> <h2>My attempts</h2> <p>I searched for this issue, and found <a href="https://stackoverflow.com/questions/67117097/c-cpython-38-x86-64-linux-gnu-so-undefined-symbol-zn6caffe28typemeta21-typem">this</a> similar error, but not about <code>fused_layer_norm_cuda</code>; the <a href="https://github.com/facebookresearch/fairseq/issues/4246" rel="nofollow noreferrer">same</a> error, but while using <code>fairseq</code>, and the answers were not helpful to me; and the <a href="https://github.com/NVIDIA/apex/issues/1533" rel="nofollow noreferrer">exact same issue</a> asked on the NVIDIA/Apex github issues section, but no response was given. ChatGPT suggested I had incompatible Apex.</p> <p>I tried installing pytorch compiled for a more recent CUDA and installing an up-to-date Apex, and neither solution worked. Here are the commands I used:</p> <pre><code>singularity exec --nv $container pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio -f https://download.pytorch.org/whl/torch_stable.html </code></pre> <pre><code>singularity exec --nv $container pip install git+https://github.com/NVIDIA/apex.git </code></pre> <p><strong>Does anyone have any suggestions for what the issue/solution could be?</strong></p>
<p>I had a similar problem and I found that running <code>pip uninstall apex</code> to remove the apex package solved my problem.</p> <p>More precisely, I had the exact same problem as with <code>fairseq</code>, but the solution proposed there did not work. When I compared with Colab, where the code was running, <code>apex</code> was not installed, so I assumed it was not necessary for my use.</p>
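(A side note added here, not part of the original answer:) if you want to confirm whether `apex` is actually importable in your environment before or after uninstalling it, a stdlib-only check works — nothing below is apex-specific, any top-level package name can be probed:

```python
import importlib.util

def has_module(name: str) -> bool:
    # find_spec returns None when no installed module matches the name,
    # without actually importing (and executing) the module.
    return importlib.util.find_spec(name) is not None

print(has_module("apex"))  # False once `pip uninstall apex` has succeeded
```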
85
T5 model
Fine tuning T5 not converging
https://stackoverflow.com/questions/76932312/fine-tuning-t5-not-converging
<p>I am new to this world of transformers and NLP, and I am having a problem when fine-tuning T5 for my specific use case.</p> <p>What I want to achieve is that the model receives an input text and outputs a JSON (as a string) of the relevant information in the text.</p> <p>There are 3 formats in which the model can respond; below are some examples:</p> <pre><code>Input: Hey, can you give one hundred dollars to John? Expected Output: '{&quot;action&quot;: &quot;T&quot;, &quot;data&quot;: {&quot;name&quot;: &quot;John&quot;, &quot;amount&quot;: 100, &quot;currency&quot;: &quot;USD&quot;}}' Input: I want to add Benjamin Franklin to my contacts. He has an account on citibank, with number 412389124. Expected Output: '{&quot;action&quot;: &quot;A&quot;, &quot;data&quot;: {&quot;name&quot;: &quot;Benjamin Franklin&quot;, &quot;account_no&quot;: 412389124, &quot;entity&quot;: &quot;Citibank&quot;, &quot;id_num&quot;: null}}' Input: Hey, what's the weather gonna be tonight? Expected Output: '{&quot;accion&quot;: &quot;N&quot;, &quot;datos&quot;: {}}' </code></pre> <p>I've built a Python script to generate the inputs and labels as random as possible.
With that python script, I generated 20000 data points (I can generate less or more of that).</p> <p>Using T5 as my base model, I've trained it using the trainer from pytorch.</p> <p>Below is my code:</p> <pre><code>model_name_huggingface = &quot;google/t5-base&quot; tokenizer = T5Tokenizer.from_pretrained(model_name_huggingface) model = T5ForConditionalGeneration.from_pretrained(model_name_huggingface) </code></pre> <p>Then, after I tokenize my dataset.</p> <pre><code>batch_size = 16 training_args = Seq2SeqTrainingArguments( output_dir=&quot;models/chimi-mt5-base&quot;, evaluation_strategy=&quot;steps&quot;, eval_steps=100, logging_strategy=&quot;steps&quot;, logging_steps=100, save_strategy=&quot;steps&quot;, save_steps=200, # learning_rate=1e-4, optim=&quot;adafactor&quot;, learning_rate=5e-4, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_with_generate=True, weight_decay=0.05, save_total_limit=3, num_train_epochs=2, metric_for_best_model=&quot;exact_match&quot;, # greater_is_better=False, load_best_model_at_end=True ) </code></pre> <pre><code>data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=base_model) cer = evaluate.load(&quot;cer&quot;, module_type=&quot;metric&quot;) exact_match = evaluate.load(&quot;exact_match&quot;, module_type=&quot;metric&quot;) </code></pre> <pre><code>import numpy as np def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. 
labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) result = {} # Compute CER result[&quot;cer&quot;] = cer.compute(predictions=decoded_preds, references=decoded_labels) # Compute Exact Match exact_match_res = exact_match.compute(predictions=decoded_preds, references=decoded_labels, ignore_case=True) result[&quot;exact_match&quot;] = exact_match_res[&quot;exact_match&quot;] return {k: round(v, 4) for k, v in result.items()} </code></pre> <pre><code>trainer = Seq2SeqTrainer( model=base_model, args=training_args, train_dataset=tokenized_chimi_dataset[&quot;train&quot;], eval_dataset=tokenized_chimi_dataset[&quot;validation&quot;], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) </code></pre> <pre><code>result = trainer.train() </code></pre> <p>That's the current code I am using to fine tune T5.</p> <p>The training loss goes down up to 0.054, and never improves. The validation loss goes down up 0.034, and never improves. The CER metric goes down up to 0.4875 and never improves after that. But, just to let you know, after the first 100 steps, it already has a CER of 0.583. The Exact Match Metric goes up to 0.3089, and that already happens after the 600th step.</p> <p>By testing, I see that it responds in the correct JSON format, and the action is responded correctly normally. But then, the data inside the JSON is not often correct.</p> <p>What can I do to improve this? I am stuck on this for a long time, and I am not really sure how to proceed. Any help is appreciated.</p> <p>Thanks in advance!</p> <p>I tried balancing my dataset, and tuning the hyperparameters, but it still didn't result in any relevant ups in performance.</p>
<p>It's been almost a year - did you solve this?</p> <p>I see two possible issues with your approach.</p> <p>(1) Trying to model randomness(?) You said:</p> <blockquote> <p>I've built a Python script to generate the inputs and labels as random as possible.</p> </blockquote> <p>If I understand correctly, you are creating synthetic data that is totally random. (Is that correct?) That won't work.</p> <p>(2) Trying to output valid JSON. You said:</p> <blockquote> <p>Expected Output:</p> </blockquote> <pre class="lang-json prettyprint-override"><code>'{&quot;action&quot;: &quot;T&quot;, &quot;data&quot;: {&quot;name&quot;: &quot;John&quot;, &quot;amount&quot;: 100, &quot;currency&quot;: &quot;USD&quot;}}' </code></pre> <p>As far as I know, curly braces {} are not in the T5 tokenizer vocabulary; see: <a href="https://github.com/huggingface/transformers/issues/21836" rel="nofollow noreferrer">https://github.com/huggingface/transformers/issues/21836</a></p>
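One possible workaround (a sketch added here — not something the linked issue prescribes): map the braces to placeholder strings before fine-tuning and map them back after generation. The `<lcb>`/`<rcb>` names below are invented for illustration; with a real tokenizer they would also need to be registered, e.g. via its `add_tokens` method:

```python
# Invented placeholder strings standing in for tokens the vocabulary does know.
BRACE_TO_TOKEN = {"{": " <lcb> ", "}": " <rcb> "}

def escape_braces(text: str) -> str:
    # Applied to training targets before tokenization.
    for brace, token in BRACE_TO_TOKEN.items():
        text = text.replace(brace, token)
    return text

def unescape_braces(text: str) -> str:
    # Applied to the model's generated text to restore real JSON.
    text = text.replace("<lcb>", "{").replace("<rcb>", "}")
    # Collapse the spacing the placeholders introduced (this also normalizes
    # runs of spaces, which is fine for machine-readable JSON targets).
    text = " ".join(text.split())
    return text.replace("{ ", "{").replace(" }", "}")

target = '{"action": "T", "data": {"name": "John", "amount": 100}}'
escaped = escape_braces(target)
print("{" in escaped, "}" in escaped)  # False False
print(unescape_braces(escaped) == target)  # True
```

This way the model never has to emit a character its vocabulary cannot represent, and the JSON is rebuilt deterministically afterwards.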
86
T5 model
Semantic searching using Google flan-t5
https://stackoverflow.com/questions/75673222/semantic-searching-using-google-flan-t5
<p>I'm trying to use google flan t5-large to create embeddings for a simple semantic search engine. However, the generated embeddings cosine similarity with my query is very off. Is there something I'm doing wrong?</p> <pre class="lang-py prettyprint-override"><code>import torch from transformers import AutoTokenizer, AutoModel import torch from sklearn.metrics.pairwise import cosine_similarity from scipy.spatial.distance import euclidean tokenizer = AutoTokenizer.from_pretrained('google/flan-t5-large') model = AutoModel.from_pretrained('google/flan-t5-large') # Set the text to encode def emebddings_generate(text): all_embeddings = [] for i in text: input_ids = tokenizer.encode(i, return_tensors='pt') with torch.no_grad(): embeddings = model(input_ids, decoder_input_ids=input_ids).last_hidden_state.mean(dim=1) all_embeddings.append((embeddings,i)) return all_embeddings def run_query(query,corpus): input_ids = tokenizer.encode(query, return_tensors='pt') with torch.no_grad(): quer_emebedding=model(input_ids,decoder_input_ids=input_ids).last_hidden_state.mean(dim=1) similairtiy = [] for embeds in corpus: sim = euclidean(embeds[0].flatten(),quer_emebedding.flatten()) similairtiy.append((embeds[1],float(sim))) return similairtiy text = ['some sad song', ' a very happy song'] corpus = emebddings_generate(text) query = &quot;I'm feeling so sad rn&quot; similairtiy = run_query( query,corpus) for i in similairtiy: print(i) print(i[1],i[0]) </code></pre> <p>I've tried different pooling techniques as well as using other distance metrics.</p>
<p>The problem you face here is that you assume that FLAN's sentence embeddings are suited for similarity metrics, but that isn't the case. Jacob Devlin <a href="https://github.com/google-research/bert/issues/164#issuecomment-441324222" rel="noreferrer">wrote</a> once regarding BERT:</p> <blockquote> <p>I'm not sure what these vectors are, since BERT does not generate meaningful sentence vectors.</p> </blockquote> <p>But that isn't an issue, because <a href="https://arxiv.org/abs/2109.01652" rel="noreferrer">FLAN</a> is intended for other use cases. It was trained on different datasets with a suitable instruction prompt for that task to allow zero-shot prompting (i.e. performing tasks the model hasn't seen been trained on). That means you could perform your similarity task by formulating a proper prompt without any training. For example:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model_id = &quot;google/flan-t5-large&quot; tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForSeq2SeqLM.from_pretrained(model_id) prompt = &quot;&quot;&quot;Which song fits the query. QUERY: I'm feeling so sad rn OPTIONS -some sad song -a very happy song&quot;&quot;&quot; input_ids = tokenizer(prompt, return_tensors=&quot;pt&quot;).input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) </code></pre> <p>Output:</p> <pre><code>some sad song </code></pre> <p>Depending on your use case you might face issues when the number of options increases or when you want to work with the sentence embeddings. If this is the case, you should have a look at <a href="https://www.sbert.net/examples/applications/semantic-search/README.html" rel="noreferrer">sentence-transformers</a>. These are transformers that were trained to produce meaningful sentence embeddings and can therefore be used to calculate the cosine similarity of two sentences.</p>
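A side note on the sentence-transformers route (an addition, not part of the original answer): once you have meaningful embeddings, the retrieval step itself is just cosine similarity over vectors. A stdlib-only illustration of that ranking step, where the 3-d vectors are toy values standing in for real sentence embeddings:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- in practice these come from a sentence-transformer model.
corpus = {
    "some sad song":     [0.9, 0.1, 0.0],
    "a very happy song": [0.1, 0.9, 0.0],
}
query_vec = [0.8, 0.2, 0.1]  # stand-in embedding for "I'm feeling so sad rn"

ranked = sorted(corpus, key=lambda s: cosine_similarity(corpus[s], query_vec),
                reverse=True)
print(ranked[0])  # some sad song
```

Unlike the raw FLAN hidden states in the question, embeddings trained with a similarity objective make higher cosine scores actually mean "more semantically similar".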
87
T5 model
How to use Huggingface pretrained models to get the output of the dataset that was used to train the model?
https://stackoverflow.com/questions/69530032/how-to-use-huggingface-pretrained-models-to-get-the-output-of-the-dataset-that-w
<p>I am working on getting the abstractive summaries of the XSUM and the CNN DailyMail datasets using Huggingface's pre-trained BART, Pegasus, and T5 models.</p> <p>I am confused because there already exist checkpoints of models pre-trained on the same dataset.</p> <p>So even if I do:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(&quot;mwesner/pretrained-bart-CNN-Dailymail-summ&quot;) model = AutoModelForSeq2SeqLM.from_pretrained(&quot;mwesner/pretrained-bart-CNN-Dailymail-summ&quot;) </code></pre> <p>I can't understand how to get the summaries of either dataset since I don't have any new sentences that I can feed in.</p> <p>This is how a pretrained model is normally used:</p> <pre class="lang-py prettyprint-override"><code>from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn') tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn') ARTICLE_TO_SUMMARIZE = &quot;My friends are cool but they eat too many carbs.&quot; inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') # Generate Summary summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) </code></pre> <p>But I need the summaries generated by the pre-trained model on the dataset that was used to train them (XSUM and CNN DailyNews).</p>
88
T5 model
PEFT LoRA Trainer No executable batch size found
https://stackoverflow.com/questions/77404935/peft-lora-trainer-no-executable-batch-size-found
<p>I'm trying to fine tune the model weights from a FLAN-T5 model downloaded from hugging face. I'm trying to do this with PEFT and specifically LoRA. I'm using the code below. I'm getting an error &quot;No executable batch size found, reached zero&quot;. It seems to be related to the &quot;auto_find_batch_size&quot; parameter that gets passed to the peft_trainer. I'm running this on ubuntu server 18.04LTS with an invidia gpu that has 8GB of ram. Can anyone see what the issue might be and suggest how to solve it?</p> <p>code:</p> <pre><code>from datasets import load_dataset from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TrainingArguments, Trainer import torch import time import evaluate import pandas as pd import numpy as np # # ### Load Dataset and LLM huggingface_dataset_name = &quot;knkarthick/dialogsum&quot; dataset = load_dataset(huggingface_dataset_name) dataset # Load the pre-trained [FLAN-T5 model](https://huggingface.co/docs/transformers/model_doc/flan-t5) and its tokenizer directly from HuggingFace. Using the [small version](https://huggingface.co/google/flan-t5-base) of FLAN-T5. Setting `torch_dtype=torch.bfloat16` specifies the memory type to be used by this model. model_name='google/flan-t5-base' original_model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained(model_name) index = 200 dialogue = dataset['test'][index]['dialogue'] summary = dataset['test'][index]['summary'] prompt = f&quot;&quot;&quot; Summarize the following conversation. 
{dialogue} Summary: &quot;&quot;&quot; inputs = tokenizer(prompt, return_tensors='pt') output = tokenizer.decode( original_model.generate( inputs[&quot;input_ids&quot;], max_new_tokens=200, )[0], skip_special_tokens=True ) dash_line = '-'.join('' for x in range(100)) # updated 11/1/23 to ensure using gpu def tokenize_function(example): start_prompt = 'Summarize the following conversation.\n\n' end_prompt = '\n\nSummary: ' prompt = [start_prompt + dialogue + end_prompt for dialogue in example[&quot;dialogue&quot;]] example['input_ids'] = tokenizer(prompt, padding=&quot;max_length&quot;, truncation=True, return_tensors=&quot;pt&quot;).input_ids\ .cuda() example['labels'] = tokenizer(example[&quot;summary&quot;], padding=&quot;max_length&quot;, truncation=True, return_tensors=&quot;pt&quot;).input_ids\ .cuda() return example # The dataset actually contains 3 diff splits: train, validation, test. # The tokenize_function code is handling all data across all splits in batches. tokenized_datasets = dataset.map(tokenize_function, batched=True) tokenized_datasets = tokenized_datasets.remove_columns(['id', 'topic', 'dialogue', 'summary',]) # To save some time subsample the dataset: tokenized_datasets = tokenized_datasets.filter(lambda example, index: index % 100 == 0, with_indices=True) from peft import LoraConfig, get_peft_model, TaskType lora_config = LoraConfig( r=32, # Rank lora_alpha=32, target_modules=[&quot;q&quot;, &quot;v&quot;], lora_dropout=0.05, bias=&quot;none&quot;, task_type=TaskType.SEQ_2_SEQ_LM # FLAN-T5 ) # Add LoRA adapter layers/parameters to the original LLM to be trained. peft_model = get_peft_model(original_model, lora_config) # print(print_number_of_trainable_model_parameters(peft_model)) # # ### Train PEFT Adapter # # Define training arguments and create `Trainer` instance. 
# In[10]: output_dir = f'/path/LLM/PEFT/train_args/no_log_max_depth_{str(int(time.time()))}' peft_training_args = TrainingArguments( output_dir=output_dir, auto_find_batch_size=True, learning_rate=1e-3, # Higher learning rate than full fine-tuning. num_train_epochs=1 ) peft_trainer = Trainer( model=peft_model, args=peft_training_args, train_dataset=tokenized_datasets[&quot;train&quot;], ) # In[11]: peft_trainer.train() peft_model_path=&quot;/path/LLM/PEFT/peft-dialogue-summary-checkpoint-local&quot; peft_trainer.model.save_pretrained(peft_model_path) tokenizer.save_pretrained(peft_model_path) </code></pre> <p>error:</p> <pre><code>Found cached dataset csv (/home/username/.cache/huggingface/datasets/knkarthick___csv/knkarthick--dialogsum-cd36827d3490488d/0.0.0/6954658bab30a358235fa864b05cf819af0e179325c740e4bc853bcc7ec513e1) 100%|██████████| 3/3 [00:00&lt;00:00, 1134.31it/s] /home/username/anaconda3/envs/new_llm/lib/python3.10/site-packages/transformers/optimization.py:407: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( 0%| | 0/16 [00:00&lt;?, ?it/s] 0%| | 0/32 [00:00&lt;?, ?it/s] 0%| | 0/63 [00:00&lt;?, ?it/s] Traceback (most recent call last):it/s] File &quot;/home/username/stuff/username_storage/LLM/PEFT/offline_peft_train_no_log_max_depth.py&quot;, line 161, in &lt;module&gt; peft_trainer.train() File &quot;/home/username/anaconda3/envs/new_llm/lib/python3.10/site-packages/transformers/trainer.py&quot;, line 1664, in train return inner_training_loop( File &quot;/home/username/anaconda3/envs/new_llm/lib/python3.10/site-packages/accelerate/utils/memory.py&quot;, line 134, in decorator raise RuntimeError(&quot;No executable batch size found, reached zero.&quot;) RuntimeError: No executable batch size found, reached zero. 0%| | 0/125 [00:00&lt;?, ?it/s] </code></pre>
<p>It could be that <code>auto_find_batch_size</code> is not perfect in its process. It might have a value that would be too big to fit into the currently-available VRAM space, and so the training loop decides that it can't continue and errors out. I'm seeing this myself and that's the conclusion I've come to.</p> <p>It might be better to pin the batch size and see if the error is resolved. You would have to manually fiddle with the batch size to utilise the available VRAM as effectively as possible.</p>
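<p>The retry logic behind <code>auto_find_batch_size</code> (accelerate's <code>find_executable_batch_size</code>) can be sketched in plain Python: the training function is retried with a halved batch size after every out-of-memory failure, and the "reached zero" error from the traceback above is raised once even the smallest batch fails. This is an illustrative sketch, not the accelerate implementation:</p>

```python
def find_executable_batch_size(train_fn, starting_batch_size=128):
    """Retry train_fn, halving the batch size after each memory failure."""
    batch_size = starting_batch_size
    while batch_size > 0:
        try:
            return train_fn(batch_size)
        except MemoryError:  # accelerate catches CUDA OOM errors here
            batch_size //= 2
    raise RuntimeError("No executable batch size found, reached zero.")

def fake_train(batch_size):
    """Stand-in training step that only 'fits' at batch size 8 or below."""
    if batch_size > 8:
        raise MemoryError
    return batch_size

print(find_executable_batch_size(fake_train))  # -> 8
```

<p>If the loop reaches zero, as in the error above, not even a single sample fits in VRAM, so pinning <code>per_device_train_batch_size</code> to a small fixed value and reducing the sequence length (or the model size) is the practical fix.</p>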
89
T5 model
T5Tokenizer requires the SentencePiece library but it was not found in your environment
https://stackoverflow.com/questions/65445651/t5tokenizer-requires-the-sentencepiece-library-but-it-was-not-found-in-your-envi
<p>I am trying to explore <a href="https://huggingface.co/transformers/model_doc/t5.html#" rel="noreferrer">T5</a></p> <p>this is the code</p> <pre><code>!pip install transformers from transformers import T5Tokenizer, T5ForConditionalGeneration qa_input = &quot;&quot;&quot;question: What is the capital of Syria? context: The name &quot;Syria&quot; historically referred to a wider region, broadly synonymous with the Levant, and known in Arabic as al-Sham. The modern state encompasses the sites of several ancient kingdoms and empires, including the Eblan civilization of the 3rd millennium BC. Aleppo and the capital city Damascus are among the oldest continuously inhabited cities in the world.&quot;&quot;&quot; tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5ForConditionalGeneration.from_pretrained('t5-small') input_ids = tokenizer.encode(qa_input, return_tensors=&quot;pt&quot;) # Batch size 1 outputs = model.generate(input_ids) output_str = tokenizer.decode(outputs.reshape(-1)) </code></pre> <p>I got this error:</p> <pre><code>--------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-2-8d24c6a196e4&gt; in &lt;module&gt;() 5 kingdoms and empires, including the Eblan civilization of the 3rd millennium BC. 
Aleppo and the capital city Damascus are 6 among the oldest continuously inhabited cities in the world.&quot;&quot;&quot; ----&gt; 7 tokenizer = T5Tokenizer.from_pretrained('t5-small') 8 model = T5ForConditionalGeneration.from_pretrained('t5-small') 9 input_ids = tokenizer.encode(qa_input, return_tensors=&quot;pt&quot;) # Batch size 1 1 frames /usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in requires_sentencepiece(obj) 521 name = obj.__name__ if hasattr(obj, &quot;__name__&quot;) else obj.__class__.__name__ 522 if not is_sentencepiece_available(): --&gt; 523 raise ImportError(SENTENCEPIECE_IMPORT_ERROR.format(name)) 524 525 ImportError: T5Tokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones that match your environment. -------------------------------------------------------------------------- </code></pre> <p>after that I install sentencepiece library as was suggested like this:</p> <pre><code>!pip install transformers !pip install sentencepiece from transformers import T5Tokenizer, T5ForConditionalGeneration qa_input = &quot;&quot;&quot;question: What is the capital of Syria? context: The name &quot;Syria&quot; historically referred to a wider region, broadly synonymous with the Levant, and known in Arabic as al-Sham. The modern state encompasses the sites of several ancient kingdoms and empires, including the Eblan civilization of the 3rd millennium BC. 
Aleppo and the capital city Damascus are among the oldest continuously inhabited cities in the world.&quot;&quot;&quot; tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5ForConditionalGeneration.from_pretrained('t5-small') input_ids = tokenizer.encode(qa_input, return_tensors=&quot;pt&quot;) # Batch size 1 outputs = model.generate(input_ids) output_str = tokenizer.decode(outputs.reshape(-1)) </code></pre> <p>but I got another issue:</p> <blockquote> <p>Some weights of the model checkpoint at t5-small were not used when initializing T5ForConditionalGeneration: ['decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight']</p> <ul> <li>This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).</li> <li>This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).</li> </ul> </blockquote> <p>so I did not understand what is going on, any explanation?</p>
<p>I used these two commands and this is working fine for me!</p> <pre><code>!pip install datasets transformers[sentencepiece] !pip install sentencepiece </code></pre> <p>As for the warning about the unused <code>relative_attention_bias</code> weight: that is expected and harmless. The checkpoint simply contains a parameter that this version of the model class does not use, and it does not affect the generated output.</p>
90
T5 model
MT5 machine learning model for paraphrasing
https://stackoverflow.com/questions/74149057/mt5-machine-learning-model-for-paraphrasing
<p>I'm trying to create a machine learning model to paraphrase given Persian text. I was introduced to mt5 as a multilingual text-to-text model. However, I can't figure out how to implement this. I have gathered the data. Here's a sample of the data: <a href="https://i.sstatic.net/qz6MC.jpg" rel="nofollow noreferrer">Data sample</a></p> <p><strong>---UPDATE---</strong></p> <p>I have tried to paraphrase using the T5 model, and it works well for English. However, I can't get logical results from the MT5 model. Here is the T5 version code:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(&quot;Vamsi/T5_Paraphrase_Paws&quot;) model = AutoModelForSeq2SeqLM.from_pretrained(&quot;Vamsi/T5_Paraphrase_Paws&quot;) </code></pre> <pre><code>sentence = sentence_1 text = &quot;paraphrase: &quot; + sentence + &quot; &lt;/s&gt;&quot; encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors=&quot;pt&quot;) input_ids, attention_masks = encoding[&quot;input_ids&quot;], encoding[&quot;attention_mask&quot;] outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, do_sample=True, top_k=120, top_p=0.95, early_stopping=False, num_return_sequences=5 ) print (&quot;\n&quot;) print(&quot;Origianl sentence:&quot;) print(sentence) print (&quot;\n&quot;) print(&quot;Paraphrasing:&quot;) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True,clean_up_tokenization_spaces=True) print(line) </code></pre> <p>When I give the following sentence to the model, it returns the following results:</p> <p><em>Original sentence:</em></p> <ul> <li>Washing your hands Properly will keep you away from COVID-19.</li> </ul> <p><em>Paraphrasing:</em></p> <ul> <li>By properly washing your hands, you will keep away from COVID-19.</li> <li>Washing your hands correctly will keep you away from COVID-19.</li> <li>Washing your hands correctly will keep you away from 
COVID-19.</li> <li>Washing your hands correctly will keep you from COVID-19.</li> <li>Washing your hands properly will keep you away from COVID-19.</li> </ul> <p>But when I change the model to the MT5-base, the results are absurd. Here is an example:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(&quot;google/mt5-base&quot;) model = AutoModelForSeq2SeqLM.from_pretrained(&quot;google/mt5-base&quot;) </code></pre> <p><em>Original sentence:</em></p> <ul> <li>Washing your hands Properly will keep you away from COVID-19.</li> </ul> <p><em>Paraphrasing:</em></p> <ul> <li>&lt;extra_id_0&gt;, left.</li> <li>&lt;extra_id_0&gt;, also.</li> <li>&lt;extra_id_0&gt;. Comment</li> <li>&lt;extra_id_0&gt;.</li> <li>&lt;extra_id_0&gt;o.</li> </ul>
<p>IMHO mT5 can't be used for paraphrase generation out-of-the-box the way T5 can. Unlike T5, mT5 was pre-trained only on the unsupervised span-corruption objective, with no supervised tasks mixed in, which is why the raw model emits sentinel tokens such as <code>&lt;extra_id_0&gt;</code> instead of real text. You can find fine-tuned versions of mT5 intended for paraphrase generation on the HuggingFace Hub, such as <a href="https://huggingface.co/secometo/mt5-base-turkish-question-paraphrase-generator" rel="nofollow noreferrer">this one</a>. There's a paper associated with the model and you may find the solution there. As far as I understand it, you need a labeled dataset with which to fine-tune the model to generate paraphrases in your language.</p>
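<p>Whatever trainer you use, the fine-tuning data boils down to (input, target) text pairs, usually with a task prefix prepended to the input. A minimal, library-free sketch of that pair construction — the <code>paraphrase:</code> prefix is a common convention from T5-style fine-tuning, not something raw mT5 understands:</p>

```python
def build_training_pairs(sentence_pairs, prefix="paraphrase: "):
    """Turn raw (sentence, paraphrase) tuples into seq2seq (input, target) pairs."""
    return [(prefix + src.strip(), tgt.strip()) for src, tgt in sentence_pairs]

raw = [
    ("Washing your hands properly will keep you away from COVID-19.",
     "By properly washing your hands, you will keep away from COVID-19."),
]
for inp, tgt in build_training_pairs(raw):
    print(inp, "->", tgt)
```

<p>These pairs would then be tokenized and fed to a seq2seq trainer as <code>input_ids</code> and <code>labels</code>.</p>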
91
T5 model
Training step not executing in pytorch lightning
https://stackoverflow.com/questions/66756245/training-step-not-executing-in-pytorch-lightning
<p>I am working to finetune a t5 model to summarize Amazon reviews. I am following this tutorial here: <a href="https://towardsdatascience.com/fine-tuning-a-t5-transformer-for-any-summarization-task-82334c64c81" rel="nofollow noreferrer">https://towardsdatascience.com/fine-tuning-a-t5-transformer-for-any-summarization-task-82334c64c81</a></p> <p>I noticed that the training_step in my code is never being executed as the training loss remains &quot;NaN&quot; throughout the epoch. However, the validation_step is computed fine.</p> <p>I already confirmed that there are no empty strings in the data and have tried multiple batch sizes.</p> <p>This is the error</p> <pre><code>RuntimeError Traceback (most recent call last) &lt;ipython-input-53-45d4afebefac&gt; in &lt;module&gt;() ----&gt; 1 trainer.fit(model) 8 frames &lt;ipython-input-46-00fddffa2209&gt; in training_epoch_end(self, outputs) 134 print(&quot;OUTPUTS&quot;) 135 print(outputs) --&gt; 136 avg_train_loss = torch.stack([x[&quot;loss&quot;] for x in outputs]).mean() 137 tensorboard_logs = {&quot;avg_train_loss&quot;: avg_train_loss} 138 return {&quot;avg_train_loss&quot;: avg_train_loss, &quot;log&quot;: tensorboard_logs, 'progress_bar': tensorboard_logs} RuntimeError: stack expects a non-empty TensorList </code></pre> <p>I found that the training_step function is never being executed by adding print statements inside the training_step function.</p> <p>Below is my code for the T5FineTuner class (sorry I can't be any more concise):</p> <pre><code>class T5FineTuner(pl.LightningModule): def __init__(self, hparams): super(T5FineTuner, self).__init__() self.hparams = hparams self.model = T5ForConditionalGeneration.from_pretrained(hparams.model_name_or_path) self.tokenizer = T5Tokenizer.from_pretrained(hparams.tokenizer_name_or_path) self.rouge_metric = load_metric('rouge') if self.hparams.freeze_embeds: self.freeze_embeds() if self.hparams.freeze_encoder: self.freeze_params(self.model.get_encoder()) 
assert_all_frozen(self.model.get_encoder()) n_observations_per_split = { &quot;train&quot;: self.hparams.n_train, &quot;validation&quot;: self.hparams.n_val, &quot;test&quot;: self.hparams.n_test, } self.n_obs = {k: v if v &gt;= 0 else None for k, v in n_observations_per_split.items()} def freeze_params(self, model): for par in model.parameters(): par.requires_grad = False def freeze_embeds(self): &quot;&quot;&quot;Freeze token embeddings and positional embeddings for bart, just token embeddings for t5.&quot;&quot;&quot; try: self.freeze_params(self.model.model.shared) for d in [self.model.model.encoder, self.model.model.decoder]: freeze_params(d.embed_positions) freeze_params(d.embed_tokens) except AttributeError: self.freeze_params(self.model.shared) for d in [self.model.encoder, self.model.decoder]: self.freeze_params(d.embed_tokens) def lmap(self, f, x): &quot;&quot;&quot;list(map(f, x))&quot;&quot;&quot; return list(map(f, x)) def is_logger(self): return True def parse_score(self, result): return {k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()} def forward( self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None ): return self.model( input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, labels=labels, ) def _step(self, batch): labels = batch[&quot;target_ids&quot;] labels[labels[:, :] == self.tokenizer.pad_token_id] = -100 # print(labels) outputs = self( input_ids=batch[&quot;source_ids&quot;], attention_mask=batch[&quot;source_mask&quot;], labels=labels, decoder_attention_mask=batch['target_mask'] ) # print(outputs) loss = outputs[0] return loss def ids_to_clean_text(self, generated_ids): gen_text = self.tokenizer.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) return self.lmap(str.strip, gen_text) def _generative_step(self, batch) : t0 = time.time() generated_ids = self.model.generate( 
batch[&quot;source_ids&quot;], attention_mask=batch[&quot;source_mask&quot;], use_cache=True, decoder_attention_mask=batch['target_mask'], max_length=150, num_beams=2, repetition_penalty=2.5, length_penalty=1.0, early_stopping=False, ) preds = self.ids_to_clean_text(generated_ids) target = self.ids_to_clean_text(batch[&quot;target_ids&quot;]) gen_time = (time.time() - t0) / batch[&quot;source_ids&quot;].shape[0] loss = self._step(batch) # print(&quot;LOSS _generative_step&quot;) # print(loss) base_metrics = {'val_loss': loss} # rouge: Dict = self.calc_generative_metrics(preds, target) summ_len = np.mean(self.lmap(len, generated_ids)) base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=target) self.rouge_metric.add_batch(preds, target) # rouge_results = self.rouge_metric.compute() # rouge_dict = self.parse_score(rouge_results) # base_metrics.update(rouge1=rouge_dict['rouge1'], rougeL=rouge_dict['rougeL']) return base_metrics def training_step(self, batch, batch_idx): print(&quot;training_step&quot;) print(batch) loss = self._step(batch) tensorboard_logs = {&quot;train_loss&quot;: loss} print(&quot;LOSS&quot;) print(loss) return {&quot;loss&quot;: loss, &quot;log&quot;: tensorboard_logs} def training_epoch_end(self, outputs): print(&quot;OUTPUTS&quot;) print(outputs) avg_train_loss = torch.stack([x[&quot;loss&quot;] for x in outputs]).mean() tensorboard_logs = {&quot;avg_train_loss&quot;: avg_train_loss} return {&quot;avg_train_loss&quot;: avg_train_loss, &quot;log&quot;: tensorboard_logs, 'progress_bar': tensorboard_logs} def validation_step(self, batch, batch_idx): print(&quot;validation_step&quot;) return self._generative_step(batch) def validation_epoch_end(self, outputs): avg_loss = torch.stack([x[&quot;val_loss&quot;] for x in outputs]).mean() tensorboard_logs = {&quot;val_loss&quot;: avg_loss} rouge_results = self.rouge_metric.compute() rouge_dict = self.parse_score(rouge_results) tensorboard_logs.update(rouge1=rouge_dict['rouge1'], 
rougeL=rouge_dict['rougeL']) ## Clear out the lists for next epoch self.target_gen= [] self.prediction_gen=[] return {&quot;avg_val_loss&quot;: avg_loss, &quot;rouge1&quot; : rouge_results['rouge1'], &quot;rougeL&quot; : rouge_results['rougeL'], &quot;log&quot;: tensorboard_logs, 'progress_bar': tensorboard_logs} def configure_optimizers(self): &quot;Prepare optimizer and schedule (linear warmup and decay)&quot; model = self.model no_decay = [&quot;bias&quot;, &quot;LayerNorm.weight&quot;] optimizer_grouped_parameters = [ { &quot;params&quot;: [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], &quot;weight_decay&quot;: self.hparams.weight_decay, }, { &quot;params&quot;: [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], &quot;weight_decay&quot;: 0.0, }, ] optimizer = AdamW(optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon) self.opt = optimizer return [optimizer] def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None, using_native_amp=False, optimizer_closure=None, on_tpu=None, using_lbfgs=None): # if self.trainer.use_tpu: # xm.optimizer_step(optimizer) # else: optimizer.step() optimizer.zero_grad() self.lr_scheduler.step() def get_tqdm_dict(self): tqdm_dict = {&quot;loss&quot;: &quot;{:.3f}&quot;.format(self.trainer.avg_loss), &quot;lr&quot;: self.lr_scheduler.get_last_lr()[-1]} return tqdm_dict def train_dataloader(self): print(&quot;train_dataloader&quot;) n_samples = self.n_obs['train'] print(n_samples) dataloader = DataLoader(train_dataset, batch_size=self.hparams.train_batch_size, num_workers=4) print(len(dataloader.dataset)) print(self.hparams.train_batch_size * max(1, self.hparams.n_gpu)) print(self.hparams.gradient_accumulation_steps) print(float(self.hparams.num_train_epochs)) t_total = ( (len(dataloader.dataset) // (self.hparams.train_batch_size * max(1, self.hparams.n_gpu))) # // self.hparams.gradient_accumulation_steps 
* float(self.hparams.num_train_epochs) ) print(t_total) scheduler = get_linear_schedule_with_warmup( self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=t_total ) self.lr_scheduler = scheduler return dataloader def val_dataloader(self): n_samples = self.n_obs['validation'] # validation_dataset = get_dataset(tokenizer=self.tokenizer, type_path=&quot;validation&quot;, num_samples=n_samples, args=self.hparams) return DataLoader(validation_dataset, batch_size=self.hparams.eval_batch_size, num_workers=4) def test_dataloader(self): n_samples = self.n_obs['test'] # test_dataset = get_dataset(tokenizer=self.tokenizer, type_path=&quot;test&quot;, num_samples=n_samples, args=self.hparams) return DataLoader(test_dataset, batch_size=self.hparams.test_batch_size, num_workers=4) </code></pre> <p>Below are my parameters:</p> <pre><code>args_dict = dict( output_dir=&quot;&quot;, # path to save the checkpoints model_name_or_path='t5-small', tokenizer_name_or_path='t5-small', max_input_length=512, max_output_length=150, freeze_encoder=False, freeze_embeds=False, learning_rate=3e-4, weight_decay=0.0, adam_epsilon=1e-8, warmup_steps=0, train_batch_size=20, eval_batch_size=20, num_train_epochs=2, gradient_accumulation_steps=8, n_gpu=1, resume_from_checkpoint=None, val_check_interval = 0.05, n_val=1000, n_train=-1, n_test=-1, early_stop_callback=False, fp_16=False, # if you want to enable 16-bit training then install apex and set this to true opt_level='O1', # you can find out more on optimisation levels here https://nvidia.github.io/apex/amp.html#opt-levels-and-properties max_grad_norm=1.0, # if you enable 16-bit training then set this to a sensible value, 0.5 is a good default seed=42, ) </code></pre>
<p>It seems that this code is quite outdated. The conflict comes from the overridden <code>optimizer_step()</code> method, whose signature no longer matches what recent PyTorch Lightning versions pass in. I just commented out this whole segment below and it worked for me. If you want to add custom logic to this function, it is better to consult the latest code on <a href="https://github.com/PyTorchLightning/pytorch-lightning/blob/f29ecbfd909ff431ef837fcc8ebff451e897cb0b/tests/trainer/test_training_loop.py#L61" rel="nofollow noreferrer">GitHub</a>.</p> <pre><code>def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None, using_native_amp=False,on_tpu=None,using_lbfgs=None, optimizer_closure=None): if self.trainer.use_tpu: xm.optimizer_step(optimizer) else: optimizer.step(closure=optimizer_closure) optimizer.zero_grad() self.lr_scheduler.step() </code></pre>
92
T5 model
How does the finetune on transformer (t5) work?
https://stackoverflow.com/questions/71781813/how-does-the-finetune-on-transformer-t5-work
<p>I am using pytorch lightning to finetune t5 transformer on a specific task. However, I was not able to understand how the finetuning works. I always see this code :</p> <p><code>tokenizer = AutoTokenizer.from_pretrained(hparams.model_name_or_path) model = AutoModelForSeq2SeqLM.from_pretrained(hparams.model_name_or_path)</code></p> <p>I don't get how the finetuning is done, are they freezing the whole model and training the head only, (if so how can I change the head) or are they using the pre-trained model as a weight initializing? I have been looking for an answer for couple days already. Any links or help are appreciated.</p>
<p>If you are using PyTorch Lightning, then it won't freeze anything until you specify it to do so — by default the pre-trained weights are only used as initialization and the whole model is trained. Lightning has a callback which you can use to freeze your backbone and train only the head module. See <a href="https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.BackboneFinetuning.html?highlight=finetunin#pytorch_lightning.callbacks.BackboneFinetuning" rel="nofollow noreferrer">Backbone Finetuning</a>.</p> <p>Also check out <a href="https://lightning-flash.readthedocs.io/en/latest/reference/text_classification.html" rel="nofollow noreferrer">Lightning-Flash</a>; it allows you to quickly build models for various text tasks and uses the Transformers library for the backbone. You can use the Trainer to specify which kind of finetuning you want to apply for your training.</p> <p>Thanks</p>
93
T5 model
What decoder_input_ids should be for sequence-to-sequence Transformer model?
https://stackoverflow.com/questions/63307009/what-decoder-input-ids-should-be-for-sequence-to-sequence-transformer-model
<p>I use the HuggingFace's Transformers library for building a <strong>sequence-to-sequence model</strong> based on BART and T5. I carefully read the documentation and the research paper and I can't find what the input to the decoder (decoder_input_ids) should be for sequence-to-sequence tasks.</p> <p>Should decoder input for both models (BART and T5) be same as lm_labels (output of the LM head) or should it be same as input_ids (input to the encoder)?</p>
<p>The decoder_input_ids (optional) correspond to the labels, and labels are the preferred way to provide decoder_input_ids. <a href="https://huggingface.co/transformers/glossary.html#decoder-input-ids" rel="nofollow noreferrer">https://huggingface.co/transformers/glossary.html#decoder-input-ids</a></p> <p>This is because, internally, if decoder_input_ids are None they will be derived by shifting the labels one position to the right, so you don't have to do the shifting yourself.</p>
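<p>That shift can be sketched in plain Python. T5 and BART each have an internal helper (<code>_shift_right</code> / <code>shift_tokens_right</code>) that does roughly the following — the token ids below are illustrative:</p>

```python
def shift_right(labels, decoder_start_token_id=0, pad_token_id=0):
    """Derive decoder_input_ids from labels, as T5/BART do when only labels are passed."""
    shifted = [decoder_start_token_id] + labels[:-1]
    # Labels use -100 to mask padding from the loss; the decoder inputs
    # replace those positions with the real pad token id instead.
    return [pad_token_id if tok == -100 else tok for tok in shifted]

labels = [8774, 6, 149, 33, 25, 1, -100]  # target token ids ending in </s>, then masked padding
print(shift_right(labels))  # -> [0, 8774, 6, 149, 33, 25, 1]
```

<p>So for both models you simply pass <code>labels</code> (the target sequence) and let the library build the decoder inputs; passing the encoder's <code>input_ids</code> as decoder inputs would be wrong.</p>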
94
T5 model
Avoid printing &#39;Generate config GenerationConfig { ... }&#39;
https://stackoverflow.com/questions/76902234/avoid-printing-generate-config-generationconfig
<p>I am facing an issue while training a t5 model. After each evaluation step, the following message is printed, which makes it impossible to maintain an overview. Do you have any ideas, on how I can avoid such behavior?</p> <blockquote> <p>***** Running Evaluation ***** Num examples = 819 Batch size = 32 Generate config GenerationConfig { &quot;decoder_start_token_id&quot;: 0,<br /> &quot;eos_token_id&quot;: 1, &quot;output_attentions&quot;: true,<br /> &quot;output_hidden_states&quot;: true, &quot;pad_token_id&quot;: 0,<br /> &quot;transformers_version&quot;: &quot;4.26.1&quot; }</p> <p>Generate config GenerationConfig { &quot;decoder_start_token_id&quot;: 0,<br /> &quot;eos_token_id&quot;: 1, &quot;output_attentions&quot;: true,<br /> &quot;output_hidden_states&quot;: true, &quot;pad_token_id&quot;: 0,<br /> &quot;transformers_version&quot;: &quot;4.26.1&quot; }</p> <p>Generate config GenerationConfig { &quot;decoder_start_token_id&quot;: 0,<br /> &quot;eos_token_id&quot;: 1, &quot;output_attentions&quot;: true,<br /> &quot;output_hidden_states&quot;: true, &quot;pad_token_id&quot;: 0,<br /> &quot;transformers_version&quot;: &quot;4.26.1&quot; } ...</p> </blockquote> <pre><code>from transformers import AutoModelForSeq2SeqLM model_id=&quot;google/flan-t5-base&quot; model = AutoModelForSeq2SeqLM.from_pretrained(model_id) repository_id = f&quot;{model_id.split('/')[1]}-{dataset_id}&quot; training_args = Seq2SeqTrainingArguments( output_dir=repository_id, #gradient_accumulation_steps = 8, per_device_train_batch_size=8, per_device_eval_batch_size=8, predict_with_generate=True, fp16=False, # Overflows with fp16 learning_rate=5e-6, num_train_epochs=5, optim = &quot;adamw_torch&quot;, logging_dir=f&quot;{repository_id}/logs&quot;, logging_strategy=&quot;steps&quot;, logging_steps=50, evaluation_strategy=&quot;steps&quot;, eval_steps=5, save_strategy=&quot;steps&quot;, save_total_limit=2, load_best_model_at_end=True, report_to=&quot;tensorboard&quot;, push_to_hub=False, 
hub_strategy=&quot;every_save&quot;, hub_model_id=repository_id, hub_token=HfFolder.get_token(), ) trainer = Seq2SeqTrainer( model=model, args=training_args, data_collator=data_collator, train_dataset=tokenized_dataset[&quot;train&quot;], eval_dataset=tokenized_dataset[&quot;test&quot;], compute_metrics=compute_metrics, ) trainer.train() </code></pre> <p>I tried to change the log level, but that does not help.</p> <pre><code>import os os.environ[&quot;TF_CPP_MIN_LOG_LEVEL&quot;] = &quot;2&quot; </code></pre>
95
T5 model
CUDA out of memory error during PEFT LoRA fine tuning
https://stackoverflow.com/questions/77406208/cuda-out-of-memory-error-during-peft-lora-fine-tuning
<p>I'm trying to fine-tune the model weights from a FLAN-T5 model downloaded from hugging face. I'm trying to do this with PEFT and specifically LoRA. I'm using the Python 3 code below. I'm running this on ubuntu server 18.04LTS with an invidia gpu that has 8GB of ram. I'm getting an error &quot;CUDA out of memory&quot;, the full error message is below. I've tried adding:</p> <pre><code>import os os.environ[&quot;PYTORCH_CUDA_ALLOC_CONF&quot;] = &quot;max_split_size_mb:512&quot; </code></pre> <p>but I'm still getting the same error message. The code and error message are below. Can anyone see what the issue might be and suggest how to solve it?</p> <p>Code:</p> <pre><code>from datasets import load_dataset from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TrainingArguments, Trainer import torch import time import evaluate import pandas as pd import numpy as np # added to deal with memory allocation error import os os.environ[&quot;PYTORCH_CUDA_ALLOC_CONF&quot;] = &quot;max_split_size_mb:512&quot; # # ### Load Dataset and LLM huggingface_dataset_name = &quot;knkarthick/dialogsum&quot; dataset = load_dataset(huggingface_dataset_name) dataset # Load the pre-trained [FLAN-T5 model](https://huggingface.co/docs/transformers/model_doc/flan-t5) and its tokenizer directly from HuggingFace. Using the [small version](https://huggingface.co/google/flan-t5-base) of FLAN-T5. Setting `torch_dtype=torch.bfloat16` specifies the memory type to be used by this model. # In[17]: model_name='google/flan-t5-base' original_model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained(model_name) index = 200 dialogue = dataset['test'][index]['dialogue'] summary = dataset['test'][index]['summary'] prompt = f&quot;&quot;&quot; Summarize the following conversation. 
{dialogue} Summary: &quot;&quot;&quot; inputs = tokenizer(prompt, return_tensors='pt') output = tokenizer.decode( original_model.generate( inputs[&quot;input_ids&quot;], max_new_tokens=200, )[0], skip_special_tokens=True ) dash_line = '-'.join('' for x in range(100)) # updated 11/1/23 to ensure using gpu def tokenize_function(example): start_prompt = 'Summarize the following conversation.\n\n' end_prompt = '\n\nSummary: ' prompt = [start_prompt + dialogue + end_prompt for dialogue in example[&quot;dialogue&quot;]] example['input_ids'] = tokenizer(prompt, padding=&quot;max_length&quot;, truncation=True, return_tensors=&quot;pt&quot;).input_ids\ .cuda() example['labels'] = tokenizer(example[&quot;summary&quot;], padding=&quot;max_length&quot;, truncation=True, return_tensors=&quot;pt&quot;).input_ids\ .cuda() return example # The dataset actually contains 3 diff splits: train, validation, test. # The tokenize_function code is handling all data across all splits in batches. tokenized_datasets = dataset.map(tokenize_function, batched=True) tokenized_datasets = tokenized_datasets.remove_columns(['id', 'topic', 'dialogue', 'summary',]) # To save some time subsample the dataset: tokenized_datasets = tokenized_datasets.filter(lambda example, index: index % 100 == 0, with_indices=True) # Check the shapes of all three parts of the dataset: # In[7]: # print(f&quot;Shapes of the datasets:&quot;) # print(f&quot;Training: {tokenized_datasets['train'].shape}&quot;) # print(f&quot;Validation: {tokenized_datasets['validation'].shape}&quot;) # print(f&quot;Test: {tokenized_datasets['test'].shape}&quot;) # # print(tokenized_datasets) # The output dataset is ready for fine-tuning. 
# # ### Perform Parameter Efficient Fine-Tuning (PEFT) # - use LoRA # # ### Setup the PEFT/LoRA model for Fine-Tuning # # - set up the PEFT/LoRA model for fine-tuning with a new layer/parameter adapter # - freezing the underlying LLM and only training the adapter # - LoRA configuration below # - Note the rank (`r`) hyper-parameter, which defines the rank/dimension of the adapter to be trained # In[8]: from peft import LoraConfig, get_peft_model, TaskType lora_config = LoraConfig( # r=4, # Rank # lora_alpha=4, r=32, # Rank lora_alpha=32, target_modules=[&quot;q&quot;, &quot;v&quot;], lora_dropout=0.05, bias=&quot;none&quot;, task_type=TaskType.SEQ_2_SEQ_LM # FLAN-T5 ) # Add LoRA adapter layers/parameters to the original LLM to be trained. # In[9]: peft_model = get_peft_model(original_model, lora_config) # print(print_number_of_trainable_model_parameters(peft_model)) # Enable gradient checkpointing in the model's configuration. # peft_model.config.gradient_checkpointing = True # # ### Train PEFT Adapter # # Define training arguments and create `Trainer` instance. # In[10]: output_dir = f'/home/username/stuff/username_storage/LLM/PEFT/train_args/no_log_max_depth_{str(int(time.time()))}' peft_training_args = TrainingArguments( output_dir=output_dir, # auto_find_batch_size=True, per_device_train_batch_size=4, learning_rate=1e-3, # Higher learning rate than full fine-tuning. num_train_epochs=1, # max_steps=1 ) peft_trainer = Trainer( model=peft_model, args=peft_training_args, train_dataset=tokenized_datasets[&quot;train&quot;], ) # In[11]: peft_trainer.train() peft_model_path=&quot;/home/username/stuff/username_storage/LLM/PEFT/peft-dialogue-summary-checkpoint-local&quot; peft_trainer.model.save_pretrained(peft_model_path) tokenizer.save_pretrained(peft_model_path) </code></pre> <p>error:</p> <pre><code>return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training) torch.cuda.OutOfMemoryError: CUDA out of memory. 
Tried to allocate 20.00 MiB (GPU 0; 7.79 GiB total capacity; 1.10 GiB already allocated; 17.31 MiB free; 1.11 GiB reserved in total by PyTorch) If reserved memory is &gt;&gt; allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 0%| | 0/32 [00:00&lt;?, ?it/s] </code></pre> <p>update:</p> <p>I tried going down to batch size 1 and I got the error message below</p> <pre><code>attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as( torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 7.79 GiB total capacity; 1.10 GiB already allocated; 11.31 MiB free; 1.12 GiB reserved in total by PyTorch) If reserved memory is &gt;&gt; allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF </code></pre>
<p>Have you tried re-enabling gradient_checkpointing, or enabling it at all? According to <a href="https://huggingface.co/docs/transformers/v4.18.0/en/performance" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/v4.18.0/en/performance</a>, even if everything else is set up fine, you can still run into out-of-memory issues due to gradient-calculation overhead. During gradient calculation, all activations from the forward pass are saved, and this can cause a huge spike in memory consumption. To fix this, the Huggingface docs recommend enabling gradient_checkpointing in the TrainingArguments. I see that you have the line</p> <pre><code>peft_model.config.gradient_checkpointing = True </code></pre> <p>commented out. I haven't tried this, but maybe you should re-enable it. Otherwise, follow the docs and just enable gradient_checkpointing in the TrainingArguments.</p> <p>I had the same problem when fine-tuning Llama 2 with PEFT and LoRA with quantization. The model itself would fit easily onto one GPU (my setup is 2 GPUs with 24 GB each); however, during fine-tuning the memory consumption spiked dramatically. A 4-bit quantized llama-2-13B would consume around 7-8 GB regularly and suddenly spike to above 24 GB during fine-tuning. The approach mentioned in the Huggingface docs fixed the problem for me.</p>
96
T5 model
How to use Huggingface Transformers with PrimeQA model?
https://stackoverflow.com/questions/73357305/how-to-use-huggingface-transformers-with-primeqa-model
<p>Here is the model <a href="https://huggingface.co/PrimeQA/t5-base-table-question-generator" rel="nofollow noreferrer">https://huggingface.co/PrimeQA/t5-base-table-question-generator</a></p> <p>Hugging face says that I should use the following code to use the model in transformers:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(&quot;PrimeQA/t5-base-table-question-generator&quot;) model = AutoModelForSeq2SeqLM.from_pretrained(&quot;PrimeQA/t5-base-table-question-generator&quot;) </code></pre> <p>It also provides a link to this documentation in the model <a href="https://github.com/primeqa/primeqa/blob/main/notebooks/qg/tableqg_inference.ipynb" rel="nofollow noreferrer">https://github.com/primeqa/primeqa/blob/main/notebooks/qg/tableqg_inference.ipynb</a></p> <p>It has the following code:</p> <pre><code>from primeqa.qg.models.qg_model import QGModel from tabulate import tabulate # only used to visualize table model_name = 'PrimeQA/t5-base-table-question-generator' table_qg_model = QGModel(model_name, modality='table') table_list = [ {&quot;header&quot;: [&quot;Player&quot;, &quot;No.&quot;, &quot;Nationality&quot;, &quot;Position&quot;, &quot;Years in Toronto&quot;, &quot;School Team&quot;], &quot;rows&quot;: [ [&quot;Antonio Lang&quot;, 21, &quot;United States&quot;, &quot;Guard-Forward&quot;, &quot;1999-2000&quot;, &quot;Duke&quot;], [&quot;Voshon Lenard&quot;, 2, &quot;United States&quot;, &quot;Guard&quot;, &quot;2002-03&quot;, &quot;Minnesota&quot;], [&quot;Martin Lewis&quot;, 32, &quot;United States&quot;, &quot;Guard-Forward&quot;, &quot;1996-97&quot;, &quot;Butler CC (KS)&quot;], [&quot;Brad Lohaus&quot;, 33, &quot;United States&quot;, &quot;Forward-Center&quot;, &quot;1996&quot;, &quot;Iowa&quot;], [&quot;Art Long&quot;, 42, &quot;United States&quot;, &quot;Forward-Center&quot;, &quot;2002-03&quot;, &quot;Cincinnati&quot;] ] } ] # [optional] include an id_list aligned with table_list 
id_list = [&quot;abcID123&quot;] print(tabulate(table_list[0]['rows'], headers=table_list[0]['header'], tablefmt='grid')) table_qg_model.generate_questions(table_list, num_questions_per_instance = 5, agg_prob = [1.,0,0,0,0,0], num_where_prob = [0,1.,0,0,0], ineq_prob = 0.0, id_list=id_list ) </code></pre> <p>How do I combine these two snippets? I did the following but I get errors:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained(&quot;PrimeQA/t5-base-table-question-generator&quot;) model_name = AutoModelForSeq2SeqLM.from_pretrained(&quot;PrimeQA/t5-base-table-question-generator&quot;) table_list = [ {&quot;header&quot;: [&quot;Player&quot;, &quot;No.&quot;, &quot;Nationality&quot;, &quot;Position&quot;, &quot;Years in Toronto&quot;, &quot;School Team&quot;], &quot;rows&quot;: [ [&quot;Antonio Lang&quot;, 21, &quot;United States&quot;, &quot;Guard-Forward&quot;, &quot;1999-2000&quot;, &quot;Duke&quot;], [&quot;Voshon Lenard&quot;, 2, &quot;United States&quot;, &quot;Guard&quot;, &quot;2002-03&quot;, &quot;Minnesota&quot;], [&quot;Martin Lewis&quot;, 32, &quot;United States&quot;, &quot;Guard-Forward&quot;, &quot;1996-97&quot;, &quot;Butler CC (KS)&quot;], [&quot;Brad Lohaus&quot;, 33, &quot;United States&quot;, &quot;Forward-Center&quot;, &quot;1996&quot;, &quot;Iowa&quot;], [&quot;Art Long&quot;, 42, &quot;United States&quot;, &quot;Forward-Center&quot;, &quot;2002-03&quot;, &quot;Cincinnati&quot;] ] } ] model_name.generate_questions(table_list, num_questions_per_instance = 5, agg_prob = [1.,0,0,0,0,0], num_where_prob = [0,1.,0,0,0], ineq_prob = 0.0 ) </code></pre> <p>It gives me the following error:</p> <pre><code>AttributeError: 'T5ForConditionalGeneration' object has no attribute 'generate_questions' </code></pre>
<p>You can load the <code>PrimeQA/t5-base-table-question-generator</code> model using the Huggingface transformers library directly. However, you cannot call the function <code>generate_questions</code>. This is because the function <code>generate_questions</code> is defined in the class <a href="https://github.com/primeqa/primeqa/blob/main/primeqa/qg/models/qg_model.py#L27" rel="nofollow noreferrer">QGModel</a>. <code>QGModel</code> is a wrapper around the Huggingface <code>AutoModelForSeq2SeqLM</code> class and provides additional functionality, such as reducing hallucinations. These functions are not defined in the <code>AutoModelForSeq2SeqLM</code> class.</p> <p>So, you could load the model using <code>AutoModelForSeq2SeqLM</code> and perform generation using <code>model.generate()</code>. However, you cannot call the functions <code>generate_questions()</code> or <code>prune_hallucinations()</code>, as they are defined in the <code>QGModel</code> class.</p>
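The AttributeError in the question is just ordinary Python behaviour: a method defined on a wrapper class does not exist on the object it wraps. A stripped-down, library-free illustration (the class names below are placeholders standing in for T5ForConditionalGeneration and QGModel):

```python
class BaseModel:
    """Stand-in for transformers' T5ForConditionalGeneration."""
    def generate(self):
        return "raw seq2seq generation"

class QGWrapper:
    """Stand-in for primeqa's QGModel: wraps the base model, adds methods."""
    def __init__(self):
        self.model = BaseModel()

    def generate_questions(self):
        # the extra functionality lives here, on the wrapper
        return "question built from: " + self.model.generate()

wrapper = QGWrapper()
print(wrapper.generate_questions())   # works: method defined on the wrapper

base = BaseModel()
try:
    base.generate_questions()         # fails: method only exists on the wrapper
except AttributeError as err:
    print("AttributeError:", err)
```

This is why calling `generate_questions` on the model returned by `AutoModelForSeq2SeqLM.from_pretrained` raises `AttributeError`: the weights are the same, but the wrapper's methods are not there.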
97
T5 model
pytorch model saved from TPU run on CPU
https://stackoverflow.com/questions/63419835/pytorch-model-saved-from-tpu-run-on-cpu
<p>I found an interesting model (a question generator) but can't run it. I got this error:</p> <pre><code>Traceback (most recent call last):
  File &quot;qg.py&quot;, line 5, in &lt;module&gt;
    model = AutoModelWithLMHead.from_pretrained(&quot;/home/user/ml-experiments/gamesgen/t5-base-finetuned-question-generation-ap/&quot;)
  File &quot;/home/user/.virtualenvs/hugging/lib/python3.7/site-packages/transformers/modeling_auto.py&quot;, line 806, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File &quot;/home/user/.virtualenvs/hugging/lib/python3.7/site-packages/transformers/modeling_utils.py&quot;, line 798, in from_pretrained
    import torch_xla.core.xla_model as xm
ModuleNotFoundError: No module named 'torch_xla'
</code></pre> <p>I briefly googled and found that &quot;torch_xla&quot; is something used to train PyTorch models on a TPU. But I would like to run the model locally on the CPU (for inference, of course), and got this error when PyTorch tried to load TPU-bound tensors. How can I fix it?</p> <p>This is the model I tried: <a href="https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap" rel="nofollow noreferrer">https://huggingface.co/mrm8488/t5-base-finetuned-question-generation-ap</a></p>
<p>As @cronoik suggested, I installed the <code>transformers</code> library from GitHub. I cloned the latest version and executed <code>python3 setup.py install</code> in its directory. The bug is fixed there, but the fix has not yet been released to the Python package repository (PyPI).</p>
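Relatedly, when a checkpoint contains tensors bound to another device (TPU/GPU) and you only need CPU inference, `torch.load` accepts a `map_location` argument that remaps all storages onto the CPU. A tiny self-contained demonstration (an in-memory buffer stands in for the checkpoint file):

```python
import io
import torch

# Save a tensor "checkpoint", then load it back forcing everything onto the CPU.
buffer = io.BytesIO()
torch.save({"weight": torch.ones(3)}, buffer)
buffer.seek(0)

state = torch.load(buffer, map_location="cpu")
print(state["weight"].device)  # cpu
```

For Huggingface models this remapping happens inside `from_pretrained` (the traceback in a later question in this collection shows it calling `torch.load(resolved_archive_file, map_location="cpu")`), which is why the fixed transformers version resolves the error; the same `map_location` idea applies whenever you `torch.load` a checkpoint produced on different hardware.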
98
T5 model
In the original T5 paper, what does &#39;step&#39; mean?
https://stackoverflow.com/questions/75432369/in-the-original-t5-paper-what-does-step-mean
<p>I have been reading the original T5 paper 'Exploring the limits of transfer learning with a unified text-to-text transformer.' On page 11, it says &quot;We pre-train each model for 2^19=524,288 <strong>steps</strong> on C4 before fine-tuning.&quot;</p> <p>I am not sure what '<strong>steps</strong>' means here. Is it the same as epochs? Or the number of iterations per epoch?</p> <p>My guess is that 'steps' = 'iterations' within a single epoch.</p>
<p>A step is a single training iteration. In a step, the model is given a single batch of training instances. So if the batch size is <code>128</code>, then the model is exposed to 128 instances in a single step.</p> <p>Epochs aren't the same as steps. An epoch is a single pass over an entire training set. So if the training data contains, for example, 128,000 instances &amp; the batch size is <code>128</code>, an epoch amounts to 1,000 steps (128 × 1,000 = 128,000).</p> <p>The relationship between epochs &amp; steps depends on the size of the training data (see <a href="https://stackoverflow.com/q/38340311/11858455">this question</a> for a more detailed comparison). If the data size is changed, then the effective number of steps in an epoch changes as well (keeping the batch size fixed). So a dataset of 1,280,000 instances would take more steps per epoch, &amp; vice-versa for a dataset of 12,800 instances.</p> <p>For this reason, steps are typically reported, especially when it comes to pre-training models on large corpora, because a direct comparison can be made in terms of steps &amp; batch size, which isn't possible (or at least harder to do) with epochs. So, if someone else wants to compare using an entirely different dataset with a different size, the model would &quot;see&quot; the same number of training instances if the number of steps &amp; batch size are the same, ensuring that a model isn't unfairly favoured due to training on more instances.</p>
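The arithmetic above, written out as a quick sanity check (the batch size of 128 is the same illustrative value used in the answer, not necessarily the one T5 actually pre-trained with):

```python
batch_size = 128

# steps per epoch for a 128,000-instance dataset
dataset_size = 128_000
steps_per_epoch = dataset_size // batch_size
print(steps_per_epoch)            # 1000

# T5 pre-trains for 2^19 steps; total instances "seen" at this batch size
total_steps = 2 ** 19
print(total_steps)                # 524288
print(total_steps * batch_size)   # 67108864 instances, regardless of epochs
```

Note that the last number is independent of the dataset size, which is exactly why reporting steps (plus batch size) makes pre-training runs comparable across corpora.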
99
BERT model
Retrain a BERT Model
https://stackoverflow.com/questions/70080219/retrain-a-bert-model
<p>I have trained a BERT model using PyTorch on about a million text samples for a classification task. After testing this model with new data I get False Positives and False Negatives. Now I want to retrain the existing model only with the FN and FP. I do not want to append the FN and FP to the existing dataset and then train the entire model again. How do I retrain this BERT model only with these FN and FP on top of the previously trained model?</p>
<p>Without knowing the code for your train loop, the idea should look something like this after training:</p> <pre><code>preds = model(data).argmax(dim=-1)  # predicted class per example

wrong_datapoints = []
for i, pred in enumerate(preds):
    if pred != labels[i]:
        wrong_datapoints.append((data[i], labels[i]))

data_new, labels_new = zip(*wrong_datapoints)

# model.train() only switches the model into training mode;
# run your usual optimizer loop over (data_new, labels_new) to fine-tune.
model.train()
</code></pre> <p>If you want something more specific, you're going to have to provide the code of your current train loop.</p>
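To spell out that optimizer loop: below is a minimal sketch of continued training on only the misclassified examples. A tiny linear classifier stands in for the trained BERT model; the loop itself is the same for BERT. Use a low learning rate so the update doesn't overwrite what the model already learned, and note that training on errors alone risks hurting performance on previously correct examples:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(8, 2)  # stand-in for the already-trained BERT classifier
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # low LR for continued training
criterion = nn.CrossEntropyLoss()

# Only the previously misclassified examples (dummy tensors here):
data_new = torch.randn(16, 8)
labels_new = torch.randint(0, 2, (16,))

model.train()  # training mode (enables dropout etc. on real models)
for epoch in range(3):
    optimizer.zero_grad()
    loss = criterion(model(data_new), labels_new)
    loss.backward()
    optimizer.step()

print(loss.item() >= 0.0)  # True: cross-entropy loss is a non-negative scalar
```

For an actual BERT classifier, `data_new` would be the tokenized inputs (input ids plus attention mask) of the misclassified texts rather than raw feature tensors.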
100
BERT model
Unable to fit BERT model
https://stackoverflow.com/questions/66844078/unable-to-fit-bert-model
<p>I am trying to train a BERT model based on an online tutorial (Coursera). The objective is to use the Quora insincere questions data to train a BERT model. The entire code is shown below:</p> <pre><code># Function to fine-tune BERT for Text Classification # bert_layer = hub.KerasLayer('https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2', trainable=True) def fine_tune(bert_layer, tokenizer, train_data, valid_data): model = create_model(bert_layer) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5), loss=tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.BinaryAccuracy()]) print(model.summary()) # Train the model epochs = 4 history = model.fit(train_data, validation_data=valid_data, epochs=epochs, verbose=1) </code></pre> <p>However, I get the following error when running the model at model.fit():</p> <pre><code>Epoch 1/4 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-5-21a1de6e4ed2&gt; in &lt;module&gt;() 166 train_model() 167 --&gt; 168 main() 3 frames &lt;ipython-input-5-21a1de6e4ed2&gt; in main() 164 def main(): 165 # check_version() --&gt; 166 train_model() 167 168 main() &lt;ipython-input-5-21a1de6e4ed2&gt; in train_model() 70 71 # Run the model ---&gt; 72 fine_tune(bert_layer, tokenizer, train_data, valid_data) 73 74 def prepare_bert(): &lt;ipython-input-5-21a1de6e4ed2&gt; in fine_tune(bert_layer, tokenizer, train_data, valid_data) 148 validation_data=valid_data, 149 epochs=epochs, --&gt; 150 verbose=1) 151 152 plot_graphs(history, 'binary_accuracy') /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1108 1109 if logs is None: 
-&gt; 1110 raise ValueError('Expect x to be a non-empty array or dataset.') 1111 epoch_logs = copy.copy(logs) 1112 ValueError: Expect x to be a non-empty array or dataset. </code></pre> <p>The train data is prepared as follows:</p> <pre><code>df = pd.read_csv('https://archive.org/download/fine-tune-bert-tensorflow-train.csv/train.csv.zip', compression='zip', low_memory=True) train_df, remaining = train_test_split(df, random_state=42, train_size=0.0075, stratify=df.target.values) train_data = tf.data.Dataset.from_tensors((train_df['question_text'].values, train_df['target'].values)) train_data = (train_data.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) # .cache() .shuffle(1000) .batch(32, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) </code></pre> <p>I get the following when I print train_data.element_spec:</p> <pre><code>({'input_word_ids': TensorSpec(shape=(32, 128), dtype=tf.int32, name=None), 'input_mask': TensorSpec(shape=(32, 128), dtype=tf.int32, name=None), 'input_type_ids': TensorSpec(shape=(32, 128), dtype=tf.int32, name=None)}, TensorSpec(shape=(32,), dtype=tf.int32, name=None)) </code></pre> <p>This would mean the train_data is non-empty.</p> <p>How can I resolve this problem? Many thanks in advance.</p>
101
BERT model
Bert model splits words by its own
https://stackoverflow.com/questions/76238212/bert-model-splits-words-by-its-own
<p>I am tokenizing the input words using a BERT model. The code is:</p> <pre><code>tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=False)
model = BertModel.from_pretrained(&quot;bert-base-multilingual-cased&quot;,
                                  add_pooling_layer=False,
                                  output_hidden_states=True,
                                  output_attentions=True)

marked_text = text + &quot; [SEP]&quot;
tokenized_text = tokenizer.tokenize(marked_text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
print(tokenized_text)
print(indexed_tokens)
</code></pre> <p>The model I used is from HuggingFace.</p> <p>My goal is to print the embedding vectors of all words the BERT model has, so I searched and found that this model has 119,296 tokens available.</p> <p>I don't know whether this number of tokens is the reason, but the model splits words on its own, which is unwanted for me.</p> <p>For example:</p> <pre><code>only -&gt; [only]
ONLY -&gt; [ON,L,Y]
stradivarius -&gt; ['St', '##radi', '##vari', '##us']
</code></pre> <p>Is this natural BERT behaviour, or am I doing something wrong?</p>
<p>You are not doing anything wrong. BERT uses a so-called <a href="https://huggingface.co/docs/transformers/tokenizer_summary#wordpiece" rel="nofollow noreferrer">wordpiece</a> subword tokenizer, a compromise between a character-level tokenizer (small vocabulary) and a word-level tokenizer (large vocabulary) that gives meaningful embeddings at an acceptable memory consumption.</p> <p>A common approach to retrieve word embeddings from a subword-based model is to take the mean of the respective tokens. The code below shows how you can retrieve the word embeddings (non-contextualized and contextualized) by taking the mean. It uses a fast tokenizer to utilize the methods of the <a href="https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding" rel="nofollow noreferrer">BatchEncoding</a> object.</p> <pre class="lang-py prettyprint-override"><code>import torch
from transformers import BertTokenizerFast, BertModel

t = BertTokenizerFast.from_pretrained('bert-base-multilingual-cased')
# whole model
m = BertModel.from_pretrained(&quot;bert-base-multilingual-cased&quot;)
# token embedding layer
embedding_layer = m.embeddings.word_embeddings

sample_sentence = 'This is an example with token-embeddings and word-embeddings'
encoded = t([sample_sentence])

# The BatchEncoding object allows us to map the token back to the string indices
print(*[(token_id, encoded.token_to_chars(idx)) for idx, token_id in enumerate(encoded.input_ids[0])], sep=&quot;\n&quot;)
# And we can also check the mapping of word to token indices
print(*[(word, encoded.word_to_tokens(idx)) for idx, word in enumerate(sample_sentence.split())], sep=&quot;\n&quot;)
</code></pre> <p>Output:</p> <pre><code>(101, None)
(10747, CharSpan(start=0, end=4))
(10124, CharSpan(start=5, end=7))
(10151, CharSpan(start=8, end=10))
(14351, CharSpan(start=11, end=18))
(10169, CharSpan(start=19, end=23))
(18436, CharSpan(start=24, end=27))
(10136, CharSpan(start=27, end=29))
(118, CharSpan(start=29, end=30))
(10266, CharSpan(start=30, end=32))
(33627, CharSpan(start=32, end=35))
(13971, CharSpan(start=35, end=39))
(10107, CharSpan(start=39, end=40))
(10111, CharSpan(start=41, end=44))
(12307, CharSpan(start=45, end=49))
(118, CharSpan(start=49, end=50))
(10266, CharSpan(start=50, end=52))
(33627, CharSpan(start=52, end=55))
(13971, CharSpan(start=55, end=59))
(10107, CharSpan(start=59, end=60))
(102, None)

('This', TokenSpan(start=1, end=2))
('is', TokenSpan(start=2, end=3))
('an', TokenSpan(start=3, end=4))
('example', TokenSpan(start=4, end=5))
('with', TokenSpan(start=5, end=6))
('token-embeddings', TokenSpan(start=6, end=8))
('and', TokenSpan(start=8, end=9))
('word-embeddings', TokenSpan(start=9, end=13))
</code></pre> <p>To retrieve the word embeddings:</p> <pre><code>with torch.inference_mode():
    token_embeddings = embedding_layer(encoded.convert_to_tensors(&quot;pt&quot;).input_ids).squeeze()
    # we need the attention mechanism of the whole model to get the contextualized token representations
    contextualized_token_embeddings = m(**encoded.convert_to_tensors(&quot;pt&quot;)).last_hidden_state.squeeze()

def fetch_word_embeddings(sample_sentence: str, encoded, embeddings: torch.Tensor) -&gt; dict[str, torch.Tensor]:
    word_embeddings = {}
    for idx, word in enumerate(sample_sentence.split()):
        start, end = encoded.word_to_tokens(idx)
        word_embeddings[word] = embeddings[start:end].mean(dim=0)
    return word_embeddings

word_embeddings = fetch_word_embeddings(sample_sentence, encoded, token_embeddings)
contextualized_word_embeddings = fetch_word_embeddings(sample_sentence, encoded, contextualized_token_embeddings)

print(word_embeddings[&quot;token-embeddings&quot;])
print(contextualized_word_embeddings[&quot;token-embeddings&quot;])
</code></pre> <p>Output:</p> <pre><code>tensor([ 1.2455e-02, -3.8478e-02,  8.0834e-03,  ..., -1.8502e-02,
         1.1511e-02, -6.5307e-02])
tensor([-5.1564e-01, -1.6266e-01, -3.9420e-01,  ..., -5.9969e-02,
         3.0784e-01, -3.4451e-01])
</code></pre>
102
BERT model
Add dense layer on top of Huggingface BERT model
https://stackoverflow.com/questions/64156202/add-dense-layer-on-top-of-huggingface-bert-model
<p>I want to add a dense layer on top of the bare BERT Model transformer outputting raw hidden-states, and then fine tune the resulting model. Specifically, I am using <a href="https://huggingface.co/dbmdz/bert-base-italian-xxl-cased" rel="noreferrer">this</a> base model. This is what the model should do:</p> <ol> <li>Encode the sentence (a vector with 768 elements for each token of the sentence)</li> <li>Keep only the first vector (related to the first token)</li> <li>Add a dense layer on top of this vector, to get the desired transformation</li> </ol> <p>So far, I have successfully encoded the sentences:</p> <pre><code>from sklearn.neural_network import MLPRegressor import torch from transformers import AutoModel, AutoTokenizer # List of strings sentences = [...] # List of numbers labels = [...] tokenizer = AutoTokenizer.from_pretrained(&quot;dbmdz/bert-base-italian-xxl-cased&quot;) model = AutoModel.from_pretrained(&quot;dbmdz/bert-base-italian-xxl-cased&quot;) # 2D array, one line per sentence containing the embedding of the first token encoded_sentences = torch.stack([model(**tokenizer(s, return_tensors='pt'))[0][0][0] for s in sentences]).detach().numpy() regr = MLPRegressor() regr.fit(encoded_sentences, labels) </code></pre> <p>In this way I can train a neural network by feeding it with the encoded sentences. However, this approach clearly does not fine tune the base BERT model. Can anybody help me? How can I build a model (possibly in pytorch or using the Huggingface library) that can be entirely fine tuned?</p>
<p>There are two ways to do it. Since you are looking to fine-tune the model for a downstream task similar to classification, you can directly use the <code>BertForSequenceClassification</code> class, which fine-tunes a logistic-regression-style classification head on top of the 768-dimensional output.</p> <p>Alternatively, you can define a custom module that creates a BERT model based on the pre-trained weights and adds your own layers on top of it:</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, BertModel

class CustomBERTModel(nn.Module):
    def __init__(self):
        super(CustomBERTModel, self).__init__()
        self.bert = BertModel.from_pretrained(&quot;dbmdz/bert-base-italian-xxl-cased&quot;)
        ### New layers:
        self.linear1 = nn.Linear(768, 256)
        self.linear2 = nn.Linear(256, 3)  ## 3 is the number of classes in this example

    def forward(self, ids, mask):
        # return_dict=False so the call returns a tuple (needed on transformers v4+)
        sequence_output, pooled_output = self.bert(ids, attention_mask=mask, return_dict=False)
        # sequence_output has the following shape: (batch_size, sequence_length, 768)
        linear1_output = self.linear1(sequence_output[:, 0, :].view(-1, 768))  ## extract the 1st token's embeddings
        linear2_output = self.linear2(linear1_output)
        return linear2_output

tokenizer = AutoTokenizer.from_pretrained(&quot;dbmdz/bert-base-italian-xxl-cased&quot;)
model = CustomBERTModel()  # You can pass parameters if required to have a more flexible model
model.to(torch.device(&quot;cpu&quot;))  ## can be gpu
criterion = nn.CrossEntropyLoss()  ## If required define your own criterion
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()))

for epoch in range(epochs):
    for batch in data_loader:  ## If you have a DataLoader() object to get the data.
        data = batch[0]
        targets = batch[1]  ## assuming that the data loader returns a tuple of data and its targets

        optimizer.zero_grad()
        encoding = tokenizer.batch_encode_plus(data, return_tensors='pt', padding=True,
                                               truncation=True, max_length=50, add_special_tokens=True)
        input_ids = encoding['input_ids']
        attention_mask = encoding['attention_mask']
        outputs = model(input_ids, attention_mask)  ## tensors must be built before the forward pass
        outputs = F.log_softmax(outputs, dim=1)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
</code></pre>
103
BERT model
Load bert model in java
https://stackoverflow.com/questions/61000862/load-bert-model-in-java
<p>I have a BERT model for named entity recognition (config.json, model.bin, vocab.txt). I can load the model and get named entities from text in Python:</p> <pre><code>input_text = &quot;I live in London&quot;
model_dir = &quot;/content/gdrive/My Drive/models/v1&quot;
print(get_predictions(input_text, model_dir))
</code></pre> <p>How can I load this model in Java and get named entities?</p>
104
BERT model
Does Bert model need text?
https://stackoverflow.com/questions/70649831/does-bert-model-need-text
<p>Do BERT models need pre-processed text (like removing special characters, stopwords, etc.), or can I directly pass my text as-is to BERT models (HuggingFace libraries)?</p> <p><em>Note</em>: follow-up question to <a href="https://stackoverflow.com/questions/70067901/string-cleaning-preprocessing-for-bert">String cleaning/preprocessing for BERT</a></p>
<p>Cleaning the input text for transformer models is not required. Removing stop words (which are considered noise in conventional text representations like bag-of-words or tf-idf) can and <strong>probably will worsen</strong> the predictions of your BERT model.</p> <p>Since BERT makes use of the self-attention mechanism, these 'stop words' are valuable information for it.</p> <p>Consider the following example: Python's NLTK library considers words like 'her' or 'him' as stop words. Let's say we want to process a text like: 'I told her about the best restaurants in town'. Removing stop words with NLTK would give us: 'I told best restaurants town'. As you can see, a lot of information is discarded. Sure, we could still train a classic ML classifier (e.g. for topic classification, here food), but BERT captures a lot more semantic information based on the surroundings of words.</p>
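To make the information loss concrete, here is a tiny stand-alone sketch of what such stop-word filtering does (the stop-word set is a hand-picked subset for illustration, not NLTK's actual list):

```python
# Hand-picked stop words for the example (NLTK's real list is much longer).
stop_words = {"her", "about", "the", "in"}

sentence = "I told her about the best restaurants in town"
filtered = " ".join(w for w in sentence.split() if w.lower() not in stop_words)
print(filtered)  # I told best restaurants town
```

Every dropped word is a token that BERT's self-attention can no longer attend to, which is why this kind of cleaning tends to hurt rather than help transformer models.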
105
BERT model
Updating a BERT model through Huggingface transformers
https://stackoverflow.com/questions/58620282/updating-a-bert-model-through-huggingface-transformers
<p>I am attempting to update the pre-trained BERT model using an in-house corpus. I have looked at the Huggingface transformer docs and I am a little stuck, as you will see below. My goal is to compute simple similarities between sentences using the cosine distance, but I need to update the pre-trained model for my specific use case.</p> <p>If you look at the code below, which is taken directly from the Huggingface docs: I am attempting to &quot;retrain&quot; or update the model, and I assumed that special_token_1 and special_token_2 represent &quot;new sentences&quot; from my in-house data or corpus. Is this correct? In summary, I like the already pre-trained BERT model, but I would like to update it or retrain it using another in-house dataset. Any leads will be appreciated.</p> <pre><code>import tensorflow as tf
import tensorflow_datasets
from transformers import *

model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

SPECIAL_TOKEN_1 = &quot;dogs are very cute&quot;
SPECIAL_TOKEN_2 = &quot;dogs are cute but i like cats better and my brother thinks they are more cute&quot;
tokenizer.add_tokens([SPECIAL_TOKEN_1, SPECIAL_TOKEN_2])
model.resize_token_embeddings(len(tokenizer))

# Train our model
model.train()
model.eval()
</code></pre>
<p>BERT is pre-trained on 2 tasks: masked language modeling (MLM) and next sentence prediction (NSP). The more important of the two is MLM (it turns out that the next sentence prediction task is not really that helpful for the model's language understanding capabilities - RoBERTa, for example, is only pre-trained on MLM).</p> <p>If you want to further train the model on your own dataset, you can do so by using <a href="https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm" rel="noreferrer"><code>BertForMaskedLM</code></a> in the Transformers repository. This is BERT with a language modeling head on top, which allows you to perform masked language modeling (i.e. predicting masked tokens) on your own dataset. Here's how to use it:</p> <pre><code>from transformers import BertTokenizer, BertForMaskedLM
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict=True)

inputs = tokenizer(&quot;The capital of France is [MASK].&quot;, return_tensors=&quot;pt&quot;)
labels = tokenizer(&quot;The capital of France is Paris.&quot;, return_tensors=&quot;pt&quot;)[&quot;input_ids&quot;]

outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
</code></pre> <p>You can update the weights of BertForMaskedLM using <code>loss.backward()</code>, which is the main way of training PyTorch models. If you don't want to do this yourself, the Transformers library also provides a Python script which allows you to perform MLM really quickly on your own dataset. See <a href="https://github.com/huggingface/transformers/tree/master/examples/language-modeling" rel="noreferrer">here</a> (section &quot;RoBERTa/BERT/DistilBERT and masked language modeling&quot;). You just need to provide a training and test file.</p> <p>You don't need to add any special tokens.
Examples of special tokens are [CLS] and [SEP], which are used for sequence classification and question answering tasks (among others). These are added by the <code>tokenizer</code> automatically. How do I know this? Because <code>BertTokenizer</code> inherits from <code>PretrainedTokenizer</code>, and if you take a look at the documentation of its <code>__call__</code> method <a href="https://huggingface.co/transformers/master/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__" rel="noreferrer">here</a>, you can see that the <code>add_special_tokens</code> parameter defaults to True.</p>
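The linked script takes care of the masking step for you, but to make "predicting masked tokens" concrete, here is a dependency-free sketch of the 80/10/10 masking scheme BERT uses during MLM pre-training (the specific token ids, mask id, and `special_ids` set below are placeholders for illustration):

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, special_ids, mlm_prob=0.15):
    """BERT-style dynamic masking: pick ~15% of non-special tokens; of those,
    replace 80% with [MASK], 10% with a random token, and keep 10% unchanged.
    Unselected positions get label -100, which CrossEntropyLoss ignores."""
    masked_ids, labels = [], []
    for tid in token_ids:
        if tid not in special_ids and random.random() < mlm_prob:
            labels.append(tid)          # the model must predict the original token
            roll = random.random()
            if roll < 0.8:
                masked_ids.append(mask_id)
            elif roll < 0.9:
                masked_ids.append(random.randrange(vocab_size))
            else:
                masked_ids.append(tid)  # kept as-is, but still predicted
        else:
            labels.append(-100)         # position ignored by the loss
            masked_ids.append(tid)
    return masked_ids, labels
```

Feeding the returned masked ids as `input_ids` and the labels as `labels` to `BertForMaskedLM` reproduces the loss computation shown above, just on your own sentences.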
106
BERT model
Loading pretrained BERT model issue
https://stackoverflow.com/questions/66442648/loading-pretrained-bert-model-issue
<p>I am using Huggingface to further train a BERT model. I saved the model using two methods: step (1) Saving the entire model using this code: <code>model.save_pretrained(save_location)</code>, and step (2) save the state_dict of the model using this code: <code>torch.save(model.state_dict(),'model.pth')</code> However, when I try to load this pretrained BERT model using the following code <code>bert_mask_lm = BertForMaskedLM.from_pretrained('save_location')</code> for step (1) and <code>torch.load('model.pth')</code> for step (2), I am getting this following error in both steps:</p> <pre><code>AttributeError Traceback (most recent call last) ~/anaconda3/lib/python3.6/site-packages/torch/serialization.py in _check_seekable(f) 307 try: --&gt; 308 f.seek(f.tell()) 309 return True AttributeError: 'torch._C.PyTorchFileReader' object has no attribute 'seek' During handling of the above exception, another exception occurred: </code></pre> <p>Detailed stacktrace of step (1) is as follows:</p> <pre><code>AttributeError Traceback (most recent call last) ~/anaconda3/lib/python3.6/site-packages/torch/serialization.py in _check_seekable(f) 307 try: --&gt; 308 f.seek(f.tell()) 309 return True AttributeError: 'torch._C.PyTorchFileReader' object has no attribute 'seek' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1037 try: -&gt; 1038 state_dict = torch.load(resolved_archive_file, map_location=&quot;cpu&quot;) 1039 except Exception: ~/anaconda3/lib/python3.6/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args) 593 return torch.jit.load(opened_file) --&gt; 594 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) 595 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) 
~/anaconda3/lib/python3.6/site-packages/moxing/framework/file/file_io_patch.py in _load(f, map_location, pickle_module, **pickle_load_args) 199 --&gt; 200 _check_seekable(f) 201 f_should_read_directly = _should_read_directly(f) ~/anaconda3/lib/python3.6/site-packages/torch/serialization.py in _check_seekable(f) 310 except (io.UnsupportedOperation, AttributeError) as e: --&gt; 311 raise_err_msg([&quot;seek&quot;, &quot;tell&quot;], e) 312 return False ~/anaconda3/lib/python3.6/site-packages/torch/serialization.py in raise_err_msg(patterns, e) 303 + &quot; try to load from it instead.&quot;) --&gt; 304 raise type(e)(msg) 305 raise e AttributeError: 'torch._C.PyTorchFileReader' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) ~/work/algo-FineTuningBert3/FineTuningBert3.py in &lt;module&gt;() 1 #Model load checking ----&gt; 2 loadded_model = BertForMaskedLM.from_pretrained('/cache/raw_model/') ~/anaconda3/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1039 except Exception: 1040 raise OSError( -&gt; 1041 f&quot;Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' &quot; 1042 f&quot;at '{resolved_archive_file}'&quot; 1043 &quot;If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. &quot; OSError: Unable to load weights from pytorch checkpoint file for '/cache/raw_model/' at '/cache/raw_model/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. </code></pre> <p>I am using the latest torch (1.7.1) and transformers (4.3.3) packages. I do not clearly understand what causes this error and how to solve this issue.</p>
<p>I was going through the same thing. It turns out that this might be due to a version mismatch between PyTorch and transformers; the two versions have to be compatible with each other.</p> <p>I used the following without downloading the latest bert-base-uncased model:</p> <pre><code>pip install torch==1.5.1
pip install transformers==3.0.2
</code></pre> <pre><code>MODEL_NAME = 'bert-base-uncased'
model = BertForTokenClassification.from_pretrained(MODEL_NAME)
</code></pre> <p>This will automatically download the pre-trained BERT model with the suitable version of transformers. NOTE: I downloaded the vocab.txt explicitly from the official site separately and used that with the BERT tokenizer class.</p>
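Separately, the traceback itself suggests another thing worth trying ("pre-load the data into a buffer like io.BytesIO"): if the checkpoint file is intact but the environment patches file I/O (the `moxing` frame in the stack hints at that), reading the file into a seekable in-memory buffer first can sidestep the `seek` error. A minimal sketch, with an illustrative path:

```python
import io
import torch

def load_state_dict_from_file(path):
    # Read the whole checkpoint into a seekable in-memory buffer first,
    # sidestepping file objects that don't support seek()/tell().
    with open(path, "rb") as f:
        buffer = io.BytesIO(f.read())
    return torch.load(buffer, map_location="cpu")
```

If the error persists even with a buffer, the checkpoint was likely produced by an incompatible version, which brings it back to pinning versions as above.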
107
BERT model
Saving a &#39;fine-tuned&#39; bert model
https://stackoverflow.com/questions/59340061/saving-a-fine-tuned-bert-model
<p>I am trying to save a fine tuned bert model. I have ran the code correctly - it works fine, and in the ipython console I am able to call getPrediction and have it result the result. </p> <p>I have my weight files saved (highest being model.ckpt-333.data-00000-of-00001</p> <p>I have no idea how I would go about saving the model to be reuseable. </p> <p>I am using bert-tensorflow. </p> <pre><code>import json import pandas as pd import tensorflow as tf import tensorflow_hub as hub from datetime import datetime from sklearn.model_selection import train_test_split import os print("tensorflow version : ", tf.__version__) print("tensorflow_hub version : ", hub.__version__) #Importing BERT modules import bert from bert import run_classifier from bert import optimization from bert import tokenization #set output directory of the model OUTPUT_DIR = 'model' #@markdown Whether or not to clear/delete the directory and create a new one DO_DELETE = False #@param {type:"boolean"} if DO_DELETE: try: tf.gfile.DeleteRecursively(OUTPUT_DIR) except: pass tf.io.gfile.makedirs(OUTPUT_DIR) print('***** Model output directory: {} *****'.format(OUTPUT_DIR)) ### Load the data data = pd.read_csv("data/bbc-text.csv") data.columns = ['category', 'text'] print('*****Data Loaded: {} *****'.format(data.head())) #check to see if any null values are present. 
print('*****Empty Data: {} *****'.format(data[data.isnull().any(axis=1)])) #encode category variable into numeric data.category = pd.Categorical(data.category) data['code'] = data.category.cat.codes from sklearn.model_selection import train_test_split train, test = train_test_split(data, test_size=0.2, random_state=200) ## 2 -- Data Visualisation print(data.code.unique()) import matplotlib.pyplot as plt train['code'].value_counts().plot(kind = 'bar') DATA_COLUMN = 'text' LABEL_COLUMN = 'code' label_list = [0, 1, 2, 3, 4] plt.show() ## 2 -- Data Preprocessing train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, text_a = x[DATA_COLUMN], text_b = None, label = x[LABEL_COLUMN]), axis = 1) test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None, text_a = x[DATA_COLUMN], text_b = None, label = x[LABEL_COLUMN]), axis = 1) # This is a path to an uncased (all lowercase) version of BERT BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1" def create_tokenizer_from_hub_module(): """Get the vocab file and casing info from the Hub module.""" with tf.Graph().as_default(): bert_module = hub.Module(BERT_MODEL_HUB) tokenization_info = bert_module(signature="tokenization_info", as_dict=True) with tf.compat.v1.Session() as sess: vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"], tokenization_info["do_lower_case"]]) return bert.tokenization.FullTokenizer( vocab_file=vocab_file, do_lower_case=do_lower_case) tokenizer = create_tokenizer_from_hub_module() # We'll set sequences to be at most 128 tokens long. MAX_SEQ_LENGTH = 128 # Convert our train and validation features to InputFeatures that BERT understands. 
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer) test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer) #Example on first observation in the training set print("Example of train[0] as a training set") print("Sentence : ", train_InputExamples.iloc[0].text_a) print("-"*30) print("Tokens : ", tokenizer.tokenize(train_InputExamples.iloc[0].text_a)) print("-"*30) print("Input IDs : ", train_features[0].input_ids) print("-"*30) print("Input Masks : ", train_features[0].input_mask) print("-"*30) print("Segment IDs : ", train_features[0].segment_ids) ## 3. Creating a Multiclass Classifier def create_model(is_predicting, input_ids, input_mask, segment_ids, labels, num_labels): bert_module = hub.Module( BERT_MODEL_HUB, trainable=True) bert_inputs = dict( input_ids=input_ids, input_mask=input_mask, segment_ids=segment_ids) bert_outputs = bert_module( inputs=bert_inputs, signature="tokens", as_dict=True) # Use "pooled_output" for classification tasks on an entire sentence. # Use "sequence_outputs" for token-level output. output_layer = bert_outputs["pooled_output"] hidden_size = output_layer.shape[-1].value # Create our own layer to tune for politeness data. 
output_weights = tf.compat.v1.get_variable( "output_weights", [num_labels, hidden_size], initializer=tf.truncated_normal_initializer(stddev=0.02)) output_bias = tf.compat.v1.get_variable( "output_bias", [num_labels], initializer=tf.zeros_initializer()) with tf.compat.v1.variable_scope("loss"): # Dropout helps prevent overfitting output_layer = tf.nn.dropout(output_layer, keep_prob=0.9) logits = tf.matmul(output_layer, output_weights, transpose_b=True) logits = tf.nn.bias_add(logits, output_bias) log_probs = tf.nn.log_softmax(logits, axis=-1) # Convert labels into one-hot encoding one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32) predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32)) # If we're predicting, we want predicted labels and the probabiltiies. if is_predicting: return (predicted_labels, log_probs) # If we're train/eval, compute loss between predicted and actual label per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1) loss = tf.reduce_mean(per_example_loss) return (loss, predicted_labels, log_probs) #A function that adapts our model to work for training, evaluation, and prediction. # model_fn_builder actually creates our model function # using the passed parameters for num_labels, learning_rate, etc. 
def model_fn_builder(num_labels, learning_rate, num_train_steps, num_warmup_steps): """Returns `model_fn` closure for TPUEstimator.""" def model_fn(features, labels, mode, params): # pylint: disable=unused-argument """The `model_fn` for TPUEstimator.""" input_ids = features["input_ids"] input_mask = features["input_mask"] segment_ids = features["segment_ids"] label_ids = features["label_ids"] is_predicting = (mode == tf.estimator.ModeKeys.PREDICT) # TRAIN and EVAL if not is_predicting: (loss, predicted_labels, log_probs) = create_model( is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels) train_op = bert.optimization.create_optimizer( loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False) # Calculate evaluation metrics. def metric_fn(label_ids, predicted_labels): accuracy = tf.compat.v1.metrics.accuracy(label_ids, predicted_labels) true_pos = tf.compat.v1.metrics.true_positives( label_ids, predicted_labels) true_neg = tf.compat.v1.metrics.true_negatives( label_ids, predicted_labels) false_pos = tf.compat.v1.metrics.false_positives( label_ids, predicted_labels) false_neg = tf.compat.v1.metrics.false_negatives( label_ids, predicted_labels) return { "eval_accuracy": accuracy, "true_positives": true_pos, "true_negatives": true_neg, "false_positives": false_pos, "false_negatives": false_neg } eval_metrics = metric_fn(label_ids, predicted_labels) if mode == tf.estimator.ModeKeys.TRAIN: return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) else: return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metrics) else: (predicted_labels, log_probs) = create_model( is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels) predictions = { 'probabilities': log_probs, 'labels': predicted_labels } return tf.estimator.EstimatorSpec(mode, predictions=predictions) # Return the actual model function in the closure return model_fn # Compute train and warmup steps from batch size # These 
hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb) BATCH_SIZE = 16 LEARNING_RATE = 2e-5 NUM_TRAIN_EPOCHS = 3.0 # Warmup is a period of time where the learning rate is small and gradually increases--usually helps training. WARMUP_PROPORTION = 0.1 # Model configs SAVE_CHECKPOINTS_STEPS = 300 SAVE_SUMMARY_STEPS = 100 # Compute train and warmup steps from batch size num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS) num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION) # Specify output directory and number of checkpoint steps to save run_config = tf.estimator.RunConfig( model_dir=OUTPUT_DIR, save_summary_steps=SAVE_SUMMARY_STEPS, save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS) # Specify output directory and number of checkpoint steps to save run_config = tf.estimator.RunConfig( model_dir=OUTPUT_DIR, save_summary_steps=SAVE_SUMMARY_STEPS, save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS) #Initializing the model and the estimator model_fn = model_fn_builder( num_labels=len(label_list), learning_rate=LEARNING_RATE, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps) estimator = tf.estimator.Estimator( model_fn=model_fn, config=run_config, params={"batch_size": BATCH_SIZE}) # Create an input function for training. drop_remainder = True for using TPUs. train_input_fn = bert.run_classifier.input_fn_builder( features=train_features, seq_length=MAX_SEQ_LENGTH, is_training=True, drop_remainder=False) # Create an input function for validating. drop_remainder = True for using TPUs. 
test_input_fn = run_classifier.input_fn_builder( features=test_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False) # #Training the model print(f'Beginning Training!') current_time = datetime.now() estimator.train(input_fn=train_input_fn, max_steps=num_train_steps) print("Training took time ", datetime.now() - current_time) #Evaluating the model with Validation set accuracy = estimator.evaluate(input_fn=test_input_fn, steps=None) # A method to get predictions def getPrediction(in_sentences): # A list to map the actual labels to the predictions labels = ["business", "entertainment", "politics", "sports", "tech"] # Transforming the test data into BERT accepted form input_examples = [run_classifier.InputExample(guid="", text_a=x, text_b=None, label=0) for x in in_sentences] # Creating input features for Test data input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer) # Predicting the classes predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False) predictions = estimator.predict(predict_input_fn) return [(sentence, prediction['probabilities'], prediction['labels'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)] pred_sentences = list(test['text']) predictions = getPrediction(pred_sentences) enc_labels = [] act_labels = [] for i in range(len(predictions)): enc_labels.append(predictions[i][2]) act_labels.append(predictions[i][3]) pd.DataFrame(enc_labels, columns = ['category']).to_excel('data/submission_bert.xlsx', index = False) ## Random tester #Classifying random sentences tests = getPrediction(['Mr.Modi is the Indian Prime Minister', 'Gaming machines are powered by efficient micro processores and GPUs', 'That HBO TV series is really good', 'A trillion dollar economy ' ]) </code></pre>
<p>Note that the model in the question is built with TensorFlow's <code>Estimator</code> API (via bert-tensorflow), so PyTorch's <code>torch.save</code>/<code>torch.load</code> do not apply here. The estimator already writes checkpoints (the <code>model.ckpt-333.*</code> files) to <code>OUTPUT_DIR</code>; to export a reusable model, call <code>export_saved_model</code> with a serving input receiver whose feature names match the ones used in <code>model_fn</code>:</p> <pre><code>def serving_input_receiver_fn():
    features = {
        &quot;input_ids&quot;: tf.compat.v1.placeholder(tf.int32, [None, MAX_SEQ_LENGTH]),
        &quot;input_mask&quot;: tf.compat.v1.placeholder(tf.int32, [None, MAX_SEQ_LENGTH]),
        &quot;segment_ids&quot;: tf.compat.v1.placeholder(tf.int32, [None, MAX_SEQ_LENGTH]),
        &quot;label_ids&quot;: tf.compat.v1.placeholder(tf.int32, [None]),
    }
    return tf.estimator.export.ServingInputReceiver(features, features)

estimator.export_saved_model('export/bert_classifier', serving_input_receiver_fn)
</code></pre> <p>The exported SavedModel directory can then be reloaded (e.g. with <code>tf.saved_model.load</code>) or served with TensorFlow Serving without re-running training.</p>
108
BERT model
Saving bert model at every epoch for further training
https://stackoverflow.com/questions/70922216/saving-bert-model-at-every-epoch-for-further-training
<p>I am using <code>bert_model.save_pretrained</code> for saving the model at the end, as this is the command that saves the model with all configurations and weights, but it cannot be used as a callback in the <code>model.fit</code> command, so saving the model at each epoch does not use <code>save_pretrained</code>. Can anybody help me in saving the BERT model at each epoch, since I cannot train the whole BERT model in one go?</p> <p><strong>Edit</strong></p> <p>Code for loading the pre-trained BERT model</p> <pre><code>bert_model = TFAutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=num_classes)
</code></pre> <p>Code for compiling the BERT model</p> <pre><code>from tensorflow.keras import optimizers

bert_model.compile(loss='categorical_crossentropy',
                   optimizer=optimizers.Adam(learning_rate=0.00005),
                   metrics=['accuracy'])
bert_model.summary()
</code></pre> <p>Code for training and saving the BERT model</p> <pre><code>checkpoint_filepath_1 = 'callbacks_models/BERT1.{epoch:02d}-{val_loss:.2f}.h5'
checkpoint_filepath_2 = 'callbacks_models/complete_best_BERT_model_1.h5'

callbacks_1 = ModelCheckpoint(
    filepath=checkpoint_filepath_1,
    monitor='val_loss',
    mode='min',
    save_best_only=False,
    save_weights_only=False,
    save_freq='epoch')

callbacks_2 = ModelCheckpoint(
    filepath=checkpoint_filepath_2,
    monitor='val_loss',
    mode='min',
    save_best_only=True)

es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)

hist = bert_model.fit([train1_input_ids, train1_attention_masks], y_train1,
                      batch_size=16, epochs=1,
                      validation_data=([val_input_ids, val_attention_masks], y_val),
                      callbacks=[es, callbacks_1, callbacks_2, history_logger])

min_val_score = min(hist.history['val_loss'])
print(&quot;\nMinimum validation loss = &quot;, min_val_score)
bert_model.save_pretrained(&quot;callbacks_models/Complete_BERT_model_1.h5&quot;)
</code></pre>
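`ModelCheckpoint` only knows Keras's own saving formats. One common workaround, sketched here and not guaranteed for every transformers version, is a small custom callback that calls `save_pretrained` at the end of each epoch, so every checkpoint keeps `config.json` alongside the weights (the directory naming is arbitrary):

```python
import tensorflow as tf

class SavePretrainedCallback(tf.keras.callbacks.Callback):
    """Call `save_pretrained` after every epoch, one directory per epoch."""

    def __init__(self, output_dir):
        super().__init__()
        self.output_dir = output_dir

    def on_epoch_end(self, epoch, logs=None):
        # `self.model` is set by Keras during fit(); for a transformers model
        # this writes config.json plus the weights into the epoch directory.
        self.model.save_pretrained(f"{self.output_dir}/epoch-{epoch + 1}")
```

Passing `SavePretrainedCallback('callbacks_models/bert')` in the callbacks list next to `es` would then let each `epoch-N` directory be reloaded later with `TFAutoModelForSequenceClassification.from_pretrained`.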
109
BERT model
Training a BERT model and using the BERT embeddings
https://stackoverflow.com/questions/63476702/training-a-bert-model-and-using-the-bert-embeddings
<p>I've been reading up on BERT and using BERT embeddings for a classification task. I've read many articles but my understanding of it is still not 100% (I am self-taught in NLP, so my access to resources can be a bit restricted). First I'll describe my task.</p> <p>I was planning on using BERT embeddings for classification because of how it encapsulates the meaning and language. Unfortunately, there are no BERT models in my language (Irish), so I looked into training my own. I know that BERT is basically an 'extension' of sorts to a Transformer Encoder.</p> <p>Here are my issues/questions:</p> <ul> <li><p>I presume this is fairly obvious, but to check, pre-trained BERT embeddings cannot be applied to different languages (the standard embedding model is trained on the wiki dataset for English, I presume it may not be used on other languages for obvious reasons)?</p> </li> <li><p>My dataset contains about <strong>850k sentences</strong> in Irish (around <strong>22M words</strong>). Would that be enough to train a decent BERT model? I could find a bit more data but to get significantly more in Irish would be very hard.</p> </li> <li><p>Would one recommend making a BERT model 'from scratch' in PyTorch or TensorFlow, or are models from the likes of Fairseq and OpenNMT good to use?</p> </li> </ul> <p>Apologies for such a disjointed question, but in summary, I'm all over the place trying to make complete sense of BERT, specifically the training process and tuning it just for embeddings. If I've got this all wrong, or just have advice, I'd appreciate the feedback.</p>
<p>I'm just like you, a self-taught student of NLP. Since you have not started yet (you have quite a journey ahead, don't you?), I would recommend you check out this project in the <code>tensorflow</code> library, since it's from Google and you'll have better access to all its power (just my opinion):</p> <p>First, you'll need a vocab file to tokenize with: it's a text file (<code>txt</code>) which contains a fixed number of strings, one per line. <code>BERT</code> uses around 30,000, so think about your number as well (a higher number doesn't mean higher accuracy). <a href="https://www.tensorflow.org/text/guide/subwords_tokenizer" rel="nofollow noreferrer">This tutorial on tokenization will help you</a></p> <p>If you have a deep curiosity about how the transformer works, <a href="https://www.tensorflow.org/text/tutorials/transformer" rel="nofollow noreferrer">then check out this one in addition</a></p> <p>In terms of training a new <code>BERT</code> model from scratch, take a look at this question: <a href="https://stackoverflow.com/questions/61826824/can-you-train-a-bert-model-from-scratch-with-task-specific-architecture">Can you train a BERT model from scratch with task specific architecture?</a></p> <p>You'll need a very powerful computer to be able to do this; if not, you'll run into a lot of memory issues. On the other hand, TensorFlow allows you to train its <code>hub</code> models (both preprocessing and encoder), so I think there is no need to reinvent the wheel. For this, use <code>tensorflow_hub</code> (and also install <code>tensorflow_text</code>, since otherwise I think you'll hit dependency errors). Here is <a href="https://tfhub.dev/google/collections/bert/1" rel="nofollow noreferrer">the link to the TF Hub collection of BERT preprocessing and encoder models</a> (if you download one, you'll find the vocab file in the asset folder).</p> <p>850K sentences and 22M words may be less data than was used for the original <code>BERT</code>. If your project is just about curiosity, then it's big enough.</p> <p>Hope I've helped you! Good luck.</p> <p>PS: I'm starting to use <code>BERT</code> as well. Your question is from August last year, so I think you have been able to make some progress. I'd appreciate and be interested to read about it!</p>
110
BERT model
Compare cosine similarity of word with BERT model
https://stackoverflow.com/questions/69784408/compare-cosine-similarity-of-word-with-bert-model
<p>Hi, I am looking to generate similar words for a word using a BERT model, the same way we use gensim's most_similar to generate similar words. I found this approach:</p> <pre><code>from transformers import BertTokenizer, BertModel
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

word = &quot;Hello&quot;
inputs = tokenizer(word, return_tensors=&quot;pt&quot;)
outputs = model(**inputs)
word_vect = outputs.pooler_output.detach().numpy()
</code></pre> <p>Okay, now this gives me the embedding for the input word given by the user. Can we compare this embedding, by cosine similarity, against embeddings for the whole BERT vocabulary to find the top N closest matches, and then convert those embeddings back to words using the vocab.txt file in the model? Is that possible?</p>
<p>It seems like you need to store embeddings for every word in your vocabulary. After that, you can use a nearest-neighbour tool to find the closest embedding to the target embedding. For example, you can use <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html#sklearn.neighbors.NearestNeighbors" rel="nofollow noreferrer">NearestNeighbors</a> from scikit-learn. Another option you might like to consider is HNSW, a data structure specially designed for fast approximate nearest neighbour search. <a href="https://faiss.ai/" rel="nofollow noreferrer">Faiss</a>, Facebook's similarity-search library, provides a good implementation of HNSW.</p>
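As a concrete sketch of the first option (all names here are illustrative): once the per-word embeddings are stored as a matrix, a cosine-similarity search reduces to L2 normalization, a dot product, and an argsort; `NearestNeighbors` or Faiss do the same thing faster at scale.

```python
import numpy as np

def most_similar(query_vec, vocab_vecs, vocab_words, top_n=3):
    """Return the top_n (word, cosine similarity) pairs closest to query_vec."""
    q = query_vec / np.linalg.norm(query_vec)
    m = vocab_vecs / np.linalg.norm(vocab_vecs, axis=1, keepdims=True)
    sims = m @ q                        # cosine similarity of every row vs the query
    best = np.argsort(-sims)[:top_n]    # indices of the highest similarities
    return [(vocab_words[i], float(sims[i])) for i in best]
```

Here `vocab_vecs` would be the stacked embeddings computed once per vocabulary word, and `vocab_words` the corresponding lines of vocab.txt.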
111
BERT model
Evaluate BERT Model param.requires_grad
https://stackoverflow.com/questions/72687276/evaluate-bert-model-param-requires-grad
<p>I have a doubt regarding the evaluation on the test set of my BERT model. During the eval part, is param.requires_grad supposed to be True or False, independently of whether I did a full fine-tuning during training or not? My model is in model.eval() mode, but I want to be sure not to force anything wrong in the Model() class when I call it for evaluation. Thanks!</p> <pre><code>if freeze_bert == 'True':
    for param in self.bert.parameters():
        param.requires_grad = False
    #logging.info('freeze_bert: {}'.format(freeze_bert))
    #logging.info('param.requires_grad: {}'.format(param.requires_grad))

if freeze_bert == 'False':
    for param in self.bert.parameters():
        param.requires_grad = True
</code></pre>
<p>If you freeze your model then the parameters of the corresponding modules must not be updated, <em>i.e.</em> they should not require gradient computation: <code>requires_grad=False</code>.</p> <p>Note <a href="https://pytorch.org/docs/stable/generated/torch.nn.Module.html" rel="nofollow noreferrer"><code>nn.Module</code></a> also has a <a href="https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.requires_grad_" rel="nofollow noreferrer"><code>requires_grad_</code></a> method:</p> <pre><code>if freeze_bert == 'True':
    self.bert.requires_grad_(False)
elif freeze_bert == 'False':
    self.bert.requires_grad_(True)
</code></pre> <p>Ideally <code>freeze_bert</code> would be a boolean and you would simply do:</p> <pre><code>self.bert.requires_grad_(not freeze_bert)
</code></pre>
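A quick sanity check of the freezing logic, using a tiny stand-in module instead of the real BERT:

```python
import torch.nn as nn

# Tiny stand-ins for self.bert and the classification head.
encoder = nn.Linear(4, 4)
head = nn.Linear(4, 2)

encoder.requires_grad_(False)  # the "freeze_bert == 'True'" branch

frozen = [name for name, p in encoder.named_parameters() if not p.requires_grad]
trainable = [name for name, p in head.named_parameters() if p.requires_grad]
print(frozen)     # ['weight', 'bias'] - the encoder is fully frozen
print(trainable)  # ['weight', 'bias'] - the head still trains
```

Note that `model.eval()` and `requires_grad` are independent concerns: `eval()` switches layers such as dropout to inference behaviour, while `requires_grad` only controls gradient bookkeeping; during evaluation you would typically also wrap the forward pass in `torch.no_grad()` regardless of how the flags were set in training.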
112
BERT model
BERT finetuning : Is it right to train BERT Classification model at once?
https://stackoverflow.com/questions/68524992/bert-finetuning-is-it-right-to-train-bert-classification-model-at-once
<p>I have a question about training BERT classification (or pretrained) models.</p> <p>A BERT classifier model is usually constructed from 2 parts: the BERT model and a classifier.</p> <p>Much BERT fine-tuning example code trains the BERT model and the classifier layer at once. But I think the classifier should be trained first, while the BERT weights are not updated; after the classifier is trained, all model layers would be trained.</p> <p>Example</p> <pre class="lang-py prettyprint-override"><code>import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification()
...

# training1
for name, param in model.named_parameters():
    if 'classifier' in name:
        param.requires_grad = True   # only update the classifier
    else:
        param.requires_grad = False  # freeze the other layers
...

# And after training1, we can use the BERT model whose classifier alone was trained.
model = BertForSequenceClassification()
model.load_state_dict(torch.load({model only trained classifier}))
for name, param in model.named_parameters():
    param.requires_grad = True  # train all layers

# training the BERT classification model
</code></pre> <p>Why is the BERT classification model trained all at once? Thank you.</p>
<p>There are two ways to train a BERT-based classification model:</p> <ol> <li><p><strong>Finetuning</strong>: Which is the practice of training your classifier along with your text encoder (BERT in this case, but it can be any other text encoder, e.g., RoBERTa, ALBERT...). In this setting, the encoder and the classifier are both trained at the same time.</p> </li> <li><p><strong>BERT as an embedding model</strong>: Here you freeze the weights of BERT, and you only train the classifier. At the end of such a setting, BERT would be exactly the same as before training.</p> </li> </ol> <p>Research has shown that finetuning delivers slightly better results than using BERT as a frozen embedding model. You can find the original research paper where these results are discussed <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">here</a>.</p> <p>What you are suggesting though is a tradeoff between the two, and it is interesting. I never tried that out myself, but I suspect you will run into overfitting since you train your classifier twice on the same data. So I suppose you will have better results than with freezing BERT, but your model will have a harder time generalizing to unseen data than with the finetuning method.</p> <p>Yacine</p>
113
BERT model
Getting PermissionError while saving BERT model checkpoint
https://stackoverflow.com/questions/77875404/getting-permissionerror-while-saving-bert-model-checkpoint
<p>I am finetuning a BERT model for classification task, Following code is used for the training the model.</p> <pre><code>from transformers import Trainer, TrainingArguments, AutoConfig from transformers import Trainer batch_size = 8 logging_steps = len(emotions_encoded['train']) // batch_size print(len(emotions_encoded['train'])) print(logging_steps) model_name = f&quot;BERT-emotion-classification&quot; # Create a configuration object config = AutoConfig.from_pretrained(model_ckpt, output_hidden_states=True) # Save the configuration to a JSON file config.to_json_file(f&quot;{model_name}/config.json&quot;) training_args = TrainingArguments(output_dir=model_name, num_train_epochs=2, learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, evaluation_strategy=&quot;epoch&quot;, disable_tqdm=False, logging_steps=logging_steps, push_to_hub=True, log_level=&quot;error&quot;) trainer = Trainer(model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=emotions_encoded[&quot;train&quot;], eval_dataset=emotions_encoded[&quot;validation&quot;], tokenizer=tokenizer) trainer.train(); # Save the model using Trainer's save_model method trainer.save_model(f&quot;./{model_name}&quot;) </code></pre> <p>Train runs fine till the first checkpoint is created and then PermissionError is thrown.</p> <pre><code>{ &quot;name&quot;: &quot;PermissionError&quot;, &quot;message&quot;: &quot;[Errno 13] Permission denied: 'BERT-emotion-classification\\\\checkpoint-500'&quot;, &quot;stack&quot;: &quot;--------------------------------------------------------------------------- PermissionError Traceback (most recent call last) Cell In[34], line 7 1 from transformers import Trainer 2 trainer = Trainer(model=model, args=training_args, 3 compute_metrics=compute_metrics, 4 train_dataset=emotions_encoded[\&quot;train\&quot;], 5 eval_dataset=emotions_encoded[\&quot;validation\&quot;], 6 tokenizer=tokenizer) ----&gt; 7 
trainer.train(); 9 # Save the model using Trainer's save_model method 10 trainer.save_model(f\&quot;./{model_name}\&quot;) PermissionError: [Errno 13] Permission denied: 'BERT-emotion-classification\\\\checkpoint-500'&quot; } </code></pre> <p>I checked the folder has the all the permission.</p>
<p>There is a <a href="https://discuss.huggingface.co/t/permission-issues-when-saving-model-checkpoint/70109/2" rel="nofollow noreferrer">discussion on HF</a> that solves this issue.</p> <blockquote> <p>I simply removed those lines of code. In this example, I basically removed lines 2417-2420. It appears to still work for me.</p> </blockquote>
114
BERT model
Convert a BERT Model to TFLite
https://stackoverflow.com/questions/60967842/convert-a-bert-model-to-tflite
<p>I have this code for a semantic search engine built using the pre-trained BERT model. I want to convert this model into tflite for deploying it to Google ML Kit. I want to know how to convert it, and whether it's even possible to convert this into tflite. It might be, because it's mentioned on the official TensorFlow site: <a href="https://www.tensorflow.org/lite/convert" rel="noreferrer">https://www.tensorflow.org/lite/convert</a>. But I don't know where to begin.</p> <p>Code:</p> <pre><code>from sentence_transformers import SentenceTransformer
import scipy.spatial

# Load the BERT model. Various models trained on Natural Language Inference (NLI)
# https://github.com/UKPLab/sentence-transformers/blob/master/docs/pretrained-models/nli-models.md and
# Semantic Textual Similarity are available
# https://github.com/UKPLab/sentence-transformers/blob/master/docs/pretrained-models/sts-models.md
model = SentenceTransformer('bert-base-nli-mean-tokens')

# A corpus is a list with documents split by sentences.
sentences = ['Absence of sanity',
             'Lack of saneness',
             'A man is eating food.',
             'A man is eating a piece of bread.',
             'The girl is carrying a baby.',
             'A man is riding a horse.',
             'A woman is playing violin.',
             'Two men pushed carts through the woods.',
             'A man is riding a white horse on an enclosed ground.',
             'A monkey is playing drums.',
             'A cheetah is running behind its prey.']

# Each sentence is encoded as a 1-D vector with 768 columns
sentence_embeddings = model.encode(sentences)

print('Sample BERT embedding vector - length', len(sentence_embeddings[0]))
print('Sample BERT embedding vector - note includes negative values', sentence_embeddings[0])

#@title Semantic Search Form
# code adapted from https://github.com/UKPLab/sentence-transformers/blob/master/examples/application_semantic_search.py
query = 'Nobody has sane thoughts'  #@param {type: 'string'}
queries = [query]
query_embeddings = model.encode(queries)

# Find the closest 3 sentences of the corpus for each query sentence based on cosine similarity
number_top_matches = 3  #@param {type: "number"}

print("Semantic Search Results")

for query, query_embedding in zip(queries, query_embeddings):
    distances = scipy.spatial.distance.cdist([query_embedding], sentence_embeddings, "cosine")[0]

    results = zip(range(len(distances)), distances)
    results = sorted(results, key=lambda x: x[1])

    print("\n\n======================\n\n")
    print("Query:", query)
    print("\nTop 5 most similar sentences in corpus:")

    for idx, distance in results[0:number_top_matches]:
        print(sentences[idx].strip(), "(Cosine Score: %.4f)" % (1-distance))
</code></pre>
<p>First of all, you need to have your model in TensorFlow; the package you are using is written in PyTorch. Huggingface's <a href="https://github.com/huggingface/transformers" rel="nofollow noreferrer">Transformers</a> has TensorFlow models that you can start with. In addition, they also have <a href="https://github.com/huggingface/tflite-android-transformers" rel="nofollow noreferrer">TFLite-ready models</a> for Android.</p> <p>In general, you have a TensorFlow model first. Then, save it in the <code>SavedModel</code> format:</p> <pre class="lang-py prettyprint-override"><code>tf.saved_model.save(pretrained_model, "/tmp/pretrained-bert/1/") </code></pre> <p>You can then run the TFLite converter on this directory.</p>
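The answer's save-then-convert flow can be sketched end-to-end with a toy model (a hypothetical stand-in; exporting an actual BERT from Transformers involves extra work, such as fixing sequence lengths and keeping tokenization outside the graph):

```python
import tempfile
import tensorflow as tf

class ToyModel(tf.Module):
    """Hypothetical stand-in for the TensorFlow model you want to deploy."""
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([8, 4]))

    @tf.function(input_signature=[tf.TensorSpec([None, 8], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

model = ToyModel()
export_dir = tempfile.mkdtemp()

# 1) Save in the SavedModel format, with an explicit serving signature.
tf.saved_model.save(model, export_dir,
                    signatures=model.__call__.get_concrete_function())

# 2) Run the TFLite converter on the SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_bytes = converter.convert()

with open(export_dir + "/model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The resulting `.tflite` file is what gets bundled into an Android app for ML Kit / TFLite runtime.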
115
BERT model
Number of parameters in BERT model
https://stackoverflow.com/questions/78997967/number-of-parameters-in-bert-model
<p>Suppose you are pretraining a BERT model with 8 layers, 768-dim hidden states, 8 attention heads, and a sub-word vocabulary of size 40k. Also, your feed-forward hidden layer is of dimension 3072. What will be the number of parameters of the model? You can ignore the bias terms, and other parameters used corresponding to the final loss computation from the final encoder representation. The BERT model can take at most 512 tokens in the input.</p>
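No answer is recorded for this question, but the count it asks for can be worked out directly. A sketch under the question's own assumptions (no biases; the small LayerNorm vectors are also ignored for simplicity; the 512-token limit only fixes the size of the position-embedding table):

```python
# Setup from the question: 8 layers, hidden size 768, FFN size 3072,
# 40k sub-word vocab, max 512 positions, 2 segment types; biases ignored.
L, H, FF, V, P, S = 8, 768, 3072, 40_000, 512, 2

embeddings = V * H + P * H + S * H   # word + position + segment tables
attention  = 4 * H * H               # Q, K, V and attention-output projections
ffn        = 2 * H * FF              # the two feed-forward weight matrices
per_layer  = attention + ffn         # 7,077,888 per encoder layer

total = embeddings + L * per_layer
print(f"{total:,}")                  # 87,737,856
```

So, under these assumptions, the model has roughly 87.7M parameters, about a third of which sit in the embedding tables.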
116
BERT model
BERT tokenizer &amp; model download
https://stackoverflow.com/questions/59701981/bert-tokenizer-model-download
<p>I`m beginner.. I'm working with Bert. However, due to the security of the company network, the following code does not receive the bert model directly.</p> <pre><code>tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=False) model = BertForSequenceClassification.from_pretrained("bert-base-multilingual-cased", num_labels=2) </code></pre> <p>So I think I have to download these files and enter the location manually. But I'm new to this, and I'm wondering if it's simple to download a format like .py from github and put it in a location.</p> <p>I'm currently using the bert model implemented by hugging face's pytorch, and the address of the source file I found is:</p> <p><a href="https://github.com/huggingface/transformers" rel="noreferrer">https://github.com/huggingface/transformers</a></p> <p>Please let me know if the method I thought is correct, and if so, what file to get.</p> <p>Thanks in advance for the comment.</p>
<p>As described <a href="https://github.com/huggingface/transformers/issues/856" rel="noreferrer">here</a>, what you need to do are download <code>pre_train</code> and <code>configs</code>, then putting them in the same folder. Every model has a pair of links, you might want to take a look at lib code. </p> <p>For instance</p> <pre><code>import torch from transformers import * model = BertModel.from_pretrained('/Users/yourname/workplace/berts/') </code></pre> <p>with <code>/Users/yourname/workplace/berts/</code> refer to your folder</p> <p>Below are what I found</p> <p>at <code>src/transformers/configuration_bert.py</code> there are a list of models' configs</p> <pre><code>BERT_PRETRAINED_CONFIG_ARCHIVE_MAP = { "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json", "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-config.json", "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json", "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-config.json", "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-config.json", "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json", "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json", "bert-base-german-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-config.json", "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-config.json", "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-config.json", "bert-large-uncased-whole-word-masking-finetuned-squad": 
"https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json", "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-config.json", "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-config.json", "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json", "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json", "bert-base-japanese": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-config.json", "bert-base-japanese-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking-config.json", "bert-base-japanese-char": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-config.json", "bert-base-japanese-char-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-whole-word-masking-config.json", "bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/config.json", "bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/config.json", } </code></pre> <p>and at <code>src/transformers/modeling_bert.py</code> there are links to pre_trains</p> <pre><code>BERT_PRETRAINED_MODEL_ARCHIVE_MAP = { "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin", "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-pytorch_model.bin", "bert-base-cased": 
"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin", "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-pytorch_model.bin", "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-pytorch_model.bin", "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-pytorch_model.bin", "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-pytorch_model.bin", "bert-base-german-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-pytorch_model.bin", "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-pytorch_model.bin", "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-pytorch_model.bin", "bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-pytorch_model.bin", "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-pytorch_model.bin", "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-pytorch_model.bin", "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin", "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin", "bert-base-japanese": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-pytorch_model.bin", "bert-base-japanese-whole-word-masking": 
"https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking-pytorch_model.bin", "bert-base-japanese-char": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-pytorch_model.bin", "bert-base-japanese-char-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-char-whole-word-masking-pytorch_model.bin", "bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/pytorch_model.bin", "bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/pytorch_model.bin", } </code></pre>
117
BERT model
Tensorflow serving keras bert model issue
https://stackoverflow.com/questions/79154761/tensorflow-serving-keras-bert-model-issue
<p>I am trying to use tensorflow serving to serve a keras bert model, but I have problem to predict with rest api, below are informations. Can you please help me to resolve this problem.</p> <h1>predict output (ERROR)</h1> <p>{ &quot;error&quot;: &quot;Op type not registered 'TFText&gt;RoundRobinTrim' in binary running on ljh-my-keras-bert-model. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib (e.g. <code>tf.contrib.resampler</code>), accessing should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.&quot; }</p> <h1>local versions</h1> <pre><code>Python 3.10.14 tensorflow 2.18.0 tensorflow-datasets 4.9.6 tensorflow-io-gcs-filesystem 0.37.1 tensorflow-metadata 1.16.1 tensorflow-text 2.18.0 keras 3.6.0 keras-hub-nightly 0.16.1.dev202410210343 keras-nlp 0.17.0 </code></pre> <h1>model definition</h1> <pre><code>import os os.environ[&quot;KERAS_BACKEND&quot;] = &quot;tensorflow&quot; # &quot;jax&quot; or &quot;tensorflow&quot; or &quot;torch&quot; import tensorflow_datasets as tfds import keras_nlp imdb_train, imdb_test = tfds.load( &quot;imdb_reviews&quot;, split=[&quot;train&quot;, &quot;test&quot;], as_supervised=True, batch_size=16, ) import keras # Load a model. classifier = keras_nlp.models.BertClassifier.from_preset( &quot;bert_tiny_en_uncased&quot;, num_classes=2, activation=&quot;softmax&quot;, ) # Compile the model. classifier.compile( loss=&quot;sparse_categorical_crossentropy&quot;, optimizer=keras.optimizers.Adam(5e-5), metrics=[&quot;sparse_categorical_accuracy&quot;], jit_compile=True, ) # Fine-tune. classifier.fit(imdb_train.take(250), validation_data=imdb_test.take(250)) # Predict new examples. 
classifier.predict([&quot;What an amazing movie!&quot;, &quot;A total waste of my time.&quot;]) # expected output: array([[0.34156954, 0.65843046], [0.52648497, 0.473515 ]], dtype=float32) </code></pre> <h1>save the model to local path</h1> <pre><code>import tensorflow as tf import keras_nlp def preprocess(inputs): # Convert input strings to token IDs, padding mask, and segment IDs preprocessor = classifier.preprocessor encoded = preprocessor(inputs) return { 'token_ids': encoded['token_ids'], 'padding_mask': encoded['padding_mask'], 'segment_ids': encoded['segment_ids'] } @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)]) def serving_fn(inputs): preprocessed = preprocess(inputs) outputs = classifier(preprocessed) return outputs # Save the model model_export_path = &quot;/Users/jiahao.liu/tf_saved_models/my-keras-bert-model/1&quot; tf.saved_model.save( classifier, export_dir=model_export_path, signatures={&quot;serving_default&quot;: serving_fn} ) print(f&quot;Model saved to: {model_export_path}&quot;) </code></pre> <h1>build the tensorflow serving docker image</h1> <pre><code>FROM tensorflow/serving:latest COPY my-keras-bert-model /models/my_keras_bert_model # Set the model name environment variable ENV MODEL_NAME my_keras_bert_model # ENV OMP_NUM_THREADS 8 # ENV TF_NUM_INTEROP_THREADS 8 # ENV TF_NUM_INTRAOP_THREADS 8 # Start TensorFlow Serving CMD tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=${MODEL_NAME} --model_base_path=/models/${MODEL_NAME} </code></pre> <h1>predict request</h1> <p>POST http://localhost:8501/v1/models/my_keras_bert_model/versions/1:predict Content-Type: application/json</p> <p>{&quot;instances&quot;: [&quot;What an amazing movie!&quot;, &quot;A total waste of my time.&quot;]}</p>
118
BERT model
Bert model for word similarity
https://stackoverflow.com/questions/75531115/bert-model-for-word-similarity
<p>I'm quite new to NLP, and I want to calculate the similarity between a given word and each word in a given list. I have the following code</p> <pre><code># Load the BERT model model_name = 'bert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) # Encode the target word and the list of words target_word = &quot;apple&quot; word_list = [&quot;blackberry&quot;, &quot;iphone&quot;, &quot;microsoft&quot;, &quot;blueberry&quot;, &quot;pineapple&quot;] # Tokenization of the target word and the list of words target_word_encoded = tokenizer.encode(target_word, return_tensors='pt').unsqueeze(0) word_list_encoded = [tokenizer.encode(word, return_tensors='pt').unsqueeze(0) for word in word_list] # Pad each sequence so they have the same length max_len = max(target_word_encoded.shape[1], max(word_encoded.shape[1] for word_encoded in word_list_encoded)) target_word_encoded = torch.nn.functional.pad(target_word_encoded, (0, 0, 0, max_len - target_word_encoded.shape[1])) word_list_encoded = [torch.nn.functional.pad(word_encoded, (0, 0, 0, max_len - word_encoded.shape[1])) for word_encoded in word_list_encoded] # Calculate the similarities with torch.no_grad(): target_word_encoded = target_word_encoded.squeeze(0) target_word_embedding = model(input_ids=target_word_encoded)[1]['last_hidden_state'][0] similarities = [] for word_encoded in word_list_encoded: word_encoded = word_encoded.squeeze(0) word_embedding = model(input_ids=word_encoded)[1]['last_hidden_state'][0] similarity = torch.nn.functional.cosine_similarity(target_word_embedding, word_embedding).item() similarities.append(similarity) # Print the similarities for word, similarity in zip(word_list, similarities): print(f&quot;Similarity between '{target_word}' and '{word}': {similarity:.2f}&quot;) </code></pre> <p>with this code I got the following error <strong>too many indices for tensor of dimension 2</strong></p> <p>what does it means and how to fix it to 
get the result</p> <p>Thanks in advance</p> <p>I want to calculate the similarity of a given list of words using transformers &quot;the bert model&quot;.</p>
<p>First of all, &quot;similarity&quot; is a tricky word because there are different types of similarity. In particular, semantic and sentimental similarity are very different concepts: while &quot;good&quot; and &quot;bad&quot; are sentimental opposites, they are semantically similar words. The basic BERT model is trained to capture the semantic similarity of the language, so if you want to measure sentimental similarity, you can use a BERT model fine-tuned for sentiment analysis. I would also suggest other similarity techniques for your task, like GloVe embeddings.</p> <p>Regarding your question, there are a couple of errors in your implementation.</p> <ol> <li>The output of the model is a dict. When you access the first item, you have already accessed 'last_hidden_state', so you don't need the [1] before it.</li> <li>BERT-type tokenizers can split a word into multiple tokens. One solution is to take the average over those tokens, which is basically the average of the output except for the first and last elements ([CLS] and [SEP]).</li> <li>Your cosine similarity call will give an error when you run the code; the inputs must be two-dimensional, hence the <code>reshape(1,-1)</code> below.</li> </ol> <pre><code> # Calculate the similarities with torch.no_grad(): target_word_encoded = target_word_encoded.squeeze(0) target_word_embedding = torch.mean(model(input_ids=target_word_encoded)['last_hidden_state'][0][1:-1],dim=0) similarities = [] for word_encoded in word_list_encoded: word_encoded = word_encoded.squeeze(0) word_embedding = torch.mean(model(input_ids=word_encoded)['last_hidden_state'][0][1:-1],dim=0) similarity = torch.nn.functional.cosine_similarity(target_word_embedding.reshape(1,-1), word_embedding.reshape(1,-1)).item() similarities.append(similarity) </code></pre>
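The two fixes in points 2 and 3 — average the sub-word vectors while dropping the [CLS]/[SEP] positions, then compute cosine similarity on the pooled vectors — can be illustrated without loading the model, using toy hidden states (all numbers are hypothetical):

```python
import math

def mean_pool(hidden_states):
    # Average the per-token vectors, skipping the first ([CLS]) and last ([SEP]).
    inner = hidden_states[1:-1]
    dim = len(inner[0])
    return [sum(vec[i] for vec in inner) / len(inner) for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "last_hidden_state" rows: [CLS], sub-word tokens, [SEP] (values made up)
apple     = [[9.0, 9.0], [1.0, 0.0], [0.0, 1.0], [9.0, 9.0]]  # e.g. "app" + "##le"
pineapple = [[9.0, 9.0], [1.0, 1.0], [9.0, 9.0]]

print(round(cosine(mean_pool(apple), mean_pool(pineapple)), 4))  # 1.0
```

Note how the [CLS]/[SEP] rows (here filled with 9s) never influence the pooled vector; with them included, the similarity would be dominated by the special tokens.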
119
BERT model
Error loading quantized BERT model from local repository
https://stackoverflow.com/questions/68226106/error-loading-quantized-bert-model-from-local-repository
<p>After quantizing the BERT model, it works without any issue. But if I save the quantized model and load, it does not work. It shows an error message: 'LinearPackedParams' object has no attribute '_modules&quot;. I have used the same device to save and load the quantized model.</p> <pre><code>model = SentenceTransformer('bert-base-nli-mean-tokens') model.encode(sentences) quantized_model = torch.quantization.quantize_dynamic( model, {torch.nn.Linear}, dtype=torch.qint8) quantized_model.encode(sentences) ``` torch.save(quantized_model, &quot;/PATH/TO/DESTINATION/Base_bert_quant.pt&quot;) model=torch.load(&quot;/SAME/PATH/Base_bert_quant.pt&quot;) model.encode(sentences) #shows the error </code></pre>
120
BERT model
Need to Fine Tune a BERT Model to Predict Missing Words
https://stackoverflow.com/questions/60486655/need-to-fine-tune-a-bert-model-to-predict-missing-words
<p>I'm aware that BERT has a capability in predicting a missing word within a sentence, which can be syntactically correct and semantically coherent. Below is a sample code:</p> <pre><code>import torch from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForMaskedLM.from_pretrained('bert-base-uncased') model.eval(); # turning off the dropout def fill_the_gaps(text): text = '[CLS] ' + text + ' [SEP]' tokenized_text = tokenizer.tokenize(text) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) segments_ids = [0] * len(tokenized_text) tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) with torch.no_grad(): predictions = model(tokens_tensor, segments_tensors) results = [] for i, t in enumerate(tokenized_text): if t == '[MASK]': predicted_index = torch.argmax(predictions[0, i]).item() predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0] results.append(predicted_token) return results print(fill_the_gaps(text = 'I bought an [MASK] because its rainy .')) print(fill_the_gaps(text = 'Im sad because you are [MASK] .')) print(fill_the_gaps(text = 'Im worried because you are [MASK] .')) print(fill_the_gaps(text = 'Im [MASK] because you are [MASK] .')) </code></pre> <p>Can someone explain to me, do I need to fine Tune a BERT Model to predict missing words or just use the pre-trained BERT model? Thanks.</p>
<p>BERT is a masked language model, meaning it is trained on exactly this task. That is why it can do it, so in that sense, no fine-tuning is needed.</p> <p>However, if the text you will see at runtime is different from the text BERT was trained on, your performance may be much better if you fine-tune on the type of text you expect to see.</p>
121
BERT model
How to feed the output of a finetuned bert model as input to another finetuned bert model?
https://stackoverflow.com/questions/60297908/how-to-feed-the-output-of-a-finetuned-bert-model-as-inpunt-to-another-finetuned
<p>I finetuned two separate bert model (bert-base-uncased) on sentiment analysis and pos tagging tasks. Now, I want to feed the output of the pos tagger (batch, seqlength, hiddensize) as input to the sentiment model.The original bert-base-uncased model is in 'bertModel/' folder which contains 'model.bin' and 'config.json'. Here is my code:</p> <pre><code>class DeepSequentialModel(nn.Module): def __init__(self, sentiment_model_file, postag_model_file, device): super(DeepSequentialModel, self).__init__() self.sentiment_model = SentimentModel().to(device) self.sentiment_model.load_state_dict(torch.load(sentiment_model_file, map_location=device)) self.postag_model = PosTagModel().to(device) self.postag_model.load_state_dict(torch.load(postag_model_file, map_location=device)) self.classificationLayer = nn.Linear(768, 1) def forward(self, seq, attn_masks): postag_context = self.postag_model(seq, attn_masks) sent_context = self.sentiment_model(postag_context, attn_masks) logits = self.classificationLayer(sent_context) return logits class PosTagModel(nn.Module): def __init__(self,): super(PosTagModel, self).__init__() self.bert_layer = BertModel.from_pretrained('bertModel/') self.classificationLayer = nn.Linear(768, 43) def forward(self, seq, attn_masks): cont_reps, _ = self.bert_layer(seq, attention_mask=attn_masks) return cont_reps class SentimentModel(nn.Module): def __init__(self,): super(SentimentModel, self).__init__() self.bert_layer = BertModel.from_pretrained('bertModel/') self.cls_layer = nn.Linear(768, 1) def forward(self, input, attn_masks): cont_reps, _ = self.bert_layer(encoder_hidden_states=input, encoder_attention_mask=attn_masks) cls_rep = cont_reps[:, 0] return cls_rep </code></pre> <p>But I get the below error. I appreciate it if someone could help me. 
Thanks!</p> <pre><code> cont_reps, _ = self.bert_layer(encoder_hidden_states=input, encoder_attention_mask=attn_masks) result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states' </code></pre>
<p>To formulate this as an answer, too, and keep it properly visible for future visitors: the <code>forward()</code> call of transformers <a href="https://github.com/huggingface/transformers/blob/v2.1.1/transformers/modeling_bert.py#L201" rel="nofollow noreferrer">does not support these arguments in version 2.1.1</a>, or any earlier version, for that matter. Note that the link in my comment in fact points to a different forward function, but otherwise the point still holds.</p> <p>Passing <code>encoder_hidden_states</code> to <code>forward()</code> was <a href="https://github.com/huggingface/transformers/blob/v2.2.0/transformers/modeling_bert.py#L210" rel="nofollow noreferrer">first possible in version 2.2.0</a>.</p>
122
BERT model
Fine-tune a BERT model for context-specific embeddings
https://stackoverflow.com/questions/67136740/fine-tune-a-bert-model-for-context-specific-embeddigns
<p>I'm trying to find information on how to train a BERT model, possibly from the <a href="https://huggingface.co/transformers/index.html" rel="noreferrer">Huggingface Transformers</a> library, so that the embedding it outputs are more closely related to the context o the text I'm using.</p> <p>However, all the examples that I'm able to find, are about fine-tuning the model for another task, such as <a href="https://huggingface.co/transformers/training.html" rel="noreferrer">classification</a>.</p> <p>Would anyone happen to have an example of a BERT fine-tuning model for masked tokens or next sentence prediction, that outputs another raw BERT model that is fine-tuned to the context?</p> <p>Thanks!</p>
<p>Here is an example from the Transformers library on <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb#scrollTo=a3KD3WXU3l-O" rel="nofollow noreferrer">Fine tuning a language model for masked token prediction</a>.</p> <p>The model that is used is one of the BERTForLM family. The idea is to create a dataset using the <a href="https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/language_modeling.py" rel="nofollow noreferrer">TextDataset</a> that tokenizes and breaks the text into chunks. Then use a <a href="https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb#scrollTo=a3KD3WXU3l-O" rel="nofollow noreferrer">DataCollatorForLanguageModeling</a> to randomly mask tokens in the chunks when training, and pass the model, the data and the collator to the <a href="https://huggingface.co/transformers/training.html#trainer" rel="nofollow noreferrer">Trainer</a> to train and evaluate the results.</p>
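The core of what DataCollatorForLanguageModeling does — BERT's 15% masking with the 80/10/10 split — can be sketched in plain Python. The token ids below are arbitrary, and 103 is assumed as the [MASK] id, as in bert-base-uncased:

```python
import random

MASK_ID, VOCAB_SIZE, IGNORE_INDEX = 103, 30522, -100  # 103 = [MASK] in bert-base-uncased

def mask_tokens(token_ids, mlm_prob=0.15, rng=random):
    """Toy version of DataCollatorForLanguageModeling's masking scheme."""
    inputs, labels = list(token_ids), [IGNORE_INDEX] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mlm_prob:          # select ~15% of positions
            labels[i] = tok                  # the model must predict the original id here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_ID          # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token
            # remaining 10%: keep the original token unchanged
    return inputs, labels

inputs, labels = mask_tokens(list(range(1000, 1100)), rng=random.Random(0))
print(sum(l != IGNORE_INDEX for l in labels), "of", len(labels), "positions masked")
```

Positions with label -100 are ignored by the loss, so the model is only trained to reconstruct the selected ~15% of tokens.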
123
BERT model
Bert Model not loading with pickle
https://stackoverflow.com/questions/74689390/bert-model-not-loading-with-pickle
<p>I trained a Bert Model for NER. It worked fine (obviously it took time to learn). I saved the model with <code>pickle</code> as</p> <pre><code>with open('model_pkl', 'wb') as file: pickle.dump(model, file) </code></pre> <p>When I am trying to load this saved model I am getting following error. <code>AttributeError: Can't get attribute 'BertModel' on &lt;module '__main__' from '&lt;input&gt;'&gt;</code>. This method works for lists, dictionaries etc but producing error on <code>pytorch</code> model. I am using <code>python 3.8.10</code></p>
<p>Try using <a href="https://pytorch.org/tutorials/beginner/basics/saveloadrun_tutorial.html#saving-and-loading-models-with-shapes" rel="nofollow noreferrer">torch.save and torch.load</a>.</p>
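A sketch of that pattern with a stand-in model: save only the state_dict (plain tensors), then rebuild the architecture at load time and restore the weights. This avoids pickling the class object itself, which is what caused the original AttributeError:

```python
import os
import tempfile

import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Hypothetical stand-in for the trained BERT NER model; the pattern is the same."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyModel()
path = os.path.join(tempfile.gettempdir(), "tiny_model.pt")

# Save only the weights (plain tensors), not the pickled class object.
torch.save(model.state_dict(), path)

# At load time: rebuild the architecture first, then restore the weights.
restored = TinyModel()
restored.load_state_dict(torch.load(path))
restored.eval()
```

Because only tensors are stored, the file can be loaded in any process that can import (or redefine) the model class, which is exactly what fails when the whole object is pickled.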
124
BERT model
Text classification using BERT model
https://stackoverflow.com/questions/67356712/text-classification-using-bert-model
<p>I have built and trained the BERT model, using <a href="https://github.com/curiousily/Deep-Learning-For-Hackers/blob/master/18.intent-recognition-with-BERT.ipynb" rel="nofollow noreferrer">this code</a>.</p> <p>Now I have this data:</p> <p><a href="https://i.sstatic.net/z6ALu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z6ALu.png" alt="enter image description here" /></a></p> <p>and using this built function to classify each row in the text column as <strong>0</strong> or <strong>1</strong>.</p> <pre><code>import random def predict_emotion(input_text): text = input_text pred_tokens = map(tokenizer.tokenize, text) pred_tokens = map(lambda tok: [&quot;[CLS]&quot;] + tok + [&quot;[SEP]&quot;], pred_tokens) pred_token_ids = list(map(tokenizer.convert_tokens_to_ids, pred_tokens)) pred_token_ids = map(lambda tids: tids +[0]*(data.max_seq_len-len(tids)),pred_token_ids) pred_token_ids = np.array(list(pred_token_ids)) predictions = model.predict(pred_token_ids).argmax(axis=-1) return predictions df['emotion']=df['text'].apply(predict_emotion) </code></pre> <p>However, if I test it on just 3 random rows I am getting this array [1,0,......], instead of binary outcome: 1,0. Does anybody know how to solve it? <a href="https://i.sstatic.net/1MyUV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1MyUV.png" alt="enter image description here" /></a></p>
125
BERT model
QnA model using Bert
https://stackoverflow.com/questions/76027510/qna-model-using-bert
<p>I'm trying to build a BERT question-answering model that takes a document as input. As BERT's limit is 512 tokens, it's unable to give an accurate answer. Now I'm trying to find an NLP model/approach/algorithm that can help the BERT model find the correct answer.</p> <p>I tried with a document as input and was expecting answers as accurate as it gives with small passages.</p>
<p>In the <strong>Extractive Question Answering task</strong>, an answer is extracted from a context (in this case, your input document), and the task is usually solved with a <strong>BERT-like</strong> model.</p> <p>In your case, the limitation is that you have a long document to extract an answer from; however, the model you used is <strong>bert-large</strong>, which cannot handle such a long document since its maximum input length is only 512 tokens. That is why it cannot produce an accurate answer: bert-large can only look at the first 512 tokens.</p> <p><strong>To obtain higher accuracy</strong>, my recommendation is to use a BERT-like model that can process longer sequences, so it can handle a long document as input. You may consider:</p> <ol> <li><p><strong>Longformer</strong>, a BERT-like model for long documents, which can be used in the Extractive QA task. I had a quick look at the QA task on huggingface and picked the most downloaded Longformer model, <a href="https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1" rel="nofollow noreferrer">LONGFORMER-BASE-4096 fine-tuned on SQuAD v1</a>, which <strong>can handle up to 4096 tokens (~8x compared with the original BERT)</strong>. Maybe you should try it first. If you are interested in how it works, you may take a look at the paper <a href="https://arxiv.org/pdf/2004.05150.pdf" rel="nofollow noreferrer">Longformer: The Long-Document Transformer</a> to get an idea of how the model's attention mechanism works.</p> </li> <li><p>I also suggest reading the <strong>TransformerXL</strong> paper <a href="https://arxiv.org/abs/1901.02860" rel="nofollow noreferrer">Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context</a>, one of the initial models for processing long sequences and one of the few models with no sequence length limit. It is well worth studying how its attention mechanism is designed to handle such long sequences.</p> </li> </ol> <p>Hope this helps!</p>
126
BERT model
BERT model classification with many classes
https://stackoverflow.com/questions/64278341/bert-model-classification-with-many-classes
<p>I want to train a BERT model to perform a multiclass text classification. I use transformers and followed this tutorial (<a href="https://towardsdatascience.com/multi-class-text-classification-with-deep-learning-using-bert-b59ca2f5c613" rel="nofollow noreferrer">https://towardsdatascience.com/multi-class-text-classification-with-deep-learning-using-bert-b59ca2f5c613</a>) to train it on Google Colab.</p> <p>The issue is that I have a huge number of classes (about 600) and I feel like it affects the performance that is quite disappointing.</p> <p>I looked a bit on Stackoverflow and found this thread (<a href="https://stackoverflow.com/questions/54850657/intent-classification-with-large-number-of-intent-classes">Intent classification with large number of intent classes</a>) that answered my question but I don't know how to implement it.</p> <p>The answer to the similar question was: &quot;If you could classify your intents into some coarse-grained classes, you could train a classifier to specify which of these coarse-grained classes your instance belongs to. Then, for each coarse-grained class train another classifier to specify the fine-grained one. This hierarchical structure will probably improve the results. Also for the type of classifier, I believe a simple fully connected layer on top of BERT would suffice.&quot;</p> <p>Do I have to train my models separately and use &quot;if&quot; conditions to build tbhe workflow or is there a way to train all your BERT models simultaneously and have one unifying model ?</p> <p>Thanks in advance</p>
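No answer is recorded for this question, but the quoted two-stage idea is usually implemented by training each model separately and combining them with plain routing logic at inference time. A minimal sketch, where the classifiers are hypothetical stand-ins (in practice each would be a fine-tuned BERT with its own classification head):

```python
# Hypothetical stand-ins for trained models: anything with the shape
# predict(text) -> label works here (e.g. a fine-tuned BERT per stage).
def coarse_clf(text):
    return "food" if "pizza" in text else "travel"

fine_clfs = {
    "food":   lambda text: "order_pizza",
    "travel": lambda text: "book_flight",
}

def predict_intent(text):
    # Stage 1: pick the coarse-grained class; Stage 2: route the input
    # to the fine-grained classifier trained only on that class.
    coarse = coarse_clf(text)
    return coarse, fine_clfs[coarse](text)

print(predict_intent("a pizza please"))
```

So the models are trained independently, and the "if" logic lives only in the inference routing; there is no single unified model unless you explicitly design a joint hierarchical architecture.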
127
BERT model
Loading trained BERT models locally in Pycharm
https://stackoverflow.com/questions/76830364/loading-trained-bert-models-locally-in-pycharm
<p>I am working on a project that have to train a Bert model myself, and later adapts that into Pycharm for GUI and more logic. It's a binary classification model. I wrote the script, and successfully trained a Bert model in Google Colab. I did it by</p> <pre><code>trainer.save_model(models) </code></pre> <p>It generates a file named <code>bert-damaged-classification</code> with the following three files:</p> <p><a href="https://i.sstatic.net/rdVFh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rdVFh.png" alt="files it attached" /></a></p> <p>Now, I want to see if this model produces the same result as my Google colab script. Here is the code:</p> <pre><code>config = BertConfig.from_json_file(&quot;models/bert-damaged-classification/config.json&quot;) tokenizer = BertTokenizer.from_pretrained(os.path.dirname('models/bert-damaged-classification/.')) model = BertForSequenceClassification.from_pretrained('bert-damaged-classification') # Below is some random inputs I used for testing. inputs = [&quot;aaaa, bbbb aaccc vvakejdhqiuwdh92d7h2qdqwss!!&quot;] outputs = model(**inputs) predictions = torch.argmax(outputs.logits, dim=1) print(predictions) </code></pre> <p>It's giving me this error:</p> <pre><code>OSError: Can't load tokenizer for 'models/bert-damaged-classification'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'models/bert-damaged-classification' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer. </code></pre> <p>I tried changing that tokenizer line to pointing to both <code>.bin</code> files in my model, but still it does not work.</p> <p>Update: I discovered an error, that <code>from_pretrained</code> only takes os.path objects. I updated the error message and my code above.</p>
128
BERT model
Data classification doesn&#39;t work with BERT model
https://stackoverflow.com/questions/78936387/data-classification-doesnt-work-with-bert-model
<p>I need to train model with input CSV, that has error message and error classification. Then when tested with just error message, it should automatically classify.</p> <p>I've used BERT model and this is the code:</p> <pre><code>import pandas as pd import numpy as np import tensorflow as tf from transformers import BertTokenizer, TFBertModel from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from tensorflow.keras.optimizers import Adam from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint # Load the training data and print the columns training_data = pd.read_excel(&quot;C:\\Users\\foobar\\Training Data.xlsx&quot;) print(training_data.columns) # Print to verify column names # Ensure column names are stripped of spaces training_data.columns = training_data.columns.str.strip() # Use the correct column names based on the output error_messages = training_data[&quot;Error Message&quot;].dropna() error_classification = training_data[&quot;Error Classification&quot;].dropna() # Ensure that the lengths are consistent min_length = min(len(error_messages), len(error_classification)) error_messages = error_messages.iloc[:min_length] error_classification = error_classification.iloc[:min_length] # Convert labels to numeric using LabelEncoder label_encoder = LabelEncoder() error_classification_encoded = label_encoder.fit_transform(error_classification) # Split the data into training and validation sets train_messages, val_messages, train_labels, val_labels = train_test_split( error_messages, error_classification_encoded, test_size=0.2, random_state=42 ) # Initialize BERT tokenizer and model tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') bert_model = TFBertModel.from_pretrained('bert-base-uncased') # Preprocess the text data def preprocess_text(texts): return tokenizer( texts.tolist(), max_length=128, truncation=True, padding='max_length', return_tensors='tf' ) train_tokens = 
preprocess_text(train_messages) val_tokens = preprocess_text(val_messages) # Define the model architecture input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name='input_ids') attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name='attention_mask') # Wrap the BERT model call in a Lambda layer and specify the output shape bert_output = tf.keras.layers.Lambda( lambda x: bert_model(input_ids=x[0], attention_mask=x[1]).pooler_output, output_shape=(768,) )([input_ids, attention_mask]) x = tf.keras.layers.Dense(64, activation=&quot;relu&quot;)(bert_output) x = tf.keras.layers.Dropout(0.3)(x) # Add dropout for regularization output = tf.keras.layers.Dense(len(label_encoder.classes_), activation=&quot;softmax&quot;)(x) model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=output) # Use a lower learning rate for fine-tuning optimizer = Adam(learning_rate=2e-5) # Lower learning rate # Compile the model model.compile(optimizer=optimizer, loss=&quot;sparse_categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;]) # Convert labels to NumPy arrays train_labels = np.array(train_labels).astype(int) val_labels = np.array(val_labels).astype(int) # Add callbacks for better training control callbacks = [ EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True), # Stop if validation loss doesn't improve for 3 epochs ModelCheckpoint('best_model.keras', save_best_only=True) # Save only the best model ] # Train the model history = model.fit( [train_tokens['input_ids'], train_tokens['attention_mask']], train_labels, validation_data=([val_tokens['input_ids'], val_tokens['attention_mask']], val_labels), epochs=10, batch_size=16, # Add batch size if appropriate callbacks=callbacks ) # Load the best model before making predictions model.load_weights('best_model.keras') # Load the testing data testing_data = pd.read_excel(&quot;C:\\Users\\foobar\\Testing Data.xlsx&quot;) test_messages = testing_data[&quot;Error Message&quot;] # Preprocess the 
testing data test_tokens = preprocess_text(test_messages) # Perform predictions on the testing data predictions = model.predict([test_tokens['input_ids'], test_tokens['attention_mask']]) predicted_labels = np.argmax(predictions, axis=1) # Decode the labels back to original format decoded_predictions = label_encoder.inverse_transform(predicted_labels) # Update the output CSV with the predicted error classifications output_data = pd.DataFrame({&quot;Error Message&quot;: test_messages, &quot;Error Classification&quot;: decoded_predictions}) output_data.to_csv(&quot;C:\\Users\\foobar\\Output.csv&quot;, index=False) </code></pre> <p>The training input contains an HTTP error message and its classification (sample):</p> <p><a href="https://i.sstatic.net/bwtgzKUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bwtgzKUr.png" alt="Sample Input" /></a></p> <p>And the testing input is this:</p> <p><a href="https://i.sstatic.net/fY1tUd6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fY1tUd6t.png" alt="Testing Input" /></a></p> <p>But Output.csv comes out incorrect for 5xx errors: it shows &quot;Client error&quot; incorrectly.</p> <p><a href="https://i.sstatic.net/j6se7XFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j6se7XFd.png" alt="Output.csv is incorrect" /></a></p>
129
BERT model
google colab memory problem using bert model
https://stackoverflow.com/questions/78424613/google-colab-memory-problem-using-bert-model
<p>Question: I am trying to use a BERT model to encode and classify my sequences with an NLP approach, but it takes a lot of time and the session loses its connection before finishing even one epoch, and when I increase the batch size to 16 or 32 I get memory problems. This is my code; if there is a problem, please tell me how to solve it so I can get past this issue:</p> <p>Here's the code snippet I'm using:</p> <pre><code>import pandas as pd from sklearn.model_selection import train_test_split from transformers import BertForSequenceClassification from transformers import AutoTokenizer import torch from torch.utils.data import Dataset, DataLoader from torch.optim import AdamW from Bio import SeqIO data = &quot;/content/dataset_Rfam_6320_13classes.fasta&quot; sequence_dict = {rec.id: [str(rec.seq).upper(), rec.description.split()[1]] for rec in SeqIO.parse(data, &quot;fasta&quot;)} data = pd.DataFrame.from_dict(sequence_dict, orient=&quot;index&quot;, columns=[&quot;Seq&quot;, &quot;RNA_type&quot;]) data[&quot;length&quot;] = data[&quot;Seq&quot;].map(len) def save_checkpoint(state, filename=&quot;my_checkpoint.pth.tar&quot;): print(&quot;=&gt; saving checkpoint&quot;) torch.save(state, filename) def load_checkpoint(checkpoint): print(&quot;=&gt; Loading checkpoint&quot;) model.load_state_dict(checkpoint['state_dict']) train_X, test_X, train_Y, test_Y = train_test_split(data['Seq'], data['RNA_type'], train_size = 0.7, shuffle = 42) from transformers import ( AutoTokenizer, AutoModelForSequenceClassification, ) tokenizer = AutoTokenizer.from_pretrained( 'Lancelot53/rna-tokenizer-4096', do_lower_case=False ) def return_kmer(seq, K=3): kmer_list = [] for x in range(len(seq) - K + 1): # move a window of size K across the sequence kmer_list.append(seq[x : x + K]) kmer_seq = &quot; &quot;.join(kmer_list) return kmer_seq train_kmers = [return_kmer(seq) for seq in train_X] test_kmers = [return_kmer(seq) for seq in test_X] 
train_encodings = tokenizer.batch_encode_plus( train_kmers, max_length=512, # max len of BERT padding=True, truncation=True, return_attention_mask=True, return_tensors=&quot;pt&quot;, ) test_encodings = tokenizer.batch_encode_plus( test_kmers, max_length=512, # max len of BERT padding=True, truncation=True, return_attention_mask=True, return_tensors=&quot;pt&quot;, ) class TokenData(Dataset): def __init__(self, train=False): if train: self.text_data = train_X self.tokens = train_encodings self.labels = list(train_Y) else: self.text_data = test_X self.tokens = test_encodings self.labels = list(test_Y) self.label_encoder = {label: i for i, label in enumerate(set(self.labels))} self.num_classes = len(self.label_encoder) def __len__(self): return len(self.text_data) def __getitem__(self, idx): sample = {} for k, v in self.tokens.items(): sample[k] = torch.tensor(v[idx]) label = self.labels[idx] encoded_label = self.label_encoder[label] sample['labels'] = torch.tensor(encoded_label, dtype=torch.long) return sample batch_size = 8 train_dataset = TokenData(train = True) train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size) test_dataset = TokenData(train = False) test_loader = DataLoader(test_dataset, shuffle=True, batch_size=batch_size) optimizer = AdamW(bert_model.parameters(), lr=1e-5) loss_fn = torch.nn.CrossEntropyLoss() num_epochs = 3 # number of times to iterate over the whole training set device = &quot;cuda&quot; if torch.cuda.is_available() else &quot;mps&quot; if torch.backends.mps.is_available() else &quot;cpu&quot; # the device variable selects the hardware on which computation will run 
bert_model.to(device) # Transfer model to GPU if available load_model = False if load_model: load_checkpoint(torch.load(&quot;my_checkpoint.pth.tar&quot;)) for epoch in range(num_epochs): if epoch &gt;= 1: checkpoint = {'state_dict': bert_model.state_dict()} save_checkpoint(checkpoint) print(&quot;Epoch: &quot;, (epoch + 1)) # TRAINING BLOCK STARTS bert_model.train() for i, batch in enumerate(train_loader): batch = {k: v.to(device) for k, v in batch.items()} # Setting the gradients to zero optimizer.zero_grad() # Passing the data to the model outputs = bert_model(input_ids = batch['input_ids'], attention_mask = batch['attention_mask']) # The logits will be used for measuring the loss pred = outputs.logits loss = loss_fn(pred, batch['labels']) # Calculating the gradient for the loss function loss.backward() # Optimizing the parameters of the bert model optimizer.step() # Calculating the running loss for logging purposes train_batch_loss = loss.item() train_last_loss = train_batch_loss / batch_size print('Training batch {} last loss: {}'.format(i + 1, train_last_loss)) # Logging epoch-wise training loss print(f&quot;\nTraining epoch {epoch + 1} loss: &quot;, train_last_loss) # TRAINING BLOCK ENDS # TESTING BLOCK STARTS bert_model.eval() correct = 0 test_pred = [] for i, batch in enumerate(test_loader): batch = {k: v.to(device) for k, v in batch.items()} # We don't need gradients for testing with torch.no_grad(): outputs = bert_model(input_ids = batch['input_ids'], attention_mask = batch['attention_mask']) # Logits act as predictions logits = outputs.logits # Calculating total batch loss using the logits and labels loss = loss_fn(logits, batch['labels']) test_batch_loss = loss.item() # Calculating the mean batch loss test_last_loss = test_batch_loss / batch_size print('Testing batch {} loss: {}'.format(i + 1, test_last_loss)) # Comparing the predicted target with the labels in the batch correct += (logits.argmax(1) == batch['labels']).sum().item() print(&quot;Testing 
accuracy: &quot;, correct/((i + 1) * batch_size)) print(f&quot;\nTesting epoch {epoch + 1} last loss: &quot;, test_last_loss) # TESTING BLOCK ENDS </code></pre> <p>What I want: a way to run this model without losing the connection, even for just one epoch, because after trying this BERT model I will modify the code so it has a different classifier part.</p>
<p>In my opinion, you can try:</p> <ul> <li>Using the &quot;adafactor&quot; optimizer (<a href="https://huggingface.co/docs/transformers/main_classes/optimizer_schedules" rel="nofollow noreferrer">docs</a>)</li> <li>Gradient accumulation (<a href="https://huggingface.co/docs/accelerate/usage_guides/gradient_accumulation" rel="nofollow noreferrer">docs</a>)</li> <li>Training on multiple GPUs so the batch is split across devices.</li> <li>Using FP16 (mixed-precision) training. For example: <code>training_args = TrainingArguments(per_device_train_batch_size=4, fp16=True, **default_args)</code></li> </ul>
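<p>Gradient accumulation is the most direct fix for the out-of-memory at batch size 16/32: keep the per-step batch small and sum gradients over several micro-batches before calling <code>optimizer.step()</code>. A dependency-free sketch of why this matches a larger batch (a toy squared-error loss with hand-computed gradients, not the asker's BERT model — the data values here are made up for illustration):</p>

```python
# Toy check that accumulating mean gradients over k micro-batches
# (each scaled by 1/k) equals the gradient of one large batch.
def grad(w, x, y):
    # d/dw of the squared error (w*x - y)^2
    return 2.0 * (w * x - y) * x

data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 7.0)]
w = 0.5

# Gradient of the mean loss over the full batch of 4
full_batch = sum(grad(w, x, y) for x, y in data) / len(data)

# Same quantity via k=2 micro-batches of size 2 -- this mirrors
# what `loss = loss / k; loss.backward()` does in a PyTorch loop
k = 2
accumulated = 0.0
for start in range(0, len(data), 2):
    micro = data[start:start + 2]
    micro_mean = sum(grad(w, x, y) for x, y in micro) / len(micro)
    accumulated += micro_mean / k  # backward() adds into .grad buffers

print(full_batch, accumulated)  # identical values
```

<p>In the training loop above, this means dividing the loss by the accumulation factor, calling <code>loss.backward()</code> every batch, but calling <code>optimizer.step()</code> and <code>optimizer.zero_grad()</code> only every k-th batch.</p>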
130
BERT model
Print Bert model summary using Pytorch
https://stackoverflow.com/questions/71248696/print-bert-model-summary-using-pytorch
<p>Hi I would like to print the model summary of my BERT model for text classification. I am using command print(summary(model, inputsize=(channels, height, width)). I would like to know what would be the dimensions of input_size in case of text classification? I have use print(model) as well but the output is confusing and I want to see the output in the layered form. Below is my model summary.</p> <pre><code>BertClassifier( (bert): BertModel( (embeddings): BertEmbeddings( (word_embeddings): Embedding(28996, 768, padding_idx=0) (position_embeddings): Embedding(512, 768) (token_type_embeddings): Embedding(2, 768) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): BertEncoder( (layer): ModuleList( (0): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (1): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, 
elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (2): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (3): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (4): BertLayer( (attention): BertAttention( 
(self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (5): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (6): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) 
(dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (7): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (8): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (9): BertLayer( (attention): BertAttention( (self): BertSelfAttention( 
(query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (10): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (11): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, 
inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (dropout): Dropout(p=0.5, inplace=False) (linear1): Linear(in_features=768, out_features=256, bias=True) (linear2): Linear(in_features=256, out_features=141, bias=True) (relu): ReLU() ) </code></pre>
<p>I used the torch-summary module:</p> <pre><code>pip install torch-summary summary(model, input_size=(768,), depth=1, batch_dim=1, dtypes=['torch.IntTensor']) </code></pre>
131
BERT model
Why BERT model have to keep 10% MASK token unchanged?
https://stackoverflow.com/questions/64013808/why-bert-model-have-to-keep-10-mask-token-unchanged
<p>I am reading the BERT paper. In the Masked Language Model task used during pre-training, the paper says the model randomly chooses 15% of the tokens. Of the chosen tokens (Ti), 80% are replaced with the [MASK] token, 10% are left unchanged, and 10% are replaced with another word. I think replacing with [MASK] or another word should be enough. Why does the model randomly choose some words and keep them unchanged? Does the pre-training process predict only the [MASK] tokens, or does it predict all 15% of the chosen tokens?</p>
<p>This is done because they want to pre-train a bidirectional model. Most of the time the network will see a sentence with a [MASK] token, and it's trained to predict the word that is supposed to be there. But in fine-tuning, which is done after pre-training (fine-tuning is the training done by everyone who wants to use BERT on their task), there are no [MASK] tokens! (unless you specifically do masked LM).</p> <p>This mismatch between pre-training and fine-tuning (the sudden disappearance of the [MASK] token) is softened by leaving 10% of the selected tokens unchanged (and replacing another 10% with a random word) instead of always inserting [MASK]. The task is still there: the network has to predict the token, but for the unchanged 10% it actually gets the answer already as input. This might seem counterintuitive, but it makes sense when combined with the [MASK] training.</p>
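<p>To make the scheme concrete — and to answer the last part of the question — here is a small dependency-free sketch of BERT-style masking (the function name and toy vocab are illustrative, not from the original implementation). Note that every selected position is a prediction target, whether it ended up as [MASK], a random word, or unchanged:</p>

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", seed=0):
    """Select ~15% of positions; of those: 80% -> [MASK],
    10% -> a random vocab word, 10% -> left unchanged.
    Returns (corrupted sequence, positions the model must predict)."""
    rng = random.Random(seed)
    out = list(tokens)
    targets = []
    for i in range(len(tokens)):
        if rng.random() < 0.15:
            targets.append(i)  # predicted in all three cases below
            r = rng.random()
            if r < 0.8:
                out[i] = mask_token
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # else: keep the original token (the 10% "unchanged" case)
    return out, targets

corrupted, targets = mask_tokens("the cat sat on the mat".split(),
                                 vocab=["dog", "tree", "ran"])
print(corrupted, targets)
```

<p>So the loss is computed on all 15% of selected tokens, not only on those that show [MASK] in the input — which is why the "unchanged" case still trains the model.</p>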
132
BERT model
How to load a fine-tuned BERT model
https://stackoverflow.com/questions/62448553/how-to-load-a-fine-tuned-bert-model
<p>I have fine tuned a BERT model on my data and saved the model using <code>model.save()</code></p> <p>now am trying to load using the below</p> <pre><code>from keras_radam import RAdam from keras.models import load_model from keras_bert import get_custom_objects custom_object = get_custom_objects() custom_object['RAdam'] = RAdam() model = load_model('bert_20news.h5', custom_objects=custom_object) </code></pre> <p>but I keep getting the below error</p> <pre><code>Traceback (most recent call last): File "D:/work/work spaces/pycharm/news_classification/predict.py", line 10, in &lt;module&gt; model = load_model('bert_20news.h5', custom_objects=custom_object) File "D:\work\work spaces\pycharm\news_classification\3.6venv\lib\site-packages\tensorflow\python\keras\saving\save.py", line 184, in load_model return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile) File "D:\work\work spaces\pycharm\news_classification\3.6venv\lib\site-packages\tensorflow\python\keras\saving\hdf5_format.py", line 194, in load_model_from_hdf5 training_config, custom_objects)) File "D:\work\work spaces\pycharm\news_classification\3.6venv\lib\site-packages\tensorflow\python\keras\saving\saving_utils.py", line 209, in compile_args_from_training_config optimizer = optimizers.deserialize(optimizer_config) File "D:\work\work spaces\pycharm\news_classification\3.6venv\lib\site-packages\tensorflow\python\keras\optimizers.py", line 869, in deserialize printable_module_name='optimizer') File "D:\work\work spaces\pycharm\news_classification\3.6venv\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py", line 373, in deserialize_keras_object list(custom_objects.items()))) File "D:\work\work spaces\pycharm\news_classification\3.6venv\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 859, in from_config return cls(**config) File "D:\work\work spaces\pycharm\news_classification\3.6venv\lib\site-packages\keras_radam\optimizers.py", line 34, in __init__ 
super(RAdam, self).__init__(**kwargs) TypeError: __init__() missing 1 required positional argument: 'name' </code></pre>
133