Dataset columns:
- category: stringclasses (107 values)
- title: stringlengths 15–179
- question_link: stringlengths 59–147
- question_body: stringlengths 53–33.8k
- answer_html: stringlengths 0–28.8k
- __index_level_0__: int64, 0–1.58k
fine-tuning
Optaplanner benchmarking and fine tuning
https://stackoverflow.com/questions/30700617/optaplanner-benchmarking-and-fine-tuning
<p>I am currently tweaking and fine tuning my installer booking assignment optimizer; I just recently upgraded my library to Optaplanner 6.2.0 Final. I am using the benchmarker to observe which optimization strategy (EntityTabu, SimulatedAnnealing, with or without TailChainSwapMove) works best. I have a few questions: <br>1) I made an eventListener attached to my Solver, for displaying any improvements in scoring. Can I attach the eventListener to my benchmark? <br>2) For ChangeMove and SwapMove selector, can I use a filterClass in conjunction with an entitySelector, so I could utilize nearbyDistanceMeterClass? <br></p> <pre><code>&lt;solverBenchmark&gt; &lt;name&gt;Entity tabu w tailChainSwapMove&lt;/name&gt; &lt;solver&gt; &lt;localSearch&gt; &lt;unionMoveSelector&gt; &lt;changeMoveSelector&gt; &lt;filterClass&gt;com.tmrnd.pejal.opta.solver.move.InstallerChangeMoveFilter&lt;/filterClass&gt; &lt;/changeMoveSelector&gt; &lt;swapMoveSelector&gt; &lt;filterClass&gt;com.tmrnd.pejal.opta.solver.move.SamePttSwapMoveFilter&lt;/filterClass&gt; &lt;/swapMoveSelector&gt; &lt;tailChainSwapMoveSelector&gt; &lt;entitySelector id="entitySelector3"/&gt; &lt;valueSelector&gt; &lt;nearbySelection&gt; &lt;originEntitySelector mimicSelectorRef="entitySelector3"/&gt; &lt;nearbyDistanceMeterClass&gt;com.tmrnd.pejal.opta.solver.move.BookingNearbyDistanceMeter&lt;/nearbyDistanceMeterClass&gt; &lt;parabolicDistributionSizeMaximum&gt;20&lt;/parabolicDistributionSizeMaximum&gt; &lt;/nearbySelection&gt; &lt;/valueSelector&gt; &lt;/tailChainSwapMoveSelector&gt; &lt;/unionMoveSelector&gt; &lt;acceptor&gt; &lt;entityTabuRatio&gt;0.05&lt;/entityTabuRatio&gt; &lt;/acceptor&gt; &lt;forager&gt; &lt;acceptedCountLimit&gt;1000&lt;/acceptedCountLimit&gt; &lt;/forager&gt; &lt;/localSearch&gt; &lt;/solver&gt; </code></pre> <p></p>
<p>1) Do you mean like all the optional statistics that the benchmarker supports, such as the BEST_SCORE statistic (see docs) etc? All those statistics are nicely shown in the benchmark report.</p> <p>2) Try it out.</p>
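For reference, a benchmark statistic such as BEST_SCORE is enabled in the benchmark configuration, not on the solver itself. A minimal sketch (element names follow the OptaPlanner 6.x benchmarker docs; the directory and the inherited-benchmark layout are placeholders to adapt to your setup):

```xml
<plannerBenchmark>
  <benchmarkDirectory>local/data/benchmark</benchmarkDirectory>
  <inheritedSolverBenchmark>
    <problemBenchmarks>
      <problemStatisticType>BEST_SCORE</problemStatisticType>
    </problemBenchmarks>
  </inheritedSolverBenchmark>
  <!-- your individual <solverBenchmark> elements go here -->
</plannerBenchmark>
```

The resulting best-score-over-time chart then appears per problem in the generated HTML benchmark report, which covers the "show score improvements" use case without attaching a listener.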
1,134
fine-tuning
Fine-Tuning Keras model
https://stackoverflow.com/questions/43869553/fine-tuning-keras-model
<p>I'm working on facial expression recognition using a CNN. I'm using Keras with TensorFlow as the backend. My model is saved in h5 format.</p> <p>I want to retrain my network, and fine-tune my model with the VGG model.</p> <p>How can I do that with Keras?</p>
<p>Save your model's architecture and weights: </p> <pre><code>json_string = model.to_json() model.save_weights('model_weights.h5') </code></pre> <p>Load the model architecture and weights:</p> <pre><code>from keras.models import model_from_json model = model_from_json(json_string) model.load_weights('model_weights.h5') </code></pre> <p>Start training again from here for fine-tuning. I hope this helps.</p>
1,135
fine-tuning
Fine tuning deep autoencoder model for mnist
https://stackoverflow.com/questions/56131203/fine-tuning-deep-autoencoder-model-for-mnist
<p>I have developed a 3-layer deep autoencoder model for the MNIST dataset; I am just practicing on this toy dataset, as I am a beginner in this fine-tuning paradigm.</p> <p>Following is the code:</p> <pre><code>from keras import layers from keras.layers import Input, Dense from keras.models import Model,Sequential from keras.datasets import mnist import numpy as np # Deep Autoencoder # this is the size of our encoded representations encoding_dim = 32 # 32 floats -&gt; compression factor 24.5, assuming the input is 784 floats # this is our input placeholder; 784 = 28 x 28 input_img = Input(shape=(784, )) my_epochs = 100 # "encoded" is the encoded representation of the inputs encoded = Dense(encoding_dim * 4, activation='relu')(input_img) encoded = Dense(encoding_dim * 2, activation='relu')(encoded) encoded = Dense(encoding_dim, activation='relu')(encoded) # "decoded" is the lossy reconstruction of the input decoded = Dense(encoding_dim * 2, activation='relu')(encoded) decoded = Dense(encoding_dim * 4, activation='relu')(decoded) decoded = Dense(784, activation='sigmoid')(decoded) # this model maps an input to its reconstruction autoencoder = Model(input_img, decoded) # Separate Encoder model # this model maps an input to its encoded representation encoder = Model(input_img, encoded) # Separate Decoder model # create a placeholder for an encoded (32-dimensional) input encoded_input = Input(shape=(encoding_dim, )) # retrieve the layers of the autoencoder model decoder_layer1 = autoencoder.layers[-3] decoder_layer2 = autoencoder.layers[-2] decoder_layer3 = autoencoder.layers[-1] # create the decoder model decoder = Model(encoded_input, decoder_layer3(decoder_layer2(decoder_layer1(encoded_input)))) # Train to reconstruct MNIST digits # configure model to use a per-pixel binary crossentropy loss, and the Adadelta optimizer autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') # prepare input data (x_train, y_train), (x_test, y_test) = mnist.load_data() # 
normalize all values between 0 and 1 and flatten the 28x28 images into vectors of size 784 x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) # Train autoencoder for 50 epochs autoencoder.fit(x_train, x_train, epochs=my_epochs, batch_size=256, shuffle=True, validation_data=(x_test, x_test), verbose=2) # after 100 epochs the autoencoder seems to reach a stable train/test lost value # Visualize the reconstructed encoded representations # encode and decode some digits # note that we take them from the *test* set encodedTrainImages=encoder.predict(x_train) encoded_imgs = encoder.predict(x_test) decoded_imgs = decoder.predict(encoded_imgs) # From here I want to fine tune just the encoder model model=Sequential() model=Sequential() for layer in encoder.layers: model.add(layer) model.add(layers.Flatten()) model.add(layers.Dense(20, activation='relu')) model.add(layers.Dropout(0.5)) model.add(layers.Dense(10, activation='softmax')) </code></pre> <p><strong>Following is my encoder model which I want to fine-tune.</strong></p> <pre><code>encoder.summary() _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 784) 0 _________________________________________________________________ dense_1 (Dense) (None, 128) 100480 _________________________________________________________________ dense_2 (Dense) (None, 64) 8256 _________________________________________________________________ dense_3 (Dense) (None, 32) 2080 ================================================================= Total params: 110,816 Trainable params: 110,816 Non-trainable params: 0 _________________________________________________________________ </code></pre> <p><strong>Problem:1</strong></p> <p>After building 
the autoencoder model, I want to use just the encoder and fine-tune it for a classification task on the MNIST dataset, but I am getting errors.</p> <p>Error:</p> <pre><code>Traceback (most recent call last): File "C:\Users\samer\Anaconda3\envs\tensorflow-gpu\lib\site-packages\IPython\core\interactiveshell.py", line 3267, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "&lt;ipython-input-15-528c079e5325&gt;", line 3, in &lt;module&gt; model.add(layers.Flatten()) File "C:\Users\samer\Anaconda3\envs\tensorflow-gpu\lib\site-packages\keras\engine\sequential.py", line 181, in add output_tensor = layer(self.outputs[0]) File "C:\Users\samer\Anaconda3\envs\tensorflow-gpu\lib\site-packages\keras\engine\base_layer.py", line 414, in __call__ self.assert_input_compatibility(inputs) File "C:\Users\samer\Anaconda3\envs\tensorflow-gpu\lib\site-packages\keras\engine\base_layer.py", line 327, in assert_input_compatibility str(K.ndim(x))) ValueError: Input 0 is incompatible with layer flatten_4: expected min_ndim=3, found ndim=2 </code></pre> <p><strong>Problem 2:</strong> </p> <p>Similarly, I would later use a pre-trained model where each autoencoder is trained in a greedy manner and then the final model is fine-tuned. Can somebody guide me on how to proceed with these two tasks?</p> <p>Regards</p>
<h2>Problem 1</h2> <p>The problem is that you are trying to flatten a layer that is already flat: your encoder is made up of one-dimensional Dense layers, which have shape <code>(batch_size, dim)</code>.</p> <p>The Flatten layer expects an input with at least 3 dimensions, i.e. a shape <code>(batch_size, dim1, dim2)</code> (e.g. the output of a Conv2D layer); by removing it the model will build properly:</p> <pre><code>encoding_dim = 32 input_img = layers.Input(shape=(784, )) encoded = layers.Dense(encoding_dim * 4, activation='relu')(input_img) encoded = layers.Dense(encoding_dim * 2, activation='relu')(encoded) encoded = layers.Dense(encoding_dim, activation='relu')(encoded) encoder = Model(input_img, encoded) [...] model = Sequential() for layer in encoder.layers: print(layer.name) model.add(layer) model.add(layers.Dense(20, activation='relu')) model.add(layers.Dropout(0.5)) model.add(layers.Dense(10, activation='softmax')) model.summary() </code></pre> <p>Which outputs:</p> <pre><code>input_1 dense_1 dense_2 dense_3 Model: "sequential_1" ________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_1 (Dense) (None, 128) 100480 _________________________________________________________________ dense_2 (Dense) (None, 64) 8256 _________________________________________________________________ dense_3 (Dense) (None, 32) 2080 _________________________________________________________________ dense_4 (Dense) (None, 20) 660 _________________________________________________________________ dropout_1 (Dropout) (None, 20) 0 _________________________________________________________________ dense_5 (Dense) (None, 10) 210 ================================================================= Total params: 111,686 Trainable params: 111,686 Non-trainable params: 0 _________________________________________________________________ </code></pre> 
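The shape mismatch above can be reproduced outside Keras; a small NumPy sketch (illustrative only, with made-up batch and feature sizes) of why a `(batch_size, dim)` tensor is already flat while a convolutional output is not:

```python
import numpy as np

# Output of a Dense layer: one vector per sample, shape (batch_size, dim).
dense_output = np.zeros((256, 32))
assert dense_output.ndim == 2  # already flat, nothing left to flatten

# Output of e.g. a Conv2D layer: one feature map per sample.
conv_output = np.zeros((256, 7, 7, 64))
assert conv_output.ndim == 4

# Flattening collapses all non-batch axes into a single axis per sample.
flattened = conv_output.reshape(conv_output.shape[0], -1)
assert flattened.shape == (256, 7 * 7 * 64)  # (256, 3136)
```

This is exactly what the `min_ndim=3, found ndim=2` error is saying: Flatten needs at least one extra axis beyond `(batch, features)` to do any work.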
<p><strong>___</strong></p> <h3>Edit: integrating answers to questions in the comments</h3> <p><strong>Q: How can I be sure that the new model will be using the same weights as the previously trained encoder?</strong></p> <p>A: In your code, what you are doing is iterating through the layers contained inside of the encoder, then passing each of them to <code>model.add()</code>. What you are doing here is passing the reference to each layer directly, therefore you will have the very same layer inside your new model. Here is a proof of concept using the layer name:</p> <pre><code>encoding_dim = 32 input_img = Input(shape=(784, )) encoded = Dense(encoding_dim * 4, activation='relu')(input_img) encoded = Dense(encoding_dim * 2, activation='relu')(encoded) encoded = Dense(encoding_dim, activation='relu')(encoded) decoded = Dense(encoding_dim * 2, activation='relu')(encoded) decoded = Dense(encoding_dim * 4, activation='relu')(decoded) decoded = Dense(784, activation='sigmoid')(decoded) autoencoder = Model(input_img, decoded) print("autoencoder first Dense layer reference:", autoencoder.layers[1]) encoder = Model(input_img, encoded) print("encoder first Dense layer reference:", encoder.layers[1]) new_model = Sequential() for i, layer in enumerate(encoder.layers): print("Before: ", layer.name) new_model.add(layer) if i != 0: new_model.layers[i-1].name = "new_model_"+layer.name print("After: ", layer.name) </code></pre> <p>Which outputs:</p> <pre><code>autoencoder first Dense layer reference: &lt;keras.layers.core.Dense object at 0x7fb5f138e278&gt; encoder first Dense layer reference: &lt;keras.layers.core.Dense object at 0x7fb5f138e278&gt; Before: input_1 Before: dense_1 After: new_model_dense_1 Before: dense_2 After: new_model_dense_2 Before: dense_3 After: new_model_dense_3 </code></pre> <p>As you can see, the layer references in the encoder and in the autoencoder are the same. 
What's more, by changing the layer name inside of the new model we are also changing the layer name of the encoder's corresponding layer. For more details on Python arguments being passed by reference, check out this <a href="https://stackoverflow.com/a/986145/6945436">answer</a>.</p> <hr> <p><strong>Q: Do I need one-hot encoding for my data? If so, how?</strong></p> <p>A: You do need one-hot encoding, since you are dealing with a multi-class classification problem. The encoding is simply done by using a handy Keras function:</p> <pre><code>from keras.utils import np_utils one_hot = np_utils.to_categorical(y_train) </code></pre> <p>Here's a link to the <a href="https://keras.io/utils/#to_categorical" rel="nofollow noreferrer">documentation</a>.</p> <p><strong>___</strong></p> <hr> <h2>Problem 2</h2> <p>Regarding your second question, it is not very clear what you're aiming at; however, it seems to me that you want to build an architecture which contains several parallel auto-encoders specialized on different tasks, and then concatenate their outputs by adding some final, common layers.</p> <p>In any case, what I can do so far is suggest you take a look at this <a href="https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models" rel="nofollow noreferrer">guide</a>, which explains how to build multi-input and multi-output models, and use it as a baseline to start your custom implementation.</p> <p><strong>___</strong></p> <h3>Edit 2: Problem 2 answer integration</h3> <p>Regarding the greedy training task, the approach is to train one layer at a time, freezing all the previous ones as you append new ones. 
Here's an example for a 3(+1) greedy-trained-layers network, which is later used as a base for a new model:</p> <pre><code>(x_train, y_train), (x_test, y_test) = mnist.load_data() y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) x_train = np.reshape(x_train, (x_train.shape[0], -1)) x_test = np.reshape(x_test, (x_test.shape[0], -1)) model = Sequential() model.add(Dense(256, activation="relu", kernel_initializer="he_uniform", input_shape=(28*28,))) model.add(Dense(10, activation="softmax")) model.compile(optimizer=SGD(lr=0.01, momentum=0.9), loss="categorical_crossentropy", metrics=["accuracy"]) model.fit(x_train, y_train, batch_size=64, epochs=50, verbose=1) # Remove last layer model.pop() # 'Freeze' previous layers, so to single-train the new one for layer in model.layers: layer.trainable = False # Append new layer + classification layer model.add(Dense(64, activation="relu", kernel_initializer="he_uniform")) model.add(Dense(10, activation="softmax")) model.fit(x_train, y_train, batch_size=64, epochs=50, verbose=0) # Remove last layer model.pop() # 'Freeze' previous layers, so to single-train the new one for layer in model.layers: layer.trainable = False # Append new layer + classification layer model.add(Dense(32, activation="relu", kernel_initializer="he_uniform")) model.add(Dense(10, activation="softmax")) model.fit(x_train, y_train, batch_size=64, epochs=50, verbose=0) # Create new model which will use the pre-trained layers new_model = Sequential() # Discard the last layer from the previous model model.pop() # Optional: you can decide to set the pre-trained layers as trainable, in # which case it would be like having initialized their weights, or not. 
for l in model.layers: l.trainable = True new_model.add(model) new_model.add(Dense(20, activation='relu')) new_model.add(Dropout(0.5)) new_model.add(Dense(10, activation='softmax')) new_model.compile(optimizer=SGD(lr=0.01, momentum=0.9), loss="categorical_crossentropy", metrics=["accuracy"]) new_model.fit(x_train, y_train, batch_size=64, epochs=100, verbose=1) </code></pre> <p>This is roughly it; however, I must say that greedy layer-wise training may not be a proper solution anymore: nowadays ReLU, Dropout and other regularization techniques make greedy layer training an obsolete and time-consuming weight-initialization scheme, so you might want to take a look at other possibilities as well before going for greedy training.</p> <p><strong>___</strong></p>
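The one-hot encoding used above can also be written by hand; a tiny NumPy sketch that is equivalent in spirit to `np_utils.to_categorical` (the function name `to_one_hot` is mine, for illustration):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Map integer class labels to one-hot row vectors."""
    return np.eye(num_classes)[labels]

y = np.array([0, 2, 1])
one_hot = to_one_hot(y, 3)
# Each row has a single 1 at the index of its class label.
assert one_hot.tolist() == [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```

Indexing the identity matrix by the label array picks out, for each label, the row whose only nonzero entry sits at that label's position.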
1,136
fine-tuning
How to fine tune GitHub Copilot?
https://stackoverflow.com/questions/72554328/how-to-fine-tune-fine-tune-github-copilot
<p>We can fine tune language models like <code>BERT</code>, <code>GPT-3</code>.</p> <p>Can I fine tune <code>GitHub Copilot</code> model?</p> <p>I have already looked into examples from <a href="https://copilot.github.com/" rel="nofollow noreferrer">https://copilot.github.com/</a> but cant find the details.</p> <p>Would really appreciate if someone had fine tuned Github Copilot.</p>
<p>There does not seem to be a client-facing feature allowing you to fine-tune Copilot directly.</p> <p>Here are two illustrations of why this feature is, for now (Q2 2022), missing.</p> <p>The <a href="https://github.com/features/copilot" rel="nofollow noreferrer">Copilot feature page</a> initially included this:</p> <blockquote> <h2>How will GitHub Copilot get better over time?</h2> <p>GitHub Copilot doesn’t actually test the code it suggests, so the code may not even compile or run. GitHub Copilot can only hold a very limited context, so even single source files longer than a few hundred lines are clipped and only the immediately preceding context is used. And GitHub Copilot may suggest old or deprecated uses of libraries and languages. You can use the code anywhere, but you do so at your own risk.</p> </blockquote> <p>As <a href="https://twitter.com/tomekkorbak" rel="nofollow noreferrer">Tomek Korbak</a> explains <a href="https://twitter.com/tomekkorbak/status/1410554250514636805" rel="nofollow noreferrer">on Twitter</a>:</p> <blockquote> <p>Actually, Copilot's completions will always be optimised for human's liking, not necessarily compiler's liking.</p> <p>That's because the language model training objective (predicting the next token in text) is great at capturing short-term dependencies (which explains the human feel of generated snippets).</p> <p>But it struggles to capture long-term, global, semantic properties of generated sequences such as compilability. 
And there's no easy way of including compilability as a signal for their training.</p> <p>The standard way -- fine-tuning language models using RL with compilability as a reward -- notoriously leads to catastrophic forgetting: less diverse and less accurate completions.</p> </blockquote> <p>Tomek references &quot;<a href="https://arxiv.org/pdf/2106.04985.pdf" rel="nofollow noreferrer">Energy-Based Models for Code Generation under Compilability Constraints (pdf)</a>&quot;</p> <blockquote> <p><a href="https://i.sstatic.net/ulfPr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ulfPr.png" alt="https://pbs.twimg.com/media/E5NHqGjXIAYRtwa?format=png&amp;name=small" /></a></p> <p>Our solution (KL-DPG) boosts compilability rate of generated sequences from 55% to 70%.<br /> RL fine-tuning can do better but at a cost of catastrophic forgetting.</p> <p>Overall, energy-based models (EBMs) turn out to be great at expressing weird, sequence-level constraints that would be super hard as to express as normalised priors for autoregressive language models.</p> <p>EBMs provide a way of injecting our structured, symbolic knowledge into large language models without breaking them down or sacrificing their uncanny abilities.<br /> The space of further applications in controllable generation is huge.</p> </blockquote> <p>So not so easy.</p> <p><a href="https://tmabraham.github.io/" rel="nofollow noreferrer">Tanishq Mathew Abraham</a> explains in &quot;<a href="https://tmabraham.github.io/blog/github_copilot" rel="nofollow noreferrer">Coding with GitHub Copilot</a>&quot;</p> <blockquote> <p>I wonder if the GitHub team might also develop a way of perhaps fine-tuning GitHub Copilot to specific use-cases.</p> <p>For example, there may be a specific GitHub Copilot models for fastai, JAX, etc. 
They would be fine-tuned on the source code of these libraries and codebases that use these libraries.</p> <p>But making sure that the tool does not provide outdated suggestions would still be a challenge.<br /> I don’t think it would be possible to provide suggestions for a brand-new library that does not have enough codebases using it to train on.</p> <p>Additionally, for situations like fastai where there are older APIs and newer APIs, when fine-tuning a model, the codebases using the older APIs would have to be filtered out.</p> </blockquote>
1,137
fine-tuning
How to use XLMRoberta in fine-tuning
https://stackoverflow.com/questions/70255359/how-to-use-xlmroberta-in-fine-tuning
<p>There are two problems I met when fine-tuning my code. I was trying to use X_1 and X_2 for regression. There are different languages in the corpus.</p> <pre><code>HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/xlm-roberta-base/resolve/main/tf_model.h5 During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /tmp/ipykernel_33/2123064688.py in &lt;module&gt; 55 # ) 56 ---&gt; 57 model = TFXLMRobertaForSequenceClassification.from_pretrained('xlm-roberta-base',num_labels=1) OSError: Can't load weights for 'xlm-roberta-base'. Make sure that: - 'xlm-roberta-base' is a correct model identifier listed on 'https://huggingface.co/models' (make sure 'xlm-roberta-base' is not a path to a local directory with something else, in that case) - or 'xlm-roberta-base' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. </code></pre> <p>This is my code:</p> <pre><code>tokenizer = XLMRobertaTokenizerFast.from_pretrained('xlm-roberta-base') train_encoding = tokenizer(X_train_1,X_train_2,truncation=True,padding=True) val_encoding = tokenizer(X_val_1,X_val_2,truncation=True,padding=True) train_dataset = tf.data.Dataset.from_tensor_slices( (dict(train_encoding),y_train) ) val_dataset = tf.data.Dataset.from_tensor_slices( (dict(val_encoding),y_val) ) model = TFXLMRobertaForSequenceClassification.from_pretrained('xlm-roberta-base',num_labels=1) </code></pre>
<p>There are several things you should know before diving deep into <code>huggingface</code> transformers.</p> <ol> <li>The preferred library for working with <code>huggingface</code>'s transformers is <code>PyTorch</code>.</li> <li>For several widely used models, you may find a <code>Tensorflow</code> version alongside, but not for all.</li> <li>Fortunately, there are ways to convert <code>pt</code> checkpoints to <code>tf</code> and vice versa.</li> </ol> <p>Finally, how to fix the code:</p> <pre><code># option 1: switching to pytorch tokenizer = XLMRobertaTokenizerFast.from_pretrained('xlm-roberta-base') model = XLMRobertaForSequenceClassification.from_pretrained('xlm-roberta-base', num_labels=1) # option 2: using un-official tensorflow checkpoints model = TFXLMRobertaForSequenceClassification.from_pretrained('jplu/tf-xlm-roberta-base', num_labels=1) # option 3: converting the pt checkpoint to tensorflow on the fly (not recommended!) model = TFXLMRobertaForSequenceClassification.from_pretrained('xlm-roberta-base', from_pt=True, num_labels=1) </code></pre>
1,138
fine-tuning
Exceed quota limits when fine tuning (Vertex ai)
https://stackoverflow.com/questions/76643013/exceed-quota-limits-when-fine-tuning-vertex-ai
<p>I'm facing an error while fine-tuning with a custom dataset on Vertex AI from the Google Cloud Platform (GCP). Here's the error message I encountered: <a href="https://i.sstatic.net/egslj.png" rel="nofollow noreferrer">screenshot of the error</a>. I would greatly appreciate any assistance in resolving this issue. Thank you!</p> <p>Expected: a fine-tuned model that I can use.</p>
<p>It looks like you've hit this quota for TPUv3s in <code>europe-west4</code>. <a href="https://cloud.google.com/vertex-ai/docs/quotas#custom-trained_model_quotas" rel="nofollow noreferrer">https://cloud.google.com/vertex-ai/docs/quotas#custom-trained_model_quotas</a></p> <p>You can request an increase in this quota for your model. <a href="https://cloud.google.com/docs/quota_detail/view_manage#requesting_higher_quota" rel="nofollow noreferrer">https://cloud.google.com/docs/quota_detail/view_manage#requesting_higher_quota</a></p>
1,139
fine-tuning
Code Infilling fine-tuning with llama code
https://stackoverflow.com/questions/77739328/code-infilling-fine-tuning-with-llama-code
<p>I have a dataset of Java methods and I want to fine-tune a code LLM to provide accurate method names. Right now the dataset is in a .txt format with methods in text separated by a delimiter (###del###).<br /> To do this I thought about using CodeLLaMa, and more specifically code infilling.<br /> From the original documentation:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = &quot;codellama/CodeLlama-7b-hf&quot; tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16 ).to(&quot;cuda&quot;) prompt = '''def remove_non_ascii(s: str) -&gt; str: &quot;&quot;&quot; &lt;FILL_ME&gt; return result ''' input_ids = tokenizer(prompt, return_tensors=&quot;pt&quot;)[&quot;input_ids&quot;].to(&quot;cuda&quot;) output = model.generate( input_ids, max_new_tokens=200, ) output = output[0].to(&quot;cpu&quot;) filling = tokenizer.decode(output[input_ids.shape[1]:], skip_special_tokens=True) print(prompt.replace(&quot;&lt;FILL_ME&gt;&quot;, filling)) </code></pre> <p>If I set <code>max_new_tokens=4</code> and replace the method name with <code>&lt;FILL_ME&gt;</code>, I will get a valid method name when I run inference on the model.<br /> My problem is with fine-tuning.<br /> How am I supposed to format the dataset (as a supervised task) to fine-tune such a model?</p>
1,140
fine-tuning
Tensorflow: Fine tune Inception model
https://stackoverflow.com/questions/40016933/tensorflow-fine-tune-inception-model
<p>For a few days I have been following the instructions here: <a href="https://github.com/tensorflow/models/tree/master/inception" rel="nofollow">https://github.com/tensorflow/models/tree/master/inception</a> for fine-tuning the Inception model. The problem is that my dataset is huge, so converting it to TFRecords format would fill my entire hard-disk space. Is there a way of fine-tuning without using this format? Thanks!</p>
<p>Fine-tuning is independent of the data format, so you're fine there. TFRecords improves training and scoring speed; it shouldn't affect the number of iterations or epochs needed, nor the final classification accuracy.</p>
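To sketch the alternative: training data can be streamed and batched lazily instead of being materialized in TFRecords. A minimal stdlib-only Python illustration of the batching idea (in TensorFlow the equivalent role is played by an input pipeline that reads your files directly):

```python
def batched(examples, batch_size):
    """Yield successive batches from any iterable without materializing it."""
    batch = []
    for example in examples:
        batch.append(example)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final, possibly smaller batch
        yield batch

# 'examples' could just as well be lazily decoded images read from disk
batches = list(batched(range(10), 4))
assert batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Because the generator holds only one batch in memory at a time, the dataset's total size never has to fit on disk in a second copy.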
1,141
fine-tuning
model fine-tuning, vanishing gradient problem
https://stackoverflow.com/questions/78645055/model-fine-tuning-vanishing-gradient-problem
<p>I am fine-tuning a <code>mistral-7b</code> with Hugging Face <code>peft</code> and quantization. In my training loop, I am printing the gradient values for each batch, which seem a bit unusual.</p> <pre><code># Print gradients for name, param in model_init.named_parameters(): if param.grad is not None: print(f'Gradient for {name}: {param.grad.norm()}') </code></pre> <p>I am trying to understand why all the gradient values are <code>0s</code> except for iteration #1 (starting from the 0th).</p> <pre><code>iteration 0 ... ... Gradient for base_model.base_model.model.model.layers.31.self_attn.q_proj.lora_B.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.31.self_attn.k_proj.lora_A.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.31.self_attn.k_proj.lora_B.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.31.self_attn.v_proj.lora_A.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight: 0.0 iteration 1 ... ... Gradient for base_model.base_model.model.model.layers.0.self_attn.k_proj.lora_B.default.weight: 0.0142822265625 Gradient for base_model.base_model.model.model.layers.0.self_attn.v_proj.lora_A.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.0.self_attn.v_proj.lora_B.default.weight: 3.953125 Gradient for base_model.base_model.model.model.layers.1.self_attn.q_proj.lora_A.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.1.self_attn.q_proj.lora_B.default.weight: 0.185546875 iteration n ... ... 
Gradient for base_model.base_model.model.model.layers.31.self_attn.k_proj.lora_A.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.31.self_attn.k_proj.lora_B.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.31.self_attn.v_proj.lora_A.default.weight: 0.0 Gradient for base_model.base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight: 0.0 </code></pre> <p>Isn't this a bit unusual? Is it the vanishing gradient problem?</p> <p>For context,</p> <pre><code># custom class class BinaryClassification(nn.Module): def __init__(self, base_model): super().__init__() self.base_model = base_model self.dropout = nn.Dropout(0.05) #self.classifier = nn.Linear(hidden_size, 1) self.relu = nn.ReLU() self.sigmoid = nn.Sigmoid() def forward(self, x): outputs = self.base_model(x) dropout_output = self.dropout(outputs.logits) relu_output = self.relu(dropout_output[:, -1, :]) probs = self.sigmoid(relu_output) # Apply sigmoid to logits to get probabilities #print('forward probs', probs) return probs model_init = BinaryClassification(peft_model) # optimizer criterion = torch.nn.BCELoss() optimizer = torch.optim.AdamW(model_init.parameters(), lr=0.001, eps=1e-08, weight_decay=0.001) scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, verbose=True) # Loss def calc_loss_batch(input_batch, target_batch): output = model_init(input_batch) # Reshape target to match the shape of probs target_batch = target_batch.unsqueeze(1) if output.shape != target_batch.shape: raise Exception(&quot;Shape mismatch between input logits and target label&quot;) # Logits of last output token loss = criterion(output, target_batch) return loss </code></pre>
1,142
fine-tuning
Vertex AI API. Fine tuning the chat model
https://stackoverflow.com/questions/77394815/vertex-ai-api-fine-tuning-the-chat-model
<p>I am trying to launch Vertex AI fine tuning. I have followed the tutorial step by step:</p> <ol> <li>Created storage</li> <li>Uploaded dataset in .jsonl format</li> <li>Configured the model details etc.</li> </ol> <p>One entry in my dataset file (jsonl) looks as follows:</p> <blockquote> <p>{&quot;messages&quot;: [{&quot;author&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Tell a joke about flovers&quot;}, {&quot;author&quot;: &quot;assistant&quot;, &quot;content&quot;: &quot;Roses are red, Violets are blue. I have a gun. Get in the van.&quot;}]}</p> </blockquote> <p>When I hit the &quot;start tuning&quot; button, nothing happens. The manual says I should see the process in the pipelines window, but I see nothing there. I thought maybe my dataset was corrupted, so I also tried the sample dataset provided with the tutorial. Same thing: the tuning process does not start. What am I missing here?</p> <p><a href="https://i.sstatic.net/8kzjL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8kzjL.png" alt="enter image description here" /></a></p> <p>Update:</p> <p>I was able to fix the problem above. All I had to do was enable all the related APIs. It is weird that GCP doesn't provide any kind of warning during fine-tuning startup. <a href="https://i.sstatic.net/gHzWh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gHzWh.png" alt="enter image description here" /></a></p> <p>Now, I have the fine-tune pipeline job running but failing with the following error:</p> <p><a href="https://i.sstatic.net/LtXbL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LtXbL.png" alt="enter image description here" /></a></p>
<p><em>Posting the above comments as an answer to help community members in their research.</em></p> <p><strong>Issue 1:</strong> When trying to launch Vertex AI fine tuning of the chat model by clicking the &quot;start tuning&quot; button, nothing happens.</p> <p><strong>Solution:</strong> The above issue was solved by enabling the APIs. On the Vertex AI main page there is a blue button &quot;enable all recommended APIs&quot;. Once it was clicked, the fine-tuning setup started working.</p> <p><strong>Issue 2:</strong> Error message: <code>code=RESOURCE_EXHAUSTED</code></p> <p><strong>Solution:</strong> This type of error generally happens when the requested resource is not available in that region or has reached its maximum quota. You can check the regions where the resource is available on the Quotas page within your project: go to &quot;IAM &amp; Admin&quot; &gt; &quot;Quotas&quot;, then find the resource using the search bar.</p> <p>For more information refer to <a href="https://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/quickstart-tuning#generative-ai-tune-model-console" rel="nofollow noreferrer">link1</a>, <a href="https://www.youtube.com/watch?v=4A4W03qUTsw" rel="nofollow noreferrer">link2</a> and <a href="https://cloud.google.com/vertex-ai/docs/resources" rel="nofollow noreferrer">link3</a>.</p> <p>Posting the answer as <em><strong>community wiki</strong></em> for the benefit of the community that might encounter this use case in the future.</p> <p>Feel free to edit this answer for additional information.</p>
1,143
fine-tuning
Keras model gets worse when fine-tuning
https://stackoverflow.com/questions/66460418/keras-model-gets-worse-when-fine-tuning
<p>I'm trying to follow the fine-tuning steps described in <a href="https://www.tensorflow.org/tutorials/images/transfer_learning#create_the_base_model_from_the_pre-trained_convnets" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/images/transfer_learning#create_the_base_model_from_the_pre-trained_convnets</a> to get a trained model for binary segmentation.</p> <p>I create an encoder-decoder with the weights of the encoder being the ones of the MobileNetV2 and fixed as <code>encoder.trainable = False</code>. Then, I define my decoder as said in the tutorial and I train the network for 300 epochs using a learning rate of 0.005. I get the following loss value and Jaccard index during the lasts epochs:</p> <pre><code>Epoch 297/300 55/55 [==============================] - 85s 2s/step - loss: 0.2443 - jaccard_sparse3D: 0.5556 - accuracy: 0.9923 - val_loss: 0.0440 - val_jaccard_sparse3D: 0.3172 - val_accuracy: 0.9768 Epoch 298/300 55/55 [==============================] - 75s 1s/step - loss: 0.2437 - jaccard_sparse3D: 0.5190 - accuracy: 0.9932 - val_loss: 0.0422 - val_jaccard_sparse3D: 0.3281 - val_accuracy: 0.9776 Epoch 299/300 55/55 [==============================] - 78s 1s/step - loss: 0.2465 - jaccard_sparse3D: 0.4557 - accuracy: 0.9936 - val_loss: 0.0431 - val_jaccard_sparse3D: 0.3327 - val_accuracy: 0.9769 Epoch 300/300 55/55 [==============================] - 85s 2s/step - loss: 0.2467 - jaccard_sparse3D: 0.5030 - accuracy: 0.9923 - val_loss: 0.0463 - val_jaccard_sparse3D: 0.3315 - val_accuracy: 0.9740 </code></pre> <p>I store all the weights of this model and then, I compute the fine-tuning with the following steps:</p> <pre class="lang-py prettyprint-override"><code>model.load_weights('my_pretrained_weights.h5') model.trainable = True model.compile(optimizer=Adam(learning_rate=0.00001, name='adam'), loss=SparseCategoricalCrossentropy(from_logits=True), metrics=[jaccard, &quot;accuracy&quot;]) model.fit(training_generator, validation_data=(val_x, 
val_y), epochs=5, validation_batch_size=2, callbacks=callbacks) </code></pre> <p>Suddenly the performance of my model is way much worse than during the training of the decoder:</p> <pre><code>Epoch 1/5 55/55 [==============================] - 89s 2s/step - loss: 0.2417 - jaccard_sparse3D: 0.0843 - accuracy: 0.9946 - val_loss: 0.0079 - val_jaccard_sparse3D: 0.0312 - val_accuracy: 0.9992 Epoch 2/5 55/55 [==============================] - 90s 2s/step - loss: 0.1920 - jaccard_sparse3D: 0.1179 - accuracy: 0.9927 - val_loss: 0.0138 - val_jaccard_sparse3D: 7.1138e-05 - val_accuracy: 0.9998 Epoch 3/5 55/55 [==============================] - 95s 2s/step - loss: 0.2173 - jaccard_sparse3D: 0.1227 - accuracy: 0.9932 - val_loss: 0.0171 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 0.9999 Epoch 4/5 55/55 [==============================] - 94s 2s/step - loss: 0.2428 - jaccard_sparse3D: 0.1319 - accuracy: 0.9927 - val_loss: 0.0190 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 1.0000 Epoch 5/5 55/55 [==============================] - 97s 2s/step - loss: 0.1920 - jaccard_sparse3D: 0.1107 - accuracy: 0.9926 - val_loss: 0.0215 - val_jaccard_sparse3D: 0.0000e+00 - val_accuracy: 1.0000 </code></pre> <p>Is there any known reason why this is happening? Is it normal? Thank you in advance!</p>
<p>OK, I found out what I do differently that makes it NOT necessary to recompile. I do not set encoder.trainable = False. What I do in the code below is equivalent:</p> <pre><code>for layer in encoder.layers:
    layer.trainable = False
</code></pre> <p>then train your model. Then you can unfreeze the encoder weights with:</p> <pre><code>for layer in encoder.layers:
    layer.trainable = True
</code></pre> <p>You do not need to recompile the model. I tested this and it works as expected. You can verify it by printing the model summary before and after and looking at the number of trainable parameters. As for changing the learning rate, I find it is best to use the Keras callback ReduceLROnPlateau to automatically adjust the learning rate based on validation loss. I also recommend the EarlyStopping callback, which monitors the validation loss and halts training if it fails to decrease for 'patience' consecutive epochs. Setting restore_best_weights=True will load the weights of the epoch with the lowest validation loss, so you don't have to save and then reload the weights. Set epochs to a large number to ensure this callback activates. The code I use is shown below:</p> <pre><code>es = tf.keras.callbacks.EarlyStopping(
    monitor=&quot;val_loss&quot;, patience=3, verbose=1, restore_best_weights=True)
rlronp = tf.keras.callbacks.ReduceLROnPlateau(
    monitor=&quot;val_loss&quot;, factor=0.5, patience=1, verbose=1)
callbacks = [es, rlronp]
</code></pre> <p>In model.fit, set callbacks=callbacks.</p>
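The "verify by printing the model summary" step above comes down to comparing trainable-parameter totals before and after toggling the flags. Here is a minimal, framework-agnostic sketch of that bookkeeping; the `(param_count, trainable)` pairs are a hypothetical stand-in for real layers, which in Keras would expose `layer.count_params()` and `layer.trainable`:

```python
def count_params(layers):
    """Return (trainable, frozen) parameter totals.

    `layers` is a list of (param_count, trainable_flag) pairs standing in
    for real layers with `layer.count_params()` and `layer.trainable`.
    """
    trainable = sum(n for n, t in layers if t)
    frozen = sum(n for n, t in layers if not t)
    return trainable, frozen

# Encoder frozen, decoder trainable:
layers = [(1000, False), (1000, False), (500, True)]
before = count_params(layers)          # (500, 2000)

# Unfreeze everything, as in the answer's second loop:
layers = [(n, True) for n, _ in layers]
after = count_params(layers)           # (2500, 0)
```

With a real model, `model.summary()` reports the same two totals as "Trainable params" and "Non-trainable params", which is what makes the before/after comparison possible.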
1,144
fine-tuning
Which layers should I freeze for fine tuning a resnet model on keras?
https://stackoverflow.com/questions/47206714/which-layers-should-i-freeze-for-fine-tuning-a-resnet-model-on-keras
<p>I already know how to do it on vgg (fine tuning the last conv block) and inception (fine tuning the top two blocks). I'd like to know which layers is recommended to freeze in order to fine tuning a resnet model.</p>
<p>I think there is no state-of-the-art strategy for this, but I can share my thoughts on the topic (layer names are similar to those presented <a href="https://github.com/fchollet/keras/blob/master/keras/applications/resnet50.py" rel="noreferrer">here</a>):</p> <ol> <li><p>In case you have a lot of real-world photos: freeze all stages up to stage 4 (leave only the 5th trainable). If you overfit - make the 5th stage have fewer layers. If you underfit - <em>unfreeze</em> half of the fourth stage. Remember - the deeper into the network, the more <em>ImageNet</em>-specific the features are.</p></li> <li><p>In case you have few real-world photos: cut the 5th stage, leave half of the 4th stage trainable and freeze the rest. If you overfit - keep cutting the 4th stage; if you underfit - keep extending it.</p></li> <li><p>In case you have a lot of simple photo data (e.g. medical images) - cut the 4th and 5th stages, leave the 3rd trainable and freeze the rest. If you overfit - keep cutting; if you underfit - try point 2.</p></li> <li><p>In case you have few simple photos (less than 10K) - I would advise not using <code>ResNet50</code>. In my experience it overfits severely. I usually implement custom topologies similar to <code>ResNet18</code>. If you still want to try it - follow the instructions from point 3.</p></li> </ol>
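The stage-based freezing described above can be sketched without loading the real network. The snippet below assumes the Keras-style ResNet50 layer-naming convention (`conv2_...` through `conv5_...`, as in the linked source file) and uses a minimal stand-in layer class so the selection logic stays visible:

```python
# Minimal stand-in for a Keras layer: just a name and a trainable flag.
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

def freeze_up_to_stage(layers, last_frozen_stage):
    """Freeze every conv stage up to and including `last_frozen_stage`.

    Assumes ResNet50-style names such as 'conv4_block2_out', where the
    digit right after 'conv' is the stage number.
    """
    for layer in layers:
        if layer.name.startswith("conv") and layer.name[4].isdigit():
            stage = int(layer.name[4])
            layer.trainable = stage > last_frozen_stage
        else:
            # Input/pooling/etc. layers: freeze them with the early stages.
            layer.trainable = False

layers = [Layer(n) for n in
          ["input_1", "conv1_conv", "conv2_block1_out",
           "conv3_block1_out", "conv4_block1_out", "conv5_block1_out"]]
freeze_up_to_stage(layers, last_frozen_stage=4)
trainable = [l.name for l in layers if l.trainable]   # only the 5th stage
```

With a real ResNet50, the same loop would run over `base_model.layers`, and point 1 above corresponds to `last_frozen_stage=4`.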
1,145
fine-tuning
What are the differences between fine tuning and few shot learning?
https://stackoverflow.com/questions/72611335/what-are-the-differences-between-fine-tuning-and-few-shot-learning
<p>I am trying to understand the concept of <code>fine-tuning</code> and <code>few-shot</code> learning.</p> <p>I understand the need for fine-tuning. It is essentially tuning a pre-trained model to a specific downstream task. However, recently I have seen a plethora of blog posts stating zero-shot learning, one-shot learning and few-shot learning.</p> <ul> <li>How are they different from fine-tuning? It appears to me that few-shot learning is a specialization of fine-tuning. What am I missing here?</li> </ul> <p>Can anyone please help me?</p>
<p>Fine-tuning - When you already have a model trained to perform the task you want, but on a different dataset, you initialise it with the pre-trained weights and train it on the target (usually smaller) dataset (usually with a smaller learning rate).</p> <p>Few-shot learning - When you want to train a model on any task using very few samples, e.g. you have a model trained on a different but related task and you (optionally) modify it and train it for the target task using a small number of examples.</p> <p>For example:</p> <p>Fine-tuning - Training a model for intent classification and then fine-tuning it on a different dataset.</p> <p>Few-shot learning - Training a language model on a large text dataset and modifying it (usually the last layer or last few layers) to classify intents by training on a small labelled dataset.</p> <p>There are many more ways to do few-shot learning. As one more example: training a model to classify images where some classes have a very small number of training samples (or 0 for zero-shot and 1 for one-shot). At inference time, correctly classifying these rare classes (rare in training) is the aim of few-shot learning.</p>
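The "very few samples" regime can be made concrete with a deliberately simple stand-in: a plain scikit-learn classifier on synthetic data (an assumption purely for illustration - the scenarios above use neural models), trained once on 5 labelled examples and once on all 800:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary task standing in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

# "Few-shot": the same learner, but only 5 labelled examples
# (3 of class 0, 2 of class 1, so both classes are present).
few_idx = np.concatenate([np.where(y_train == 0)[0][:3],
                          np.where(y_train == 1)[0][:2]])
few_shot = LogisticRegression().fit(X_train[few_idx], y_train[few_idx])

# Analogue of training on the full labelled dataset.
full = LogisticRegression().fit(X_train, y_train)

few_acc = few_shot.score(X_test, y_test)
full_acc = full.score(X_test, y_test)
```

The gap between `few_acc` and `full_acc` is exactly the gap few-shot methods try to close: doing well on the target task despite seeing only a handful of labelled examples for it.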
1,146
fine-tuning
Error &quot;invalidPayload&quot; with Microsoft Azure OpenAI fine-tuning
https://stackoverflow.com/questions/74088197/error-invalidpayload-with-microsoft-azure-openai-fine-tuning
<p>When wanting to run a fine-tune request via the REST API I get the following error message:</p> <pre><code>&quot;error&quot;: { &quot;code&quot;: &quot;invalidPayload&quot;, &quot;message&quot;: &quot;The fineTune field is required.&quot; } </code></pre> <p>Also, in the fine-tuning wizard I get a message saying &quot;No models are available. Please check your access or try again later.&quot; <a href="https://i.sstatic.net/L6EYG.png" rel="nofollow noreferrer">Screenshot of Error Message</a></p> <p>Does anyone know what the problem is? Do I need another subscription?</p>
<p>It looks like you may not have the correct subscription level for Azure OpenAI. You may need to upgrade your subscription in order to use the fine-tuning feature.</p>
1,147
fine-tuning
KerasLayer model increased size after fine-tuning
https://stackoverflow.com/questions/69529302/keraslayer-model-increased-size-after-fine-tuning
<p>I am working with Universal Sentence Encoder (v4) as a TF KerasLayer. I load the model, fine-tune it, and save it back to a file. The original size of the model is 1GB, but if I fine-tune it (without adding any Dense layer) the size increases to 3GB. If I just load and save the model (without fine-tuning) the size stays the same.</p> <p>Why is there such an increase in memory footprint? I suspected the original model uses some form of quantization, but then it is strange that the training is not performed with a different floating-point precision either:</p> <pre><code>import tensorflow as tf
import tensorflow_addons as tfa
import tensorflow_hub as hub
</code></pre> <p>then:</p> <pre><code>x = tf.keras.layers.Input(shape=[], dtype=tf.string)
tmp_output = hub.KerasLayer('https://tfhub.dev/google/universal-sentence-encoder/4', trainable=True)(x)
model = tf.keras.models.Model(x, tmp_output)
model.save(&quot;/media/petrlorenc/Data/universal-sentence-encoder_fine/&quot;)
</code></pre> <p>vs</p> <pre><code>x = tf.keras.layers.Input(shape=[], dtype=tf.string)
tmp_output = hub.KerasLayer('https://tfhub.dev/google/universal-sentence-encoder/4', trainable=True)(x)
model = tf.keras.models.Model(x, tmp_output)
model.compile(optimizer=tf.keras.optimizers.Adam(0.001), loss=tfa.losses.TripletSemiHardLoss())
model.fit([f&quot;sentence {x}&quot; for x in range(size)], y=[random.randint(0, 5) for _ in range(size)], epochs=1, batch_size=128)
model.save(&quot;/media/petrlorenc/Data/universal-sentence-encoder_fine/&quot;)
</code></pre>
1,148
fine-tuning
openai.error.InvalidRequestError: The specified base model does not support fine-tuning. when fine-tune Azure OpenAI model
https://stackoverflow.com/questions/77083082/openai-error-invalidrequesterror-the-specified-base-model-does-not-support-fine
<p>I'm running the following Python code for a fine-tune OpenAI task:</p> <pre class="lang-py prettyprint-override"><code>import openai from openai import cli import time import shutil import json openai.api_key = &quot;*********************&quot; openai.api_base = &quot;https://*********************&quot; openai.api_type = 'azure' openai.api_version = '2023-05-15' deployment_name ='*********************' training_file_name = 'training.jsonl' validation_file_name = 'validation.jsonl' # Samples data are fake sample_data = [ {&quot;prompt&quot;: &quot;Questa parte del testo e’ invece in italiano, perche’ Giuseppe Coco vive a Milano, codice postale 09576.&quot;, &quot;completion&quot;: &quot;[type: LOCATION, start: 36, end: 44, score: 0.85, type: PERSON, start: 54, end: 72, score: 0.85, type: LOCATION, start: 75, end: 81, score: 0.85]&quot;}, {&quot;prompt&quot;: &quot;In this fake document, we describe the ambarabacicicoco, of Alfred Johnson, who lives in Paris (France), the zip code is 21076, and his phone number is +32 475348723.&quot;, &quot;completion&quot;: &quot;[type: AU_TFN, start: 157, end: 166, score: 1.0, type: PERSON, start: 60, end: 74, score: 0.85, type: LOCATION, start: 89, end: 94, score: 0.85, type: LOCATION, start: 97, end: 103, score: 0.85, type: PHONE_NUMBER, start: 153, end: 166, score: 0.75]&quot;}, {&quot;prompt&quot;: &quot;This document is a fac simile&quot;, &quot;completion&quot;: &quot;[]&quot;}, {&quot;prompt&quot;: &quot;Here there are no PIIs&quot;, &quot;completion&quot;: &quot;[]&quot;}, {&quot;prompt&quot;: &quot;Questa parte del testo e’ invece in italiano, perche’ Giuseppe Coco vive a Milano, codice postale 09576.&quot;, &quot;completion&quot;: &quot;[type: LOCATION, start: 36, end: 44, score: 0.85, type: PERSON, start: 54, end: 72, score: 0.85, type: LOCATION, start: 75, end: 81, score: 0.85]&quot;}, {&quot;prompt&quot;: &quot;In this fake document, we describe the ambarabacicicoco, of Alfred Johnson, who lives in Paris (France), 
the zip code is 21076, and his phone number is +32 475348723.&quot;, &quot;completion&quot;: &quot;[type: AU_TFN, start: 157, end: 166, score: 1.0, type: PERSON, start: 60, end: 74, score: 0.85, type: LOCATION, start: 89, end: 94, score: 0.85, type: LOCATION, start: 97, end: 103, score: 0.85, type: PHONE_NUMBER, start: 153, end: 166, score: 0.75]&quot;}, {&quot;prompt&quot;: &quot;This document is a fac simile&quot;, &quot;completion&quot;: &quot;[]&quot;}, {&quot;prompt&quot;: &quot;Here there are no PIIs&quot;, &quot;completion&quot;: &quot;[]&quot;}, {&quot;prompt&quot;: &quot;10 August 2023&quot;, &quot;completion&quot;: &quot;[type: DATE_TIME, start: 0, end: 14, score: 0.85]&quot;}, {&quot;prompt&quot;: &quot;Marijn De Belie, Manu Brehmen (Deloitte Belastingconsulenten)&quot;, &quot;completion&quot;: &quot;[type: PERSON, start: 0, end: 15, score: 0.85, type: PERSON, start: 17, end: 29, score: 0.85]&quot;}, {&quot;prompt&quot;: &quot;The content expressed herein is based on the facts and assumptions you have provided us. We have assumed that these facts and assumptions are correct, complete and accurate.&quot;, &quot;completion&quot;: &quot;[]&quot;}, {&quot;prompt&quot;: &quot;This letter is solely for your benefit and may not be relied upon by anyone other than you.&quot;, &quot;completion&quot;: &quot;[]&quot;}, {&quot;prompt&quot;: &quot;Dear Mr. Mahieu,&quot;, &quot;completion&quot;: &quot;[type: PERSON, start: 9, end: 15, score: 0.85]&quot;}, {&quot;prompt&quot;: &quot;Since 1 January 2018, a capital reduction carried out in accordance with company law rules is partly imputed on the taxable reserves of the SPV&quot;, &quot;completion&quot;: &quot;[type: DATE_TIME, start: 6, end: 20, score: 0.85]&quot;}, ] # Generate the training dataset file. 
print(f'Generating the training file: {training_file_name}') with open(training_file_name, 'w') as training_file: for entry in sample_data: json.dump(entry, training_file) training_file.write('\n') # Copy the validation dataset file from the training dataset file. # Typically, your training data and validation data should be mutually exclusive. # For the purposes of this example, you use the same data. print(f'Copying the training file to the validation file') shutil.copy(training_file_name, validation_file_name) def check_status(training_id, validation_id): train_status = openai.File.retrieve(training_id)[&quot;status&quot;] valid_status = openai.File.retrieve(validation_id)[&quot;status&quot;] print(f'Status (training_file | validation_file): {train_status} | {valid_status}') return (train_status, valid_status) # Upload the training and validation dataset files to Azure OpenAI. training_id = cli.FineTune._get_or_upload(training_file_name, True) validation_id = cli.FineTune._get_or_upload(validation_file_name, True) # Check the upload status of the training and validation dataset files. (train_status, valid_status) = check_status(training_id, validation_id) # Poll and display the upload status once per second until both files succeed or fail to upload. while train_status not in [&quot;succeeded&quot;, &quot;failed&quot;] or valid_status not in [&quot;succeeded&quot;, &quot;failed&quot;]: time.sleep(1) (train_status, valid_status) = check_status(training_id, validation_id) # This example defines a fine-tune job that creates a customized model based on curie, # with just a single pass through the training data. The job also provides # classification-specific metrics by using our validation data, at the end of that epoch. 
create_args = { &quot;training_file&quot;: training_id, &quot;validation_file&quot;: validation_id, &quot;model&quot;: &quot;curie&quot;, &quot;n_epochs&quot;: 1, &quot;compute_classification_metrics&quot;: True, &quot;classification_n_classes&quot;: 3 } # Create the fine-tune job and retrieve the job ID and status from the response. resp = openai.FineTune.create(**create_args) job_id = resp[&quot;id&quot;] status = resp[&quot;status&quot;] # You can use the job ID to monitor the status of the fine-tune job. # The fine-tune job might take some time to start and complete. print(f'Fine-tuning model with job ID: {job_id}.') # Get the status of our fine-tune job. status = openai.FineTune.retrieve(id=job_id)[&quot;status&quot;] # If the job isn't yet done, poll it every 2 seconds. if status not in [&quot;succeeded&quot;, &quot;failed&quot;]: print(f'Job not in terminal status: {status}. Waiting.') while status not in [&quot;succeeded&quot;, &quot;failed&quot;]: time.sleep(2) status = openai.FineTune.retrieve(id=job_id)[&quot;status&quot;] print(f'Status: {status}') else: print(f'Fine-tune job {job_id} finished with status: {status}') # Check if there are other fine-tune jobs in the subscription. # Your fine-tune job might be queued, so this is helpful information to have # if your fine-tune job hasn't yet started. print('Checking other fine-tune jobs in the subscription.') result = openai.FineTune.list() print(f'Found {len(result)} fine-tune jobs.') # Retrieve the name of the customized model from the fine-tune job. result = openai.FineTune.retrieve(id=job_id) if result[&quot;status&quot;] == 'succeeded': model = result[&quot;fine_tuned_model&quot;] # Create the deployment for the customized model by using the standard scale type # without specifying a scale capacity. 
print(f'Creating a new deployment with model: {model}') result = openai.Deployment.create(model=model, scale_settings={&quot;scale_type&quot;:&quot;standard&quot;, &quot;capacity&quot;: None}) # Retrieve the deployment job ID from the results. deployment_id = result[&quot;id&quot;] </code></pre> <p>Based on this Microsoft official documentation: <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?pivots=programming-language-python" rel="nofollow noreferrer">Microsoft documentation for OpenAI fine-tuning</a></p> <p>Now, when I run this script I get the following error:</p> <pre><code>openai.error.InvalidRequestError: The specified base model does not support fine-tuning. </code></pre> <p>Based on a similar question (<a href="https://learn.microsoft.com/en-us/answers/questions/1190892/getting-error-while-finetuning-gpt-3-model-using-a" rel="nofollow noreferrer">similar question</a>), it seems that the problem is related to the region where my OpenAI service is deployed, indeed my OpenAI service is deployed in East US, and as far as I understood, the only available region for fine-tuning is Central US. The problem is that I don't see Central US as an available region to deploy an OpenAI service:</p> <p><a href="https://i.sstatic.net/t05Ru.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t05Ru.png" alt="regions available for OpenAI deployment" /></a></p> <p>Note that I also tried &quot;North Central US&quot;, and got the same error.</p> <p>Do you know what could be the reason of this error?</p>
<blockquote> <p>openai.error.InvalidRequestError: The specified base model does not support fine-tuning.</p> </blockquote> <p>According to <a href="https://learn.microsoft.com/en-us/answers/questions/1353932/when-fine-tune-models-in-azure-openai-will-be-avai" rel="nofollow noreferrer">MS-Q&amp;A</a> by AshokPeddakotla-MSFT:</p> <ul> <li>Fine-tuning is currently not available for new customers, as it is turned off for all regions. Unfortunately, there is no ETA at this moment for when fine-tuning will open up again.</li> <li>If you have previously fine-tuned &amp; deployed in a region, then you can fine-tune in that region if it's available.</li> </ul> <p>As of now, a fine-tuned model can be deployed only in the <strong><code>South Central US</code></strong> location.</p> <p>I had an old subscription with an Azure OpenAI service created in the South Central US location.</p> <p><strong>Portal:</strong> <img src="https://i.imgur.com/wieZlYm.png" alt="enter image description here" /></p> <p>I tried your same code and it deployed successfully.</p> <p><strong>Console:</strong></p> <p><img src="https://i.imgur.com/JJ424IY.png" alt="enter image description here" /></p> <p><strong>Portal:</strong></p> <p><img src="https://i.imgur.com/2UdaPSS.png" alt="enter image description here" /></p> <p>As of now, however, a new customer is not able to deploy a fine-tuned model.</p> <p><strong>Reference:</strong></p> <p><a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability" rel="nofollow noreferrer">Azure OpenAI Service models - Azure OpenAI | Microsoft Learn</a></p>
1,149
fine-tuning
What does fine-tuning a multilingual checkpoint mean?
https://stackoverflow.com/questions/76826638/what-does-fine-tuning-a-multilingual-checkpoint-mean
<p>I'm fine-tuning a SetFit model on a French dataset and following the guide in <a href="https://huggingface.co/blog/setfit" rel="nofollow noreferrer">huggingface</a>. They mention this point on the site that I didn't quite understand</p> <blockquote> <p>&quot;🌎 Multilingual support: SetFit can be used with any Sentence Transformer on the Hub, which means you can classify text in multiple languages by simply fine-tuning a multilingual checkpoint.&quot;</p> </blockquote> <p>Does that mean I must find an already finetuned SetFit model in French when loading the model? As in replace &quot;paraphrase-mpnet-base-v2&quot; below with a French one?</p> <pre><code>model = SetFitModel.from_pretrained(&quot;sentence-transformers/paraphrase-mpnet-base-v2&quot;) </code></pre>
<p>What the point in the guide suggests is that multilingual models fine-tuned using the <code>SetFit</code> method generalize well even to languages they did not see during the <code>SetFit</code> fine-tuning process. This seems to be generally true for multilingual language models, but it probably does no harm to mention it explicitly, particularly when discussing <code>SetFit</code>, a method that usually works with a very small dataset (i.e. a dataset that might not be multilingual).</p> <p>The finding is supported by the <a href="https://arxiv.org/pdf/2209.11055.pdf" rel="nofollow noreferrer">paper</a> mentioned in the guide, where the researchers show that a model fine-tuned on English data using <code>SetFit</code> performs well on a variety of languages (see Table 4).</p> <p>What I would take from it is this: if you take a multilingual checkpoint (e.g. <a href="https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2" rel="nofollow noreferrer"><code>sentence-transformers/paraphrase-multilingual-mpnet-base-v2</code></a>) and fine-tune it on French, it will perform well on French and will probably also perform well on other languages. If you plan to use the fine-tuned model only on French texts, you can certainly try to fine-tune a specifically French model instead - however, it's certainly not true that you <em>must</em> do this.</p> <p>However, if a specifically French sentence transformer exists and you want to use your model only on French texts, I would recommend using the French model. Not because you must, but because it might perform better than the multilingual model.</p>
1,150
fine-tuning
How can I check a confusion_matrix after fine-tuning with custom datasets?
https://stackoverflow.com/questions/68691450/how-can-i-check-a-confusion-matrix-after-fine-tuning-with-custom-datasets
<p>This question is the same as <a href="https://datascience.stackexchange.com/questions/99815/how-can-i-check-a-confusion-matrix-after-fine-tuning-with-custom-datasets">How can I check a confusion_matrix after fine-tuning with custom datasets?</a> on Data Science Stack Exchange.</p> <h2>Background</h2> <p>I would like to check a confusion_matrix, including precision, recall, and f1-score like below, after fine-tuning with custom datasets.</p> <p>The fine-tuning process and the task are <a href="https://huggingface.co/transformers/custom_datasets.html#sequence-classification-with-imdb-reviews" rel="noreferrer">Sequence Classification with IMDb Reviews</a> from the <a href="https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer" rel="noreferrer">Fine-tuning with custom datasets tutorial on Hugging Face</a>.</p> <p>After finishing fine-tuning with Trainer, how can I check a confusion_matrix in this case?</p> <p>An example confusion_matrix output, including precision, recall, and f1-score (from the <a href="http://www.renom.jp/notebooks/product/renom_dl/trainer/notebook.html" rel="noreferrer">original site</a>; just an example output image):</p> <pre><code>predictions = np.argmax(trainer.test(test_x), axis=1) # Confusion matrix and classification report.
print(classification_report(test_y, predictions)) precision recall f1-score support 0 0.75 0.79 0.77 1000 1 0.81 0.87 0.84 1000 2 0.63 0.61 0.62 1000 3 0.55 0.47 0.50 1000 4 0.66 0.66 0.66 1000 5 0.62 0.64 0.63 1000 6 0.74 0.83 0.78 1000 7 0.80 0.74 0.77 1000 8 0.85 0.81 0.83 1000 9 0.79 0.80 0.80 1000 avg / total 0.72 0.72 0.72 10000 </code></pre> <h2>Code</h2> <pre class="lang-py prettyprint-override"><code>from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) model = DistilBertForSequenceClassification.from_pretrained(&quot;distilbert-base-uncased&quot;) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train() </code></pre> <h2>What I did so far</h2> <p>Data set Preparation for <a href="https://huggingface.co/transformers/custom_datasets.html#sequence-classification-with-imdb-reviews" rel="noreferrer">Sequence Classification with IMDb Reviews</a>, and I'm fine-tuning with Trainer.</p> <pre><code>from pathlib import Path def read_imdb_split(split_dir): split_dir = Path(split_dir) texts = [] labels = [] for label_dir in [&quot;pos&quot;, &quot;neg&quot;]: for text_file in (split_dir/label_dir).iterdir(): texts.append(text_file.read_text()) labels.append(0 if label_dir is &quot;neg&quot; else 1) return texts, labels train_texts, train_labels = read_imdb_split('aclImdb/train') test_texts, 
test_labels = read_imdb_split('aclImdb/test') from sklearn.model_selection import train_test_split train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2) from transformers import DistilBertTokenizerFast tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') train_encodings = tokenizer(train_texts, truncation=True, padding=True) val_encodings = tokenizer(val_texts, truncation=True, padding=True) test_encodings = tokenizer(test_texts, truncation=True, padding=True) import torch class IMDbDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) train_dataset = IMDbDataset(train_encodings, train_labels) val_dataset = IMDbDataset(val_encodings, val_labels) test_dataset = IMDbDataset(test_encodings, test_labels) </code></pre>
<p>What you could do in this situation is to iterate over the validation set (or over the test set, for that matter) and manually create lists of <code>y_true</code> and <code>y_pred</code> values.</p> <pre><code>import torch
import torch.nn.functional as F
from sklearn import metrics

y_preds = []
y_trues = []
for index, val_text in enumerate(val_texts):
    tokenized_val_text = tokenizer([val_text],
                                   truncation=True,
                                   padding=True,
                                   return_tensors='pt')
    logits = model(**tokenized_val_text).logits
    prediction = F.softmax(logits, dim=1)
    y_pred = torch.argmax(prediction).numpy()
    y_true = val_labels[index]
    y_preds.append(y_pred)
    y_trues.append(y_true)
</code></pre> <p>Finally,</p> <pre><code>confusion_matrix = metrics.confusion_matrix(y_trues, y_preds, labels=[0, 1])
print(confusion_matrix)
</code></pre> <p>Observations:</p> <ol> <li>The output of the model is the <code>logits</code>, not the normalized probabilities.</li> <li>As such, we apply <code>softmax</code> on dimension one to transform them into actual probabilities (e.g. <code>0.2 for class 0</code>, <code>0.8 for class 1</code>).</li> <li>We apply the <code>.argmax()</code> operation to get the index of the predicted class.</li> </ol>
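Once the `y_trues`/`y_preds` lists are collected, the metrics step itself needs nothing model-specific. A self-contained sketch with toy labels (hypothetical values, purely for illustration) shows both the confusion matrix and the precision/recall/f1 report the question asks about:

```python
from sklearn.metrics import confusion_matrix, classification_report

# Toy labels standing in for the collected y_trues / y_preds lists.
y_trues = [0, 0, 1, 1, 1, 0]
y_preds = [0, 1, 1, 1, 0, 0]

cm = confusion_matrix(y_trues, y_preds, labels=[0, 1])
print(cm)  # rows = true class, columns = predicted class

# target_names maps the integer labels to readable names in the report.
print(classification_report(y_trues, y_preds, target_names=["neg", "pos"]))
```

`classification_report` produces exactly the per-class precision/recall/f1-score table shown in the question, so no manual computation is needed.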
1,151
fine-tuning
The Impact of Pretraining on Fine-tuning and Inference
https://stackoverflow.com/questions/78737176/the-impact-of-pretraining-on-fine-tuning-and-inference
<p>I am working on a binary prediction classification task, primarily focusing on fine-tuning a BERT model to learn the association between CVEs and CWEs. I've structured my task into three phases: first, using a large dataset of CVE and CWE for MLM pretraining; then using the pretrained model weights for fine-tuning; and finally, using the fine-tuned model with task-specific weights for inference. For inference, I randomly select 100 pairs of CVEs for the model to predict, only retaining the model weights and setting the model to `model.eval()`. However, I've encountered an issue where, after transferring the pretrained model to fine-tuning, the evaluation metrics significantly improve (accuracy, precision, recall, f1), and both training and validation loss decrease substantially. Yet, when I use the fine-tuned model weights for inference, the accuracy turns out to be worse than before.</p> <p>What is the reason for this discrepancy?</p> <p>Here are the hyperparameters used for pretraining and fine-tuning:</p> <p>Pretrain Model:</p> <pre><code>batch_size = 16 num_epochs = 10 learning_rate = 5e-4 eps = 1e-8 beta1 = 0.9 beta2 = 0.99 weight_decay = 0.01 total_steps = num_epochs * len(train_loader) warmup_steps = total_steps // 10 early_stopping_patience = 2 </code></pre> <pre><code>mask_prob = 0.15 replace_mask_prob = 0.8 random_replace_prob = 0.10 keep_original_prob = 0.10 </code></pre> <p>Train Model:</p> <pre><code>learning_rate = 2e-5 batch_size = 16 epoch = 5 </code></pre> <p>I have experimented with various combinations of hyperparameters and found that the current settings are optimal for minimizing validation loss and maximizing accuracy.</p>
1,152
fine-tuning
Tesseract fine tuning Cant open lstmbox
https://stackoverflow.com/questions/72182929/tesseract-fine-tuning-cant-open-lstmbox
<br /> I'm trying to fine-tune tesseract 4.1.1 in my WSL shell, following [this](https://www.statworx.com/en/content-hub/blog/fine-tuning-tesseract-ocr-for-german-invoices/) tutorial. I have created a number of .tiff files and am now trying to generate the corresponding .box files using lstmbox. ``` tesseract my_picture.tiff my_picture --psm 2 eng lstmbox ``` But instead of giving me the .box files I am hoping for, tesseract just returns the following: ``` read_params_file: Cant't open lstmbox ``` After looking through some tesseract-related content I think I may just be missing a part of tesseract that wasn't installed the right way, but I am not sure and can't find which one it might be.<br /> Has somebody had the same problem, or does anyone know how to solve it?
<p>Error <code>read_params_file: Cant't open...</code> means that tesseract can not find defined configuration file (<code>lstmbox</code>). If your installation process did not installed it you can download it (and others) from <a href="https://github.com/tesseract-ocr/tesseract/tree/main/tessdata/configs" rel="nofollow noreferrer">https://github.com/tesseract-ocr/tesseract/tree/main/tessdata/configs</a> and install it manually to your installation structure.</p>
1,153
fine-tuning
MXnet fine-tune save model
https://stackoverflow.com/questions/44473612/mxnet-fine-tune-save-model
<p>I'm using mxnet's fine-tune example to fine-tune my own data with this code: </p> <p><a href="https://github.com/dmlc/mxnet/blob/master/example/image-classification/fine-tune.py" rel="nofollow noreferrer">https://github.com/dmlc/mxnet/blob/master/example/image-classification/fine-tune.py</a></p> <p>Looking through common/fit.py, I can't see how to save intermediate models while fine-tuning.</p> <p>For example, I want to save .params files every 5000 iterations. How can I do it? Thanks!</p>
<p><a href="http://mxnet.io/api/python/callback.html" rel="nofollow noreferrer">http://mxnet.io/api/python/callback.html</a></p> <p>Try to use the mx.callback API.</p> <pre><code>module.fit(iterator, num_epoch=n_epoch, ... epoch_end_callback = mx.callback.do_checkpoint("mymodel", 1)) Start training with [cpu(0)] Epoch[0] Resetting Data Iterator Epoch[0] Time cost=0.100 Saved checkpoint to "mymodel-0001.params" Epoch[1] Resetting Data Iterator Epoch[1] Time cost=0.060 Saved checkpoint to "mymodel-0002.params" </code></pre>
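The callback mechanics behind this can be sketched in plain Python (hypothetical names, not the MXNet API): the training loop invokes a callback at regular points, and the callback decides when to save. If you need finer granularity than once per epoch (e.g. every 5000 iterations), the same pattern applies per batch.

```python
def make_checkpoint_callback(period, save_fn):
    # Returns a callback that calls save_fn every `period` iterations.
    def callback(iteration):
        if iteration > 0 and iteration % period == 0:
            save_fn(iteration)
    return callback

# Stand-in for writing a .params file to disk.
saved_at = []
callback = make_checkpoint_callback(period=3, save_fn=saved_at.append)

# Stand-in for the training loop: the framework calls the callback each iteration.
for iteration in range(10):
    callback(iteration)

print(saved_at)
```

With `period=5000`, `save_fn` would fire exactly at iterations 5000, 10000, and so on.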
1,154
fine-tuning
Continual pre-training vs. Fine-tuning a language model with MLM
https://stackoverflow.com/questions/68461204/continual-pre-training-vs-fine-tuning-a-language-model-with-mlm
<p>I have some custom data I want to use to <em><strong>further pre-train</strong></em> the BERT model. I’ve tried the two following approaches so far:</p> <ol> <li>Starting with a pre-trained BERT checkpoint and continuing the pre-training with Masked Language Modeling (<code>MLM</code>) + Next Sentence Prediction (<code>NSP</code>) heads (e.g. using the <em><strong>BertForPreTraining</strong></em> model)</li> <li>Starting with a pre-trained BERT model with the <code>MLM</code> objective (e.g. using the <em><strong>BertForMaskedLM</strong></em> model, assuming we don’t need NSP for the pretraining part.)</li> </ol> <p>But I’m still unsure whether using either <em>BertForPreTraining</em> or <em>BertForMaskedLM</em> actually continues the pre-training of BERT, or whether these are just two models for fine-tuning BERT with the MLM+NSP and MLM objectives, respectively. Is there even any difference between fine-tuning BERT with MLM+NSP and continually pre-training it using these two heads, or is this something we need to test?</p> <p>I've reviewed similar questions such as <a href="https://stackoverflow.com/questions/65646925/how-to-train-bert-from-scratch-on-a-new-domain-for-both-mlm-and-nsp">this one</a> but still, I want to make sure whether technically there's a difference between continually pre-training a model from an initial checkpoint and fine-tuning it using the same objective/head.</p>
<p>The answer is a mere difference in the terminology used. When the model is trained on a large generic corpus, it is called 'pre-training'. When it is adapted to a particular task or dataset, it is called 'fine-tuning'.</p> <p>Technically speaking, in either case ('pre-training' or 'fine-tuning'), there are updates to the model weights.</p> <p>For example, usually, you can just take the pre-trained model and then fine-tune it for a specific task (such as classification, question-answering, etc.). However, if you find that the target dataset is from a specific domain, and you have some unlabeled data that might help the model adapt to that domain, then you can do MLM or MLM+NSP 'fine-tuning' (unsupervised learning; some researchers call this 'pre-training', especially when a huge corpus is used to train the model), followed by fine-tuning on the target corpus for the target task.</p>
1,155
fine-tuning
Unsupervised fine-tuning on custom documents after the supervised fine tuning on general question-answers dataset. Will it be useful for GPT-2 model?
https://stackoverflow.com/questions/76489469/unsupervised-fine-tuning-on-custom-documents-after-the-supervised-fine-tuning-on
<p>I know the formal way of training a GPT2 model on custom documents is to first do semi-supervised fine-tuning on the text of the documents, followed by supervised fine-tuning on question answers from the same documents. But since the sole purpose of supervised fine-tuning is for the model to acquire a question-answering style, is it possible to do supervised fine-tuning on a general dataset first, and after that perform unsupervised fine-tuning on our custom text dataset from the documents? This way the model could still acquire the question-answering style, with the added advantage of not needing to build a question-answer dataset for the custom documents.</p> <p>Will it give the desired results?</p>
<p>It is very difficult to say this methodology would 'work' reliably across use cases. One approach I have tried is taking a base model and <a href="https://huggingface.co/learn/nlp-course/chapter7/6" rel="nofollow noreferrer">causally</a> fine-tuning it on the documents at hand. Following this, you can take a publicly created Q&amp;A dataset like <a href="https://rajpurkar.github.io/SQuAD-explorer/" rel="nofollow noreferrer">SQuAD</a> and further fine-tune in a prompt + expected-response fashion. During this supervised stage, much research has shown that using parameter-efficient methods for the task-adaptation stage is more beneficial than training all weights (see <a href="https://arxiv.org/pdf/2106.09685.pdf" rel="nofollow noreferrer">LoRa</a>).</p> <p>Finally, I will say this: for question-answering systems, I have personally found in-context learning far more beneficial than fine-tuning and closed-book Q&amp;A, even when vector DBs and embeddings are required to search for relevant chunks of context.</p>
1,156
fine-tuning
Multiple gpu fine-tuning does not accelerate?
https://stackoverflow.com/questions/79351083/multiple-gpu-fine-tuning-does-not-accelerate
<p>I am experimenting using single or multiple GPUs for LLM fine-tuning by changing the CUDA_VISIBLE_DEVICES variable in the following cmd:</p> <pre><code>CUDA_VISIBLE_DEVICES=0,1 accelerate launch --multi_gpu finetuning_with_lora_HfArgumentParser.py \ --model_name &quot;/root/123/local_model&quot; \ --train_json_path &quot;./train.json&quot; \ --val_json_path &quot;./val.json&quot; \ --max_source_length 128 \ --max_target_length 256 \ --lora_rank 8 \ --lora_alpha 32 \ --output_dir &quot;output&quot; \ --logging_dir &quot;logs&quot; \ --num_train_epochs 10 \ --per_device_train_batch_size 1 \ --learning_rate 1e-4 \ --gradient_accumulation_steps 1024 </code></pre> <p>However, I observed that total training time does not change regardless of CUDA_VISIBLE_DEVICES=0 or CUDA_VISIBLE_DEVICES=0,1. Does anyone know what's wrong with it?</p> <p>Here is the training code:</p> <pre><code>import torch from dataclasses import dataclass, field from torch.utils.data import DataLoader from torch.utils.tensorboard import SummaryWriter from transformers import AutoModelForCausalLM, AutoTokenizer, HfArgumentParser, TrainingArguments from peft import LoraConfig, get_peft_model, TaskType from qa_dataset import QADataset from tqdm import tqdm import time, sys @dataclass class CustomTrainingArguments: model_name: str = field( default=&quot;Qwen/Qwen2-1.5B-Instruct&quot;, metadata={&quot;help&quot;: &quot;Pre-trained model for finetuning&quot;} ) train_json_path: str = field( default=&quot;./train.json&quot;, metadata={&quot;help&quot;: &quot;Path of training json data&quot;} ) val_json_path: str = field( default=&quot;./val.json&quot;, metadata={&quot;help&quot;: &quot;Path of validation json data&quot;} ) max_source_length: int = field( default=128, metadata={&quot;help&quot;: &quot;Maximum length of the input&quot;} ) max_target_length: int = field( default=256, metadata={&quot;help&quot;: &quot;Maximum length of the output&quot;} ) lora_rank: int = field( default=8, 
metadata={&quot;help&quot;: &quot;Inner dimension of the low-rank matrices to train&quot;} ) lora_alpha: int = field( default=32, metadata={&quot;help&quot;: &quot;Scaling factor for the low-rank matrices contribution&quot;} ) def train_model(model, train_loader, val_loader, optimizer, gradient_accumulation_steps, device, num_epochs, model_output_dir, writer): batch_step = 0 for epoch in range(int(num_epochs)): time1 = time.time() model.train() for index, data in enumerate(tqdm(train_loader, file=sys.stdout, desc=&quot;Train Epoch: &quot; + str(epoch))): input_ids = data['input_ids'].to(device, dtype=torch.long) attention_mask = data['attention_mask'].to(device, dtype=torch.long) labels = data['labels'].to(device, dtype=torch.long) outputs = model( input_ids=input_ids, attention_mask=attention_mask, labels=labels, ) loss = outputs.loss loss.backward() if (index % gradient_accumulation_steps == 0 and index != 0) or index == len(train_loader) - 1: optimizer.step() optimizer.zero_grad() writer.add_scalar('Loss/train', loss, batch_step) batch_step += 1 if index % 100 == 0 or index == len(train_loader) - 1: time2 = time.time() tqdm.write( f&quot;{index}, epoch: {epoch} -loss: {str(loss)} ; each step's time spent: {(str(float(time2 - time1) / float(index + 0.0001)))}&quot;) model.eval() val_loss = validate_model(model, val_loader, device) writer.add_scalar('Loss/val', val_loss, epoch) print(f&quot;val loss: {val_loss} , epoch: {epoch}&quot;) print(&quot;Save Model To &quot;, model_output_dir) model.save_pretrained(model_output_dir) def validate_model(model, val_loader, device): running_loss = 0.0 with torch.no_grad(): for _, data in enumerate(tqdm(val_loader, file=sys.stdout, desc=&quot;Validation Data&quot;)): input_ids = data['input_ids'].to(device, dtype=torch.long) attention_mask = data['attention_mask'].to(device, dtype=torch.long) labels = data['labels'].to(device, dtype=torch.long) outputs = model( input_ids=input_ids, attention_mask=attention_mask, labels=labels, 
) loss = outputs.loss running_loss += loss.item() return running_loss / len(val_loader) def main(): parser = HfArgumentParser((TrainingArguments, CustomTrainingArguments)) training_args, custom_args = parser.parse_args_into_dataclasses() # model_name = &quot;Qwen/Qwen2-1.5B-Instruct&quot; model_name = custom_args.model_name # train_json_path = &quot;./train.json&quot; train_json_path = custom_args.train_json_path # val_json_path = &quot;./val.json&quot; val_json_path = custom_args.val_json_path # max_source_length = 128 max_source_length = custom_args.max_source_length # max_target_length = 256 max_target_length = custom_args.max_target_length # epochs = 10 epochs = training_args.num_train_epochs # batch_size = 1 batch_size = training_args.per_device_train_batch_size # lr = 1e-4 lr = training_args.learning_rate # gradient_accumulation_steps = 16 gradient_accumulation_steps = training_args.gradient_accumulation_steps # lora_rank = 8 lora_rank = custom_args.lora_rank # lora_alpha = 32 lora_alpha = custom_args.lora_alpha # model_output_dir = &quot;output&quot; model_output_dir = training_args.output_dir # logs_dir = &quot;logs&quot; logs_dir = training_args.logging_dir # device = torch.device(&quot;cuda:0&quot; if torch.cuda.is_available() else &quot;cpu&quot;) device = &quot;cuda:{}&quot;.format(training_args.local_rank) print('------------------------') print(training_args.local_rank) print(device) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True) # setup peft peft_config = LoraConfig( task_type=TaskType.CAUSAL_LM, target_modules=[&quot;q_proj&quot;, &quot;k_proj&quot;, &quot;v_proj&quot;, &quot;o_proj&quot;, &quot;gate_proj&quot;, &quot;up_proj&quot;, &quot;down_proj&quot;], inference_mode=False, r=lora_rank, lora_alpha=lora_alpha, lora_dropout=0.1 ) model = get_peft_model(model, peft_config) model.is_parallelizable = True model.model_parallel = True 
model.print_trainable_parameters() print(&quot;Start Load Train Data...&quot;) train_params = { &quot;batch_size&quot;: batch_size, &quot;shuffle&quot;: True, &quot;num_workers&quot;: 0, } training_set = QADataset(train_json_path, tokenizer, max_source_length, max_target_length) training_loader = DataLoader(training_set, **train_params) print(&quot;Start Load Validation Data...&quot;) val_params = { &quot;batch_size&quot;: batch_size, &quot;shuffle&quot;: False, &quot;num_workers&quot;: 0, } val_set = QADataset(val_json_path, tokenizer, max_source_length, max_target_length) val_loader = DataLoader(val_set, **val_params) writer = SummaryWriter(logs_dir) optimizer = torch.optim.AdamW(params=model.parameters(), lr=lr) model = model.to(device) print(&quot;Start Training...&quot;) train_model( model=model, train_loader=training_loader, val_loader=val_loader, optimizer=optimizer, gradient_accumulation_steps=gradient_accumulation_steps, device=device, num_epochs=epochs, model_output_dir=model_output_dir, writer=writer ) writer.close() if __name__ == '__main__': main() </code></pre>
1,157
fine-tuning
fine-tuning from an existing checkpoint: meaning of &quot;steps&quot;
https://stackoverflow.com/questions/45193033/fine-tuning-from-an-existing-checkpoint-meaning-of-steps
<p>In <a href="https://github.com/tensorflow/models/blob/master/slim/scripts/finetune_inception_v3_on_flowers.sh" rel="nofollow noreferrer">this example</a> of fine-tuning an InceptionV3 model on the Flowers training set, there are two parts which say:</p> <pre><code># Fine-tune only the new layers for 1000 steps. </code></pre> <p>after which an evaluation is run.</p> <p>Then, </p> <pre><code># Fine-tune all the new layers for 500 steps. </code></pre> <p>after which a second evaluation is run.</p> <p>What does this mean in context of fine-tuning? I'm not sure what this concept of "steps" means or why they need to do evaluation twice.</p>
<p>A step means one gradient descent update on a minibatch of examples. The example tunes for 1000 steps and then for 500 further steps to show how the performance improves after fine-tuning, which is why evaluation is run after each phase.</p>
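As a toy illustration of the relationship between steps, minibatches, and epochs (plain Python, not the Inception/Slim training code):

```python
# Fit w in y = w * x by minibatch gradient descent; each parameter
# update computed on one minibatch is a single "step".
data = [(x, 2.0 * x) for x in range(1, 9)]   # 8 samples, true w = 2
w = 0.0
lr = 0.01
batch_size = 4
steps = 0

for epoch in range(50):
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        # Gradient of the mean squared error w.r.t. w on this minibatch.
        grad = sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)
        w -= lr * grad
        steps += 1            # one minibatch update == one "step"

print(steps)          # 2 steps per epoch * 50 epochs = 100 steps
print(round(w, 4))    # converges toward the true value 2.0
```

"Fine-tune for 1000 steps" therefore means 1000 such minibatch updates, regardless of how many epochs that corresponds to for a given dataset size.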
1,158
fine-tuning
Imbalanced fine-tuning of inception v3 network (tensorflow)
https://stackoverflow.com/questions/41348902/imbalanced-fine-tuning-of-inception-v3-network-tensorflow
<p>I am fine-tuning TensorFlow's Inception v3 network, pretrained on the ImageNet challenge. In my classification problem I am using 2 classes because I want to understand whether an image contains food or not. My dataset is imbalanced because I have many more images in the non-food class (which makes sense, I think). When I apply fine-tuning, I get high accuracy on the non-food class and very low accuracy on the food class. Then I cut the last 2 inception modules and insert 2 layers: a new convolutional layer and a global average pooling layer (followed by a softmax for output). I again apply a short fine-tuning on the newly introduced layers, and now the network gives me high accuracy on the food class and very low accuracy on the non-food class. </p> <p>Any ideas of what might be happening?</p>
1,159
fine-tuning
Fine-tuning an open-source LLM for a new language?
https://stackoverflow.com/questions/76561780/fine-tuning-an-open-source-llm-for-a-new-language
<p>What are the most suitable open-source LLMs and frameworks for fine-tuning? I intend to use this model in a quite specific domain, perhaps as a physics mentor for a school. How long might it take (with a 3070 Ti 11Gb) to achieve acceptable accuracy for this purpose? I assume that the process of fine-tuning on a new language is the same as fine-tuning on any other data, or is it not?</p> <p>I couldn't find any open-source LLMs that support the language I need, or are even partially trained on it, which would've made fine-tuning less complex. There have been LLMs that support languages from the same family, but I believe this is more likely to cause issues and confusion, since it'll be harder for the model to distinguish between the languages.</p>
1,160
fine-tuning
Fine-tuning PyTorch: No deepcopy
https://stackoverflow.com/questions/68636072/fine-tuning-pytorch-no-deepcopy
<p>Regarding fine-tuning CNNs in PyTorch, as per <a href="https://pytorch.org/tutorials/beginner/saving_loading_models.html" rel="nofollow noreferrer">SAVING AND LOADING MODELS</a>:</p> <blockquote> <p>If you only plan to keep the best performing model (according to the acquired validation loss), … You must serialize best_model_state or use best_model_state = deepcopy(model.state_dict()) otherwise your best best_model_state will keep getting updated by the subsequent training iterations. As a result, the final model state will be the state of the overfitted model.</p> </blockquote> <p>However, I have done something like this:</p> <pre><code>def train_model(model, ...): ... if validation_loss improves: delete previous best model torch.save(model.state_dict(), best_model_path) else: .... ... return model def test_model(model, best_model_path, ...): model.load_state_dict(torch.load(best_model_path)) model.eval() ... ... my_model = train_model(my_model, ...) test_model(my_model, my_path, ...) </code></pre> <p>In other words, the model returned by the training phase is the final one, which is likely to be overfitted (I did not use deepcopy). But since I saved the best model during training, I have no problem during the test/inference phase, because I load the best model's weights, overriding the final state obtained during training.</p> <p>Is something wrong with this solution?</p> <p>Thank you.</p>
<p>You’re still following the tutorial’s instructions. Note this part of the tutorial:</p> <blockquote> <p>You must serialize <code>best_model_state</code> or use <code>best_model_state = deepcopy(model.state_dict())</code></p> </blockquote> <p>You serialized the best model’s state (wrote it to disk), so you don’t need to use <code>deepcopy</code>.</p> <p>If you kept the model in memory, you’d use <code>deepcopy</code> to make sure it’s not altered during training. But because you’re keeping it on disk instead, it won’t be altered.</p>
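The difference between the three options can be sketched with a plain dict standing in for `state_dict()` (stdlib only, no PyTorch; the values are hypothetical):

```python
import copy
import pickle

state = {"w": 1.0}                    # stands in for model.state_dict()

alias = state                         # plain reference: NOT safe
snapshot = copy.deepcopy(state)       # tutorial option 1: deep copy in memory
serialized = pickle.dumps(state)      # tutorial option 2: serialize it
                                      # (torch.save writes to disk instead)

state["w"] = 999.0                    # "subsequent training iterations" mutate the state

print(alias["w"])                     # followed the mutation
print(snapshot["w"])                  # still the best value
print(pickle.loads(serialized)["w"])  # still the best value
```

Saving to disk with `torch.save` and reloading later, as in the question, corresponds to the serialized copy: bytes already on disk cannot be altered by further training.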
1,161
fine-tuning
Python: fine tuning several fits functions
https://stackoverflow.com/questions/15551956/python-fine-tuning-several-fits-functions
<p>I need a hand to fine tune the plot resulting from my code. the code is really rude, but basically it serves to fit some data, that have two peaks in the ditribution of counts versus time. The rising part of each peak is fitted with a gaussian, and the decaying part with an exponential. I need to fine tuning the fits, cause, as it is clear from the plot, the data are fitted but not in the best way. I need to avoid the discontinuites between the different functions (so the functions have to "touch" each other), and I would like to obtain fits that really follow the data and behave according to their definition (i.e., the first gaussian has not "bell" shape at the peak, and the second gaussian stops "too soon"). The code take the data from the web, so it is directly executable. Hopefully, the code and the image will be clearer than my words. Many thanks in advance.</p> <pre><code>#!/usr/bin/env python import pyfits, os, re, glob, sys from scipy.optimize import leastsq from numpy import * from pylab import * from scipy import * # ---------------- Functions ---------------------------# def right_exp(p, x, y, err1): yfit1 = p[0]*exp(-p[2]*(x - p[1])) dev_exp = (y - yfit1)/err1 return dev_exp def left_gauss(p, x, y, err2): yfit2 = p[0]*(1/sqrt(2*pi*(p[2]**2)))*exp(-(x - p[1])**2/(2*p[2]**2)) dev_gauss = (y - yfit2)/err2 return dev_gauss # ------------------------------------------------------ # tmin = 56200 tmax = 56249 data=pyfits.open('http://heasarc.gsfc.nasa.gov/docs/swift/results/transients/weak/GX304-1.orbit.lc.fits') time = data[1].data.field(0)/86400. 
+ data[1].header['MJDREFF'] + data[1].header['MJDREFI'] rate = data[1].data.field(1) error = data[1].data.field(2) data.close() cond1 = ((time &gt; 56200) &amp; (time &lt; 56209)) #| ((time &gt; 56225) &amp; (time &lt; 56234)) time1 = time[cond1] rate1 = rate[cond1] error1 = error[cond1] cond2 = ((time &gt; 56209) &amp; (time &lt; 56225)) #| ((time &gt; 56234) &amp; (time &lt; 56249)) time2 = time[cond2] rate2 = rate[cond2] error2 = error[cond2] cond3 = ((time &gt; 56225) &amp; (time &lt; 56234)) time3 = time[cond3] rate3 = rate[cond3] error3 = error[cond3] cond4 = ((time &gt; 56234) &amp; (time &lt; 56249)) time4 = time[cond4] rate4 = rate[cond4] error4 = error[cond4] totaltime = np.append(time1, time2) totalrate = np.append(rate1, rate2) v0= [0.23, 56209.0, 1] #inital guesses for Gaussian Fit, just do it around the peaks v1= [0.40, 56233.0, 1] # ------------------------ First peak -------------------------------------------------------------------# out = leastsq(left_gauss, v0[:], args=(time1, rate1, error1), maxfev = 100000, full_output = 1) p = out[0] v = out[0] xxx = arange(min(time1), max(time1), time1[1] - time1[0]) yfit1 = p[0]*(1/sqrt(2*pi*(p[2]**2)))*exp(-(xxx - p[1])**2/(2*p[2]**2)) out2 = leastsq(right_exp, v0[:], args = (time2, rate2, error2), maxfev = 100000, full_output = 1) p2 = out2[0] v2 = out2[0] xxx2 = arange(min(time2), max(time2), time2[1] - time2[0]) yfit2 = p2[0]*exp(-p2[2]*(xxx2 - p2[1])) # ------------------------ Second peak -------------------------------------------------------------------# out3 = leastsq(left_gauss, v1[:], args=(time3, rate3, error3), maxfev = 100000, full_output = 1) p3 = out3[0] v3 = out3[0] xxx3 = arange(min(time3), max(time3), time3[1] - time3[0]) yfit3 = p3[0]*(1/sqrt(2*pi*(p3[2]**2)))*exp(-(xxx3 - p3[1])**2/(2*p3[2]**2)) out4 = leastsq(right_exp, v1[:], args = (time4, rate4, error4), maxfev = 100000, full_output = 1) p4 = out4[0] v4 = out4[0] xxx4 = arange(min(time4), max(time4), time4[1] - time4[0]) yfit4 = 
p4[0]*exp(-p4[2]*(xxx4 - p4[1])) # ------------------------------------------------------------------------------------------------------- # fig = figure(figsize = (9, 9)) #make a plot ax1 = fig.add_subplot(111) ax1.plot(time, rate, 'g.') ax1.plot(xxx, yfit1, 'b-') ax1.plot(xxx2, yfit2, 'b-') ax1.plot(xxx3, yfit3, 'b-') ax1.plot(xxx4, yfit4, 'b-') axis([tmin, tmax, -0.00, 0.45]) savefig("first peak.png") </code></pre> <p><img src="https://i.sstatic.net/Y2ypj.png" alt="first peak.png"></p>
<p>Using a trigonometric series handles this problem very well and produces a continuous function. The example below works if pasted after your code. You can change the number of terms in the trigonometric series if you need.</p> <p><img src="https://i.sstatic.net/dJ38m.png" alt="enter image description here"></p> <pre><code>import numpy as np from scipy.optimize import curve_fit x = np.concatenate((time1, time2, time3, time4)) y_points = np.concatenate((rate1, rate2, rate3, rate4)) den = x.max() - x.min() def func(x, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15): return a1 *sin( 1*pi*x/den)+\ a2 *sin( 2*pi*x/den)+\ a3 *sin( 3*pi*x/den)+\ a4 *sin( 4*pi*x/den)+\ a5 *sin( 5*pi*x/den)+\ a6 *sin( 6*pi*x/den)+\ a7 *sin( 7*pi*x/den)+\ a8 *sin( 8*pi*x/den)+\ a9 *sin( 9*pi*x/den)+\ a10*sin(10*pi*x/den)+\ a11*sin(11*pi*x/den)+\ a12*sin(12*pi*x/den)+\ a13*sin(13*pi*x/den)+\ a14*sin(14*pi*x/den)+\ a15*sin(15*pi*x/den) popt, pcov = curve_fit(func, x, y_points) y = func(x, *popt) plot(x,y, color='r', linewidth=2.) show() </code></pre> <hr> <h2>EDIT</h2> <p>As suggested by @Alfe, this fitting function could be written in a compact format like:</p> <pre><code>def func(x, a): return sum(a_i * sin(i * pi * x / den) for i, a_i in enumerate(a, 1)) </code></pre>
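The compact form in the edit can be sanity-checked in plain Python (hypothetical coefficients; `math.sin` stands in for numpy's `sin`):

```python
import math

def trig_series(x, coeffs, den):
    # sum_i a_i * sin(i * pi * x / den), with i starting at 1 --
    # the compact version of the 15-term fitting function.
    return sum(a_i * math.sin(i * math.pi * x / den)
               for i, a_i in enumerate(coeffs, 1))

den = 1.0
coeffs = [2.0, 0.5]   # hypothetical a1, a2

# Analytically: a1*sin(pi/2) + a2*sin(pi) = 2.0 + 0.0
val = trig_series(0.5, coeffs, den)
print(round(val, 6))

# Every term vanishes at x = 0, so the series is 0 there.
print(trig_series(0.0, coeffs, den))
```

Being a finite sum of sines, the fitted curve is smooth everywhere, which is what removes the discontinuities between the piecewise Gaussian/exponential fits.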
1,162
fine-tuning
BERT always predicts same class (Fine-Tuning)
https://stackoverflow.com/questions/64675655/bert-always-predicts-same-class-fine-tuning
<p>I am fine-tuning BERT on a financial news dataset. Unfortunately BERT seems to be trapped in a local minimum. It is content with learning to always predict the same class.</p> <ul> <li>balancing the dataset didnt work</li> <li>tuning parameters didnt work as well</li> </ul> <p>I am honestly not sure what is causing this problem. With the simpletransformers library I am getting very good results. I would really appreciate if somebody could help me. thanks a lot!</p> <p>Full code on github: <a href="https://github.com/Bene939/BERT_News_Sentiment_Classifier" rel="nofollow noreferrer">https://github.com/Bene939/BERT_News_Sentiment_Classifier</a></p> <p>Code:</p> <pre><code>from transformers import BertForSequenceClassification, AdamW, BertTokenizer, get_linear_schedule_with_warmup, Trainer, TrainingArguments import torch from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset import pandas as pd from pathlib import Path import sklearn from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, precision_recall_fscore_support import numpy as np from torch.nn import functional as F from collections import defaultdict import random #defining tokenizer, model and optimizer tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = BertForSequenceClassification.from_pretrained('bert-base-cased', num_labels=3) if torch.cuda.is_available(): print(&quot;\nUsing: &quot;, torch.cuda.get_device_name(0)) device = torch.device('cuda') else: print(&quot;\nUsing: CPU&quot;) device = torch.device('cpu') model = model.to(device) #loading dataset labeled_dataset = &quot;news_headlines_sentiment.csv&quot; labeled_dataset_file = Path(labeled_dataset) file_loaded = False while not file_loaded: if labeled_dataset_file.exists(): labeled_dataset = pd.read_csv(labeled_dataset_file) file_loaded = True print(&quot;Dataset Loaded&quot;) else: print(&quot;File not Found&quot;) print(labeled_dataset) #counting 
sentiments
negative = 0
neutral = 0
positive = 0
for idx, row in labeled_dataset.iterrows():
    if row[&quot;sentiment&quot;] == 0:
        negative += 1
    elif row[&quot;sentiment&quot;] == 1:
        neutral += 1
    else:
        positive += 1

print(&quot;Unbalanced Dataset&quot;)
print(&quot;negative: &quot;, negative)
print(&quot;neutral: &quot;, neutral)
print(&quot;positive: &quot;, positive)

#balancing dataset to 1/3 per sentiment
for idx, row in labeled_dataset.iterrows():
    if row[&quot;sentiment&quot;] == 0:
        if negative - neutral != 0:
            index_name = labeled_dataset[labeled_dataset[&quot;news&quot;] == row[&quot;news&quot;]].index
            labeled_dataset.drop(index_name, inplace=True)
            negative -= 1
    elif row[&quot;sentiment&quot;] == 2:
        if positive - neutral != 0:
            index_name = labeled_dataset[labeled_dataset[&quot;news&quot;] == row[&quot;news&quot;]].index
            labeled_dataset.drop(index_name, inplace=True)
            positive -= 1

#custom dataset class
class NewsSentimentDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

#method for tokenizing dataset list
def tokenize_headlines(headlines, labels, tokenizer):
    encodings = tokenizer.batch_encode_plus(
        headlines,
        add_special_tokens = True,
        truncation = True,
        padding = 'max_length',
        return_attention_mask = True,
        return_token_type_ids = True
    )
    dataset = NewsSentimentDataset(encodings, labels)
    return dataset

#splitting dataset into training and validation set
#load news sentiment dataset
all_headlines = labeled_dataset['news'].tolist()
all_labels = labeled_dataset['sentiment'].tolist()
train_headlines, val_headlines, train_labels, val_labels = train_test_split(all_headlines, all_labels, test_size=.2)
val_dataset = tokenize_headlines(val_headlines, val_labels, tokenizer)
train_dataset = tokenize_headlines(train_headlines, val_labels, tokenizer)

#data loader
train_batch_size = 8
val_batch_size = 8
train_data_loader = DataLoader(train_dataset, batch_size = train_batch_size, shuffle=True)
val_data_loader = DataLoader(val_dataset, batch_size = val_batch_size, sampler=SequentialSampler(val_dataset))

#optimizer and scheduler
num_epochs = 1
num_steps = len(train_data_loader) * num_epochs
optimizer = AdamW(model.parameters(), lr=5e-5, eps=1e-8)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_steps*0.06, num_training_steps=num_steps)

#training and evaluation
seed_val = 64
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

for epoch in range(num_epochs):
    print(&quot;\n###################################################&quot;)
    print(&quot;Epoch: {}/{}&quot;.format(epoch+1, num_epochs))
    print(&quot;###################################################\n&quot;)

    #training phase
    average_train_loss = 0
    average_train_acc = 0
    model.train()
    for step, batch in enumerate(train_data_loader):
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)
        token_type_ids = batch['token_type_ids'].to(device)

        outputs = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids = token_type_ids)
        loss = F.cross_entropy(outputs[0], labels)
        average_train_loss += loss

        if step % 40 == 0:
            print(&quot;Training Loss: &quot;, loss)

        logits = outputs[0].detach().cpu().numpy()
        label_ids = labels.to('cpu').numpy()
        average_train_acc += sklearn.metrics.accuracy_score(label_ids, np.argmax(logits, axis=1))
        print(&quot;predictions: &quot;, np.argmax(logits, axis=1))
        print(&quot;labels: &quot;, label_ids)
        print(&quot;#############&quot;)

        optimizer.zero_grad()
        loss.backward()
        #maximum gradient clipping
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        scheduler.step()
        model.zero_grad()

    average_train_loss = average_train_loss / len(train_data_loader)
    average_train_acc = average_train_acc / len(train_data_loader)
    print(&quot;======Average Training Loss: {:.5f}======&quot;.format(average_train_loss))
    print(&quot;======Average Training Accuracy: {:.2f}%======&quot;.format(average_train_acc*100))

    #validation phase
    average_val_loss = 0
    average_val_acc = 0
    model.eval()
    for step, batch in enumerate(val_data_loader):
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)
        token_type_ids = batch['token_type_ids'].to(device)
        pred = []
        with torch.no_grad():
            outputs = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
            loss = F.cross_entropy(outputs[0], labels)
            average_val_loss += loss

        logits = outputs[0].detach().cpu().numpy()
        label_ids = labels.to('cpu').numpy()
        print(&quot;predictions: &quot;, np.argmax(logits, axis=1))
        print(&quot;labels: &quot;, label_ids)
        print(&quot;#############&quot;)
        average_val_acc += sklearn.metrics.accuracy_score(label_ids, np.argmax(logits, axis=1))

    average_val_loss = average_val_loss / len(val_data_loader)
    average_val_acc = average_val_acc / len(val_data_loader)
    print(&quot;======Average Validation Loss: {:.5f}======&quot;.format(average_val_loss))
    print(&quot;======Average Validation Accuracy: {:.2f}%======&quot;.format(average_val_acc*100))

###################################################
Epoch: 1/1
###################################################

Training Loss:  tensor(1.1006, device='cuda:0', grad_fn=&lt;NllLossBackward&gt;)
predictions: [1 0 2 0 0 0 2 0]
labels: [2 0 1 1 0 1 0 1]
#############
predictions: [2 2 0 0 0 2 0 0]
labels: [1 2 1 0 2 0 1 2]
#############
predictions: [0 0 0 0 1 0 0 1]
labels: [0 1 1 0 1 1 2 0]
#############
predictions: [0 0 0 2 0 1 0 0]
labels: [0 0 0 2 0 0 2 1]
#############
predictions: [1 0 0 0 0 0 2 0]
labels: [0 2 2 1 0 0 0 0]
#############
predictions: [0 0 0 0 0 1 0 0]
labels: [1 0 2 2 2 1 1 1]
#############
[... the per-batch predictions/labels printout continues in the same pattern for the remaining training batches; the later &quot;Training Loss&quot; checkpoints were 1.1104, 1.1162 and 1.2082 ...]

======Average Training Loss: 1.11279======
======Average Training Accuracy: 33.77%======

predictions: [0 0 0 0 0 0 0 0]
labels: [0 2 0 1 1 0 1 0]
#############
[... during validation the model predicts class 0 for virtually every example; the all-zero predictions block above repeats for every remaining validation batch ...]

======Average Validation Loss: 1.09527======
======Average Validation Accuracy: 35.53%======
</code></pre>
<p>For multi-class classification/sentiment analysis using BERT, the 'neutral' class HAS TO BE 2, i.e. 'negative' = 0, 'positive' = 1, 'neutral' = 2. It CANNOT be the middle label, sitting between 'negative' = 0 and 'positive' = 2.</p>
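<p>Concretely — assuming the dataset in the question encodes negative = 0, neutral = 1, positive = 2 — the relabeling this answer describes is a one-line remap applied before tokenization (the dict below is an illustrative sketch, not part of the original code):</p>

```python
# Assumed original encoding in the question: negative = 0, neutral = 1, positive = 2.
# Target encoding per this answer:           negative = 0, positive = 1, neutral = 2.
REMAP = {0: 0, 1: 2, 2: 1}

def remap_labels(labels):
    """Move the neutral class to the last index before tokenizing/training."""
    return [REMAP[y] for y in labels]

print(remap_labels([0, 1, 2, 1]))  # -> [0, 2, 1, 2]
```

<p>Apply it to both the training and validation label lists so the model and the evaluation use the same encoding.</p>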
1,163
fine-tuning
Fine-Tuning EfficientNetB0 with pretrained &#39;imagenet&#39; is not reproducible
https://stackoverflow.com/questions/78922573/fine-tuning-efficientnetb0-with-pretrained-imagenet-is-not-reproducible
<p>I am encountering an issue with fine-tuning an EfficientNetB0 model that was originally pretrained on ImageNet.</p> <ul> <li><p>Model Training and Fine-Tuning: I start with an EfficientNetB0 model pretrained on ImageNet and fine-tune it on my specific dataset.</p> </li> <li><p>Saving the Model: After fine-tuning, I save the model using model.save() with the .keras format.</p> </li> <li><p>Loading the Model: When I later load the model using load_model(), the performance of the model does not match the performance achieved during the fine-tuning phase. The results appear to be inconsistent or random.</p> </li> </ul> <p>I am initializing random states through a seed for reproducibility.</p> <pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D

# Set seed for reproducibility
tf.random.set_seed(42)
np.random.seed(42)

# Define and compile the model
base_model = EfficientNetB0(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)  # Adjust number of classes

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train and fine-tune the model
# model.fit(x_train, y_train, epochs=10)

# Save the fine-tuned model
model.save('fine_tuned_model.keras')

# Load the model later
loaded_model = tf.keras.models.load_model('fine_tuned_model.keras')

# Evaluate performance
results = loaded_model.evaluate(x_test, y_test)
</code></pre> <p>I have observed that when saving the model weights in HDF5 format (.h5) and subsequently loading them within the same session, the validation performance is consistently reproduced. However, when the .h5 model weights are loaded in a different session, the validation performance does not match the original results, and returns random accuracies.</p> <p>Additionally, when using EfficientNetB0 with pretrained weights set to 'None', the model's performance remains consistent regardless of the session, and it can reproduce its performance.</p> <p>Other models such as ResNet50 and VGG16 run as expected; only the EfficientNetBx models have this issue.</p> <p>TensorFlow 2.16.1; similar behavior is observed in TensorFlow 2.17.0.</p>
1,164
fine-tuning
Fine tuning custom keras model
https://stackoverflow.com/questions/57702247/fine-tuning-custom-keras-model
<p>I have a keras model which is trained on 5 classes. The final layers of the model look like so:</p> <pre><code>dr_steps = Dropout(0.25)(Dense(128, activation = 'relu')(gap_dr))
out_layer = Dense(5, activation = 'softmax')(dr_steps)
model = Model(inputs = [in_lay], outputs = [out_layer])
</code></pre> <p>What I want to do is fine-tune this model on an 8 class multilabel problem, but I am not sure how to achieve this. This is what I have tried:</p> <pre><code>dr_steps = Dropout(0.25)(Dense(128, activation = 'relu')(gap_dr))
out_layer = Dense(t_y.shape[-1], activation = 'softmax')(dr_steps)
model = Model(inputs = [in_lay], outputs = [out_layer])
weights_path = 'weights.best.hdf5'
retina_model.load_weights(weights_path)
model.layers.pop()
output = Dense(8, activation = 'sigmoid')(model.layers[-1].output)
model = Model(inputs = [in_lay], outputs = [output])
loss = 'binary_crossentropy'
model.compile(optimizer = RAdam(), loss = FocalLoss, metrics = [&quot;binary_accuracy&quot;, precision, recall, auc])
</code></pre> <p>but this will raise an error like this:</p> <pre><code>raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 8 and 5. Shapes are [128,8] and [128,5].
    for 'Assign_390' (op: 'Assign') with input shapes: [128,8], [128,5].
</code></pre> <p>Any suggestions on how to fine-tune this model will be very helpful. Thanks in advance.</p>
<p>Here,</p> <pre class="lang-py prettyprint-override"><code>model = Model(inputs = [in_lay], outputs = [out_layer])
weights_path = 'weights.best.hdf5'
</code></pre> <p>this <strong>out_layer</strong> should have the same dimension (5 classes) described inside <strong>weights.best.hdf5</strong>.</p> <p>So, <code>t_y.shape[-1]</code> should be <code>5</code> dimensional, <em>not 8</em>.</p>
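<p>To see why the error appears: the checkpoint stores a (128, 5) kernel for the output layer, while the rebuilt model expects (128, 8), so the assignment fails. A framework-free sketch of that rule (layer names and shapes here are illustrative assumptions, not Keras API):</p>

```python
# Illustrative sketch (not Keras API): a weight file stores one array shape per
# layer, and weights can only be assigned when every stored shape matches the model.
def can_load(saved_shapes, model_shapes):
    return all(model_shapes.get(name) == shape
               for name, shape in saved_shapes.items())

saved = {"dense": (128, 128), "head": (128, 5)}          # what weights.best.hdf5 holds
eight_class_head = {"dense": (128, 128), "head": (128, 8)}
five_class_head = {"dense": (128, 128), "head": (128, 5)}

print(can_load(saved, eight_class_head))  # False -> the ValueError in the question
print(can_load(saved, five_class_head))   # True  -> load first, then swap the head
```

<p>Hence the order in this answer: rebuild the original 5-class head, load the weights, and only then replace the head with the new 8-class output.</p>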
1,165
fine-tuning
Fine tuning PIG for local execution
https://stackoverflow.com/questions/4460727/fine-tuning-pig-for-local-execution
<p>I'm using Pig Latin for log processing because of its expressiveness, in a problem where the data is not big enough to justify setting up a whole Hadoop cluster. I'm running Pig in local mode, but I think it isn't using all the cores it has available (16 at the moment): monitoring the CPU shows at most 200% CPU usage.</p> <p>Are there any tutorials or recommendations for fine-tuning Pig for local execution? I'm sure that all the mappers could use all the available cores with some easy tweaking. (In my script I have already set the default_parallel parameter to 20.)</p> <p>Best regards.</p>
<p><a href="http://wiki.apache.org/pig/PigExecutionModel">Pig's documentation</a> makes it clear that local operation is intended to be run single-threaded, taking different code paths for certain functions that would otherwise use distributed sort. As a result, optimizing for Pig's local mode seems like the wrong solution to the presented problem. </p> <p>Have you considered running a local, "pseudo-distributed" cluster instead of investing in full cluster setup? You can follow <a href="http://hadoop.apache.org/common/docs/r0.20.2/quickstart.html#PseudoDistributed">Hadoop's instructions for pseudo-distributed operation,</a> then point Pig at <code>localhost</code>. This would have the desired result, at the expense of two-step startup and teardown. </p> <p>You'll want to raise the number of default mappers and reducers to consume all cores available on your machine. Fortunately, this is reasonably well-documented (admittedly, in the <a href="http://hadoop.apache.org/common/docs/current/cluster_setup.html">cluster setup documentation</a>); simply define <code>mapred.tasktracker.map.tasks.maximum</code> and <code>mapred.tasktracker.reduce.tasks.maximum</code> in your local copy of <code>$HADOOP_HOME/conf/mapred-site.xml</code>.</p>
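<p>As a sketch, the two properties named above would go into <code>$HADOOP_HOME/conf/mapred-site.xml</code> roughly like this (the counts are illustrative values for a 16-core machine, not tested settings):</p>

```xml
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>16</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>8</value>
  </property>
</configuration>
```

<p>After restarting the pseudo-distributed daemons, the tasktracker will schedule up to that many map and reduce tasks concurrently on the one machine.</p>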
1,166
fine-tuning
Fine-tuning SSD Lite in torchvision
https://stackoverflow.com/questions/71094251/fine-tuning-ssd-lite-in-torchvision
<p>I want to fine-tune an object detector in PyTorch. For that, I was using this tutorial:</p> <p><a href="https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html</a></p> <p>However, the FastRCNN model is not suitable for my use case, so instead I fine-tuned SSDLite. I wrote this code to set a new classification head:</p> <pre><code>from functools import partial

import torchvision
from torch import nn
from torchvision.models.detection import _utils as det_utils
from torchvision.models.detection.ssdlite import SSDLiteClassificationHead

model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)

in_channels = det_utils.retrieve_out_channels(model.backbone, (320, 320))
num_anchors = model.anchor_generator.num_anchors_per_location()
norm_layer = partial(nn.BatchNorm2d, eps=0.001, momentum=0.03)

num_classes = 2
model.head.classification_head = SSDLiteClassificationHead(in_channels, num_anchors, num_classes, norm_layer)
</code></pre> <p>Since my model is not performing well, I want to ask the community if the above code is correct.</p> <p>Thanks in advance.</p>
<p>if your goal is to create a model with a custom num_classes, then you could just:</p> <ol> <li>Set the new custom class in the initialization of torchvision.</li> <li>Load the default pretrained model explicitly.</li> <li>Match the shapes, and discard the weights with different shapes.</li> <li>Load the adjusted pretrained weights into the model, and you can do the retraining process.</li> </ol> <p>As the following:</p> <pre><code>num_classes = 2

# Step 1.
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=False, num_classes=num_classes)

# Step 2, load the model's state_dict and the default pretrained state_dict.
# On Windows you can find the cached checkpoint under C:\Users\user\.cache\torch\hub\checkpoints
mstate_dict = model.state_dict()
cstate_dict = torch.load(default_pretrained_model_path)

# Step 3, drop every pretrained weight whose shape no longer matches the model.
for k in mstate_dict.keys():
    if mstate_dict[k].shape != cstate_dict[k].shape:
        print('key {} will be removed, orishape: {}, training shape: {}'.format(k, cstate_dict[k].shape, mstate_dict[k].shape))
        cstate_dict.pop(k)

# Step 4.
model.load_state_dict(cstate_dict, strict=False)
</code></pre> <p>Hope it helps, cheers~</p>
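<p>The heart of steps 2–3 above is a shape filter over the checkpoint dict. Reduced to plain Python (tuples standing in for tensor shapes; all layer names and sizes below are illustrative, not real torchvision keys), it is just:</p>

```python
# Framework-free sketch of the shape filter: keep only checkpoint entries
# whose shape matches the freshly built model's state_dict.
def filter_matching(model_shapes, checkpoint_shapes):
    return {k: v for k, v in checkpoint_shapes.items()
            if model_shapes.get(k) == v}

model_shapes = {"backbone.conv": (32, 3, 3, 3), "head.cls": (2, 546)}        # 2-class head
checkpoint_shapes = {"backbone.conv": (32, 3, 3, 3), "head.cls": (91, 546)}  # pretrained head

kept = filter_matching(model_shapes, checkpoint_shapes)
print(sorted(kept))  # ['backbone.conv'] -- the mismatched head is dropped
```

<p>Loading the surviving entries with <code>strict=False</code> then leaves the new head randomly initialized while the backbone keeps its pretrained weights.</p>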
1,167
fine-tuning
How to save a fine-tuned LLM model?
https://stackoverflow.com/questions/78631565/how-to-save-fine-tuning-llm-model
<p>I am doing fine-tuning of a model and then I want to save this model to convert it to the GGUF format to use in Ollama. However, when converting to the GGUF format, I get the following error: ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax'.</p> <p>I'm not sure if I am saving it correctly. Can anyone help?</p> <pre><code>import transformers from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig import torch from langchain.llms import HuggingFacePipeline from langchain_community.document_loaders.csv_loader import CSVLoader from langchain.text_splitter import CharacterTextSplitter from langchain.embeddings import HuggingFaceEmbeddings from langchain.vectorstores import FAISS from langchain.chains import ConversationalRetrievalChain import csv import sys from huggingface_hub import snapshot_download csv.field_size_limit(sys.maxsize) device = 'cuda' if torch.cuda.is_available() else 'cpu' print(&quot;Device:&quot;, device) if device == 'cuda': print(torch.cuda.get_device_name(0)) origin_model_path = &quot;IlyaGusev/saiga_llama3_8b&quot; model_path = &quot;IlyaGusev/saiga_llama3_8b&quot; bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type=&quot;nf4&quot;, bnb_4bit_compute_dtype=torch.bfloat16, ) model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, quantization_config=bnb_config, device_map=&quot;auto&quot;) tokenizer = AutoTokenizer.from_pretrained(origin_model_path) text_generation_pipeline = transformers.pipeline( model=model, tokenizer=tokenizer, task=&quot;text-generation&quot;, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, repetition_penalty=1.1, return_full_text=False, max_new_tokens=2000, temperature=0.5, do_sample=True, ) mistral_llm = HuggingFacePipeline(pipeline=text_generation_pipeline) loader = CSVLoader(file_path='data/data_issues.csv') data = loader.load() print(data[:3]) text_splitter = 
CharacterTextSplitter(chunk_size=1000, chunk_overlap=200) chunked_docs = text_splitter.split_documents(data) embeddings_model = HuggingFaceEmbeddings(model_name='sentence-transformers/all-mpnet-base-v2') db = FAISS.from_documents(chunked_docs, embeddings_model) retriever = db.as_retriever(search_type=&quot;similarity&quot;, search_kwargs={'k': 16}) qa_chain = ConversationalRetrievalChain.from_llm(mistral_llm, retriever, return_source_documents=True) query = &quot;Example answer&quot; chat_history = [] response = qa_chain({&quot;question&quot;: query, &quot;chat_history&quot;: chat_history}) fine_tuned_model_path = &quot;./fine_tuned_model&quot; model.save_pretrained(fine_tuned_model_path) tokenizer.save_pretrained(fine_tuned_model_path) </code></pre>
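The `model.layers.0.mlp.down_proj.weight.absmax` tensor named in the error is quantization state that bitsandbytes writes out when a model loaded with a 4-bit `BitsAndBytesConfig` is saved with `save_pretrained`; llama.cpp's GGUF converter only knows the plain weight names and cannot map these extra tensors. The usual fix is to reload the saved weights without `quantization_config` (e.g. with `torch_dtype=torch.bfloat16`) and convert that full-precision checkpoint instead. As a plain-Python illustration (the key suffixes below are examples of bitsandbytes' serialized state, not an exhaustive list), this is the kind of check that shows whether a checkpoint contains such tensors:

```python
# Checkpoints saved from a bitsandbytes 4-bit model contain extra
# quantization-state tensors that llama.cpp's GGUF converter cannot map.
# The suffixes below are illustrative of bnb's serialized state.
QUANT_SUFFIXES = (".absmax", ".quant_map", ".nested_absmax")

def quantization_artifacts(state_dict_keys):
    """Return the checkpoint keys a GGUF converter would fail on."""
    return [k for k in state_dict_keys if k.endswith(QUANT_SUFFIXES)]

keys = [
    "model.layers.0.mlp.down_proj.weight",
    "model.layers.0.mlp.down_proj.weight.absmax",    # the tensor from the error
    "model.layers.0.mlp.down_proj.weight.quant_map",
    "model.layers.0.self_attn.q_proj.weight",
]
bad = quantization_artifacts(keys)
print(bad)
```

If such keys are present, re-save via `AutoModelForCausalLM.from_pretrained(fine_tuned_model_path, torch_dtype=torch.bfloat16)` (no `quantization_config`) followed by `save_pretrained`, then run the GGUF conversion on that directory.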
1,168
fine-tuning
Nginx url rewrite fine tuning
https://stackoverflow.com/questions/19272219/nginx-url-rewrite-fine-tuning
<p>I'm having trouble fine-tuning a regex for Nginx url rewrite rules. What I am trying to do is take the first two pieces of the url and convert them to variables (nothing too fancy, and should be simple). </p> <p>e.g. I type in <a href="http://www.webserver.com/piece1/piece2" rel="nofollow">http://www.webserver.com/piece1/piece2</a> and get <a href="http://www.webserver.com/rewtest.php?val1=piece1&amp;val2=piece2" rel="nofollow">http://www.webserver.com/rewtest.php?val1=piece1&amp;val2=piece2</a></p> <p>So far I have:</p> <pre><code> location / { rewrite ^/(.*)/(.*)/? /rewtest.php?val1=$1&amp;val2=$2 last; return 404; } </code></pre> <p>which does seem to work. The problem is if the user types <a href="http://www.webserver.com/piece1/piece2/" rel="nofollow">http://www.webserver.com/piece1/piece2/</a> it gives val1 as piece1/piece2 (as 1 variable, not 2).</p> <p>Also if the user were to type <a href="http://www.webserver.com/piece1/" rel="nofollow">http://www.webserver.com/piece1/</a> I currently get piece1 in var1, which is great. BUT if the user types <a href="http://www.webserver.com/piece1" rel="nofollow">http://www.webserver.com/piece1</a> it gives me an error and I'd like to get the same (var1=piece1).</p> <p>Any help greatly appreciated as I am new to regexes!</p>
<p>The <code>.</code> in your pattern also matches <code>/</code>, so use a character class that excludes it. Try:</p> <pre><code>location / { rewrite ^/([^/]+)/([^/]+)/? /rewtest.php?val1=$1&amp;val2=$2 last; return 404; } </code></pre>
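Two edge cases remain after that fix: a trailing slash after the second segment, and a single segment with no trailing slash. Making the second capture optional and anchoring the pattern, e.g. `^/([^/]+)(?:/([^/]+))?/?$`, should handle all four URL shapes (this pattern is a suggestion, not part of the original answer). Since nginx's `rewrite` uses PCRE, Python's `re` is close enough to sanity-check the captures:

```python
import re

# Candidate nginx rewrite pattern: first segment required, second optional,
# optional trailing slash. nginx uses PCRE, so Python's re is close enough
# for checking what each group captures.
pattern = re.compile(r"^/([^/]+)(?:/([^/]+))?/?$")

for path in ("/piece1", "/piece1/", "/piece1/piece2", "/piece1/piece2/"):
    m = pattern.match(path)
    print(path, "->", m.groups())
```

The corresponding nginx line would then be `rewrite ^/([^/]+)(?:/([^/]+))?/?$ /rewtest.php?val1=$1&val2=$2 last;`.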
1,169
fine-tuning
Fine tuning on MXNet other than FC layers
https://stackoverflow.com/questions/49567237/fine-tuning-on-mxnet-other-than-fc-layers
<p>I'm new to MXNet and I was wondering if anyone knows how to fine-tune more layers of a CNN than just the FC layers. All the examples I'm looking at fine-tune only the FC layers. In Keras this can easily be done, and blocks of the ConvNet other than the FC block can be fine-tuned: <a href="https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/10_Fine-Tuning.ipynb" rel="nofollow noreferrer">https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/10_Fine-Tuning.ipynb</a></p> <p><a href="https://i.sstatic.net/fO687.png" rel="nofollow noreferrer">Pre-trained network</a></p> <p>If we want to fine-tune only the FC block, we set all the other layers' trainability to false: layer.trainable = False</p> <p><a href="https://i.sstatic.net/qBS8L.png" rel="nofollow noreferrer">finetune the FC layers</a></p> <p>If we want to fine-tune more blocks of the ConvNet than just the FC layers, we set layer.trainable=True for those layers: <a href="https://i.sstatic.net/ZvnaN.png" rel="nofollow noreferrer">finetune blocks of ConvNet in Keras</a></p> <p>My question is how to do the same in MXNet.</p>
<p>Answer depends on whether you are using the imperative (Gluon) or symbolic API. </p> <h2>If you are using the imperative (Gluon) API:</h2> <p>Instead of creating <code>gluon.Trainer</code> with all parameters (<code>net.collect_params()</code>), you can provide a subset of those parameters that you want to train. Any parameter that is not present in the <code>ParameterDict</code> you pass to Trainer will remain frozen.</p> <h2>If you are using the Symbolic API:</h2> <p>You can use the <code>fixed_param_names</code> parameter while creating <code>Module</code>. You can provide a regex matching the parameter names you want to freeze. Check <a href="https://github.com/apache/incubator-mxnet/blob/b2ce81218b13c3140dc3d2fd9fa27a1dd0264b75/example/ssd/train/train_net.py#L239" rel="noreferrer">this</a> example.</p>
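For the symbolic path, `fixed_param_names` is in the end just a list of parameter names, usually built by matching a pattern against every name in the network. The selection logic can be shown in plain Python; the parameter names below are made up for a small ConvNet (in a real network they come from `sym.list_arguments()`):

```python
import re

# Hypothetical parameter names for a small ConvNet with an FC head.
param_names = [
    "conv1_weight", "conv1_bias",
    "conv2_weight", "conv2_bias",
    "conv3_weight", "conv3_bias",
    "fc1_weight", "fc1_bias",
]

# Fine-tune the last conv block and the FC layers; freeze everything else.
trainable = re.compile(r"^(conv3|fc)")
fixed_param_names = [n for n in param_names if not trainable.match(n)]
print(fixed_param_names)
```

The resulting list is what you pass as `fixed_param_names` when constructing the `Module`. In Gluon the equivalent is passing only the trainable subset of `net.collect_params()` to `gluon.Trainer`, or setting `grad_req = 'null'` on the parameters you want frozen.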
1,170
fine-tuning
Keras accuracy discrepancy in fine-tuned model
https://stackoverflow.com/questions/45399535/keras-accuracy-discrepancy-in-fine-tuned-model
<h1>Background</h1> <p>While fine-tuning a classification model in Keras, it printed <code>val_acc: 0.8456</code>. <a href="https://pythonexample.com/snippet/python/transfer_learning_keras_01py_gengho_python" rel="nofollow noreferrer">This code</a> was used for fine-tuning.</p> <p>After fine-tuning, manually loading the trained model and predicting on the validation set gave a much lower accuracy of <code>0.28</code>.</p> <p>The following code was used for validation:</p> <pre><code>model = load_model(MODEL_PATH) ... img = kimage.load_img(img_path, target_size=target_size) x = kimage.img_to_array(img) x = np.expand_dims(x, axis=0) x = vgg19.preprocess_input(x) pred = model.predict(x) </code></pre> <h1>Question</h1> <p>What might be the cause of the big discrepancy in accuracy <code>0.85 != 0.28</code>?</p>
<p>You're using different preprocessing for training and testing. Specifically,</p> <pre><code>rescale = 1./255 </code></pre> <p>is used for training, but</p> <pre><code>x = vgg19.preprocess_input(x) </code></pre> <p>is used for testing.</p> <p>What <code>imagenet_utils.preprocess_input()</code> does is subtracting the mean (computed on ImageNet, as suggested by the name):</p> <pre><code> # Zero-center by mean pixel x[:, :, :, 0] -= 103.939 x[:, :, :, 1] -= 116.779 x[:, :, :, 2] -= 123.68 </code></pre> <p>So it's fairly different from the preprocessing applied on your training data.</p>
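The size of that mismatch is easy to see with a little arithmetic: for the same raw pixel, the two schemes produce values on entirely different scales, so a network trained on one scheme gets out-of-distribution inputs under the other:

```python
# Compare the two preprocessing schemes on a mid-gray pixel (value 128).
pixel = 128.0

# Scheme used during training: scale into [0, 1].
rescaled = pixel * (1.0 / 255.0)

# Scheme used during testing: zero-center by the ImageNet channel means
# (the values subtracted by imagenet_utils.preprocess_input).
means = (103.939, 116.779, 123.68)
centered = [pixel - m for m in means]

print("rescaled:", rescaled)   # about 0.5, always within [0, 1]
print("centered:", centered)   # up to tens of units, can be negative
```

Using `rescale=1./255` for the test images too (or `vgg19.preprocess_input` in both training and testing) removes the discrepancy.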
1,171
fine-tuning
Fine-tuning model&#39;s classifier layer with new label
https://stackoverflow.com/questions/67158554/fine-tuning-models-classifier-layer-with-new-label
<p>I would like to fine-tune an already fine-tuned BertForSequenceClassification model with a new dataset containing just 1 additional label which hasn't been seen by the model before.</p> <p>By that, I would like to add 1 new label to the set of labels that the model can currently classify properly.</p> <p>Moreover, I don't want the classifier weights to be randomly initialized; I'd like to keep them intact and just update them according to the dataset examples while increasing the size of the classifier layer by 1.</p> <p>The dataset used for further fine-tuning could look like this:</p> <pre><code>sentence,label intent example 1,new_label intent example 2,new_label ... intent example 10,new_label </code></pre> <p>My model's current classifier layer looks like this:</p> <pre><code>Linear(in_features=768, out_features=135, bias=True) </code></pre> <p>How could I achieve it?<br> Is it even a good approach?</p>
<p>You can just extend the weights and bias of your model with new values. Please have a look at the commented example below:</p> <pre class="lang-py prettyprint-override"><code>#This is the section that loads your model #I will just use an pretrained model for this example import torch from torch import nn from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(&quot;jpcorb20/toxic-detector-distilroberta&quot;) model = AutoModelForSequenceClassification.from_pretrained(&quot;jpcorb20/toxic-detector-distilroberta&quot;) #we check the output of one sample to compare it later with the extended layer #to verify that we kept the previous learnt &quot;knowledge&quot; f = tokenizer.encode_plus(&quot;This is an example&quot;, return_tensors='pt') print(model(**f).logits) #Now we need to find out the name of the linear layer you want to extend #The layers on top of distilroberta are wrapped inside a classifier section #This name can differ for you because it can be chosen randomly #use model.parameters instead find the classification layer print(model.classifier) #The output shows us that the classification layer is called `out_proj` #We can now extend the weights by creating a new tensor that consists of the #old weights and a randomly initialized tensor for the new label model.classifier.out_proj.weight = nn.Parameter(torch.cat((model.classifier.out_proj.weight, torch.randn(1,768)),0)) #We do the same for the bias: model.classifier.out_proj.bias = nn.Parameter(torch.cat((model.classifier.out_proj.bias, torch.randn(1)),0)) #and be happy when we compare the output with our expectation print(model(**f).logits) </code></pre> <p>Output:</p> <pre><code>tensor([[-7.3604, -9.4899, -8.4170, -9.7688, -8.4067, -9.3895]], grad_fn=&lt;AddmmBackward&gt;) RobertaClassificationHead( (dense): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) (out_proj): Linear(in_features=768, 
out_features=6, bias=True) ) tensor([[-7.3604, -9.4899, -8.4170, -9.7688, -8.4067, -9.3895, 2.2124]], grad_fn=&lt;AddmmBackward&gt;) </code></pre> <p>Please note, that you should fine-tune your model. The new weights are randomly initialized and will therefore negatively impact the performance.</p>
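The tensor surgery above boils down to appending one row to the weight matrix and one element to the bias while leaving the original values untouched. That invariant can be checked with plain lists standing in for tensors (sizes shrunk from 768/135 for readability):

```python
import random

IN_FEATURES = 4   # 768 in the real model
OLD_LABELS = 6    # 135 in the asker's classifier

# Existing classifier weights: one row of IN_FEATURES values per label.
old_weight = [[0.1 * (r + c) for c in range(IN_FEATURES)] for r in range(OLD_LABELS)]
old_bias = [0.05 * r for r in range(OLD_LABELS)]

# Extend by one randomly initialised label, keeping the old rows intact
# (mirrors torch.cat((old, torch.randn(1, in_features)), 0)).
rng = random.Random(0)
new_weight = old_weight + [[rng.gauss(0, 1) for _ in range(IN_FEATURES)]]
new_bias = old_bias + [rng.gauss(0, 1)]

print(len(new_weight), len(new_bias))
```

The old rows are unchanged, so the model keeps its previous "knowledge", while the new randomly initialised row still needs fine-tuning on the new label's examples, as the answer notes.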
1,172
fine-tuning
Fine tuning transition delays - CSS
https://stackoverflow.com/questions/34891345/fine-tuning-transition-delays-css
<p>I'm working on a website with a left navigation bar and I need some help fine-tuning the transition delays. Here's the website: <a href="http://104.193.173.104/modx/index.php" rel="nofollow">http://104.193.173.104/modx/index.php</a></p> <p><strong>Desktop</strong></p> <ul> <li>Starts wide (240px) with social icons inline</li> <li>Click hamburger to shrink (56px wide) and social icons stack</li> </ul> <p><strong>Tablet</strong></p> <ul> <li>Starts narrow (56px) with social icons stacked</li> <li>Click hamburger to open (240px) and social icons go inline</li> </ul> <p>The problem is that when I switch from narrow to wide, the icons stack first, bump the below navigation down, and then open up to inline. The goal is to have the lower set of navigation stay in the same vertical position through the whole change. I'm assuming I need to use the transition-delay css property, but I'm not very familiar with it and struggling to get things to work.</p> <p>.mmc means Main Menu Collapse and .mme is Expand.</p> <p><strong>Code:</strong></p> <p><div class="snippet" data-lang="js" data-hide="true"> <div class="snippet-code snippet-currently-hidden"> <pre class="snippet-code-css lang-css prettyprint-override"><code>@media (max-width: 480px) { #social-links-container { padding: 60px 0; } ul.social-links { margin: 0; padding: 0; list-style-type: none; text-decoration: none; } .social-links li { display: inline; padding: 0 18px; } .social-links li a { min-width: 80px; color: #808b9c; } } /* tablet */ @media (min-width: 480px) { #social-links-container { padding: 45px 0 20px; } .mme #social-links-container { padding: 60px 0; } ul.social-links { margin: 0; padding: 0; list-style-type: none; text-decoration: none; } .social-links li { display: block; padding-left: 10px; padding-bottom: 15px; } .mme .social-links li { display: inline; padding: 0 18px; } .social-links li a { min-width: 80px; color: #808b9c; } } /* desktop */ @media (min-width: 992px) { #social-links-container { 
padding: 60px 0; } .mmc #social-links-container { padding: 45px 0 20px; } ul.social-links { margin: 0; padding: 0; list-style-type: none; text-decoration: none; } .social-links li { display: inline; padding: 0 18px; } .mmc .social-links li { display: block; padding-left: 10px; padding-bottom: 15px; } .social-links li a { min-width: 80px; color: #808b9c; } }</code></pre> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;div id="main-menu" role="navigation"&gt; &lt;div id="main-menu-inner"&gt; &lt;!-- LOGOS --&gt; &lt;div id="logo"&gt; &lt;a href="index.php"&gt; &lt;img src="/modx/assets/images/logos/ComoxCaptivates_blue_220.png"&gt; &lt;/a&gt; &lt;/div&gt; &lt;div id="logo-collapsed" &gt; &lt;a href="index.php"&gt; &lt;img src="/modx/assets/images/logos/32x32.png"&gt; &lt;/a&gt; &lt;/div&gt; &lt;!-- SOCIAL MEDIA --&gt; &lt;div id="social-links-container"&gt; &lt;ul class="social-links"&gt; &lt;li&gt;&lt;a target="_blank" href="https://www.facebook.com/TownofComox/"&gt;&lt;i class="fa fa-facebook-official fa-2x fa-fw"&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a target="_blank" href="https://twitter.com/TownofComox"&gt;&lt;i class="fa fa-twitter fa-2x fa-fw"&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt; &lt;li&gt;&lt;a href="[[~#]]"&gt;&lt;i class="fa fa-calendar fa-2x fa-fw"&gt;&lt;/i&gt;&lt;/a&gt;&lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;!-- LEFT NAVIGATION --&gt; &lt;ul class="navigation"&gt; &lt;li&gt; &lt;a href="[[~9]]"&gt; &lt;i class="fa fa-bicycle fa-lg fa-fw" style="margin-right: 8px;"&gt;&lt;/i&gt; &lt;span class="mm-text"&gt;Recreation&lt;/span&gt; &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a href="[[~10]]"&gt; &lt;i class="fa fa-users fa-lg fa-fw" style="margin-right: 8px;"&gt;&lt;/i&gt; &lt;span class="mm-text"&gt;Employment&lt;/span&gt; &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a href="[[~11]]"&gt; &lt;i class="fa fa-map-o fa-lg fa-fw" style="margin-right: 8px;"&gt;&lt;/i&gt; &lt;span class="mm-text"&gt;Maps&lt;/span&gt; &lt;/a&gt; &lt;/li&gt; 
&lt;li&gt; &lt;a href="[[~12]]"&gt; &lt;i class="fa fa-tint fa-lg fa-fw" style="margin-right: 8px;"&gt;&lt;/i&gt; &lt;span class="mm-text"&gt;Water Restrictions&lt;/span&gt; &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a href="[[~13]]"&gt; &lt;i class="fa fa-ship fa-lg fa-fw" style="margin-right: 8px;"&gt;&lt;/i&gt; &lt;span class="mm-text"&gt;Tourism&lt;/span&gt; &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a href="[[~14]]"&gt; &lt;i class="fa fa-money fa-lg fa-fw" style="margin-right: 8px;"&gt;&lt;/i&gt; &lt;span class="mm-text"&gt;Investment&lt;/span&gt; &lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/div&gt;</code></pre> </div> </div> </p>
1,173
fine-tuning
Fine tuning a pretrained language model with Simple Transformers
https://stackoverflow.com/questions/61482810/fine-tuning-a-pretrained-language-model-with-simple-transformers
<p>In his article 'Language Model Fine-Tuning For Pre-Trained Transformers', Thilina Rajapakse (<a href="https://medium.com/skilai/language-model-fine-tuning-for-pre-trained-transformers-b7262774a7ee" rel="nofollow noreferrer">https://medium.com/skilai/language-model-fine-tuning-for-pre-trained-transformers-b7262774a7ee</a>) provides the following code snippet for fine-tuning a pre-trained model using the library <code>simpletransformers</code>:</p> <pre><code>from simpletransformers.language_modeling import LanguageModelingModel import logging logging.basicConfig(level=logging.INFO) transformers_logger = logging.getLogger("transformers") transformers_logger.setLevel(logging.WARNING) train_args = { "reprocess_input_data": True, "overwrite_output_dir": True, } model = LanguageModelingModel('bert', 'bert-base-cased', args=train_args) model.train_model("data/train.txt", eval_file="data/test.txt") model.eval_model("data/test.txt") </code></pre> <p>He then adds:</p> <blockquote> <p>We assume that you have combined all the text in your dataset into two text files train.txt and test.txt which can be found in the data/ directory.</p> </blockquote> <p>I have 2 questions:</p> <p><strong>Question 1</strong></p> <p>Does the highlighted sentence above imply that the entire corpus will be merged into one text file? So assuming that the training corpus comprises 1,000,000 text files, are we supposed to merge them all into one text file with code like this?</p> <pre><code>import fileinput with open(outfilename, 'w') as fout, fileinput.input(filenames) as fin: for line in fin: fout.write(line) </code></pre> <p><strong>Question 2</strong></p> <p>I presume that I can use the pretrained model: <code>bert-base-multilingual-cased</code>. Correct?</p>
<h3>Question 1</h3> <p>Yes, the input to the <code>train_model()</code> and <code>eval_model()</code> methods need to be a single file.</p> <p><em>Dynamically loading from multiple files will likely be supported in the future</em></p> <h3>Question 2</h3> <p>Yes, you can use <code>bert-base-multilingual-cased</code> model.</p> <p>You will find a much more detailed, updated guide on language model training <a href="https://towardsdatascience.com/understanding-electra-and-training-an-electra-language-model-3d33e3a9660d?source=friends_link&amp;sk=2b4b4a79954e3d7c84ab863efaea8c65" rel="nofollow noreferrer">here</a>.</p> <p><em>Disclaimer: I am the creator of the above library</em>.</p>
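For Question 1, the `fileinput` snippet from the question is the right shape for the merge step. A self-contained version of it, using a temporary directory so it can run anywhere, looks like this:

```python
import fileinput
import tempfile
from pathlib import Path

# Simulate a corpus of many small text files, then merge them into the
# single train.txt that train_model() expects.
tmp = Path(tempfile.mkdtemp())
filenames = []
for i in range(3):
    p = tmp / f"doc_{i}.txt"
    p.write_text(f"line from document {i}\n", encoding="utf-8")
    filenames.append(str(p))

outfilename = tmp / "train.txt"
with open(outfilename, "w", encoding="utf-8") as fout, fileinput.input(filenames) as fin:
    for line in fin:
        fout.write(line)

merged = outfilename.read_text(encoding="utf-8")
print(merged)
```

With a million files, the same loop works but streaming line by line (as above) matters; do not read whole files into memory at once.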
1,174
fine-tuning
Fine tuning a model - base_model Dropout in inference or training mode?
https://stackoverflow.com/questions/71006896/fine-tuning-a-model-base-model-dropout-in-inference-or-training-mode
<p>In the TensorFlow documentation it is highlighted that it is important during fine tuning to set the base_model to ’inference mode’ setting the parameter <code>training = False</code> when calling the <code>base_model</code>. The reason to do so is because of the <code>tf.keras.layers.BatchNormalization</code> layers, that should be executed in inference mode during fine tuning.<br /> <a href="https://www.tensorflow.org/tutorials/images/transfer_learning#fine_tuning" rel="nofollow noreferrer">TensorFlow documentation on Fine Tuning</a></p> <p>But setting the <code>base_model</code> to inference mode will also affect the <code>tf.keras.layers.Dropout</code> in the <code>base_model</code> as these will then also run in inference mode and will not apply any dropout at all.</p> <p>What is useful for getting meaningful results when fine tuning a model?</p> <p>Running the dropout layers in the <code>base_model</code> in inference mode (no dropout at all) or running them in training mode applying the dropout as defined in the <code>base_model</code>?</p>
1,175
fine-tuning
does tf slim fine tuning require gpu?
https://stackoverflow.com/questions/44773074/does-tf-slim-fine-tuning-require-gpu
<p>I am trying to fine-tune the InceptionV3 model using the TF-Slim fine-tuning example on GitHub, and it is giving me this error:</p> <p>InvalidArgumentError (see above for traceback): Cannot assign a device to node 'InceptionV3/AuxLogits/Conv2d_2b_1x1/biases/RMSProp_1': Could not satisfy explicit device specification '/device:GPU:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0<br> Colocation Debug Info:<br> Colocation group had the following types and devices:<br> ApplyRMSProp: CPU <br> Const: CPU <br> Assign: CPU <br> IsVariableInitialized: CPU<br> Identity: CPU <br> VariableV2: CPU <br> [[Node: InceptionV3/AuxLogits/Conv2d_2b_1x1/biases/RMSProp_1 = VariableV2_class=["loc:@InceptionV3/AuxLogits/Conv2d_2b_1x1/biases"], container="", dtype=DT_FLOAT, shape=[5], shared_name="", _device="/device:GPU:0"]]</p>
<p>Please provide more information about your TensorFlow installation (GPU or CPU build). The error says that no devices matching <code>/device:GPU:0</code> are registered in the process, so you are most likely running the CPU-only TensorFlow package while the fine-tuning script tries to place variables on the GPU. If you intend to train on the CPU, the TF-Slim training script exposes a flag for that (e.g. <code>--clone_on_cpu=True</code>); otherwise install the GPU build (<code>tensorflow-gpu</code>) with working CUDA drivers.</p>
1,176
fine-tuning
How to test a model before fine-tuning in Pytorch Lightning?
https://stackoverflow.com/questions/69249187/how-to-test-a-model-before-fine-tuning-in-pytorch-lightning
<p>Doing things on Google Colab.</p> <ul> <li>transformers: 4.10.2</li> <li>pytorch-lightning: 1.2.7</li> </ul> <pre class="lang-py prettyprint-override"><code>import torch from torch.utils.data import DataLoader from transformers import BertJapaneseTokenizer, BertForSequenceClassification import pytorch_lightning as pl dataset_for_loader = [ {'data':torch.tensor([0,1]), 'labels':torch.tensor(0)}, {'data':torch.tensor([2,3]), 'labels':torch.tensor(1)}, {'data':torch.tensor([4,5]), 'labels':torch.tensor(2)}, {'data':torch.tensor([6,7]), 'labels':torch.tensor(3)}, ] loader = DataLoader(dataset_for_loader, batch_size=2) for idx, batch in enumerate(loader): print(f'# batch {idx}') print(batch) category_list = [ 'dokujo-tsushin', 'it-life-hack', 'kaden-channel', 'livedoor-homme', 'movie-enter', 'peachy', 'smax', 'sports-watch', 'topic-news' ] tokenizer = BertJapaneseTokenizer.from_pretrained(MODEL_NAME) max_length = 128 dataset_for_loader = [] for label, category in enumerate(tqdm(category_list)): # file ./text has lots of articles, categorized by category # and they are just plain texts, whose content begins from forth line for file in glob.glob(f'./text/{category}/{category}*'): lines = open(file).read().splitlines() text = '\n'.join(lines[3:]) encoding = tokenizer( text, max_length=max_length, padding='max_length', truncation=True ) encoding['labels'] = label encoding = { k: torch.tensor(v) for k, v in encoding.items() } dataset_for_loader.append(encoding) SEED=lambda:0.0 # random.shuffle(dataset_for_loader) # ランダムにシャッフル random.shuffle(dataset_for_loader,SEED) n = len(dataset_for_loader) n_train = int(0.6*n) n_val = int(0.2*n) dataset_train = dataset_for_loader[:n_train] dataset_val = dataset_for_loader[n_train:n_train+n_val] dataset_test = dataset_for_loader[n_train+n_val:] dataloader_train = DataLoader( dataset_train, batch_size=32, shuffle=True ) dataloader_val = DataLoader(dataset_val, batch_size=256) dataloader_test = DataLoader(dataset_test, batch_size=256) 
class BertForSequenceClassification_pl(pl.LightningModule): def __init__(self, model_name, num_labels, lr): super().__init__() self.save_hyperparameters() self.bert_sc = BertForSequenceClassification.from_pretrained( model_name, num_labels=num_labels ) def training_step(self, batch, batch_idx): output = self.bert_sc(**batch) loss = output.loss self.log('train_loss', loss) return loss def validation_step(self, batch, batch_idx): output = self.bert_sc(**batch) val_loss = output.loss self.log('val_loss', val_loss) def test_step(self, batch, batch_idx): labels = batch.pop('labels') output = self.bert_sc(**batch) labels_predicted = output.logits.argmax(-1) num_correct = ( labels_predicted == labels ).sum().item() accuracy = num_correct/labels.size(0) self.log('accuracy', accuracy) def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=self.hparams.lr) checkpoint = pl.callbacks.ModelCheckpoint( monitor='val_loss', mode='min', save_top_k=1, save_weights_only=True, dirpath='model/', ) trainer = pl.Trainer( gpus=1, max_epochs=10, callbacks = [checkpoint] ) model = BertForSequenceClassification_pl( MODEL_NAME, num_labels=9, lr=1e-5 ) ### (a) ### # I think this is where I am doing fine-tuning trainer.fit(model, dataloader_train, dataloader_val) # this is to score after fine-tuning test = trainer.test(test_dataloaders=dataloader_test) print(f'Accuracy: {test[0][&quot;accuracy&quot;]:.2f}') </code></pre> <p>But I am not really sure how to do a test before fine-tuning, in order to compare two models before and after fine-tuning, in order to show how effective fine-tuning is.</p> <p>Inserting the following two lines to <code>### (a) ###</code>:</p> <pre class="lang-py prettyprint-override"><code>test = trainer.test(test_dataloaders=dataloader_test) print(f'Accuracy: {test[0][&quot;accuracy&quot;]:.2f}') </code></pre> <p>I got this result:</p> <pre class="lang-py 
prettyprint-override"><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-13-c8b2c67f2d5c&gt; in &lt;module&gt;() 9 10 # 6-19 ---&gt; 11 test = trainer.test(test_dataloaders=dataloader_test) 12 print(f'Accuracy: {test[0][&quot;accuracy&quot;]:.2f}') 13 /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in test(self, model, test_dataloaders, ckpt_path, verbose, datamodule) 896 self.verbose_test = verbose 897 --&gt; 898 self._set_running_stage(RunningStage.TESTING, model or self.lightning_module) 899 900 # If you supply a datamodule you can't supply train_dataloader or val_dataloaders /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _set_running_stage(self, stage, model_ref) 563 the trainer and the model 564 &quot;&quot;&quot; --&gt; 565 model_ref.running_stage = stage 566 self._running_stage = stage 567 AttributeError: 'NoneType' object has no attribute 'running_stage' </code></pre> <p>I noticed that <a href="https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#fit" rel="nofollow noreferrer"><code>Trainer.fit()</code> can take <code>None</code> as arguments other than <code>model</code></a>, so I tried this:</p> <pre class="lang-py prettyprint-override"><code>trainer.fit(model) test=trainer.test(test_dataloaders=dataloader_test) print(f'Accuracy: {test[0][&quot;accuracy&quot;]:.2f}') </code></pre> <p>The result:</p> <pre class="lang-py prettyprint-override"><code>MisconfigurationException: No `train_dataloader()` method defined. Lightning `Trainer` expects as minimum a `training_step()`, `train_dataloader()` and `configure_optimizers()` to be defined. </code></pre> <p>Thanks.</p>
<p>The <code>Trainer</code> needs to call its <code>.fit()</code> in order to set up a lot of things, and only then can you call <code>.test()</code> or other methods.</p> <p>You are right about putting a <code>.fit()</code> just before <code>.test()</code>, but the fit call needs to be a valid one. You have to feed a dataloader/datamodule to it. But since you don't want to do training/validation in this fit call, just pass <code>limit_[train/val]_batches=0</code> at Trainer construction.</p> <pre><code>trainer = Trainer(gpus=..., ..., limit_train_batches=0, limit_val_batches=0) trainer.fit(model, dataloader_train, dataloader_val) trainer.test(model, dataloader_test) # without fine-tuning </code></pre> <p>The fit call here will just set things up for you and skip training/validation. Then the testing follows. Next time, run the same code but without the <code>limit_[train/val]_batches</code>; this will do the fine-tuning for you:</p> <pre><code>trainer = Trainer(gpus=..., ...) trainer.fit(model, dataloader_train, dataloader_val) trainer.test(model, dataloader_test) # with fine-tuning </code></pre> <hr /> <p>Clarifying a bit about <code>.fit()</code> taking <code>None</code> for all but model: it's not quite true - you must provide <em>either</em> a DataLoader or a DataModule.</p>
1,177
fine-tuning
Fine-tuning of OpenAI model with unsupervised set, not supervised
https://stackoverflow.com/questions/75722268/fine-tuning-of-opeanai-model-with-unsupervised-set-not-supervised
<p>I want a GPT-3 model to know everything about my domain area, for example my inbox. I want to be able to ask it questions like &quot;Have I ever had a Silicon Valley Bank account?&quot; and get a correct response. I've familiarized myself with the <a href="https://platform.openai.com/docs/guides/fine-tuning" rel="nofollow noreferrer">fine-tuning mechanism</a> in the official OpenAI docs and it's not exactly what I'm looking for. I want to just dump all my emails on the model and ask it: &quot;Learn!&quot;. However, fine-tuning requires supervised-style learning with prompts and responses, which I do not have. An <a href="https://platform.openai.com/docs/guides/fine-tuning/example-notebooks" rel="nofollow noreferrer">example</a> in the docs' notebooks suggests that you can use &quot;Davinci-instruct to ask a few questions based on a Wikipedia section, as well as answer those questions, based on that section&quot;, which I guess solves my problem if I apply it to all my emails, but I'd rather not do this step, because I might screw something up. Do I have other options?</p> <p>I found that the Azure OpenAI integration <a href="https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-studio" rel="nofollow noreferrer">allows you to do fine-tuning</a> as well, but it seems to have the same problem.</p> <p>I may be calling what I want to do fine-tuning when what I really mean is continued pre-training; I just went with fine-tuning because it has documentation and an API. On the other hand, fine-tuning guarantees that I would get wrong answers while pre-training doesn't, and you don't want a wrong answer to the question &quot;Have I ever had a Silicon Valley Bank account?&quot;</p>
<p>You can generate a training set of prompt/completion pairs based on your knowledge base via various techniques such as:</p> <ul> <li>summarizations (for example via GPT models);</li> <li>generated Q/A via GPT;</li> <li>cloze-style questions (again, you can utilize GPT itself to suggest such questions).</li> </ul> <p>Use augmentations (paraphrasing, word substitutions, changing the order of sentences).</p>
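Whichever of those techniques produces the pairs, the result has to end up in OpenAI's fine-tuning JSONL format, one JSON object with `prompt` and `completion` per line. A minimal sketch of that conversion step; the `generate_qa` stub is a placeholder for whatever GPT-based Q/A generation you run over each email:

```python
import json

def generate_qa(document):
    """Stand-in for a GPT call that proposes a question/answer
    pair about the given document."""
    return ("What is this email about?", document.split(".")[0])

emails = [
    "Your Silicon Valley Bank account statement is ready. Log in to view it.",
    "Lunch on Friday? Let me know if noon works.",
]

lines = []
for email in emails:
    question, answer = generate_qa(email)
    record = {"prompt": question, "completion": " " + answer}
    lines.append(json.dumps(record))

jsonl = "\n".join(lines)
print(jsonl)
```

OpenAI's legacy fine-tuning guide recommended starting each completion with a leading whitespace, hence the `" " + answer`.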
1,178
fine-tuning
Fine-tuning of multilingual translation models (Huggingface Transformers, Helsinki)
https://stackoverflow.com/questions/76201629/fine-tuning-of-multilingual-translation-models-huggingface-transformers-helsin
<p>I want to fine-tune pre-trained multilingual models from the Huggingface transformers library (MarianMT in this case) for domain-specific translation. I want the models to be able to translate between 5 different languages. I have domain-specific datasets for every language pair (e.g. de-en, en-de, de-es, es-de and so on). In the available tutorials for fine-tuning I could only find fine-tuning for single language pairs (e.g. only the pretrained “Helsinki-NLP/opus-mt-en-roa” model is downloaded), which then needs to be fine-tuned on en-roa datasets. What I want to do is to train the whole multilingual model (not just en-roa). I want to mix the sentences from all language pairs of my datasets into one dataset and fine-tune the whole multilingual model on this large dataset. How can I achieve this task? Is it possible to download the “whole” model and not just the language-pair models like en-roa? I hope someone can help me :)</p> <p>Best regards,</p> <p>Simon</p>
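Helsinki-NLP's multilingual Marian checkpoints (such as the `opus-mt-en-roa` mentioned above) select the output language with a target-language token like `>>fra<<` prepended to the source sentence, so the usual recipe is: tag every source sentence of every direction-specific dataset with its target language, pool everything, and shuffle. Whether a single pretrained checkpoint covering all five languages exists depends on the available opus-mt models, but the data preparation looks the same either way. A stdlib sketch (the tiny in-memory datasets are placeholders for real files):

```python
import random

# Placeholder domain-specific parallel data, one list per direction.
datasets = {
    ("de", "en"): [("Guten Morgen", "Good morning")],
    ("en", "de"): [("Good morning", "Guten Morgen")],
    ("de", "es"): [("Guten Morgen", "Buenos días")],
}

# Tag each source sentence with its target language (Marian-style token)
# and pool all directions into one mixed training set.
mixed = []
for (src_lang, tgt_lang), pairs in datasets.items():
    for src, tgt in pairs:
        mixed.append({"source": f">>{tgt_lang}<< {src}", "target": tgt})

random.Random(0).shuffle(mixed)
for ex in mixed:
    print(ex["source"], "->", ex["target"])
```

The mixed list can then be tokenized and fed to a single Seq2Seq fine-tuning run; the target-language tokens are what let one model serve all directions.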
1,179
fine-tuning
Cost-Effective Methods for Fine-Tuning LLMs with RAFT
https://stackoverflow.com/questions/78892614/cost-effective-methods-for-fine-tuning-llms-with-raft
<p>I'm new to LLMs and fine-tuning in general so please bear with me if I've made obvious mistakes!</p> <p><em>I have no GPU/cloud-based platform other than a free trial GCP account.</em></p> <p>I'm trying to fine-tune an LLM using the RAFT method (Retrieval Augmented Fine-Tuning). The goal is to enhance the model's ability to answer questions like an 'open-book' exam. Here is the RAFT paper: <a href="https://arxiv.org/html/2403.10131v1" rel="nofollow noreferrer">https://arxiv.org/html/2403.10131v1</a> and the corresponding GitHub repo: <a href="https://github.com/ShishirPatil/gorilla" rel="nofollow noreferrer">https://github.com/ShishirPatil/gorilla</a></p> <p>Here's what I've done so far:</p> <ol> <li>I followed the RAFT instructions and generated a JSON file using a PDF input (BMW 3 Series wiki page).</li> <li>I put credit in my OpenAI account and used their API to generate the JSONL.</li> <li>I modified the keys in the JSONL file to meet the requirements for GCP Vertex AI.</li> <li>I uploaded the JSONL file to Vertex AI to fine-tune the 'text-bison@002' model, keeping the default parameters.</li> </ol> <p>The fine-tuning ran fine, but when testing the model it failed to answer questions based on the content of the PDF. I also cannot export the fine-tuned model (which is my aim, specifically in GGUF format), which I guess is due to the limitations of using the free account. In the RAFT paper, they recommend continuing the fine-tuning on Azure AI Studio with Llama-2-7b, but this also requires a paid account.</p> <p>Given the constraints that I don't have a GPU, am on a tight budget, and am looking for the most cost-effective way to fine-tune with RAFT, ideally using open-source models like Llama:</p> <p><strong>My question is:</strong></p> <p>What is your recommendation for the cheapest, yet reliable way to fine-tune LLMs without access to expensive hardware or platforms? 
Are there any alternative approaches or platforms you would suggest?</p> <p>Any advice or tips would be greatly appreciated!</p> <p>I have done the steps for RAFT and tried to fine-tune using only CPU but of course that just crashed everything. So I signed up for GCP Free trial to make use of the free trial credits they provided, but cannot seem to go further and am not sure how to continue.</p>
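Step 3 above (remapping the JSONL keys for Vertex AI) can be scripted with the standard library. The field names below are assumptions, not taken from the question: check which keys your RAFT run actually emitted, and the exact schema your tuning platform expects (Vertex AI's supervised tuning of text-bison, for instance, documents an `input_text`/`output_text` pair).

```python
import json

# Hypothetical key mapping: adjust the left-hand keys to whatever your RAFT
# script emitted, and the right-hand keys to your tuning platform's schema.
KEY_MAP = {"instruction": "input_text", "cot_answer": "output_text"}

def remap_record(record, key_map=KEY_MAP):
    """Rename the keys of one JSONL record, dropping everything unmapped."""
    return {new: record[old] for old, new in key_map.items() if old in record}

def remap_jsonl(src_path, dst_path, key_map=KEY_MAP):
    """Rewrite a JSONL file line by line with the remapped keys."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            line = line.strip()
            if not line:
                continue
            dst.write(json.dumps(remap_record(json.loads(line), key_map)) + "\n")
```

Running `remap_jsonl("raft_output.jsonl", "vertex_input.jsonl")` would then produce a file ready for upload, assuming the key names match.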
1,180
fine-tuning
Fine tuning T5 not converging
https://stackoverflow.com/questions/76932312/fine-tuning-t5-not-converging
<p>I am new in this world of transformers and NLP, and I am having a problem when fine tuning T5 for my specific use case.</p> <p>What I want to achieve, is that the model receives an input text, and outputs a JSON (as a string) of the relevant information in the text.</p> <p>There are 3 formats that the model can respond, below are some examples: Input: Hey, can you give one hundred dollars to John? Expected Output: '{&quot;action&quot;: &quot;T&quot;, &quot;data&quot;: {&quot;name&quot;: &quot;John&quot;, &quot;amount&quot;: 100, &quot;currency&quot;: &quot;USD&quot;}}'</p> <p>Input: I want to add Benjamin Franklin to my contacts. He has an account on citibank, with number 412389124. Expected Output: '{&quot;action&quot;: &quot;A&quot;, &quot;data&quot;: {&quot;name&quot;: &quot;Benjamin Franklin&quot;, &quot;account_no&quot;: 412389124, &quot;entity&quot;: &quot;Citibank&quot;, &quot;id_num&quot;: null}'</p> <p>Input: Hey, what's the weather gonna be tonight? Expected Output: '{&quot;accion&quot;: &quot;N&quot;, &quot;datos&quot;: {}}'</p> <p>I've built a Python script to generate the inputs and labels as random as possible. 
With that python script, I generated 20000 data points (I can generate less or more of that).</p> <p>Using T5 as my base model, I've trained it using the trainer from pytorch.</p> <p>Below is my code:</p> <pre><code>model_name_huggingface = &quot;google/t5-base&quot; tokenizer = T5Tokenizer.from_pretrained(model_name_huggingface) model = T5ForConditionalGeneration.from_pretrained(model_name_huggingface) </code></pre> <p>Then, after I tokenize my dataset.</p> <pre><code>batch_size = 16 training_args = Seq2SeqTrainingArguments( output_dir=&quot;models/chimi-mt5-base&quot;, evaluation_strategy=&quot;steps&quot;, eval_steps=100, logging_strategy=&quot;steps&quot;, logging_steps=100, save_strategy=&quot;steps&quot;, save_steps=200, # learning_rate=1e-4, optim=&quot;adafactor&quot;, learning_rate=5e-4, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_with_generate=True, weight_decay=0.05, save_total_limit=3, num_train_epochs=2, metric_for_best_model=&quot;exact_match&quot;, # greater_is_better=False, load_best_model_at_end=True ) </code></pre> <pre><code>data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=base_model) cer = evaluate.load(&quot;cer&quot;, module_type=&quot;metric&quot;) exact_match = evaluate.load(&quot;exact_match&quot;, module_type=&quot;metric&quot;) </code></pre> <pre><code>import numpy as np def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. 
labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) result = {} # Compute CER result[&quot;cer&quot;] = cer.compute(predictions=decoded_preds, references=decoded_labels) # Compute Exact Match exact_match_res = exact_match.compute(predictions=decoded_preds, references=decoded_labels, ignore_case=True) result[&quot;exact_match&quot;] = exact_match_res[&quot;exact_match&quot;] return {k: round(v, 4) for k, v in result.items()} </code></pre> <pre><code>trainer = Seq2SeqTrainer( model=base_model, args=training_args, train_dataset=tokenized_chimi_dataset[&quot;train&quot;], eval_dataset=tokenized_chimi_dataset[&quot;validation&quot;], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) </code></pre> <pre><code>result = trainer.train() </code></pre> <p>That's the current code I am using to fine tune T5.</p> <p>The training loss goes down up to 0.054, and never improves. The validation loss goes down up 0.034, and never improves. The CER metric goes down up to 0.4875 and never improves after that. But, just to let you know, after the first 100 steps, it already has a CER of 0.583. The Exact Match Metric goes up to 0.3089, and that already happens after the 600th step.</p> <p>By testing, I see that it responds in the correct JSON format, and the action is responded correctly normally. But then, the data inside the JSON is not often correct.</p> <p>What can I do to improve this? I am stuck on this for a long time, and I am not really sure how to proceed. Any help is appreciated.</p> <p>Thanks in advance!</p> <p>I tried balancing my dataset, and tuning the hyperparameters, but it still didn't result in any relevant ups in performance.</p>
<p>It's been almost a year - did you solve this?</p> <p>I see two possible issues with your approach.</p> <p>(1) Trying to model randomness(?). You said:</p> <blockquote> <p>I've built a Python script to generate the inputs and labels as random as possible.</p> </blockquote> <p>If I understand correctly, you are creating synthetic data that is totally random (is that correct?). That won't work.</p> <p>(2) Trying to output valid JSON. You said:</p> <blockquote> <p>Expected Output:</p> </blockquote> <pre class="lang-json prettyprint-override"><code>'{&quot;action&quot;: &quot;T&quot;, &quot;data&quot;: {&quot;name&quot;: &quot;John&quot;, &quot;amount&quot;: 100, &quot;currency&quot;: &quot;USD&quot;}}' </code></pre> <p>As far as I know, curly braces {} are not in the T5 tokenizer vocabulary; see: <a href="https://github.com/huggingface/transformers/issues/21836" rel="nofollow noreferrer">https://github.com/huggingface/transformers/issues/21836</a></p>
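Beyond the vocabulary issue, it helps to measure where the model fails rather than relying on exact match alone. A standard-library sketch (the function name and metric breakdown are my own, not part of the original code) that reports how often decoded predictions parse as JSON, plus per-top-level-field accuracy, so you can tell unparseable output apart from a wrong "action" or wrong values inside "data":

```python
import json

def field_accuracy(preds, refs):
    """Compare predicted vs. reference JSON strings field by field.

    Returns (valid_json_rate, per_field_accuracy). Exact match is a harsh
    metric for structured output; this shows *where* the model fails.
    """
    valid = 0
    field_hits, field_total = {}, {}
    for p, r in zip(preds, refs):
        ref = json.loads(r)  # references are assumed to be valid JSON
        try:
            pred = json.loads(p)
            valid += 1
        except json.JSONDecodeError:
            pred = {}  # unparseable prediction counts as a miss on every field
        for key, ref_val in ref.items():
            field_total[key] = field_total.get(key, 0) + 1
            if pred.get(key) == ref_val:
                field_hits[key] = field_hits.get(key, 0) + 1
    per_field = {k: field_hits.get(k, 0) / n for k, n in field_total.items()}
    return valid / len(refs), per_field
```

Feeding it the `decoded_preds`/`decoded_labels` from `compute_metrics` would show whether the remaining errors are structural or value-level.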
1,181
fine-tuning
fine-tuning a CNN from a lower fc layer
https://stackoverflow.com/questions/32536426/fine-tuning-a-cnn-from-a-lower-fc-layer
<p>I've noticed that most fine-tuning of CNN over new dataset is done only on the "last" fully connected (fc) layer.</p> <p>I'm interested in fine-tuning from the "first" fully connected layer: that is, I want to use mid-level features from convolution and pooling layers as they are, (supposing it's trained on ImageNet) but then fit all fc layers to my new dataset.</p> <p>Theoretically and in practice, what is the supposed effect of this? Is it likely to learn a more proper set of parameters for my new dataset?</p>
<p>Theoretically, the deeper you fine-tune, the better your model fits your data. So, if you could fine-tune the whole model, all the better. </p> <p>So what's the catch, you must be asking: why doesn't everyone fine-tune the whole model?<br> First, fine-tuning the whole model involves a huge number of parameters; to properly train millions of parameters without the risk of overfitting, you must have a LOT of new training examples. In most cases, when fine-tuning you only have a few annotated samples for the new task, and therefore you cannot afford to fine-tune the whole model.<br> Second, fine-tuning the whole model takes much longer than training just the top fc layer. Thus, if you have little time and budget, you only fine-tune the top fc layer.</p> <p>In your case, if you have enough samples you may fine-tune the top two fc layers. From my experience it is better to fine-tune the top layer first, and then fine-tune the top two together after some iterations have been done on the top layer alone. </p>
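In PyTorch, this freeze-all-but-the-top-fc-layers setup is typically done by toggling `requires_grad` per named parameter. A sketch: the selection logic is a plain function, and the commented usage shows how it would plug into `model.named_parameters()` (the layer-name prefixes are hypothetical; print your model's parameter names to find the real ones).

```python
def trainable_param_names(param_names, unfreeze_prefixes):
    """Given parameter names (as yielded by model.named_parameters()),
    return those that should stay trainable: any under the given prefixes."""
    return [n for n in param_names if n.startswith(tuple(unfreeze_prefixes))]

# With a real PyTorch model (hypothetical layer names; inspect yours with
# `print([n for n, _ in model.named_parameters()])`):
#
#   keep = set(trainable_param_names(
#       [n for n, _ in model.named_parameters()],
#       unfreeze_prefixes=("classifier.4", "classifier.6")))  # top two fc layers
#   for name, p in model.named_parameters():
#       p.requires_grad = name in keep
```

Starting with only the top layer unfrozen and later widening the prefix set, as the answer suggests, is just a change to `unfreeze_prefixes` between training phases.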
1,182
fine-tuning
Is Caffe Net Surgery requires fine tuning?
https://stackoverflow.com/questions/51464821/is-caffe-net-surgery-requires-fine-tuning
<p>I am new at Caffe and I want to use already trained caffeNet model with ImageNet. I applied net surgery by removing a convolutional intermediate conv4 layer. </p> <pre><code>'layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "relu5-new" type: "ReLU" bottom: "conv5-new" top: "conv5-new" } layer { name: "pool5-new" type: "Pooling" bottom: "conv5-new" top: "pool5-new" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "fc6" type: "InnerProduct" bottom: "pool5-new" top: "fc6" inner_product_param { num_output: 4096 } }' </code></pre> <p><a href="https://ideone.com/1PhzSs" rel="nofollow noreferrer">Full of prototxt file can be found here</a></p> <p>After saving this new network the accuracy became 0. Should I make fine tuning on ImageNet validation set, or is there something wrong on my new prototxt file?</p> <p>Any help will be appreciated!</p>
<p>The original net you started with had <code>conv4</code> between <code>conv3</code> and <code>conv5</code>: this means the filters (weights) of <code>conv5</code> were expecting a certain number of inputs and a certain "order" or "meaning" of those inputs. Once you removed <code>conv4</code>, you had to alter <code>conv5</code> to accept a different number of inputs. Therefore, the new <code>conv5</code> layer <strong>must</strong> be trained to accommodate the new inputs it receives.<br> Since you introduced a new <code>conv5</code> layer, you should have a <code>weight_filler</code> defined in your prototxt to tell Caffe how to initialize the new weights. Otherwise Caffe will set the weights to zero, and it will be almost impossible to fine-tune in that case.</p>
1,183
fine-tuning
Fine-Tuning with Custom Dataset: Formatting and Best Practices
https://stackoverflow.com/questions/79238709/fine-tuning-with-custom-dataset-formatting-and-best-practices
<p>I’m working on a domain-specific problem where no pre-existing datasets are available. I’ve manually collected some examples and initially used them in Large Language Model (LLM) prompts (few-shot learning). However, the results are not very good, and i am doing fine-tuning for better outcomes.</p> <p>I’ve created my own dataset for fine-tuning a Large Language Model (LLM). The goal is for the model to take a simple text input and output two Python lists: one with what the user is looking for and the other with what the user is not looking for. For example:</p> <p>Input:</p> <pre><code>&quot;I am looking for a laptop with a good camera and long battery life, but I don’t want any laptops that use only USB-C.&quot; </code></pre> <p>Output:</p> <pre><code>wants : [&quot;laptop&quot;, &quot;camera&quot;, &quot;long battery life&quot;] does not want: [&quot;does not want laptop that uses USB-C&quot;] </code></pre> <p>Currently, my dataset consists of two columns—input and output. However, I’m still unsure how to format the dataset going forward. Should I use multiple columns, or is there a better approach? I’ve noticed that some datasets have multiple outputs, and I’m curious about why they do that and whether I should adopt a similar structure.</p> <p>here are some other questions i have and it would be great if some people could answer it:</p> <ol> <li>What types of data are most beneficial as input for this type of task (e.g., raw text, structured data)?</li> <li>Best practices for preparing and formatting data for fine-tuning?</li> <li>How should I format my dataset to make it more effective for fine-tuning?</li> <li>Any best practices for preparing and scaling up my dataset for better results?</li> <li>Any tutorials or tools that can help streamline the fine-tuning process for a text-and-logic generation problem.</li> </ol> <p>Any tips, insights, or references to useful resources would be greatly appreciated!</p>
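For what it's worth, two columns (input, output) is already the shape most fine-tuning stacks expect; they usually consume a JSONL file of either prompt/completion pairs or chat-style `messages` records. A hedged sketch of both layouts follows (the system prompt and key names are illustrative, not a specific platform's schema; check the docs of whatever tuning API you use). Note also that emitting the output as one valid JSON string, rather than two pseudo-Python lists, makes it trivial to validate model outputs later.

```python
import json

def to_pair(inp, out):
    # Plain prompt/completion record (classic completion-style tuning).
    return {"prompt": inp, "completion": out}

def to_chat(inp, out, system="Extract wants / does-not-want lists as JSON."):
    # Chat-style record, as used by most current instruction-tuning formats.
    return {"messages": [
        {"role": "system", "content": system},
        {"role": "user", "content": inp},
        {"role": "assistant", "content": out},
    ]}

def write_jsonl(rows, path, fmt=to_chat):
    """Write (input, output) pairs to a JSONL file in the chosen layout."""
    with open(path, "w") as f:
        for inp, out in rows:
            f.write(json.dumps(fmt(inp, out)) + "\n")
```

Either layout keeps the dataset two-column at heart; multiple output columns are usually only needed when one input has several independent targets.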
1,184
fine-tuning
Fine tuning flair ner model
https://stackoverflow.com/questions/75824467/fine-tuning-flair-ner-model
<p>I am trying to fine tune flair ner model using these lines of code:</p> <pre><code>embedding_types = [WordEmbeddings('glove'), WordEmbeddings('extvec'), WordEmbeddings('crawl'),] embeddings = StackedEmbeddings(embeddings=embedding_types) pretrained_model = SequenceTagger.load('ner') trainer : ModelTrainer = ModelTrainer(pretrained_model, corpus) trainer.train('resources/taggers/example-ner', learning_rate=0.1, mini_batch_size=32, max_epochs=3) </code></pre> <p>but I get this error message when I execute:</p> <pre><code>RuntimeError: Error(s) in loading state_dict for SequenceTagger: Missing key(s) in state_dict: &quot;embeddings.list_embedding_0.embedding.weight&quot;. </code></pre> <p>I have already tried to change embedding types but I get the same issue. How can I solve it?</p>
<p>The error you are getting should be solved just by upgrading flair:</p> <pre><code>pip install --upgrade flair </code></pre> <p>However, there is also a conceptual problem in how you are doing this: you build the stacked embeddings but never pass them to the trainer or to the tagger. That is actually fine if what you want is to fine-tune a pretrained model, because a pretrained model already has its own embeddings.</p> <p>If what you want to do is fine-tune that pretrained model, you only have to do:</p> <pre><code>pretrained_model = SequenceTagger.load('ner') trainer : ModelTrainer = ModelTrainer(pretrained_model, corpus) trainer.fine_tune('resources/taggers/example-ner', learning_rate=0.1, mini_batch_size=32, max_epochs=3) </code></pre>
1,185
fine-tuning
Fine Tuning Pretrained Model MobileNet_V3_Large PyTorch
https://stackoverflow.com/questions/69321848/fine-tuning-pretrained-model-mobilenet-v3-large-pytorch
<p>I am trying to add a layer to fine-tune the MobileNet_V3_Large pre-trained model. I looked around at the PyTorch docs but they don't have a tutorials for this specific pre-trained model. I did find that I can fine-tune MobileNet_V2 with:</p> <pre class="lang-py prettyprint-override"><code>model_ft =models.mobilenet_v2(pretrained=True,progress=True) model_ft.classifier[1] = nn.Linear(model_ft.last_channel, out_features=len(class_names)) </code></pre> <p>but I am not sure what the linear layer for MobileNet V3 should look like.</p>
<p>For V3 Large, you should do</p> <pre><code>model_ft = models.mobilenet_v3_large(pretrained=True, progress=True) model_ft.classifier[-1] = nn.Linear(1280, your_number_of_classes) </code></pre> <p>(This would also work for V2, but the code you posted would not work for V3 correctly).</p> <p>To see the structure of your network, you can just do</p> <pre><code>print(model_ft.classifier) </code></pre> <p>or</p> <pre><code>print(model_ft) </code></pre> <p>For fine-tuning people often (but not always) freeze all layers except the last one. Again, the layer to <em>not</em> freeze is <code>model_ft.classifier[-1]</code> rather than <code>model_ft.classifier[1]</code>.</p> <p>Whether or not you should freeze layers depends on how much data you have, and is best determined empirically.</p>
1,186
fine-tuning
Fine-tuning method listFiles
https://stackoverflow.com/questions/36764442/fine-tuning-method-listfiles
<p>Can anyone help in tuning this method? When I log the "files" - it only takes around 5 seconds. But takes more than 10 minutes before returning the "fileInfo"</p> <pre><code>// fileSystem is HDFS // dateNow = java.util.Date // basePath = new Path("/") // filePattern = "*.sf" private Map&lt;String, Long&gt; listFiles(final Date dateNow, final Path basePath, final String filePattern) throws IOException { RemoteIterator&lt;LocatedFileStatus&gt; files = fileSystem.listFiles(basePath, true); _LOG.info("files=" + files); // map containing &lt;filename, filesize&gt; Map&lt;String, Long&gt; fileInfo = new HashMap&lt;String, Long&gt;(); String regex = RegexUtil.convertGlobToRegex(filePattern); Pattern pattern = Pattern.compile(regex); if (files != null) { while (files.hasNext()) { LocatedFileStatus file = files.next(); Path filePath = file.getPath(); // Get only the files with created date = current date if (DateUtils.truncate(new Date(file.getModificationTime()), java.util.Calendar.DAY_OF_MONTH).equals(dateNow)) { if (pattern.matcher(filePath.getName()).matches()) { fileInfo.put(file.getPath().getName(), file.getLen()); } } } } _LOG.info("fileInfo =" + fileInfo); return fileInfo; } </code></pre>
<p>You said:</p> <p><strong>When I log the "files" - it only takes around 5 seconds</strong></p> <pre><code> RemoteIterator&lt;LocatedFileStatus&gt; files = fileSystem.listFiles(basePath, true); </code></pre> <p><strong>Yes</strong>, because this call only creates the iterator over the file statuses under that path. The <code>RemoteIterator</code> is lazy: it fetches listing batches from the NameNode as you advance it, so constructing and logging it is cheap and says nothing about how many entries it will eventually yield.</p> <p>Now look at this part of the code:</p> <pre><code> while (files.hasNext()) { LocatedFileStatus file = files.next(); Path filePath = file.getPath(); // Get only the files with created date = current date if (DateUtils.truncate(new Date(file.getModificationTime()), java.util.Calendar.DAY_OF_MONTH).equals(dateNow)) { if (pattern.matcher(filePath.getName()).matches()) { fileInfo.put(file.getPath().getName(), file.getLen()); } } } </code></pre> <p>Here you actually iterate over every file status in the recursive listing, which pulls all of the entries from the NameNode. That is where the real work happens, so this part naturally takes far longer than obtaining the iterator. The more files the directory tree under <code>basePath</code> contains, the longer this loop will take.</p>
1,187
fine-tuning
Encoding issues on OpenAI predictions after fine-tuning
https://stackoverflow.com/questions/69928517/encoding-issues-on-openai-predictions-after-fine-tuning
<p>I'm following <a href="https://beta.openai.com/docs/guides/fine-tuning/create-a-fine-tuned-model" rel="noreferrer">this OpenAI tutorial</a> about fine-tuning.</p> <p>I already generated the dataset with the openai tool. The problem is that the outputs encoding (inference result) is mixing UTF-8 with non UTF-8 characters.</p> <p>The generated model looks like this:</p> <pre><code>{&quot;prompt&quot;:&quot;Usuario: Quién eres\\nAsistente:&quot;,&quot;completion&quot;:&quot; Soy un Asistente\n&quot;} {&quot;prompt&quot;:&quot;Usuario: Qué puedes hacer\\nAsistente:&quot;,&quot;completion&quot;:&quot; Ayudarte con cualquier gestión o ofrecerte información sobre tu cuenta\n&quot;} </code></pre> <p>For instance, if I ask &quot;¿Cómo estás?&quot; and there's a trained completion for that sentence: &quot;Estoy bien, ¿y tú?&quot;, the inference often returns exactly the same (which is good), but sometimes it adds non-encoded words: &quot;Estoy bien, ¿y tú? Cuéntame algo de ti&quot;, adding &quot;é&quot; instead of &quot;é&quot;.</p> <p>Sometimes, it returns exactly the same sentence that was trained for, with no encoding issues. I don't know if the inference is taking the non-encoded characters from my model or from somewhere else.</p> <p>What should I do? Should I encode the dataset in UTF-8? Should I leave the dataset with UTF-8 and decode the bad encoded chars in the response?</p> <p>The OpenAI docs for fine-tuning don't include anything about encoding.</p>
<p>I faced the same issue dealing with Portuguese strings.</p> <p>Try to use <code>.encode(&quot;cp1252&quot;).decode()</code> after the string:</p> <pre><code>&quot;Cuéntame algo de ti&quot;.encode(&quot;cp1252&quot;).decode() </code></pre> <p>This should result in:</p> <pre><code>&quot;Cuéntame algo de ti&quot; </code></pre> <p><code>cp1252</code> relates to the windows-1252 Western Europe codec. If that's not working, try another codec from here: <a href="https://docs.python.org/3.7/library/codecs.html#standard-encodings" rel="nofollow noreferrer">https://docs.python.org/3.7/library/codecs.html#standard-encodings</a></p>
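A small helper that applies this repair defensively may be safer than calling `.encode("cp1252").decode()` unconditionally, since the round trip raises on text that was never corrupted (and a few byte values are undefined in cp1252; the third-party `ftfy` library handles those messier cases). This is a sketch, not part of the original answer:

```python
def fix_mojibake(text, wrong_codec="cp1252"):
    """Reverse the classic UTF-8-bytes-read-as-cp1252 corruption.

    "é" is what you get when the UTF-8 bytes for "é" (0xC3 0xA9) are
    decoded as cp1252; re-encoding with cp1252 and decoding as UTF-8
    undoes it. If the text was never corrupted, the round trip raises,
    and we return the input unchanged.
    """
    try:
        return text.encode(wrong_codec).decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return text
```

Applied to each completion returned by the API, this leaves clean strings alone and repairs the mixed-encoding ones.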
1,188
fine-tuning
TTS Tacotron2 Fine-tuning: Missing Layers and No Output Sound
https://stackoverflow.com/questions/77520193/tts-tacotron2-fine-tuning-missing-layers-and-no-output-sound
<p>I’m attempting to use <a href="https://github.com/coqui-ai/TTS" rel="nofollow noreferrer">TTS</a> to fine tune a Tacotron2 TTS model. If it makes a difference, I'm using Python 3.9.1 and I'm fine-tuning the latest <code>tts_models--en--ljspeech--tacotron2-DDC</code>.</p> <p>During the fine-tuning process, when I load the pretrained model the system throws errors indicating <code>Layer missing in the checkpoint.</code> Then it says</p> <pre><code>| &gt; 81 / 105 layers are restored. &gt; Model restored from step 278000 &gt; Model has 47669492 parameters &gt; Number of output frames: 2 </code></pre> <p>I've checked to see that I'm using the correct model version as per TTS docs and everything else seems to be in order.</p> <p>Training technically does execute... but the wav file synthesized by the fine-tuned model doesn't contain any audible sound.</p> <p>Any ideas or suggestions are welcome.</p> <p>Here's the exact command-line code I'm using:</p> <pre><code>CUDA_VISIBLE_DEVICES=&quot;0&quot; python ./TTS/recipes/ljspeech/tacotron2-DDC/train_tacotron_ddc.py --restore_path ../.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC/model_file.pth --config_path ../.local/share/tts/tts_models--en--ljspeech--tacotron2-DDC/config.json </code></pre>
1,189
fine-tuning
Apache configuration fine tuning
https://stackoverflow.com/questions/23852763/apache-configuration-fine-tuning
<p>I run a very simple website (basically a redirect based on a php database) which gets on average 5 visits per second throughout the day, but at peak times (usually 2-3 times a day), this may go up to even 300 visits/s or more. I've modified the default apache settings as follows (based on various info found online as I'm not an expert):</p> <pre><code>Start Servers: 5 (default) / 25 (now) Minimum Spare Servers: 5 (default) / 50 (now) Maximum Spare Servers: 10 (default) / 100 (now) Server Limit: 256 (default) / 512 (now) Max Clients: 150 (default) / 450 (now) Max Requests Per Child: 10000 (default) Keep-Alive: On (default) / Off (now) Timeout: 300 (default) </code></pre> <p>Server (VPS) specs: </p> <pre><code>4x 3199.998 MHz, 512 KB Cache, QEMU Virtual CPU version (cpu64-rhel6) 8GB RAM (Memory: 8042676k/8912896k available (5223k kernel code, 524700k absent, 345520k reserved, 7119k data, 1264k init) 70GB SSD disk CENTOS 6.5 x86_64 kvm – server </code></pre> <p>During average loads the server handles just fine. Problems occur almost every day during peak traffic times, as in http time-outs or extremely long response/load times. Question is, do I need to get a better server or can I improve response times during peak traffic by further tuning Apache config? Any help would be appreciated. Thank you!</p>
<p>Maybe you need to enable mod_cache with mod_mem_cache. Another set of parameters I always configure are the ulimits: </p> <ul> <li>nofile, to allow more open sockets</li> <li>nproc, to allow more processes</li> </ul> <p><a href="http://www.webperformance.com/load-testing/blog/2012/12/setting-apache2-ulimit-for-maximum-prefork-performance/" rel="nofollow">http://www.webperformance.com/load-testing/blog/2012/12/setting-apache2-ulimit-for-maximum-prefork-performance/</a></p> <p>Finally, tune TCP and the network stack: check all the net.core and net.ipv4 parameters to reduce latency.</p> <p><a href="http://fasterdata.es.net/host-tuning/linux/" rel="nofollow">http://fasterdata.es.net/host-tuning/linux/</a></p>
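One more knob worth checking before buying a bigger server: with the prefork MPM, MaxClients must fit in RAM or the box starts swapping during exactly those traffic peaks. A common rule-of-thumb sizing sketch (the numbers below are illustrative assumptions; measure your actual per-process memory with `ps` or `top`):

```python
def max_clients(total_ram_mb, reserved_mb, per_process_mb):
    """Rule-of-thumb prefork sizing: (RAM minus a reserve for the OS,
    PHP, database, etc.) divided by the average Apache process RSS."""
    if per_process_mb <= 0:
        raise ValueError("per_process_mb must be positive")
    return max(1, (total_ram_mb - reserved_mb) // per_process_mb)

# e.g. an 8 GB box, 2 GB reserved, ~15 MB per Apache process (assumed):
# max_clients(8192, 2048, 15) -> 409
```

If the computed value is well below the MaxClients you configured, time-outs at peak are more likely caused by swapping than by the connection limit itself.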
1,190
fine-tuning
Fine-Tuning Pyannote Model for VAD Task — Issues After Training
https://stackoverflow.com/questions/79372052/fine-tuning-pyannote-model-for-vad-task-issues-after-training
<p>I try to fine-tune pre train Pyannote model for VAD task. I can fine-tune it for Segmentation task and everything goes well and I can improve the model results.</p> <p>Here is the code how I fine-tune it:</p> <pre><code>pretrained = Model.from_pretrained(config[&quot;pretrained_model_path&quot;]) registry.load_database(config[&quot;database_path&quot;]) data = registry.get_protocol('MyProtocol.SpeakerDiarization.data') finetuned = deepcopy(pretrained) task = Segmentation( data, duration=config[&quot;duration&quot;], #max_num_speakers=config[&quot;max_num_speakers&quot;], batch_size=config[&quot;batch_size&quot;], #TODO (2^ - 16...) num_workers=config[&quot;num_workers&quot;], #2S loss=config[&quot;loss&quot;], vad_loss=config[&quot;vad_loss&quot;]) finetuned.task = task finetuned.prepare_data() finetuned.setup() trainer = Trainer(accelerator=config[&quot;accelerator&quot;], callbacks=callbacks, max_epochs=config[&quot;max_epochs&quot;], gradient_clip_val=config[&quot;gradient_clip_val&quot;], logger=[tensorboard_logger, csv_logger]) trainer.fit(finetuned) </code></pre> <p>My dataset includes only audio files with speakers. When I check the VAD after the training it faild. I test it with zeros vector and with wav files without any speaker.</p> <p>This is how I check the vad model on my model:</p> <pre><code>pipeline = VoiceActivityDetection(segmentation=segmentation_model_path) pipeline.onset = 0.5 pipeline.offset = 0.5 pipeline.instantiate({ &quot;min_duration_on&quot;: 0.3, &quot;min_duration_off&quot;: 1.0 }) waveform, sr = torchaudio.load(audio_file_path) vad_result = pipeline({&quot;waveform&quot;: waveform, &quot;sample_rate&quot;: sample_rate}) </code></pre> <p>When I use the pre-trained model, it correctly detects speech/non-speech regions. However, after fine-tuning the model (even for one epoch), it fails to detect these regions accurately.</p> <p>I then tried fine-tuning specifically for the VAD task. 
I understand that .lab files with speech/non-speech labels are required for this. However, I noticed that in the - <a href="https://github.com/pyannote/pyannote-audio/blob/develop/tutorials/voice_activity_detection.ipynb" rel="nofollow noreferrer"> Pyannote VAD tutorial</a>, there’s no mention of .lab files — only .rttm and .uem files are referenced. This has left me confused about the correct setup for fine-tuning a Pyannote model specifically for VAD.</p> <p>My Questions:</p> <ol> <li>Why does the fine-tuned model for Segmentation fail to detect silence or non-speech regions when tested for VAD?</li> <li>For fine-tuning Pyannote specifically for VAD, do I need .lab files, and if so, how can I generate them from my dataset? And how can I configure the database.yml file for it?</li> <li>How can I ensure that the fine-tuned model performs well for the VAD task?</li> </ol> <p>Any guidance would be greatly appreciated! 😊</p>
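On question 2: if speech/non-speech labels are needed, they can be derived from the .rttm files you already have, since VAD just collapses all speaker turns into speech regions and speaker identity is irrelevant. A standard-library sketch of that merge follows; the RTTM column layout assumed here is `SPEAKER <uri> <channel> <onset> <duration> ...`, and the `collar` parameter for joining near-adjacent turns is my own addition:

```python
def rttm_to_speech_segments(rttm_lines, collar=0.0):
    """Collapse speaker turns from RTTM lines into merged speech intervals.

    Overlapping turns, and turns within `collar` seconds of each other,
    are merged into a single (start, end) speech segment.
    """
    turns = []
    for line in rttm_lines:
        parts = line.split()
        if len(parts) >= 5 and parts[0] == "SPEAKER":
            onset, dur = float(parts[3]), float(parts[4])
            turns.append((onset, onset + dur))
    turns.sort()
    merged = []
    for start, end in turns:
        if merged and start <= merged[-1][1] + collar:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Writing the result out as start/end/"speech" lines (the usual .lab layout) is then a one-line loop.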
1,191
fine-tuning
BERT Model Fine Tuning and migrating to TF2
https://stackoverflow.com/questions/66253543/bert-model-fine-tuning-and-migrating-to-tf2
<p>I executed this excellent tutorial: <a href="https://towardsdatascience.com/building-a-multi-label-text-classifier-using-bert-and-tensorflow-f188e0ecdc5d" rel="nofollow noreferrer">https://towardsdatascience.com/building-a-multi-label-text-classifier-using-bert-and-tensorflow-f188e0ecdc5d</a></p> <p>I understood most of it except where model is being created. I would like to know it and migrate to TF2 bert.</p> <ol> <li>When he says &quot;Basically we load the pre-trained model and then train the last layer for classification task.&quot;, does it mean that he is freezing all the other layers and fine-tuning the last layer? This is the relevant code (in TF1) which I am not able to understand:</li> </ol> <pre class="lang-py prettyprint-override"><code>def create_model(bert_config, is_training, input_ids, input_mask, segment_ids, labels, num_labels, use_one_hot_embeddings): &quot;&quot;&quot;Creates a classification model.&quot;&quot;&quot; model = modeling.BertModel( config=bert_config, is_training=is_training, input_ids=input_ids, input_mask=input_mask, token_type_ids=segment_ids, use_one_hot_embeddings=use_one_hot_embeddings) output_layer = model.get_pooled_output() hidden_size = output_layer.shape[-1].value output_weights = tf.get_variable( &quot;output_weights&quot;, [num_labels, hidden_size], initializer=tf.truncated_normal_initializer(stddev=0.02)) output_bias = tf.get_variable( &quot;output_bias&quot;, [num_labels], initializer=tf.zeros_initializer()) with tf.variable_scope(&quot;loss&quot;): if is_training: # I.e., 0.1 dropout output_layer = tf.nn.dropout(output_layer, keep_prob=0.9) logits = tf.matmul(output_layer, output_weights, transpose_b=True) logits = tf.nn.bias_add(logits, output_bias) # probabilities = tf.nn.softmax(logits, axis=-1) ### multiclass case probabilities = tf.nn.sigmoid(logits)#### multi-label case labels = tf.cast(labels, tf.float32) tf.logging.info(&quot;num_labels:{};logits:{};labels:{}&quot;.format(num_labels, logits, labels)) 
per_example_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits) loss = tf.reduce_mean(per_example_loss) return (loss, per_example_loss, logits, probabilities) </code></pre> <ol start="2"> <li>I went through the TF2 fine tuning tutorials for BERT, but how do I achieve the same? I am able to train other models where step 1 is not required.</li> </ol>
<p>Use the official bert example : <a href="https://www.tensorflow.org/tutorials/text/classify_text_with_bert" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/classify_text_with_bert</a></p>
1,192
fine-tuning
How to use pre-trained models for text classification?Comparing a fine-tuned model with a pre-trained model without fine-tuning
https://stackoverflow.com/questions/73250207/how-to-use-pre-trained-models-for-text-classification-comparing-a-fine-tuned-mod
<p>I want to know how much the fine-tuned model improves compared to the model without fine-tuning.I want to compare the performance of the pre-trained model(BERT) and the model(fine-tuned BERT) obtained by fine-tuning the pre-trained model on text classification.I know how to fine-tune BERT for text classification, but not very clear on how to use BERT directly for classification.what should I do?The following is the code for fine-tuning the model, how to rewrite it to directly use the pre-trained model.</p> <pre><code> &lt;!-- language: python --&gt; from transformers import BertTokenizer, BertModel import torch import torch.nn as nn import torch.utils.data as Data import torch.optim as optim from sklearn.metrics import accuracy_score,matthews_corrcoef from sklearn.model_selection import train_test_split tokenizer_model = BertTokenizer.from_pretrained('bert-base-uncased') pretrained_model = BertModel.from_pretrained(&quot;bert-base-uncased&quot;) class MyDataSet(Data.Dataset): def __init__ (self, data, label): self.data = data self.label = label self.tokenizer = tokenizer_model def __getitem__(self, idx): text = self.data[idx] label = self.label[idx] inputs = self.tokenizer(text, return_tensors=&quot;pt&quot;,padding='max_length',max_length=256,truncation=True) input_ids = inputs.input_ids.squeeze(0) #token_type_ids = inputs.token_type_ids.squeeze(0) attention_mask = inputs.attention_mask.squeeze(0) #return input_ids, token_type_ids, attention_mask, label return input_ids, attention_mask, label def __len__(self): return len(self.data) data,label = [],[] with open(path) as f: for line in f.readlines(): a,b = line.strip().split('\t') data.append(b) if a == 'LOW': label.append('0') elif a == 'MEDIUM': label.append('1') else: label.append('2') label = [int(i) for i in label] train_x,test_x,train_y,test_y = train_test_split(data, label, test_size = 0.15,random_state = 32, stratify=label) dataset_train = MyDataSet(train_x,train_y) dataset_test = 
MyDataSet(test_x,test_y) dataloader_train = Data.DataLoader(dataset_train, batch_size=128, shuffle=True,num_workers=32,pin_memory=True) dataloader_test = Data.DataLoader(dataset_test, batch_size=128, shuffle=True,num_workers=32,pin_memory=True) class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.bert = pretrained_model self.linear = nn.Linear(768,3) def forward(self, input_ids, attention_mask): output = self.bert(input_ids, attention_mask).pooler_output print(output.shape) # torch.Size([1, 768]) output = self.linear(output) return output device = torch.device(&quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot;) if torch.cuda.device_count() &gt; 1: print(&quot;Use&quot;, torch.cuda.device_count(), 'gpus') model = MyModel() model = nn.DataParallel(model) model = model.to(device) ## model = MyModel().to(device) loss_fn = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=1e-5) for epoch in range(10): for input_ids,attention_mask,label in dataloader_train: train_input_ids,train_attention_mask,train_label = input_ids.to(device),attention_mask.to(device),label.to(device) model.train() pred = model(train_input_ids,train_attention_mask) print('epoch:',epoch) #print('pred,label:',pred,label) loss = loss_fn(pred, train_label) print('Loss:',loss.item()) pred = torch.argmax(pred,dim=1) acc = (pred == train_label).float().mean() print('acc:',acc) loss.backward() optimizer.step() optimizer.zero_grad() savename_train = str(path) +'_' + str(name) + '_train' + '.txt' with open(savename_train,'a') as f: f.write(str(epoch)+'\t'+str(loss.item())+'\t'+str(acc.item())+'\n') model.eval() with torch.no_grad(): for input_ids,attention_mask,label in dataloader_test: validation_input_ids,validation_attention_mask,validation_label = input_ids.to(device),attention_mask.to(device),label.to(device) pred = model(validation_input_ids,validation_attention_mask) loss = loss_fn(pred, validation_label) pred = torch.argmax(pred, dim=1) acc 
= (pred == validation_label).float().mean() print('acc:',acc) savename_eval = str(path) +'_' + str(name) + '_val' + '.txt' with open(savename_eval,'a') as f: f.write(str(epoch)+'\t'+str(loss.item())+'\t'+str(acc.item())+'\n') </code></pre>
<p>What you are trying to do does not make sense. The plain BERT model was pre-trained using a combination of the masked-language-modelling objective and next-sentence prediction. So all it can do is predict masked tokens and predict whether two given sentences can appear next to each other in a text. Most importantly, it can provide embeddings.</p> <p>To use it for classification you have to add a classification head to the end of the model. Initially, the weights of that layer are randomly initialised. If you do not fine-tune that last layer, what do you really expect from random weights?</p> <p>If you really want to compare the fine-tuned model to a baseline, take the embedding vector from BERT and use a traditional ML model such as an SVM or a tree-based classifier.</p>
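A minimal sketch of that baseline (illustrative only: the 768-dimensional vectors below are random stand-ins for frozen BERT `pooler_output` embeddings, which in practice you would extract with the model above under `torch.no_grad()`):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for frozen 768-dim BERT pooler_output vectors: two classes of
# "texts" whose embeddings cluster in different regions of the space.
emb_class0 = rng.normal(loc=-1.0, scale=0.3, size=(50, 768))
emb_class1 = rng.normal(loc=+1.0, scale=0.3, size=(50, 768))
X = np.vstack([emb_class0, emb_class1])
y = np.array([0] * 50 + [1] * 50)

# The "traditional ML model" baseline: a linear SVM on the embeddings.
clf = LinearSVC()
clf.fit(X, y)
acc = clf.score(X, y)  # expect 1.0 on this trivially separable toy data
print(acc)
```

With real data you would also evaluate on a held-out split rather than the training set; this sketch only shows the shape of the embeddings-plus-SVM pipeline.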
1,193
fine-tuning
Problem in fine tuning DeepLabV3Plus using keras_cv for Semantic Segmentation
https://stackoverflow.com/questions/79645357/problem-in-fine-tuning-deeplabv3plus-using-keras-cv-for-semantic-segmentation
<p>I'm using <code>open-images-v7</code> dataset (accessing via <code>fiftyone</code> lib) and <code>keras_cv</code> lib to fine tune <code>DeepLabV3Plus</code> with <code>mobilenet_v3_small</code> backbone, but the accuracy doesn't improve with epochs at all, and I'm getting shape warning.</p> <p><strong>Dataset Preparation Code:</strong></p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import numpy as np def preprocess_sample(sample): img = tf.io.read_file(sample[&quot;filepath&quot;]) img = tf.image.decode_jpeg(img, channels=3) img.set_shape([None, None, 3]) img = tf.image.resize(img, (512, 512)) # img = img / 255.0 # Normalize for detection in sample.ground_truth.detections: if detection.label == 'Vehicle registration plate': mask = detection.mask.astype(np.float32) break mask = tf.expand_dims(mask, axis=-1) mask.set_shape([None, None, 1]) mask = tf.image.resize(mask, (512, 512), method=&quot;nearest&quot;) return img, mask # Convert FiftyOne samples to TF Dataset tf_train_dataset = tf.data.Dataset.from_generator( lambda: (preprocess_sample(s) for s in train_dataset), output_signature=( tf.TensorSpec(shape=(512, 512, 3), dtype=tf.float32), # Image tf.TensorSpec(shape=(512, 512, 1), dtype=tf.float32), # Mask ) ) tf_val_dataset = tf.data.Dataset.from_generator( lambda: (preprocess_sample(s) for s in val_dataset), output_signature=( tf.TensorSpec(shape=(512, 512, 3), dtype=tf.float32), # Image tf.TensorSpec(shape=(512, 512, 1), dtype=tf.float32), # Mask ) ) # Batch and shuffle tf_train_dataset = tf_train_dataset.batch(8).prefetch(tf.data.AUTOTUNE) tf_val_dataset = tf_val_dataset.batch(8).prefetch(tf.data.AUTOTUNE) </code></pre> <p><strong>Fine Tuning Code</strong></p> <pre class="lang-py prettyprint-override"><code>model = keras_cv.models.DeepLabV3Plus.from_preset( &quot;mobilenet_v3_small&quot;, input_shape=(512, 512, 3), num_classes=1, # activation=None # Remove final activation ) # outputs = 
tf.keras.layers.Activation(&quot;sigmoid&quot;)(model.output) # model = tf.keras.Model(inputs=model.input, outputs=outputs) model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-4), # 'adam' loss=&quot;binary_crossentropy&quot;, metrics=[&quot;binary_accuracy&quot;] # accuracy ) # Train model.fit( tf_train_dataset, validation_data=tf_val_dataset, epochs=5, callbacks=[ keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True), ], ) </code></pre> <p><strong>Error/Output</strong></p> <pre><code>Epoch 1/5 /usr/local/lib/python3.11/dist-packages/keras/src/models/functional.py:237: UserWarning: The structure of `inputs` doesn't match the expected structure. Expected: ['keras_tensor'] Received: inputs=Tensor(shape=(None, 512, 512, 3)) warnings.warn(msg) /usr/local/lib/python3.11/dist-packages/keras/src/ops/nn.py:907: UserWarning: You are using a softmax over axis -1 of a tensor of shape (None, 512, 512, 1). This axis has size 1. The softmax operation will always return the value 1, which is likely not what you intended. Did you mean to use a sigmoid instead? warnings.warn( 13/Unknown 200s 12s/step - binary_accuracy: 0.7506 - loss: 0.6739 /usr/local/lib/python3.11/dist-packages/keras/src/trainers/epoch_iterator.py:151: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset. 
self._interrupted_warning() 13/13 ━━━━━━━━━━━━━━━━━━━━ 211s 13s/step - binary_accuracy: 0.7506 - loss: 0.6725 - val_binary_accuracy: 0.6972 - val_loss: 0.6890 Epoch 2/5 13/13 ━━━━━━━━━━━━━━━━━━━━ 202s 13s/step - binary_accuracy: 0.7506 - loss: 0.5131 - val_binary_accuracy: 0.6972 - val_loss: 0.6835 Epoch 3/5 13/13 ━━━━━━━━━━━━━━━━━━━━ 182s 14s/step - binary_accuracy: 0.7506 - loss: 0.4288 - val_binary_accuracy: 0.6972 - val_loss: 0.6792 Epoch 4/5 13/13 ━━━━━━━━━━━━━━━━━━━━ 166s 13s/step - binary_accuracy: 0.7506 - loss: 0.3606 - val_binary_accuracy: 0.6972 - val_loss: 0.6756 Epoch 5/5 13/13 ━━━━━━━━━━━━━━━━━━━━ 166s 13s/step - binary_accuracy: 0.7506 - loss: 0.3141 - val_binary_accuracy: 0.6972 - val_loss: 0.6723 </code></pre>
<p>The loss function, as you have defined it, expects values in the form of probability values for a given class. If you do not apply an activation function to the model output, raw values (<strong>logits</strong>) are fed into the loss function.</p> <p>You can perform training without using an activation function, but when defining the loss function, you must specify that the values provided were NOT converted to probability values:</p> <pre><code>loss = tf.keras.losses.BinaryCrossentropy(from_logits=True) model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-4), # 'adam' loss=loss, metrics=[&quot;binary_accuracy&quot;] # accuracy ) </code></pre>
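The effect of `from_logits=True` can be checked with plain NumPy (a sketch of the arithmetic, not the Keras implementation): binary cross-entropy is only well-defined on probabilities, so raw logits must go through a sigmoid first, either explicitly or by telling the loss that they are logits.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(y_true, p):
    # binary cross-entropy; expects p to be probabilities in (0, 1)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
logits = np.array([2.0, -1.0, 0.5, -3.0])   # raw model outputs, no activation

# Correct: squash logits to probabilities first (this is what
# BinaryCrossentropy(from_logits=True) does internally).
good = bce(y_true, sigmoid(logits))

# Wrong: feeding raw logits into a probability-based loss.
with np.errstate(invalid="ignore", divide="ignore"):
    bad = bce(y_true, logits)

print(good)            # a finite, meaningful loss (about 0.24 here)
print(np.isnan(bad))   # True: log of a negative "probability"
```

Letting the loss handle the sigmoid internally (`from_logits=True`) is also the more numerically stable option.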
1,194
fine-tuning
Loss when starting fine tuning is higher than loss from transfer learning
https://stackoverflow.com/questions/72548173/loss-when-starting-fine-tuning-is-higher-than-loss-from-transfer-learning
<p>Since I start fine tuning with the weights learned by transfer learning, I would expect the loss to be the same or less. However it looks like it starts fine tuning using a different set of starting weights.</p> <p>Code to start transfer learning:</p> <pre><code>base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') base_model.trainable = False model = tf.keras.Sequential([ base_model, tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(units=3, activation='sigmoid') ]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) epochs = 1000 callback = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True) history = model.fit(train_generator, steps_per_epoch=len(train_generator), epochs=epochs, validation_data=val_generator, validation_steps=len(val_generator), callbacks=[callback],) </code></pre> <p>Output from last epoch:</p> <pre><code>Epoch 29/1000 232/232 [==============================] - 492s 2s/step - loss: 0.1298 - accuracy: 0.8940 - val_loss: 0.1220 - val_accuracy: 0.8937 </code></pre> <p>Code to start fine tuning:</p> <pre><code>model.trainable = True # Fine-tune from this layer onwards fine_tune_at = -20 # Freeze all the layers before the `fine_tune_at` layer for layer in model.layers[:fine_tune_at]: layer.trainable = False model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss='binary_crossentropy', metrics=['accuracy']) history_fine = model.fit(train_generator, steps_per_epoch=len(train_generator), epochs=epochs, validation_data=val_generator, validation_steps=len(val_generator), callbacks=[callback],) </code></pre> <p>But this is what I see for the first few epochs:</p> <pre><code>Epoch 1/1000 232/232 [==============================] - ETA: 0s - loss: 0.3459 - accuracy: 0.8409/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images 
&quot;Palette images with Transparency expressed in bytes should be &quot; 232/232 [==============================] - 509s 2s/step - loss: 0.3459 - accuracy: 0.8409 - val_loss: 0.7755 - val_accuracy: 0.7262 Epoch 2/1000 232/232 [==============================] - 502s 2s/step - loss: 0.1889 - accuracy: 0.9066 - val_loss: 0.5628 - val_accuracy: 0.8881 </code></pre> <p>Eventually the loss drops and passes the transfer learning loss:</p> <pre><code>Epoch 87/1000 232/232 [==============================] - 521s 2s/step - loss: 0.0232 - accuracy: 0.8312 - val_loss: 0.0481 - val_accuracy: 0.8563 </code></pre> <p>Why was the loss in the first epoch of fine tuning higher than the last loss from transfer learning?</p>
<p>According to the TensorFlow/Keras guide on transfer learning and fine-tuning (<a href="https://keras.io/guides/transfer_learning" rel="nofollow noreferrer">link</a>), the params of the Batch Norm layers should be left alone.</p> <blockquote> <p>Importantly, although the base model becomes trainable, it is still running in inference mode since we passed training=False when calling it when we built the model. This means that the batch normalization layers inside won't update their batch statistics. If they did, they would wreck havoc on the representations learned by the model so far.</p> </blockquote> <p>Below is what I did that fixed the issue of a sudden increase in loss after unfreezing layers. Note that the container itself must be trainable for the per-layer flags to take effect: a model whose own <code>trainable</code> flag is <code>False</code> reports no trainable weights regardless of its sublayers.</p> <pre><code>from tensorflow.keras import layers from tensorflow.keras.applications import MobileNet img_width, img_height, num_channel = 128, 128, 3 conv_base = MobileNet( include_top=False, input_shape=(img_width, img_height, num_channel), pooling=&quot;avg&quot;) conv_base.trainable = True # the container must stay trainable for layer in conv_base.layers[:-50]: # keep all but the top 50 layers frozen layer.trainable = False for layer in conv_base.layers[-50:]: # unfreeze the top 50 layers, # but leave the BatchNorm layers frozen layer.trainable = not isinstance(layer, layers.BatchNormalization) print(conv_base.summary(show_trainable=True)) # checking the layers' trainability </code></pre>
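To see numerically why letting the batch-norm statistics update would hurt, here is a small NumPy illustration (just the normalisation arithmetic, not Keras internals): the same batch normalised with the stored moving statistics versus with its own batch statistics produces very different downstream inputs, so flipping a BN layer into training mode mid-fine-tune abruptly shifts every activation the later layers see.

```python
import numpy as np

def batchnorm(x, mean, var, eps=1e-3):
    # batch-norm transform with gamma=1, beta=0 for simplicity
    return (x - mean) / np.sqrt(var + eps)

# Moving statistics accumulated during pre-training (hypothetical values).
moving_mean, moving_var = 0.0, 1.0

# A fine-tuning batch drawn from a differently distributed dataset.
rng = np.random.default_rng(42)
batch = rng.normal(loc=5.0, scale=2.0, size=1000)

# Inference mode (training=False): use the stored statistics.
out_inference = batchnorm(batch, moving_mean, moving_var)

# Training mode: use this batch's own statistics.
out_training = batchnorm(batch, batch.mean(), batch.var())

print(out_inference.mean())  # about 5: the stored stats no longer centre the data
print(out_training.mean())   # about 0: the batch stats do
```

The two modes disagree strongly whenever the fine-tuning data is distributed differently from the pre-training data, which is exactly the situation transfer learning is used in.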
1,195
fine-tuning
What is Fine Tuning in reference to Neural Network?
https://stackoverflow.com/questions/56680727/what-is-fine-tuning-in-reference-to-neural-network
<p>I'm going through a few research papers based on neural network, where I came across the word Fine Tuning on pre-trained CNN network. What does it actually do?</p>
<p><strong>Pre-trained:</strong></p> <p>Firstly we have to understand what a pre-trained model is. Pre-trained models are models whose weights have already been trained by someone on a dataset, e.g. VGG16 is trained on ImageNet. If we now want to classify ImageNet images, we can use the pre-trained VGG16 directly: because VGG16 is already trained to classify ImageNet objects, we don't need to train it again.</p> <p><strong>Fine-Tuning:</strong></p> <p>Now suppose I want to classify CIFAR-10 (10 classes) with VGG16 (1000 classes), and I want to use the pre-trained model for this. I have a model trained on ImageNet with 1000 output classes, so I will replace the last layer with a dense layer of 10 neurons and softmax activation, because now I want to classify 10 classes, not 1000. Changing a pre-trained model according to our needs is known as fine-tuning.</p> <p><strong>Transfer Learning:</strong></p> <p>The whole concept of taking a pre-trained model and using it to classify our own dataset by fine-tuning it is known as transfer learning.</p> <p><strong>Transfer-learning example (using a pre-trained model and fine-tuning it for my own dataset)</strong></p> <p>Here I am using DenseNet pre-trained on ImageNet and fine-tuning it to classify the images in my dataset,
and my dataset has 5 classes, so I am adding a final dense layer with 5 neurons:</p> <pre><code>model=Sequential() dense_model=keras.applications.densenet.DenseNet121(include_top=False, weights='imagenet', input_tensor=None, input_shape=(224,224,3), pooling=None, classes=1000) dense_model.trainable = False dense_model.summary() # Add the DenseNet convolutional base model model.add(dense_model) # Add new layers model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(128, activation='relu')) model.add(keras.layers.Dense(5, activation='softmax')) model.summary() </code></pre> <p>Pre-trained model link: <a href="https://www.kaggle.com/sohaibanwaar1203/pretrained-densenet" rel="nofollow noreferrer">https://www.kaggle.com/sohaibanwaar1203/pretrained-densenet</a></p> <p>Now what if I want to change the hyper-parameters of the pre-trained model? I want to check which optimizer, loss function, number of layers, and number of neurons work well on my dataset. For this I optimize those parameters, a process known as hyper-parameter optimization.</p> <p><strong>Hyper-parameter Optimization:</strong> if you know about neural networks, you will know that we hand them somewhat arbitrary numbers: e.g. the number of dense layers, the number of dense units, the activations, the dropout percentage. We don't know in advance whether a network with 3 layers or one with 6 layers will perform well on our data, so we experiment to find the best numbers for our model. This experimentation, in which you search for the best numbers for your model, is also commonly called fine-tuning. There are techniques to optimize the model, such as Grid Search and Random Search.
I am sharing notebook by which you will be able to Optimize your model parameters with the help of code.</p> <pre><code> import math from keras.wrappers.scikit_learn import KerasRegressor import keras from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import RandomizedSearchCV, KFold from sklearn.metrics import make_scorer from keras.models import Sequential,Model from keras.layers import Dense,Dropout,Activation,BatchNormalization from keras import losses from keras import optimizers from keras.callbacks import EarlyStopping from keras import regularizers def Randomized_Model(lr=0.0001,dropout=0.5,optimizer='Adam',loss='mean_squared_error', activation=&quot;relu&quot;,clipnorm=0.1, decay=1e-2,momentum=0.5,l1=0.01,l2=0.001, ): #Setting Numbers of units in Every dense layer according to the number of dense layers no_of_units_in_dense_layer=[] #backwards loop #setting up loss functions loss=losses.mean_squared_error if(loss=='mean_squared_error'): loss=losses.mean_squared_error if(loss==&quot;poisson&quot;): loss=keras.losses.poisson if(loss==&quot;mean_absolute_error&quot;): loss=keras.losses.mean_absolute_percentage_error if(loss==&quot;mean_squared_logarithmic_error&quot;): loss=keras.losses.mean_squared_logarithmic_error if(loss==&quot;binary_crossentropy&quot;): loss=keras.losses.binary_crossentropy if(loss==&quot;hinge&quot;): loss=keras.losses.hinge #setting up Optimizers opt=keras.optimizers.Adam(lr=lr, decay=decay, beta_1=0.9, beta_2=0.999) if optimizer==&quot;Adam&quot;: opt=keras.optimizers.Adam(lr=lr, decay=decay, beta_1=0.9, beta_2=0.999) if optimizer==&quot;Adagrad&quot;: opt=keras.optimizers.Adagrad(lr=lr, epsilon=None, decay=decay) if optimizer==&quot;sgd&quot;: opt=keras.optimizers.SGD(lr=lr, momentum=momentum, decay=decay, nesterov=False) if optimizer==&quot;RMSprop&quot;: opt=keras.optimizers.RMSprop(lr=lr, rho=0.9, epsilon=None, decay=0.0) if optimizer==&quot;Adamax&quot;: opt=keras.optimizers.Adamax(lr=lr, 
beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0) #model sequential model=Sequential() model.add(Dense(units=64,input_dim=30,activation=activation)) model.add(Dense(units=32,activation=activation)) model.add(Dense(units=8,activation=activation)) model.add(Dense(units=1)) model.compile(loss=loss ,optimizer=opt) return model params = {'lr': (0.0001, 0.01,0.0009,0.001,0.002 ), 'epochs': [50,100,25], 'dropout': (0, 0.2,0.4, 0.8), 'optimizer': ['Adam','Adagrad','sgd','RMSprop','Adamax'], 'loss': [&quot;mean_squared_error&quot;,&quot;hinge&quot;,&quot;mean_absolute_error&quot;,&quot;mean_squared_logarithmic_error&quot;,&quot;poisson&quot;], 'activation' :[&quot;relu&quot;,&quot;selu&quot;,&quot;linear&quot;,&quot;sigmoid&quot;], 'clipnorm':(0.0,0.5,1), 'decay':(1e-6,1e-4,1e-8), 'momentum':(0.9,0.5,0.2), 'l1': (0.01,0.001,0.0001), 'l2': (0.01,0.001,0.0001), } from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import RandomizedSearchCV, KFold from sklearn.metrics import make_scorer # model class to use in the scikit random search CV model = KerasRegressor(build_fn=Randomized_Model, epochs=30, batch_size=3, verbose=1) RandomizedSearchfit = RandomizedSearchCV(estimator=model, cv=KFold(3), param_distributions=params, verbose=1, n_iter=10, n_jobs=1) #having some problem in this line RandomizedSearch_result = RandomizedSearchfit.fit(X, Y ) </code></pre> <p>Now give your X and Y to this model it will find the best parameter selected by you in the <code>param_dict variable</code>. 
You can also check fine-tuning of CNN in this notebook (<a href="https://www.kaggle.com/sohaibanwaar1203/talos-hyper-parameter-optimization" rel="nofollow noreferrer">Click Here</a>) In this Notebook I am using Talos Library to fine tune my model.</p> <p>This is another notebook in which I am using SKLearn (Randomised and grid search )to fine tune my model (<a href="https://www.kaggle.com/sohaibanwaar1203/hyperparameter-tunning-and-cnn-visualization" rel="nofollow noreferrer">Click Here</a>)</p>
1,196
fine-tuning
Nodejs worker pool fine tuning
https://stackoverflow.com/questions/79201418/nodejs-worker-pool-fine-tuning
<p>If I understood correctly:</p> <ol> <li>by default nodejs provides a worker pool, whose size can be tuned with UV_THREADPOOL_SIZE (default: 4)</li> <li>a cpu or io intensive task can use thread from this worker pool (this is the case for node modules like DNS or Crypto) or create a dedicated worker pool</li> <li>total number of worker pools should not exceed the actual number of cpu capability of where the application is running</li> </ol> <p>My questions:</p> <ol> <li>how do you track the actual number of workers being used by your application ?</li> <li>when using third party modules (for instance mongoose or JSONStream), one should be aware of those modules strategy (use of default worker pool or dedicated worker pool). Does that mean that for every module used one should look into the code to determine its strategy?</li> <li>the use of node cluster (for instance with pm2) seems that it complicates the fine tuning of the number of workers, and that it'd be better to just scale out. Your advice on this matter?</li> </ol> <p>Thanks for your feedback</p>
1,197
fine-tuning
Fine-tuning BERT on SequenceClassification using Transformers framework
https://stackoverflow.com/questions/64805769/fine-tuning-bert-on-sequenceclassification-using-transformers-framework
<p>I am currently fine-tuning a BERT model on a sequence classification task. To do this, I am using the transformers framework. This requires a Batch input in a Trainer: <a href="https://huggingface.co/transformers/_modules/transformers/trainer.html" rel="nofollow noreferrer">https://huggingface.co/transformers/_modules/transformers/trainer.html</a></p> <p>The way fine-tuning works, is described here: <a href="https://huggingface.co/transformers/custom_datasets.html" rel="nofollow noreferrer">https://huggingface.co/transformers/custom_datasets.html</a> I think the Batch needs to look like I created it, but for some reason I keep getting errors. The picture shows a single item from the dataset.</p> <p>If I add the labels as tensor, a part of the model that converts labels to tensors gives an error. But when I add the labels as list I get: Expected input batch_size (16) to match target batch_size (2016). What is the correct way to give a Batch to the BERT model?</p> <p><a href="https://i.sstatic.net/d8hop.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d8hop.png" alt="What my dataSet object looks like" /></a></p> <p>Here is how I initialise the model:</p> <pre><code>training_args = TrainingArguments( output_dir='C:', # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='C:', # directory for storing logs logging_steps=10, ) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForSequenceClassification.from_pretrained(&quot;bert-base-uncased&quot;) data_collator = DataCollatorForTokenClassification(tokenizer) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above 
data_collator=data_collator, # train_dataset=train_dataset, # training dataset eval_dataset=test_dataset # evaluation dataset ) trainer.train() </code></pre>
1,198
fine-tuning
Fine tuning LayoutLmv3 using Cord-V2 dataset
https://stackoverflow.com/questions/78606543/fine-tuning-layoutlmv3-using-cord-v2-dataset
<p>I'm working on fine-tuning LayoutLMv3 using the CORD-v2 dataset. I'm struggling with the data preprocessing part, specifically on how to correctly extract the total amount (TTC) from the images. The examples I've found online seem to use the older CORD dataset, which has a different format. The new CORD-v2 dataset only includes images and ground truth labels.</p> <p>How to approach this?</p> <p>I've tried examples from YouTube and Hugging Face but haven't had any success.</p>
<p><em>I found a solution, you should create a label map for the data you want to extract, then Scale Bounding Boxes after that, and Detect Currency in Text because the problem is the dataset has a lot of different currencies and languages also so this is what I did</em></p> <p>label_map = { &quot;total.total_price&quot;: 1, &quot;other&quot;: 0 }</p> <pre><code>def scale_bbox(box, original_size, target_size=(1000, 1000)): x_scale = target_size[0] / original_size[0] y_scale = target_size[1] / original_size[1] return [int(box[0] * x_scale), int(box[1] * y_scale), int(box[2] * x_scale), int(box[3] * y_scale)] def detect_currency(text): currency_symbols = { '$': 'USD', '€': 'EUR', '£': 'GBP', '¥': 'JPY', '₹': 'INR', '₩': 'KRW', } for symbol, currency in currency_symbols.items(): if symbol in text: return currency return None def preprocess_data(examples): images = [] words = [] boxes = [] labels = [] original_size = (224, 224) currency_converter = CurrencyRates() for image, gt in zip(examples['image'], examples['ground_truth']): img = image.convert(&quot;RGB&quot;).resize(original_size) images.append(img) gt = json.loads(gt) batch_words = [] batch_boxes = [] batch_labels = [] for item in gt['valid_line']: for w in item['words']: text = w['text'] quad = w['quad'] bbox = scale_bbox([quad['x1'], quad['y1'], quad['x3'], quad['y3']], original_size) bbox = [min(max(0, coord), 1000) for coord in bbox] batch_words.append(text) batch_boxes.append(bbox) if item['category'] == 'total.total_price': try: total_amount_match = re.findall(r&quot;\d+\.\d{2}&quot;, text) if total_amount_match: total_amount = float(total_amount_match[0]) detected_currency = detect_currency(text) if detected_currency and detected_currency != 'USD': total_amount = currency_converter.convert(detected_currency, 'USD', total_amount) text = f&quot;{total_amount:.2f} USD&quot; except Exception as e: print(f&quot;Error processing text: {e}&quot;) batch_labels.append(label_map[&quot;total.total_price&quot;]) 
else: batch_labels.append(label_map[&quot;other&quot;]) words.append(batch_words) boxes.append(batch_boxes) labels.append(batch_labels) encoding = processor(images, words, boxes=boxes, word_labels=labels, truncation=True, padding=&quot;max_length&quot;, max_length=512, return_tensors=&quot;pt&quot;) return encoding </code></pre>
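As a quick sanity check of the coordinate step, the `scale_bbox` helper above can be exercised on its own; LayoutLM-family processors expect boxes normalised to a 0-1000 grid (the image size and box below are illustrative):

```python
def scale_bbox(box, original_size, target_size=(1000, 1000)):
    # rescale [x1, y1, x2, y2] from original_size pixels to target_size
    x_scale = target_size[0] / original_size[0]
    y_scale = target_size[1] / original_size[1]
    return [int(box[0] * x_scale), int(box[1] * y_scale),
            int(box[2] * x_scale), int(box[3] * y_scale)]

# A word box at (100, 150)-(300, 400) in a 500x500 image lands on the
# 0-1000 grid the processor expects.
box = scale_bbox([100, 150, 300, 400], original_size=(500, 500))
print(box)  # → [200, 300, 600, 800]
```

Checking a few boxes like this before training catches mismatches between the resize you apply to the image and the coordinate space the annotations were produced in.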
1,199
text classification
Text classification vs. Sentence classification
https://stackoverflow.com/questions/23460996/text-classification-vs-sentence-classification
<p>What's the difference between the two? Articles seem to treat them differently... that is, a paper would show research on either text classification <strong>or</strong> on sentence classification.</p> <p>I wonder - if one applied sentence classification on a whole text, and then classified the paragraph according to what most of its sentences were classified to - would that count as proper text classification? or does text classification have a different 'catch'? </p>
<p>A task or problem is about <strong>what to do</strong>, not <strong>how</strong>. So it does not matter <strong>how</strong> you approach text classification: it is always text classification if you <strong>classify text</strong>. That's all. You could toss a coin to classify it, and it would still "count as proper text classification" if it achieved good scores.</p> <p>Sentence classification can be seen as a "smaller scale" problem, as text classification is rather used in the context of bigger chunks of text (like documents). But no strict lines are drawn here. I would rather treat text classification as a general umbrella term under which you can put word-level tasks (like POS tagging), sentence classification, and sentiment analysis (at the level of words, sentences, paragraphs or documents), etc.</p>
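The aggregation the question proposes (classify each sentence, then label the text by what most of its sentences were classified as) is just a majority vote. A minimal sketch, where the sentence classifier is a made-up stand-in for any real model:

```python
from collections import Counter

def classify_sentence(sentence):
    # hypothetical stand-in for any sentence-level classifier
    return "positive" if "good" in sentence else "negative"

def classify_text(sentences):
    # text-level label = majority vote over sentence-level labels
    votes = Counter(classify_sentence(s) for s in sentences)
    return votes.most_common(1)[0][0]

paragraph = [
    "The plot was good.",
    "The acting was good too.",
    "The ending dragged.",
]
print(classify_text(paragraph))  # → positive
```

Whether this counts as "proper" text classification is, per the answer above, beside the point: it classifies text, so it is text classification, judged only by its scores.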
1,200
text classification
Multilabel Text Classification using TensorFlow
https://stackoverflow.com/questions/35400065/multilabel-text-classification-using-tensorflow
<p>The text data is organized as vector with 20,000 elements, like [2, 1, 0, 0, 5, ...., 0]. i-th element indicates the frequency of the i-th word in a text. </p> <p>The ground truth label data is also represented as vector with 4,000 elements, like [0, 0, 1, 0, 1, ...., 0]. i-th element indicates whether the i-th label is a positive label for a text. The number of labels for a text differs depending on texts. </p> <p>I have a code for single-label text classification. </p> <p>How can I edit the following code for multilabel text classification?</p> <p>Especially, I would like to know following points. </p> <ul> <li>How to compute accuracy using TensorFlow. </li> <li>How to set a threshold which judges whether a label is positive or negative. For instance, if the output is [0.80, 0.43, 0.21, 0.01, 0.32] and the ground truth is [1, 1, 0, 0, 1], the labels with scores over 0.25 should be judged as positive. </li> </ul> <p>Thank you. </p> <pre><code>import tensorflow as tf # hidden Layer class HiddenLayer(object): def __init__(self, input, n_in, n_out): self.input = input w_h = tf.Variable(tf.random_normal([n_in, n_out],mean = 0.0,stddev = 0.05)) b_h = tf.Variable(tf.zeros([n_out])) self.w = w_h self.b = b_h self.params = [self.w, self.b] def output(self): linarg = tf.matmul(self.input, self.w) + self.b self.output = tf.nn.relu(linarg) return self.output # output Layer class OutputLayer(object): def __init__(self, input, n_in, n_out): self.input = input w_o = tf.Variable(tf.random_normal([n_in, n_out], mean = 0.0, stddev = 0.05)) b_o = tf.Variable(tf.zeros([n_out])) self.w = w_o self.b = b_o self.params = [self.w, self.b] def output(self): linarg = tf.matmul(self.input, self.w) + self.b self.output = tf.nn.relu(linarg) return self.output # model def model(): h_layer = HiddenLayer(input = x, n_in = 20000, n_out = 1000) o_layer = OutputLayer(input = h_layer.output(), n_in = 1000, n_out = 4000) # loss function out = o_layer.output() cross_entropy = 
-tf.reduce_sum(y_*tf.log(out + 1e-9), name='xentropy') # regularization l2 = (tf.nn.l2_loss(h_layer.w) + tf.nn.l2_loss(o_layer.w)) lambda_2 = 0.01 # compute loss loss = cross_entropy + lambda_2 * l2 # compute accuracy for single label classification task correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, "float")) return loss, accuracy </code></pre>
<p>Change relu to sigmoid of output layer. Modify cross entropy loss to explicit mathematical formula of sigmoid cross entropy loss (explicit loss was working in my case/version of tensorflow )</p> <pre><code>import tensorflow as tf # hidden Layer class HiddenLayer(object): def __init__(self, input, n_in, n_out): self.input = input w_h = tf.Variable(tf.random_normal([n_in, n_out],mean = 0.0,stddev = 0.05)) b_h = tf.Variable(tf.zeros([n_out])) self.w = w_h self.b = b_h self.params = [self.w, self.b] def output(self): linarg = tf.matmul(self.input, self.w) + self.b self.output = tf.nn.relu(linarg) return self.output # output Layer class OutputLayer(object): def __init__(self, input, n_in, n_out): self.input = input w_o = tf.Variable(tf.random_normal([n_in, n_out], mean = 0.0, stddev = 0.05)) b_o = tf.Variable(tf.zeros([n_out])) self.w = w_o self.b = b_o self.params = [self.w, self.b] def output(self): linarg = tf.matmul(self.input, self.w) + self.b #changed relu to sigmoid self.output = tf.nn.sigmoid(linarg) return self.output # model def model(): h_layer = HiddenLayer(input = x, n_in = 20000, n_out = 1000) o_layer = OutputLayer(input = h_layer.output(), n_in = 1000, n_out = 4000) # loss function out = o_layer.output() # modified cross entropy to explicit mathematical formula of sigmoid cross entropy loss cross_entropy = -tf.reduce_sum( ( (y_*tf.log(out + 1e-9)) + ((1-y_) * tf.log(1 - out + 1e-9)) ) , name='xentropy' ) # regularization l2 = (tf.nn.l2_loss(h_layer.w) + tf.nn.l2_loss(o_layer.w)) lambda_2 = 0.01 # compute loss loss = cross_entropy + lambda_2 * l2 # compute accuracy for single label classification task correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, "float")) return loss, accuracy </code></pre>
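On the thresholding part of the question (a score above 0.25 counts as a positive label), here is a plain-NumPy sketch, separate from the TensorFlow graph above, using the exact numbers from the question:

```python
import numpy as np

threshold = 0.25

scores = np.array([0.80, 0.43, 0.21, 0.01, 0.32])   # sigmoid outputs
truth  = np.array([1, 1, 0, 0, 1])                   # ground-truth labels

pred = (scores > threshold).astype(int)

# element-wise "label accuracy" across all label positions
label_accuracy = (pred == truth).mean()

# "exact match": the whole predicted label vector must be right
exact_match = float(np.all(pred == truth))

print(pred)            # [1 1 0 0 1]
print(label_accuracy)  # 1.0
print(exact_match)     # 1.0
```

Both notions of accuracy are used for multilabel problems; element-wise accuracy is forgiving (one wrong label out of 4000 barely moves it), while exact match is strict, so it is worth reporting which one you compute.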
1,201
text classification
Text Classification using NLTK3 error?
https://stackoverflow.com/questions/27572923/text-classification-using-nltk3-error
<p>I'm working on Arabic text classification using NLTK3. I got the following error; can you please help me figure out the bug?</p> <pre><code>&quot;calssifier =NaiveBayesClassifier.train(train_set) File &quot;/usr/local/lib/python2.7/dist-packages/nltk/classify/naivebayes.py&quot;, line 194, in train for fname, fval in featureset.items(): AttributeError: 'unicode' object has no attribute 'items'&quot; </code></pre> <p>Thanks</p>
<p><strong>train_set</strong> contains unicode strings where NLTK expects feature dicts; that is what the <code>'unicode' object has no attribute 'items'</code> error is telling you.</p>
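For reference, `NaiveBayesClassifier.train` takes a list of `(featureset, label)` pairs, where every featureset is a dict mapping feature names to values. A sketch of the expected shape (the feature names and labels are made up):

```python
# The shape NaiveBayesClassifier.train expects: a list of
# (featureset, label) pairs, where each featureset is a dict.
train_set = [
    ({"contains(كتاب)": True, "length": 5}, "books"),
    ({"contains(رياضة)": True, "length": 3}, "sports"),
]

# The error in the question means a featureset was a unicode string
# instead -- e.g. train_set = [u"some text", ...] fails as soon as
# NLTK calls featureset.items().
for featureset, label in train_set:
    assert isinstance(featureset, dict)   # .items() exists
    assert isinstance(label, str)
print(len(train_set))  # → 2
```

So the fix is to run each raw text through a feature-extraction function that returns a dict, and pair that dict with its label, before calling `train`.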
1,202
text classification
Feature Selection in Text Classification
https://stackoverflow.com/questions/10309203/feature-selection-in-text-classification
<p>I'm currently studying on text classification, focusing on feature selection. Can anyone suggest me any software/program that I can use for text classification that provides feature selection function (particularly Information Gain, Chi Square, Odds Ratio, Mutual Information, etc.)? </p> <p>Thanks and best regards =)</p>
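As an illustration of one of the measures the question lists, the chi-square score for a single term/class pair can be computed directly from a 2x2 contingency table of document counts (the counts below are made up):

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic for a term/class 2x2 contingency table.

    n11: docs in the class containing the term
    n10: docs outside the class containing the term
    n01: docs in the class without the term
    n00: docs outside the class without the term
    """
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den

# A term occurring almost only inside the class scores high ...
print(chi_square(49, 1, 1, 49))    # → 92.16
# ... while a term spread evenly across classes scores zero.
print(chi_square(25, 25, 25, 25))  # → 0.0
```

Ranking terms by such a score and keeping the top k is the usual chi-square feature-selection recipe; ready-made implementations of it (and of information gain and mutual information) exist in scikit-learn (`SelectKBest` with `chi2` or `mutual_info_classif`) and in Weka's attribute-evaluation tools.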
1,203
text classification
unsupervised text classification with php
https://stackoverflow.com/questions/15305817/unsupervised-text-classification-with-php
<p>Are there any pre-made libraries for PHP that can be used to help with tasks involving unsupervised text classification <sup><a href="http://en.wikipedia.org/wiki/Document_classification#Automatic_document_classification" rel="nofollow">information</a></sup>?</p> <p>I've looked around the site at other questions, but I have been unable to find a similar problem.</p> <p>I would like to learn how to implement an unsupervised classification system.</p>
<p><a href="https://github.com/gburtini/Learning-Library-for-PHP" rel="nofollow">https://github.com/gburtini/Learning-Library-for-PHP</a></p> <p>Some general unsupervised algorithms are already implemented here. Maybe it will be useful for you.</p>
1,204
text classification
classification report for multilabel text classification?
https://stackoverflow.com/questions/67751096/classification-report-for-multilabel-text-classification
<p>I'm working on multilabel text classification. I tried to print the classification report for the model, but it prints for each class alone. How can I get the classification report for all classes together? This is part of the code.</p> <p>This code is for the labels:</p> <pre><code>categories = list(data_raw.columns.values) categories = categories[1:] </code></pre> <p>The evaluation:</p> <pre><code>def modelEvaluation(predictions, y_test_set): print(&quot;\nAccuracy on validation set: {:.4f}&quot;.format(accuracy_score(y_test_set, predictions))) print(&quot;\nClassification report : \n&quot;, metrics.classification_report(y_test_set, predictions)) print(&quot;\nConfusion Matrix : \n&quot;, multilabel_confusion_matrix(y_test_set, predictions)) </code></pre> <p>and this is for the ML part:</p> <pre><code>from sklearn.svm import LinearSVC SVC_pipeline = Pipeline([ ('clf', OneVsRestClassifier(LinearSVC(), n_jobs=1)), ]) for category in categories: printmd('**Processing {} comments...**'.format(category)) # Training logistic regression model on train data SVC_pipeline.fit(x_train, train[category]) # calculating test accuracy prediction = SVC_pipeline.predict(x_test) print('Test accuracy is {}'.format(accuracy_score(test[category], prediction))) print(&quot;\n&quot;) modelEvaluation(prediction, test[category]) </code></pre> <p>If I try to print the classification report alone like the code below, it gives me the result for the last class only:</p> <pre><code>from sklearn.metrics import classification_report print(&quot;\nClassification report : \n&quot;, metrics.classification_report(test[category], prediction)) </code></pre>
<p>Call it <strong>without</strong> <code>test[category]</code>, and provide the labels for the whole test set, which contain all the classes you built your model for.</p> <pre><code>print(&quot;\nClassification report : \n&quot;, metrics.classification_report(y_test, predictions)) </code></pre> <p>Here <code>y_test</code> holds the ground-truth labels (true outputs) for the test set <code>X_test</code>.</p> <p>You are passing the labels of a single category (<code>test[category]</code>) instead of the full label set (<code>y_test</code>), which is why you only see the report for the last class of the loop.</p>
1,205
text classification
BERT Text Classification
https://stackoverflow.com/questions/67140627/bert-text-classification
<p>I am new to BERT and try to learn BERT Fine-Tuning for Text Classification via a coursera course <a href="https://www.coursera.org/projects/fine-tune-bert-tensorflow/" rel="nofollow noreferrer">https://www.coursera.org/projects/fine-tune-bert-tensorflow/</a></p> <p>Based on the course, I would like to compare the text classification performance between BERT-12 and BERT-24 using 'SGD' and 'ADAM' optimizer respectively.</p> <p>I found that when I use BERT-12, the result is normal. However, when switching to BERT-24, though the accuracy is good (9X%), the recall and precision value are extremely low (even close to zero).</p> <p>May I know if there are anything wrong with my code?</p> <p>Also, in order to improve the precision and recall, should I add more dense layers and change the activation functions? And what are the optimal learning rate values that I should use?</p> <pre><code>import numpy as np import tensorflow as tf import tensorflow_hub as hub import sys sys.path.append('models') from official.nlp.data import classifier_data_lib from official.nlp.bert import tokenization from official.nlp import optimization import numpy as np import pandas as pd from sklearn.model_selection import train_test_split df= pd.read_csv('https://archive.org/download/fine-tune-bert-tensorflow-train.csv/train.csv.zip', compression='zip', low_memory=False) train_data_ratio = 0.1 val_data_ratio = 0.1 rand_seed = 42 train_df, remaining = train_test_split(df, random_state=rand_seed, train_size=train_data_ratio, stratify=df.target.values) valid_df, _ = train_test_split (remaining , random_state=rand_seed, train_size=val_data_ratio, stratify=remaining.target.values) #load data from main memory to cpu with tf.device('/cpu:0'): train_data = tf.data.Dataset.from_tensor_slices ((train_df['question_text'].values, train_df['target'].values)) valid_data = tf.data.Dataset.from_tensor_slices ((valid_df.question_text.values, valid_df.target.values)) &quot;&quot;&quot; Each line of the dataset is 
composed of the review text and its label - Data preprocessing consists of transforming text to BERT input features: input_word_ids, input_mask, segment_ids - In the process, tokenizing the text is done with the provided BERT model tokenizer &quot;&quot;&quot; label_list = [0,1] # Label categories max_seq_length = 128 # maximum length of (token) input sequences train_batch_size= 32 learning_rate = 0.001 num_layer = 24 # change between bert-12 and bert-24 to compare the diff epochs = 4 optimizer = 'SGD' assert num_layer in [12, 24] if num_layer == 12: train_batch_size = 32 elif num_layer == 24: train_batch_size = 4 assert optimizer in ['SGD', 'Adam'] if optimizer == 'Adam': opt = tf.keras.optimizers.Adam(learning_rate=learning_rate) elif optimizer == 'SGD': opt = tf.keras.optimizers.SGD(learning_rate=learning_rate) # Get BERT layer and tokenizer: https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2 bert_12 = &quot;https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2&quot; bert_24 = &quot;https://tfhub.dev/tensorflow/bert_en_wwm_uncased_L-24_H-1024_A-16/2&quot; if num_layer == 12: bert_layer = hub.KerasLayer(bert_12, trainable=True) elif num_layer == 24: bert_layer = hub.KerasLayer(bert_24, trainable=True) vocab_file = bert_layer.resolved_object.vocab_file.asset_path.numpy() #from tensor to numpy do_lower_case = bert_layer.resolved_object.do_lower_case.numpy() #check if it is lower case (no conversion. 
to check better) tokenizer = tokenization.FullTokenizer (vocab_file, do_lower_case) # from data to features that can be understood by bert def to_feature(text, label, label_list=label_list, max_seq_length=max_seq_length, tokenizer=tokenizer): example = classifier_data_lib.InputExample(guid=None, text_a=text.numpy(), text_b=None, label=label.numpy()) feature=classifier_data_lib.convert_single_example(0,example,label_list,max_seq_length, tokenizer) return (feature.input_ids, feature.input_mask, feature.segment_ids, feature.label_id) def to_feature_map(text, label): input_ids, input_mask, segment_ids, label_id = tf.py_function(to_feature, inp=[text, label], Tout=[tf.int32, tf.int32, tf.int32, tf.int32]) input_ids.set_shape([max_seq_length]) input_mask.set_shape([max_seq_length]) segment_ids.set_shape([max_seq_length]) label_id.set_shape([]) x = { 'input_word_ids': input_ids, 'input_mask': input_mask, 'input_type_ids': segment_ids } return (x, label_id) with tf.device('/cpu:0'): # train train_data = (train_data.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) #.cache() .shuffle(1000) .batch(train_batch_size, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) # valid valid_data = (valid_data.map(to_feature_map, num_parallel_calls=tf.data.experimental.AUTOTUNE) .batch(train_batch_size, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) # Building the model def create_model(): input_word_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name=&quot;input_word_ids&quot;) input_mask = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name=&quot;input_mask&quot;) input_type_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32, name=&quot;input_type_ids&quot;) pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, input_type_ids]) drop = tf.keras.layers.Dropout(0.4)(pooled_output) output = tf.keras.layers.Dense(1, activation=&quot;sigmoid&quot;, 
name=&quot;output&quot;)(drop) model = tf.keras.Model( inputs={ 'input_word_ids': input_word_ids, 'input_mask': input_mask, 'input_type_ids': input_type_ids }, outputs=output) return model model = create_model() model.compile(optimizer=optimizer, loss=tf.keras.losses.BinaryCrossentropy(), #metrics=[tf.keras.metrics.BinaryAccuracy()]) metrics=[tf.keras.metrics.Recall(),tf.keras.metrics.Precision()]) epochs = epochs history = model.fit(train_data, validation_data=valid_data, epochs=epochs, verbose=1) import matplotlib.pyplot as plt def plot_graphs(history, metric): plt.plot(history.history[metric]) plt.plot(history.history['val_'+metric], '') plt.xlabel(&quot;Epochs&quot;) plt.ylabel(metric) plt.legend([metric, 'val_'+metric]) plt.show() </code></pre> <p>Thank you very much !</p>
<p>Maybe try adding precision and recall to a custom callback function so you can inspect what's going on. I've added a debug point (<code>pdb.set_trace()</code>) so the process will pause once the first epoch has ended and you can step through each point to investigate the data.</p> <pre><code>from sklearn.metrics import precision_score, recall_score import pdb class Callbacks(tf.keras.callbacks.Callback): def __init__(self, valid_data): super(Callbacks, self).__init__() self.valid_data = valid_data def on_epoch_end(self, epoch, logs={}): pdb.set_trace() val_x = self.valid_data[:-1] # Get bert inputs val_y = self.valid_data[-1] # Get labels # Get predictions for the filtered val data val_scores = self.model.predict(val_x) # Get indices of best predictions - you might need to alter this val_y_pred = tf.argmax(val_scores, axis=1) val_y_true = tf.argmax(val_y, axis=1) # Calculate precision and recall precision = precision_score(val_y_true, val_y_pred, average='weighted') recall = recall_score(val_y_true, val_y_pred, average='weighted') # Add scores to logs to see in training output logs['precision'] = precision logs['recall'] = recall </code></pre> <p>To pass the validation data to the callback you'll need to add something like the below to your fit function:</p> <pre><code>cbs = Callbacks(valid_data) model.fit(...., callbacks=[cbs]) </code></pre>
1,206
text classification
TensorFlow - Text Classification using Neural Networks
https://stackoverflow.com/questions/33705284/tensorflow-text-classification-using-neural-networks
<p>Is there any example of how TensorFlow can be used for text classification using neural networks?</p>
<p>I've started putting together a set of examples for text classification on the DBPedia dataset (predicting the class of an object from its description) as part of the examples for <strong>Scikit Flow</strong>: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/text_classification.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/text_classification.py</a></p> <p>I'm going to expand this example and write a blog post when I have enough different models showcased. Feel free to suggest other datasets and models you would be interested to see.</p>
1,207
text classification
How to do Text classification using word2vec
https://stackoverflow.com/questions/49643974/how-to-do-text-classification-using-word2vec
<p>I want to perform text classification using word2vec. I got vectors of words.</p> <pre><code>ls = [] sentences = lines.split(".") for i in sentences: ls.append(i.split()) model = Word2Vec(ls, min_count=1, size = 4) words = list(model.wv.vocab) print(words) vectors = [] for word in words: vectors.append(model[word].tolist()) data = np.array(vectors) data </code></pre> <p>output:</p> <pre><code>array([[ 0.00933912, 0.07960335, -0.04559333, 0.10600036], [ 0.10576613, 0.07267512, -0.10718666, -0.00804013], [ 0.09459028, -0.09901826, -0.07074171, -0.12022413], [-0.09893986, 0.01500741, -0.04796079, -0.04447284], [ 0.04403428, -0.07966098, -0.06460238, -0.07369237], [ 0.09352681, -0.03864434, -0.01743148, 0.11251986],.....]) </code></pre> <p>How can i perform classification (product &amp; non product)?</p>
<p>You already have the array of word vectors in <code>model.wv.syn0</code>. If you print it, you can see an array with the corresponding vector of each word.</p> <p>You can see an example here using <strong>Python3</strong>:</p> <pre><code>import pandas as pd import os import gensim import nltk as nl from sklearn.linear_model import LogisticRegression #Reading a csv file with text data dbFilepandas = pd.read_csv('machine learning\\Python\\dbSubset.csv').apply(lambda x: x.astype(str).str.lower()) train = [] #getting only the first 4 columns of the file for sentences in dbFilepandas[dbFilepandas.columns[0:4]].values: train.extend(sentences) # Create an array of tokens using nltk tokens = [nl.word_tokenize(sentences) for sentences in train] </code></pre> <p>Now it's time to use the vector model; in this example we will fit a LogisticRegression.</p> <pre><code># method 1 - using tokens in Word2Vec class itself so you don't need to train again with train method model = gensim.models.Word2Vec(tokens, size=300, min_count=1, workers=4) # method 2 - creating an object 'model' of Word2Vec and building vocabulary for training our model model = gensim.models.Word2Vec(size=300, min_count=1, workers=4) # building vocabulary for training model.build_vocab(tokens) print(&quot;\n Training the word2vec model...\n&quot;) # reducing the epochs will decrease the computation time model.train(tokens, total_examples=len(tokens), epochs=4000) # You can save your model if you want.... # The two datasets must be the same size max_dataset_size = len(model.wv.syn0) Y_dataset = [] # get the last number of each file. In this case it is the department number # this will be the 0 or 1, or another kind of classification (to use words you need to extract them differently; this way is for numbers) with open(&quot;dbSubset.csv&quot;, &quot;r&quot;) as f: for line in f: lastchar = line.strip()[-1] if lastchar.isdigit(): result = int(lastchar) Y_dataset.append(result) else: result = 40 clf = LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial').fit(model.wv.syn0, Y_dataset[:max_dataset_size]) # Prediction of the first 15 samples of all features predict = clf.predict(model.wv.syn0[:15, :]) # Calculating the score of the predictions score = clf.score(model.wv.syn0, Y_dataset[:max_dataset_size]) print(&quot;\nPrediction word2vec : \n&quot;, predict) print(&quot;Score word2vec : \n&quot;, score) </code></pre> <p>You can also calculate the similarity of words belonging to your created model's dictionary:</p> <pre><code>print(&quot;\n\nSimilarity value : &quot;,model.wv.similarity('women','men')) </code></pre> <p>You can find more functions to use <a href="https://radimrehurek.com/gensim/similarities/docsim.html" rel="nofollow noreferrer">here</a>.</p>
1,208
text classification
Text classification using Weka
https://stackoverflow.com/questions/22587682/text-classification-using-weka
<p>I'm a beginner to Weka and I'm trying to use it for text classification. I have seen how to use the StringToWordVector filter for classification. My question is, is there any way to add more features to the text I'm classifying? For example, if I wanted to add POS tags and named entity tags to the text, how would I use these features in a classifier?</p>
<p>It depends on the format of your dataset and the preprocessing steps you perform. For instance, let us suppose that you have pre-POS-tagged your texts, looking like:</p> <blockquote> <p>The_det dog_n barks_v ._p</p> </blockquote> <p>You can build a specific tokenizer (see <code>weka.core.tokenizers</code>) to generate two tokens per word: one would be "The" and the other one would be "The_det", so you keep the tag information.</p> <p>If you want only tagged words, then you can just ensure that "_" is not a delimiter in the <code>weka.core.tokenizers.WordTokenizer</code>.</p> <p>My advice is to have both the words and the tagged words, so a simpler way would be to write a script that joins the texts and the tagged texts. From a file containing "The dog barks" and another one containing "The_det dog_n barks_v ._p", it would generate a file with "The The_det dog dog_n barks barks_v . ._p". You may even forget about the order unless you are going to make use of n-grams.</p>
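The join script described in the last paragraph could look something like this sketch (the function name is mine, for illustration):

```python
def merge_plain_and_tagged(plain_line, tagged_line):
    """Interleave each word with its tagged form, e.g.
    'The dog barks .' + 'The_det dog_n barks_v ._p'
    -> 'The The_det dog dog_n barks barks_v . ._p'."""
    pairs = zip(plain_line.split(), tagged_line.split())
    return " ".join(word + " " + tagged for word, tagged in pairs)

merged = merge_plain_and_tagged("The dog barks .", "The_det dog_n barks_v ._p")
print(merged)  # The The_det dog dog_n barks barks_v . ._p
```

Running this over paired lines of the two files produces the merged corpus that Weka's WordTokenizer can then split on whitespace.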
1,209
text classification
How to do text classification with DeepPavlov
https://stackoverflow.com/questions/53445949/how-to-do-text-classification-with-deeppavlov
<p>I am interested in doing text classification with <a href="http://deeppavlov.ai" rel="nofollow noreferrer">DeepPavlov</a> chatbot framework. </p> <p>The problem is I don't have enough training data. Ideally, I would like to do text classification with just few samples for each class.</p>
<p>You should check out <a href="http://deeppavlov.ai" rel="noreferrer">DeepPavlov's</a> <a href="https://github.com/deepmipt/DeepPavlov/blob/master/deeppavlov/configs/faq/" rel="noreferrer">autoFAQ models</a>. These models were specifically developed to be effective when training data is limited.</p> <p>There are a few models at your disposal:</p> <ul> <li><p>tf-idf based models</p></li> <li><p>fastText models</p></li> <li><p>a mix of both</p></li> </ul> <p>Change the dataset source in the configuration file and train the model by running</p> <pre><code>python -m deeppavlov train tfidf_logreg_en_faq </code></pre> <p>You can interact with the trained model either via the command line</p> <pre><code>python -m deeppavlov interact tfidf_logreg_en_faq -d </code></pre> <p>or via Python code:</p> <pre><code>from deeppavlov.core.commands.infer import build_model faq = build_model("tfidf_logreg_en_faq", load_trained = True, download = True) a = faq(["I need help"]) a </code></pre> <p>You can find all the required code snippets in the <a href="https://colab.research.google.com/drive/1AJMXgnAZ1PyPM4sDLIpk7mNqhQx0lRSe" rel="noreferrer">colab notebook</a>.</p>
1,210
text classification
Text Classification - DNN
https://stackoverflow.com/questions/60962683/text-classification-dnn
<p>I am performing text classification using a deep neural network. My problem is that I am getting 98% accuracy on training data, whereas my validation accuracy is 49%.</p> <p>I have tried the following:</p> <ol> <li>Shuffled the data</li> <li>My train and validation data is an 80:20 split</li> <li>I am using 100-dimensional GloVe vectors</li> </ol> <p>Any suggestions?</p> <pre class="lang-py prettyprint-override"><code>def get_Model(): model = tf.keras.Sequential([ tf.keras.layers.Embedding(vocab_size+1, embedding_dim, input_length=max_length, weights=[embeddings_matrix], trainable=False), tf.keras.layers.Dropout(0.2), tf.keras.layers.Conv1D(64, 5, activation='relu'), tf.keras.layers.MaxPooling1D(pool_size=4), tf.keras.layers.LSTM(64), tf.keras.layers.Dense(5, activation='softmax') ]) model.compile(loss='sparse_categorical_crossentropy',optimizer="adam",metrics=['acc']) model.summary() return model </code></pre>
<p>Your model is clearly overfitting. Standard tricks to prevent overfitting are:</p> <ul> <li>Adding dropout,</li> <li>L2 regularization,</li> <li>Trying a smaller model.</li> </ul> <p>It is rather unusual to use convolutions and LSTMs at the same time (although it is perfectly fine). Perhaps keeping only one of them is the best way to make the network smaller.</p> <p>My guess is that you are working with a rather small dataset. A bigger dataset also helps to prevent overfitting, but collecting more data is usually not actionable advice.</p>
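To make the L2 suggestion concrete: L2 regularization simply adds a λ·Σw² term to the loss, so large weights (a common symptom of overfitting) are penalized. A stdlib-only numeric illustration with made-up numbers:

```python
def l2_penalty(weights, lam=0.01):
    # L2 regularization term: lam * sum of squared weights.
    # The larger the weights, the bigger the penalty added to the loss.
    return lam * sum(w * w for w in weights)

data_loss = 0.65                # e.g. cross-entropy on a batch (arbitrary)
weights = [0.5, -1.0, 2.0]      # toy weight values
total_loss = data_loss + l2_penalty(weights)
print(l2_penalty(weights))  # 0.0525
```

In Keras this is typically applied per layer via a kernel regularizer rather than computed by hand; the snippet only shows the arithmetic behind the idea.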
1,211
text classification
multiclass text classification
https://stackoverflow.com/questions/65656826/multiclass-text-classification
<p>Why my <code>lstm</code> model is getting better accuracy than my bi <code>lstm</code> model? (multi-class text classification with 5 classes using word2vec and <code>lstm</code>) I tried to find the answer in any paper but I can't find it, almost all the paper said <code>bilstm</code> can improve the accuracy, can someone explain and give the references? Thanks</p> <p>This is for 5 classes using <code>lstm</code></p> <pre><code>Epoch 45/50 205/205 [==============================] - 284s 1s/step - loss: 0.6703 - accuracy: 0.7712 - val_loss: 0.9680 - val_accuracy: 0.6946 Epoch 46/50 205/205 [==============================] - 286s 1s/step - loss: 0.6571 - accuracy: 0.7709 - val_loss: 0.9682 - val_accuracy: 0.6937 Epoch 47/50 205/205 [==============================] - 282s 1s/step - loss: 0.6682 - accuracy: 0.7687 - val_loss: 0.9707 - val_accuracy: 0.6995 Epoch 48/50 205/205 [==============================] - 292s 1s/step - loss: 0.6658 - accuracy: 0.7681 - val_loss: 0.9847 - val_accuracy: 0.6961 Epoch 49/50 205/205 [==============================] - 288s 1s/step - loss: 0.6650 - accuracy: 0.7658 - val_loss: 0.9901 - val_accuracy: 0.6961 Epoch 50/50 205/205 [==============================] - 279s 1s/step - loss: 0.6629 - accuracy: 0.7711 - val_loss: 0.9821 - val_accuracy: 0.6921 this is for 5 classes using bilstm Epoch 45/50 205/205 [==============================] - 313s 2s/step - loss: 0.6071 - accuracy: 0.7859 - val_loss: 0.9831 - val_accuracy: 0.7025 Epoch 46/50 205/205 [==============================] - 310s 2s/step - loss: 0.5971 - accuracy: 0.8002 - val_loss: 0.9834 - val_accuracy: 0.6888 Epoch 47/50 205/205 [==============================] - 316s 2s/step - loss: 0.5976 - accuracy: 0.7966 - val_loss: 1.0056 - val_accuracy: 0.6989 Epoch 48/50 205/205 [==============================] - 346s 2s/step - loss: 0.5858 - accuracy: 0.8020 - val_loss: 0.9978 - val_accuracy: 0.6964 Epoch 49/50 205/205 [==============================] - 322s 2s/step - loss: 0.5941 - 
accuracy: 0.7977 - val_loss: 1.0099 - val_accuracy: 0.6912 Epoch 50/50 205/205 [==============================] - 327s 2s/step - loss: 0.5886 - accuracy: 0.7987 - val_loss: 1.0049 - val_accuracy: 0.6986 </code></pre>
1,212
text classification
Text Classification with word2vec
https://stackoverflow.com/questions/57525190/text-classification-with-word2vec
<p>I am doing text classification and plan to use word2vec word embeddings. I have used the gensim module for word2vec training.</p> <p>I have tried several options, but I am getting an error that word 'xyz' is not in the vocabulary. I am not able to find my mistake.</p> <h1>Text processing</h1> <pre><code>def clean_text(text): text = text.translate(string.punctuation) text = text.lower().split() stops = set(stopwords.words("english")) text = [w for w in text if not w in stops] text = " ".join(text) text = re.sub(r"[^\w\s]", " ",text) text = re.sub(r"[^A-Za-z0-9^,!.\/'+-=]", " ",text) text = text.split() lemmatizer = WordNetLemmatizer() lemmatized_words = [lemmatizer.lemmatize(w) for w in text] text = " ".join(lemmatized_words) return text data['text'] = data['text'].map(lambda x: clean_text(x)) </code></pre> <p>Please help me to solve my issue.</p> <h1>Defining Corpus</h1> <pre><code>def build_corpus(data): "Creates a list of lists containing words from each sentence" corpus = [] for col in ['text']: for sentence in data[col].iteritems(): word_list = sentence[1].split(" ") corpus.append(word_list) return corpus corpus = build_corpus(data) </code></pre> <h1>Word2vec model</h1> <pre><code>from gensim.models import word2vec model = word2vec.Word2Vec(corpus, size=100, window=20, min_count=20, workers=12, sg=1) words = list(model.wv.vocab) tokenizer = Tokenizer() X = data.text tokenizer.fit_on_texts(X) sequences = tokenizer.texts_to_sequences(X) X = pad_sequences(sequences, maxlen=10000) embedding_vector_size=100 vocab_size = len(words) embedding_matrix = np.zeros((vocab_size, embedding_vector_size)) for index, word in enumerate(words): embedding_vector = model.wv[word] if embedding_vector is not None: embedding_matrix[index] = embedding_vector </code></pre> <p>Now I am using my created word embeddings in the downstream classification task.</p> <h1>Classification model</h1> <pre><code>labels = data['Priority'] </code></pre> <p>where I have two priorities. 
I want to classify them.</p> <pre><code>X_train, X_test, y_train, y_test = train_test_split(X , labels, test_size=0.25, random_state=42) </code></pre> <p>I am using the following network for classification:</p> <pre><code>model3 = Sequential() model3.add(Embedding(input_dim = vocab_size, output_dim = embedding_vector_size, input_length = max_len, weights=[embedding_matrix])) model3.add(SpatialDropout1D(0.7)) model3.add(LSTM(64, dropout=0.7, recurrent_dropout=0.7)) model3.add(Dense(2, activation='softmax')) model3.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc']) print(model3.summary()) </code></pre> <p>I am getting an error here:</p> <pre><code>'ValueError: "input_length" is 10000, but received input has shape (None, 3)' </code></pre> <p>Please help me solve it. Thank you.</p>
<p>Not all words from <em>corpus</em> will be kept in the word2vec model. </p> <p>Replace:</p> <pre><code>vocab_size = len(tokenizer.word_index) + 1 </code></pre> <p>With:</p> <pre><code>vocab_size = len(words) </code></pre> <p>And replace:</p> <pre><code>for word, i in tokenizer.word_index.items(): </code></pre> <p>With:</p> <pre><code>for i, word in enumerate(words): </code></pre> <p>Thus ensuring your embedding matrix contains only words that are in the model. </p>
1,213
text classification
How to represent text documents as feature vectors for text classification?
https://stackoverflow.com/questions/9273536/how-to-represent-text-documents-as-feature-vectors-for-text-classification
<p>I have around 10,000 text documents.</p> <p>How can I represent them as feature vectors, so that I can use them for text classification?</p> <p>Is there any tool which does the feature vector representation automatically?</p>
<p>The easiest approach is to go with the <a href="http://en.wikipedia.org/wiki/Bag_of_words_model">bag of words</a> model. You represent each document as an unordered collection of words.</p> <p>You probably want to strip out punctuation and you may want to ignore case. You might also want to remove common words like 'and', 'or' and 'the'.</p> <p>To adapt this into a feature vector you could choose (say) 10,000 representative words from your sample, and have a binary vector <code>v[i,j] = 1</code> if document <code>i</code> contains word <code>j</code> and <code>v[i,j] = 0</code> otherwise.</p>
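A minimal stdlib-only sketch of the binary vector described above (the vocabulary and document are made up for illustration):

```python
def bow_vector(document, vocabulary):
    # v[j] = 1 if the document contains vocabulary word j, else 0
    words = set(document.lower().split())
    return [1 if word in words else 0 for word in vocabulary]

vocabulary = ["dog", "cat", "barks", "the"]
print(bow_vector("The dog barks", vocabulary))  # [1, 0, 1, 1]
```

In practice the vocabulary would be the ~10,000 representative words chosen from the corpus, and each of the 10,000 documents would be mapped to one such row.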
1,214
text classification
LSTM Text Classification Bad Accuracy Keras
https://stackoverflow.com/questions/51962128/lstm-text-classification-bad-accuracy-keras
<p>I'm going crazy with this project. This is multi-label text classification with an LSTM in Keras. My model is this:</p> <pre><code>model = Sequential() model.add(Embedding(max_features, embeddings_dim, input_length=max_sent_len, mask_zero=True, weights=[embedding_weights] )) model.add(Dropout(0.25)) model.add(LSTM(output_dim=embeddings_dim , activation='sigmoid', inner_activation='hard_sigmoid', return_sequences=True)) model.add(Dropout(0.25)) model.add(LSTM(activation='sigmoid', units=embeddings_dim, recurrent_activation='hard_sigmoid', return_sequences=False)) model.add(Dropout(0.25)) model.add(Dense(num_classes)) model.add(Activation('sigmoid')) adam=keras.optimizers.Adam(lr=0.04) model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy']) </code></pre> <p>The problem is that my accuracy is too low. With binary crossentropy I get good accuracy, but the results are wrong! Changing to categorical crossentropy, I get very low accuracy. Do you have any suggestions?</p> <p>Here is my code: <a href="http://github.com/ancileddu/multi-label-text-classification" rel="nofollow noreferrer">GitHubProject - Multi-Label-Text-Classification</a></p>
<p>In the last layer, the activation function you are using is <code>sigmoid</code>, so <code>binary_crossentropy</code> should be used. In case you want to use <code>categorical_crossentropy</code>, use <code>softmax</code> as the activation function in the last layer.</p> <p>Now, coming to the other part of your model: since you are working with text, I would suggest <code>tanh</code> as the activation function in the LSTM layers.</p> <p>You can also try using the LSTM's dropouts, <code>dropout</code> and <code>recurrent_dropout</code>:</p> <pre><code>LSTM(units, dropout=0.2, recurrent_dropout=0.2, activation='tanh') </code></pre> <p>You can define units as <code>64</code> or <code>128</code>. Start from a small number and, after testing, take them up to <code>1024</code>.</p> <p>You can also try adding a <code>convolution</code> layer for extracting features, or use a <code>Bidirectional LSTM</code>, but <code>Bidirectional</code>-based models take time to train.</p> <p>Moreover, since you are working with text, <code>pre-processing of the text and the size of the training data</code> always play a much bigger role than expected.</p> <p><strong>Edited</strong></p> <p>Add class weights in the fit parameters:</p> <pre><code>class_weights = class_weight.compute_class_weight('balanced', np.unique(labels), labels) class_weights_dict = dict(zip(le.transform(list(le.classes_)), class_weights)) model.fit(x_train, y_train, validation_split, class_weight=class_weights_dict) </code></pre>
1,215
text classification
Text Classification Using spaCy
https://stackoverflow.com/questions/73457037/text-classification-using-spacy
<p>I was trying to do some text classification with spacy but i get an error about my vucabulary being empty.</p> <p>I tried a classic dataset but i get the same error, i've seen some suggestion to split the text part but i have many lines not a huge one.</p> <p>this is the code:</p> <pre><code># df_amazon = pd.read_csv(&quot;amazon_alexa.tsv&quot;,sep=&quot;\t&quot;) bow_vector = CountVectorizer(tokenizer = spacy_tokenizer, ngram_range = (1,1)) tfidf_vector = TfidfVectorizer(tokenizer = spacy_tokenizer) classifier = LogisticRegression() pipe = Pipeline ([(&quot;cleaner&quot;, predictors()), (&quot;vectorizer&quot;, bow_vector), (&quot;classifier&quot;, classifier)]) pipe.fit(X_train, y_train) -------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-91-b5a14e655d5a&gt; in &lt;module&gt; 10 11 # Model generation ---&gt; 12 pipe.fit(X_train, y_train) ~\anaconda3\lib\site-packages\sklearn\pipeline.py in fit(self, X, y, **fit_params) 339 &quot;&quot;&quot; 340 fit_params_steps = self._check_fit_params(**fit_params) --&gt; 341 Xt = self._fit(X, y, **fit_params_steps) 342 with _print_elapsed_time('Pipeline', 343 self._log_message(len(self.steps) - 1)): ~\anaconda3\lib\site-packages\sklearn\pipeline.py in _fit(self, X, y, **fit_params_steps) 301 cloned_transformer = clone(transformer) 302 # Fit or load from cache the current transformer --&gt; 303 X, fitted_transformer = fit_transform_one_cached( 304 cloned_transformer, X, y, None, 305 message_clsname='Pipeline', ~\anaconda3\lib\site-packages\joblib\memory.py in __call__(self, *args, **kwargs) 350 351 def __call__(self, *args, **kwargs): --&gt; 352 return self.func(*args, **kwargs) 353 354 def call_and_shelve(self, *args, **kwargs): ~\anaconda3\lib\site-packages\sklearn\pipeline.py in _fit_transform_one(transformer, X, y, weight, message_clsname, message, **fit_params) 752 with _print_elapsed_time(message_clsname, message): 753 if 
hasattr(transformer, 'fit_transform'): --&gt; 754 res = transformer.fit_transform(X, y, **fit_params) 755 else: 756 res = transformer.fit(X, y, **fit_params).transform(X) ~\anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in fit_transform(self, raw_documents, y) 1200 max_features = self.max_features 1201 -&gt; 1202 vocabulary, X = self._count_vocab(raw_documents, 1203 self.fixed_vocabulary_) 1204 ~\anaconda3\lib\site-packages\sklearn\feature_extraction\text.py in _count_vocab(self, raw_documents, fixed_vocab) 1131 vocabulary = dict(vocabulary) 1132 if not vocabulary: -&gt; 1133 raise ValueError(&quot;empty vocabulary; perhaps the documents only&quot; 1134 &quot; contain stop words&quot;) 1135 ValueError: empty vocabulary; perhaps the documents only contain stop words </code></pre>
<p>It looks like you're just using the spaCy tokenizer? I'm not sure what's going on, but you should check the output of the tokenizer on your documents.</p> <p>Note that while I think you can use the tokenizer that way, it would be more typical to use a blank pipeline, like this:</p> <pre><code>import spacy nlp = spacy.blank(&quot;en&quot;) words = [tok.text for tok in nlp(&quot;this is my input text&quot;)] </code></pre>
1,216
text classification
Text Classification using MALLET
https://stackoverflow.com/questions/31367381/text-classification-using-mallet
<p>I'm new to using Mallet. I usually use WEKA for classification, and now I'm trying to use Mallet for text classification. In Weka, there are attributes (such as word length or top-n word occurrence) that we choose ourselves and make the .arff file. </p> <p>I have read about the input format for Mallet in <a href="http://mallet.cs.umass.edu/import.php" rel="nofollow">http://mallet.cs.umass.edu/import.php</a> but I'm still confused. How do we assign attribute in the input format? How do we tell this document belongs to a certain class? For example, a document belongs to "sports" class?</p> <p>Any example of input format file will be very appreciated.</p> <p>Thanks!</p>
<p>-How do we tell this document belongs to a certain class?:</p> <p>You can have one folder per class, for example: C:/Corpus/Class1 C:/Corpus/Class2 C:/Corpus/Classn and each folder contains the documents which belong to that class.</p> <p>How do we assign attribute in the input format?</p> <p>If you want to know the options of file importing,go to: C:/mallet/bin and once you are there: mallet import-dir --help and the options to import files will be displayed, for example --remove-stopwords, --gram sizes.</p> <p>Example code to import files:</p> <p>bin/mallet import-dir --input C:/Corpus/* --output corpus.mallet --gram sizes 1,2 --preserve-case</p>
1,217
text classification
Stemming in Text Classification - Degrades Accuracy?
https://stackoverflow.com/questions/22603332/stemming-in-text-classification-degrades-accuracy
<p>I am implementing a text classification system using Mahout. I have read stop-words removal and stemming helps to improve accuracy of Text classification. In my case removing stop-words giving better accuracy, but stemming is not helping much. I found 3-5% decrease in accuracy after applying stemmer. I tried with porter stemmer and k-stem but got almost same result in both the cases. </p> <p>I am using Naive Bayes algorithm for classification.</p> <p>Any help is greatly appreciated in advance.</p>
<p>First of all, you need to understand <em>why</em> stemming normally improves accuracy. Imagine the following sentence in a training set: </p> <blockquote> <p>He played below-average football in 2013, but was viewed as an ascending player before that and can play guard or center.</p> </blockquote> <p>and the following in a test set: </p> <blockquote> <p>We’re looking at a number of players, including Mark</p> </blockquote> <p>The first sentence contains a number of words referring to sports, including the word "player". The second sentence, from the test set, also mentions a player, but it's in the plural - "players", not "player" - so for the classifier it is a distinct, unrelated variable. </p> <p>Stemming tries to cut off details like the exact form of a word and produce word bases as features for classification. In the example above, stemming could shorten both words to "player" (or even "play") and use them as the same feature, thus having a better chance of classifying the second sentence as belonging to the "sports" class. </p> <p>Sometimes, however, these details play an important role by themselves. For example, the phrase "runs today" may refer to a runner, while "long running" may be about phone battery lifetime. In this case stemming makes classification worse, not better.</p> <p>What you can do here is use additional features that can help to distinguish between different meanings of the same words/stems. Two popular approaches are <a href="http://en.wikipedia.org/wiki/N-gram" rel="noreferrer"><strong>n-grams</strong></a> (e.g. bigrams, features made of word pairs instead of individual words) and <a href="http://en.wikipedia.org/wiki/Part_of_speech" rel="noreferrer"><strong>part-of-speech</strong></a> (<strong>POS</strong>) tags. You can try any combination of them, e.g. stems + bigrams of stems, or words + bigrams of words, or stems + POS tags, or stems, bigrams and POS tags, etc. </p> <p>Also, try out other algorithms. E.g. 
<a href="http://en.wikipedia.org/wiki/Support_vector_machine" rel="noreferrer">SVM</a> uses a very different approach than Naive Bayes, so it can catch things in the data that NB ignores. </p>
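<p>To make the "stems + bigrams of stems" idea concrete, here is a toy sketch. The crude suffix stripper below is only a stand-in for a real stemmer such as Porter or K-stem:</p>

```python
from collections import Counter

def crude_stem(word):
    # Toy suffix stripper standing in for a real stemmer (Porter, K-stem)
    for suffix in ("ing", "ers", "er", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def features(text):
    # Combine stems with bigrams of stems, so that "players"/"player"
    # share a feature while some word-order information is preserved.
    stems = [crude_stem(w) for w in text.lower().split()]
    bigrams = [" ".join(pair) for pair in zip(stems, stems[1:])]
    return Counter(stems + bigrams)

print(features("players playing football"))
```

<p>With real data you would feed such combined features into the classifier instead of raw word counts, and compare accuracy against the stems-only and words-only baselines.</p>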
1,218
text classification
Mallet vs Weka for text classification
https://stackoverflow.com/questions/7953935/mallet-vs-weka-for-text-classification
<p>Which product (Mallet or Weka) is better for text classification task:</p> <ol> <li>Simpler to train</li> <li>Better results</li> <li>Documentation</li> </ol> <p>I'm new for this problem so any comments will be great</p>
<p>MALLET is much easier to use and does most of its job invisibly. You don't have to convert the format of anything either; you just give it text files and it gives you back results.</p> <p>Weka requires converting the text into a particular format (the Weka script for doing so is so slow and inefficient that I would recommend you write your own).</p> <p>The problem with MALLET is that training uses gigabytes of memory and can take hours if you have large training sets.</p> <p>Weka has more documentation, but most of it makes no sense. MALLET has very little documentation but is very simple to use.</p> <p>To be honest, after testing both of them, I opted for writing my own classifier.</p>
1,219
text classification
Text Classification and VIF
https://stackoverflow.com/questions/68411148/text-classification-and-vif
<p>Does VIF matter when it comes to text classification? I have a large dataset; after tokenizing the text, I have 400 columns of stemmed words. After running tests for VIF, I noticed that most variables' VIFs were less than 5. The remaining VIFs fluctuated between 6 and 37; should I be removing these variables?</p>
1,220
text classification
Model.fit Value Error (Text Classification Model)
https://stackoverflow.com/questions/60104840/model-fit-value-error-text-classification-model
<p>I need your help please...</p> <p>I am trying to get the following text classification module working:</p> <pre class="lang-py prettyprint-override"><code> # Train and validate model. history = model.fit(x_train, train_labels, epochs=epochs, callbacks=callbacks, validation_data=(x_val, val_labels), verbose=2, batch_size=batch_size) # Logs once per epoch. </code></pre> <p><a href="https://github.com/google/eng-edu/blob/master/ml/guides/text_classification/train_ngram_model.py" rel="nofollow noreferrer">Source File Can be Found Here: Google - Git Hub Text Classification Code </a></p> <p>However I am getting the following error on execution:</p> <pre class="lang-sh prettyprint-override"><code> Traceback (most recent call last): File "train_ngram_model.py", line 113, in &lt;module&gt; train_ngram_model(data) File "train_ngram_model.py", line 93, in train_ngram_model batch_size=batch_size) # Logs once per epoch. File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit use_multiprocessing=use_multiprocessing) File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 235, in fit use_multiprocessing=use_multiprocessing) File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_training_inputs use_multiprocessing=use_multiprocessing) File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 646, in _process_inputs x, y, sample_weight=sample_weights) File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py", line 2383, in _standardize_user_data batch_size=batch_size) File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py", line 2428, in 
_standardize_tensors converted_x.append(_convert_scipy_sparse_tensor(a, b)) File "C:\Users\joebloggs\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\keras\engine\training.py", line 3198, in _convert_scipy_sparse_tensor raise ValueError('A SciPy sparse matrix was passed to a model ' ValueError: A SciPy sparse matrix was passed to a model that expects dense inputs. Please densify your inputs first, such as by calling `x.toarray()`. </code></pre> <p>I have spent several hours now to find a solution, and I haven't gotten anywhere. </p> <p>Thank you in advance for your reply.</p>
1,221
text classification
Text Classification with Python
https://stackoverflow.com/questions/64353644/text-classification-with-python
<p>Hi, I am new to the Python programming language. Based on various references I have built a text classification model using logistic regression. Below is the code.</p> <pre><code>from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfVectorizer import pandas as pd import numpy as np import string import nltk from collections import Counter from nltk.corpus import stopwords from nltk.stem import PorterStemmer from nltk.tokenize import sent_tokenize, word_tokenize from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix, accuracy_score, classification_report Train = pd.read_excel(&quot;/Desktop/ML Based Text classification/test.xlsx&quot;) real = pd.read_excel(&quot;/Desktop/ML Based Text classification/test.xlsx&quot;, sheet_name = 'Test') Train_data = Train['description'] Test_data = real['description'] stop = stopwords.words('english') porter = PorterStemmer() def remove_stopwords(text): text = [word.lower() for word in text.split() if word.lower() not in stop] return &quot; &quot;.join(text) def stemmer(stem_text): stem_text = [porter.stem(word) for word in stem_text.split()] return &quot; &quot;.join(stem_text) def clean_data(data): text_clean = (data.str.replace('[^\w\s]','') .str.replace('\d+', '') .apply(remove_stopwords) .apply(stemmer) .astype(str)) return (text_clean) Train_data = clean_data(Train_data) counter = Counter(Train['tags'].tolist()) top_10_varieties = {i[0]: idx for idx, i in enumerate(counter.most_common(50))} Train['Mapping'] = Train['tags'].map(top_10_varieties) #top_10_varieties = {'Outlook Related Issue': 0, 'Password Reset': 1, 'VPN Issue': 2} tfidf_converter = TfidfVectorizer() model_log = LogisticRegression() X = Train_data Y = Train['Mapping'] X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.95, random_state = 0) svc = 
Pipeline([('tfidf', TfidfVectorizer()), ('clf',LogisticRegression()), ]) svc.fit(X_train, y_train) ytest = np.array(y_test) y_pred = svc.predict(X_test) Test_data = clean_data(Test_data) y_pred = svc.predict(Test_data) </code></pre> <p>Now i have no error running this code, when i print &quot;y_pred&quot; i am getting an output as</p> <pre><code>array([0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 2, 1, 2, 0, 2, 2, 2, 1, 0, 1, 1, 2, 1, 2, 0, 0, 2, 2, 1, 0, 0, 2, 0, 0, 0], dtype=int64) </code></pre> <p>I am not sure, how do i convert this to the mapping string and tag this against my raw data, i want an output like this:</p> <p><a href="https://i.sstatic.net/RLif2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RLif2.jpg" alt="enter image description here" /></a></p>
<p>Please try:</p> <pre><code>reverse_top_10_varieties = {idx:i[0] for idx, i in enumerate(counter.most_common(50))} [reverse_top_10_varieties[id] for id in y_pred] </code></pre> <p>and see if this solves your problem</p>
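<p>To spell out why this works: building the forward and reverse maps from the same <code>most_common</code> call guarantees they stay consistent. A small self-contained sketch (the tag names below are invented for illustration):</p>

```python
from collections import Counter

# Hypothetical stand-in for the question's Train['tags'] column
tags = ["VPN Issue", "Password Reset", "VPN Issue",
        "Outlook Related Issue", "VPN Issue", "Password Reset"]
counter = Counter(tags)

# Forward map (label name -> integer id), as used for training
top_varieties = {name: idx for idx, (name, _) in enumerate(counter.most_common(50))}
# Reverse map (integer id -> label name), for decoding predictions
reverse_varieties = {idx: name for name, idx in top_varieties.items()}

y_pred = [0, 1, 0, 2]  # e.g. the output of svc.predict(...)
decoded = [reverse_varieties[i] for i in y_pred]
print(decoded)
```

<p>The decoded list can then be attached to the raw data, e.g. <code>pd.DataFrame({'Text': Test_data, 'Predicted tag': decoded})</code>.</p>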
1,222
text classification
Centroid algorithm for text classification, tools?
https://stackoverflow.com/questions/10663854/centroid-algorithm-for-text-classification-tools
<p>As discussed <a href="http://classes.seattleu.edu/computer_science/csse470/Madani/ABCs.html#centroid" rel="nofollow">here</a>, Do you know of any tools which provides a centroid algorithm for text classification in java?</p>
<p><a href="http://scikit-learn.org/" rel="nofollow">scikit-learn</a> includes this as the class <a href="http://scikit-learn.org/stable/modules/neighbors.html#nearest-centroid-classifier" rel="nofollow"><code>NearestCentroid</code></a>. It also includes an implementation of L2-normalized tf-idf.</p> <p>[Disclaimer: I'm a scikit-learn developer.]</p>
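<p>The question asks about Java tools, but for reference, a minimal scikit-learn sketch combining the two pieces mentioned above (tf-idf features plus a Rocchio-style nearest-centroid classifier) could look like this; the toy documents are invented for illustration:</p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

docs = ["the match went to extra time",
        "the striker scored a late goal",
        "parliament passed the new budget",
        "the senate debated the tax bill"]
labels = ["sports", "sports", "politics", "politics"]

# tf-idf features + centroid classifier in one pipeline
clf = make_pipeline(TfidfVectorizer(), NearestCentroid())
clf.fit(docs, labels)
print(clf.predict(["a late goal decided the match"]))
```
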
1,223
text classification
Keyword based text classification
https://stackoverflow.com/questions/62127122/keyword-based-text-classification
<p>I want to classify some texts based on the keywords available for each class. In other words, I have a list of keywords for each category. I need some heuristic method that uses these keywords to determine the most similar categories for each text. I should add that in the current phase of the project, I don't want to use a machine-learning-based method for text classification.</p>
1,224
text classification
Text classification NaiveBayes Accord.NET
https://stackoverflow.com/questions/59123503/text-classification-naivebayes-accord-net
<p>I am particularly new to accord.net</p> <p>My case: Classifying short text of various length into 100+ different categories.</p> <p>Input sample (10k Records in a .csv file)</p> <p>Text ------------------------- Category</p> <p>Cabinet -------------------- Furniture and Fittings</p> <p>Coffee Table -------------- Furniture and Fittings</p> <p>Stainless steel table ----- Furniture and Fittings</p> <pre><code>private static void bagOfWords(int[][] inputs, int[] outputs) { var bow = new BagOfWords&lt;int&gt;(); var quantizer = bow.Learn(inputs); string filenamebow = Path.Combine(Application.StartupPath, "News_BOW.accord"); Serializer.Save(obj: bow, path: filenamebow); double[][] histograms = quantizer.Transform(inputs); // One way to perform sequence classification with an SVM is to use // a kernel defined over sequences, such as DynamicTimeWarping. // Create the multi-class learning algorithm as one-vs-one with DTW: var teacher = new MulticlassSupportVectorLearning&lt;ChiSquare, double[]&gt;() { Learner = (p) =&gt; new SequentialMinimalOptimization&lt;ChiSquare, double[]&gt;() { // Complexity = 100 // Create a hard SVM } }; // Learn a multi-label SVM using the teacher var svm = teacher.Learn(histograms, outputs); // Get the predictions for the inputs int[] predicted = svm.Decide(histograms); // Create a confusion matrix to check the quality of the predictions: var cm = new GeneralConfusionMatrix(predicted: predicted, expected: outputs); // Check the accuracy measure: double accuracy = cm.Accuracy; string filename = Path.Combine(Application.StartupPath, "News_SVM.accord"); Serializer.Save(obj: svm, path: filename); } private void Form1_Load(object sender, EventArgs e) { ........... .......... ........... 
dTable = worksheet.ExportDataTable(); ///////////////////////////////////////////////// StringBuilder sWords = new StringBuilder(); string[][] swords = new string[dTable.Rows.Count][]; int i = 0; foreach (DataRowView dr in dTable.DefaultView) { swords[i] = Tokenize(dr[0].ToString()); i++; } Codification codebook = new Codification(dTable, new string[] { "Title", "Category" }); DataTable symbols = codebook.Apply(dTable); int[][] inputs = symbols.ToJagged&lt;int&gt;(new string[] { "Title" }); int[] outputs = symbols.ToArray&lt;int&gt;("Category"); bagOfWords(inputs, outputs); DataTable input_dTable = worksheetInput.ExportDataTable(); //How to continue from here and get the batch result as output DataTable } </code></pre> <p>How do we pass in a DataTable as input and get the batch results as output as DataTable after training the model?</p> <p>Similar github project: <a href="https://stackoverflow.com/questions/47505910/text-classification-naivebayes/59123363#59123363">Text classification NaiveBayes</a></p>
1,225
text classification
Text classification extract tags from text
https://stackoverflow.com/questions/8990804/text-classification-extract-tags-from-text
<p>I have a Lucene index with a lot of text data. Each item has a description, and I want to extract the most common words from the description and generate tags to classify each item based on it. Is there a Lucene.NET library for doing this, or any other library for text classification?</p>
<p>No, Lucene.NET can do search, indexing, text normalization, and "find more like this" functionality, but not text classification.</p> <p>What to suggest depends on your requirements, so a fuller description would help. But generally, the easiest way is to use an external service. All the external services below have REST APIs, and it's very easy to interact with them from C#.</p> <p>Some external services:</p> <ul> <li><a href="http://opencalais.com/" rel="nofollow">Open Calais</a> </li> <li><a href="http://uclassify.com" rel="nofollow">uClassify</a> </li> <li><a href="http://code.google.com/apis/predict" rel="nofollow">Google Prediction API</a> </li> <li><a href="http://textclassify.com" rel="nofollow">Text Classify</a> </li> <li><a href="http://alchemyapi.com" rel="nofollow">Alchemy API</a> </li> </ul> <p>There are also good Java SDKs like Mahout. As I remember, interaction with Mahout can also be done as a service, so integrating with it is not a problem at all.</p> <p>I had a similar "auto tagging" task in C#, and I used Open Calais for it. It's free for 50,000 transactions per day, which was enough for me. uClassify also has good pricing; for example, the "Indie" license is $99 per year.</p> <p>But maybe external services and Mahout are not your way. Then take a look at the <a href="http://wiki.dbpedia.org" rel="nofollow">DBpedia</a> project and RDF. And lastly, you can use one of the implementations of the Naive Bayes algorithm, at least; it's easy, and everything will be under your control. </p>
1,226
text classification
Deploying a text classification model on new (unseen) text
https://stackoverflow.com/questions/64536994/deploying-a-text-classification-model-on-new-unseen-text
<p>I am working on a text classification problem. I have attached a simple dummy snippet of a text classification model I have trained.</p> <p>How do I deploy the model on new_text? When the model is used on <code>check_predictions</code>, it classifies text correctly, however, when new data is used, the classification is incorrect.</p> <p>Is this because the <code>new_text</code> would need to be vectorised? Am I missing something fundamental?</p> <pre><code>from collections import Counter from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import accuracy_score import pandas as pd from sklearn.feature_extraction.text import CountVectorizer from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score, precision_score, recall_score df = pd.read_csv(&quot;/Users/veg.csv&quot;) print (df) </code></pre> <p><a href="https://i.sstatic.net/OXgSQm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OXgSQm.png" alt="first 15 rows of df" /></a></p> <pre><code>X_train, X_test, y_train, y_test = train_test_split(df['Text'], df['Label'],random_state=1, test_size=0.2) cv = CountVectorizer() X_train_vectorized = cv.fit_transform(X_train) X_test_vectorized = cv.transform(X_test) naive_bayes = MultinomialNB() naive_bayes.fit(X_train_vectorized, y_train) predictions = naive_bayes.predict(X_test_vectorized) print(&quot;Accuracy score: &quot;, accuracy_score(y_test, predictions)) print('accuracy %s' % accuracy_score(predictions, y_test)) print(classification_report(y_test, predictions)) </code></pre> <p><a href="https://i.sstatic.net/pjrzMm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pjrzMm.png" alt="Output" /></a></p> <pre><code>check_predictions = [] for i in range(len(X_test)): if predictions[i] == 0: check_predictions.append('vegetable') if predictions[i] == 1: check_predictions.append('fruit') if predictions[i] == 2: check_predictions.append('tree') 
dummy_df = pd.DataFrame({'actual_label': list(y_test), 'prediction': check_predictions, 'Text':list(X_test)}) dummy_df.replace(to_replace=0, value='vegetable', inplace=True) dummy_df.replace(to_replace=1, value='fruit', inplace=True) dummy_df.replace(to_replace=2, value='tree', inplace=True) print(&quot;DUMMY DF&quot;) print(dummy_df.head(10)) </code></pre> <p><a href="https://i.sstatic.net/F5o1am.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F5o1am.png" alt="test df" /></a></p> <pre><code>new_data=['carrot', 'grapes', 'banana', 'potato', 'birch','carrot', 'grapes', 'banana', 'potato', 'birch','carrot','grapes', 'banana', 'potato', 'birch','carrot', 'grapes', 'banana', 'potato', 'birch','grapes', 'banana', 'potato', 'birch'] new_predictions = [] for i in range(len(new_data)): if predictions[i] == 0: new_predictions.append('vegetable') if predictions[i] == 1: new_predictions.append('fruit') if predictions[i] == 2: new_predictions.append('tree') new_df = pd.DataFrame({'actual_label': list(y_test), 'prediction': new_predictions, 'Text':list(new_data)}) new_df.replace(to_replace=0, value='vegetable', inplace=True) new_df.replace(to_replace=1, value='fruit', inplace=True) new_df.replace(to_replace=2, value='tree', inplace=True) print(&quot;NEW DF&quot;) print(new_df.head(10)) </code></pre> <p><a href="https://i.sstatic.net/zBOi0m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zBOi0m.png" alt="New data df" /></a></p>
<p>Whatever (new) text you are feeding into your model must go through the exact same preprocessing steps as your training data - here, the <code>CountVectorizer</code> that was already fitted on your <code>X_train</code>:</p> <pre><code>new_data_vectorized = cv.transform(new_data) # NOT fit_transform new_predictions = naive_bayes.predict(new_data_vectorized) </code></pre>
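<p>One way to make this mistake hard to repeat is to wrap the vectorizer and classifier in a single <code>Pipeline</code>, so any text, training or new, goes through the fitted vectorizer automatically. A sketch with invented toy data standing in for the question's Text/Label columns:</p>

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for the question's training data
X_train = ["carrot potato leek", "grapes banana mango", "birch oak pine",
           "potato carrot", "banana grapes", "oak birch"]
y_train = ["vegetable", "fruit", "tree", "vegetable", "fruit", "tree"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(X_train, y_train)

# predict() runs the already-fitted CountVectorizer before the classifier,
# so there is no separate transform step to forget.
print(model.predict(["carrot", "banana", "birch"]))
```

<p>The fitted pipeline can then be pickled as one object and deployed, instead of shipping the vectorizer and the model separately.</p>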
1,227
text classification
Generate PMML for text classification pipeline in python
https://stackoverflow.com/questions/44560823/generate-pmml-for-text-classification-pipeline-in-python
<p>I am trying to generate PMML (using jpmml-sklearn) for text classification pipeline. The last line in the code - sklearn2pmml(Textpipeline, "TextMiningClassifier.pmml", with_repr = True) - crashes.</p> <pre><code>from sklearn.datasets import fetch_20newsgroups from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.linear_model import SGDClassifier from sklearn2pmml import PMMLPipeline categories = [ 'alt.atheism', 'talk.religion.misc', ] print("Loading 20 newsgroups dataset for categories:") print(categories) data = fetch_20newsgroups(subset='train', categories=categories) print("%d documents" % len(data.filenames)) print("%d categories" % len(data.target_names)) Textpipeline = PMMLPipeline([ ('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', SGDClassifier()), ]) Textpipeline.fit(data.data, data.target) from sklearn2pmml import sklearn2pmml sklearn2pmml(Textpipeline, "TextMiningClassifier.pmml", with_repr = True) </code></pre> <p>Looks like sklearn2pmml() is not able to take Textpipeline as input. The code works fine for other pipelines (examples here: <a href="https://github.com/jpmml/sklearn2pmml" rel="nofollow noreferrer">https://github.com/jpmml/sklearn2pmml</a>) but not for text classification pipeline above. So my question is: how do I generate PMML for text classification problem?</p> <p>Error I get:</p> <pre><code> Jun 15, 2017 12:48:00 PM org.jpmml.sklearn.Main run INFO: Parsing PKL.. Jun 15, 2017 12:48:01 PM org.jpmml.sklearn.Main run INFO: Parsed PKL in 489 ms. Jun 15, 2017 12:48:01 PM org.jpmml.sklearn.Main run INFO: Converting.. Jun 15, 2017 12:48:01 PM sklearn2pmml.PMMLPipeline encodePMML WARNING: The 'target_field' attribute is not set. Assuming y as the name of the target field Jun 15, 2017 12:48:01 PM sklearn2pmml.PMMLPipeline initFeatures WARNING: The 'active_fields' attribute is not set. 
Assuming [x1] as the names of active fields Jun 15, 2017 12:48:01 PM org.jpmml.sklearn.Main run SEVERE: Failed to convert java.lang.IllegalArgumentException: The tokenizer object (null) is not Splitter at sklearn.feature_extraction.text.CountVectorizer.getTokenizer(CountVectorizer.java:263) at sklearn.feature_extraction.text.CountVectorizer.encodeDefineFunction(CountVectorizer.java:164) at sklearn.feature_extraction.text.CountVectorizer.encodeFeatures(CountVectorizer.java:124) at sklearn.pipeline.Pipeline.encodeFeatures(Pipeline.java:93) at sklearn2pmml.PMMLPipeline.encodePMML(PMMLPipeline.java:122) at org.jpmml.sklearn.Main.run(Main.java:144) at org.jpmml.sklearn.Main.main(Main.java:93) Exception in thread "main" java.lang.IllegalArgumentException: The tokenizer object (null) is not Splitter at sklearn.feature_extraction.text.CountVectorizer.getTokenizer(CountVectorizer.java:263) at sklearn.feature_extraction.text.CountVectorizer.encodeDefineFunction(CountVectorizer.java:164) at sklearn.feature_extraction.text.CountVectorizer.encodeFeatures(CountVectorizer.java:124) at sklearn.pipeline.Pipeline.encodeFeatures(Pipeline.java:93) at sklearn2pmml.PMMLPipeline.encodePMML(PMMLPipeline.java:122) at org.jpmml.sklearn.Main.run(Main.java:144) at org.jpmml.sklearn.Main.main(Main.java:93) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Data\Anaconda2\lib\site-packages\sklearn2pmml\__init__.py", line 142, in sklearn2pmml raise RuntimeError("The JPMML-SkLearn conversion application has failed. The Java process should have printed more information about the failure into its standard output and/or error streams") RuntimeError: The JPMML-SkLearn conversion application has failed. The Java process should have printed more information about the failure into its standard output and/or error streams </code></pre>
<p>You need to use PMML-compatible text tokenization function. The default implementation is class <code>sklearn2pmml.feature_extraction.text.Splitter</code>:</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer from sklearn2pmml.feature_extraction.text import Splitter vectorizer = TfidfVectorizer(analyzer = "word", token_pattern = None, tokenizer = Splitter()) </code></pre> <p>More details and references are available in the JPMML mailing list: <a href="https://groups.google.com/forum/#!topic/jpmml/wi-0rxzUn1o" rel="nofollow noreferrer">https://groups.google.com/forum/#!topic/jpmml/wi-0rxzUn1o</a></p>
1,228
text classification
Training LLM to perform text classification
https://stackoverflow.com/questions/76476765/training-llm-to-perform-text-classification
<p>I am trying to perform text classification using GPTNeo, using the tweet_eval dataset from huggingface. I am following this example <a href="https://huggingface.co/docs/transformers/tasks/sequence_classification" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/tasks/sequence_classification</a>, but there is some error. I am a beginner at LLMs and it will be very helpful if someone can help me solve the issue. Thanks in advance. This is my code:</p> <pre><code>from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer import datasets import torch as t from transformers import DataCollatorWithPadding import evaluate import numpy as np dataset = datasets.load_dataset(&quot;tweet_eval&quot;,&quot;emotion&quot;) x_train = dataset[&quot;train&quot;][&quot;text&quot;] y_train = dataset[&quot;train&quot;][&quot;label&quot;] x_test = dataset[&quot;test&quot;][&quot;text&quot;] y_test = dataset[&quot;test&quot;][&quot;label&quot;] def load_LLM(llm, device): num_labels = 4 id2label = {0: &quot;Anger&quot;, 1: &quot;Joy&quot;, 2: &quot;Optimism&quot;, 3: &quot;Sadness&quot;} label2id = {&quot;Anger&quot;: 0, &quot;Joy&quot;: 1, &quot;Optimism&quot;: 2, &quot;Sadness&quot;:3} model = AutoModelForSequenceClassification.from_pretrained(llm,num_labels=num_labels,id2label=id2label, label2id=label2id) model.to(device) tokenizer = AutoTokenizer.from_pretrained(llm) return model, tokenizer llm = &quot;EleutherAI/gpt-neo-2.7B&quot; device = t.device('cuda' if t.cuda.is_available() else 'cpu') model,tokenizer = load_LLM(llm,device) tokenizer.add_special_tokens({'pad_token': '[PAD]'}) tokenizer.pad_token = '[PAD]' train_inputs = tokenizer(x_train, truncation=True, padding=True) test_inputs = tokenizer(x_test, truncation=True, padding=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) accuracy = evaluate.load(&quot;accuracy&quot;) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = 
np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)

training_args = TrainingArguments(
    output_dir=&quot;my_awesome_model&quot;,
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    weight_decay=0.01,
    evaluation_strategy=&quot;epoch&quot;,
    save_strategy=&quot;epoch&quot;,
    load_best_model_at_end=True,
    push_to_hub=True
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_inputs,
    eval_dataset=test_inputs,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics
)

trainer.train()
</code></pre> <p>I am getting this error:</p> <pre><code>---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[18], line 1
----&gt; 1 trainer.train()

File ~\anaconda3\envs\pt\lib\site-packages\transformers\trainer.py:1664, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1659 self.model_wrapped = self.model
   1661 inner_training_loop = find_executable_batch_size(
   1662     self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
   1663 )
-&gt; 1664 return inner_training_loop(
   1665     args=args,
   1666     resume_from_checkpoint=resume_from_checkpoint,
   1667     trial=trial,
   1668     ignore_keys_for_eval=ignore_keys_for_eval,
   1669 )

File ~\anaconda3\envs\pt\lib\site-packages\transformers\trainer.py:1909, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
   1906 rng_to_sync = True
   1908 step = -1
-&gt; 1909 for step, inputs in enumerate(epoch_iterator):
   1910     total_batched_samples += 1
   1911     if rng_to_sync:

File ~\anaconda3\envs\pt\lib\site-packages\torch\utils\data\dataloader.py:633, in _BaseDataLoaderIter.__next__(self)
    630 if self._sampler_iter is None:
    631     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    632     self._reset()  # type: ignore[call-arg]
--&gt; 633 data = self._next_data()
    634 self._num_yielded += 1
    635 if self._dataset_kind == _DatasetKind.Iterable and \
    636         self._IterableDataset_len_called is not None and \
    637         self._num_yielded &gt; self._IterableDataset_len_called:

File ~\anaconda3\envs\pt\lib\site-packages\torch\utils\data\dataloader.py:677, in _SingleProcessDataLoaderIter._next_data(self)
    675 def _next_data(self):
    676     index = self._next_index()  # may raise StopIteration
--&gt; 677     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    678     if self._pin_memory:
    679         data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)

File ~\anaconda3\envs\pt\lib\site-packages\torch\utils\data\_utils\fetch.py:54, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
     52 else:
     53     data = self.dataset[possibly_batched_index]
---&gt; 54 return self.collate_fn(data)

File ~\anaconda3\envs\pt\lib\site-packages\transformers\trainer_utils.py:704, in RemoveColumnsCollator.__call__(self, features)
    702 def __call__(self, features: List[dict]):
    703     features = [self._remove_columns(feature) for feature in features]
--&gt; 704     return self.data_collator(features)

File ~\anaconda3\envs\pt\lib\site-packages\transformers\data\data_collator.py:249, in DataCollatorWithPadding.__call__(self, features)
    248 def __call__(self, features: List[Dict[str, Any]]) -&gt; Dict[str, Any]:
--&gt; 249     batch = self.tokenizer.pad(
    250         features,
    251         padding=self.padding,
    252         max_length=self.max_length,
    253         pad_to_multiple_of=self.pad_to_multiple_of,
    254         return_tensors=self.return_tensors,
    255     )
    256     if &quot;label&quot; in batch:
    257         batch[&quot;labels&quot;] = batch[&quot;label&quot;]

File ~\anaconda3\envs\pt\lib\site-packages\transformers\tokenization_utils_base.py:2966, in PreTrainedTokenizerBase.pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose)
   2962 # The model's main input name, usually `input_ids`, has be passed for padding
   2963 if self.model_input_names[0] not in encoded_inputs:
   2964     raise ValueError(
   2965         &quot;You should supply an encoding or a list of encodings to this method &quot;
-&gt; 2966         f&quot;that includes {self.model_input_names[0]}, but you provided {list(encoded_inputs.keys())}&quot;
   2967     )
   2969 required_input = encoded_inputs[self.model_input_names[0]]
   2971 if required_input is None or (isinstance(required_input, Sized) and len(required_input) == 0):

AttributeError: 'list' object has no attribute 'keys'
</code></pre> <p>I was trying to perform text classification and wanted to fine-tune the model before using it to make predictions.</p>
<p>I think it could be your Python version in Anaconda. When I loaded OPT from Facebook with Anaconda Python 3.6, it was also missing the key <code>opt</code>, but when I uninstalled Anaconda and used Python 3.8, it loaded successfully.</p>
1,229
text classification
Text preprocessing for text classification using fastText
https://stackoverflow.com/questions/62244474/text-preprocessing-for-text-classification-using-fasttext
<p>What text preprocessing produces the best results for supervised text classification using <a href="https://github.com/facebookresearch/fastText" rel="nofollow noreferrer">fastText</a>?</p> <p>The official documentation shows only a <a href="https://fasttext.cc/docs/en/supervised-tutorial.html#preprocessing-the-data" rel="nofollow noreferrer">simple preprocessing</a> consisting of lower-casing and separating punctuation. Would classic preprocessing like lemmatization, stopword removal, or masking numbers help?</p>
<p>There is no general answer. It very much depends on what task you are trying to solve, how much data you have, and what language the text is in. Usually, if you have enough data, the simple tokenization you described is all you need.</p> <p><em>Lemmatization</em>: fastText computes word embeddings from embeddings of character <em>n</em>-grams, which should cover most morphology in most (at least European) languages, provided your data is not very small. With very little data, lemmatization might help.</p> <p><em>Removing stopwords</em>: It depends on the task. If the task is based on grammar/syntax, you definitely should not remove the stopwords, because they form the grammar. If the task depends more on lexical semantics, removing stopwords should help. If your training data is large enough, the model should learn non-informative stopword embeddings that would not influence the classification.</p> <p><em>Masking numbers</em>: If you are sure that your task does not benefit from knowing the numbers, you can mask them out. Usually, the problem is that numbers do not appear frequently in the training data, so you don't learn appropriate weights/embeddings for them. This is less of an issue in fastText, which composes their embeddings from the embeddings of their substrings; that will probably leave them uninformative in the end, not influencing the classification.</p>
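<p>The lower-casing and punctuation separation mentioned above can be sketched in a few lines of Python; the regex and function name here are illustrative, not the tutorial's actual code:</p>

```python
import re

def simple_preprocess(text: str) -> str:
    """Lower-case and put spaces around punctuation, roughly as in the
    fastText supervised tutorial's preprocessing step."""
    text = text.lower()
    # Separate common punctuation marks from adjoining words.
    text = re.sub(r"([.!?,'/()])", r" \1 ", text)
    # Collapse the repeated whitespace introduced above.
    return " ".join(text.split())

# fastText's supervised format expects one "__label__X text" line per example.
line = "__label__question " + simple_preprocess("Which preprocessing helps fastText?")
print(line)  # __label__question which preprocessing helps fasttext ?
```

<p>Lemmatization, stopword removal, or number masking would then be extra steps inserted into this function before the text is written out in fastText's training format.</p>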
1,230
text classification
Encoding data&#39;s label for text classification
https://stackoverflow.com/questions/38710993/encoding-datas-label-for-text-classification
<p>I am doing a project in clinical text classification. In my corpus, the data are already labelled by code (for example: 768.2, V13.02, V13.09, 599.0, ...). I have already separated text and labels and used word embeddings for the text, which I am going to feed into a convolutional neural network. However, the labels still need to be encoded. I read examples of sentiment text classification and MNIST, but they all used integers as class labels; my labels are in text form, which is why I cannot use one-hot encoding like them. Could anyone suggest a way to do it? Thanks</p>
<p>Discrete text labels are easily convertible to discrete numeric data by creating an enumeration mapping. For example, assuming the labels "Yes", "No" and "Maybe":</p> <pre><code>No -&gt; 0
Yes -&gt; 1
Maybe -&gt; 2
</code></pre> <p>And now you have numeric data, which can later be converted back (as long as the algorithm treats those as discrete values and does not return 0.5 or something like that).</p> <p>In the case where each instance can have multiple labels, as you said in a comment, you can create the encoding by putting each label in a column ("one-hot encoding"). Even if some software does not implement that off-the-shelf, it is not hard to do by hand.</p> <p>Here's a very simple (and not well-written, to be honest) example using pandas' get_dummies function:</p> <pre><code>import numpy as np
import pandas as pd

labels = np.array(['a', 'b', 'a', 'c', 'ab', 'a', 'ac'])
df = pd.DataFrame(labels, columns=['label'])
ndf = pd.get_dummies(df)
ndf.label_a = ndf.label_a + ndf.label_ab + ndf.label_ac
ndf.label_b = ndf.label_b + ndf.label_ab
ndf.label_c = ndf.label_c + ndf.label_ac
ndf = ndf.drop(['label_ab', 'label_ac'], axis=1)
ndf

   label_a  label_b  label_c
0      1.0      0.0      0.0
1      0.0      1.0      0.0
2      1.0      0.0      0.0
3      0.0      0.0      1.0
4      1.0      1.0      0.0
5      1.0      0.0      0.0
6      1.0      0.0      1.0
</code></pre> <p>You can now train a multivariate model to output the values of <code>label_a</code>, <code>label_b</code> and <code>label_c</code> and then reconstruct the original labels like "ab". Just make sure the output is in the set [0, 1] (by applying a softmax layer or something like that).</p>
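<p>As a minimal sketch of both ideas — the enumeration mapping and a hand-rolled multi-hot encoding — using the clinical codes from the question as the label set (the helper names are made up for illustration):</p>

```python
# Enumeration mapping for single-label data: each code gets an integer id.
labels = ["768.2", "V13.02", "V13.09", "599.0"]
label_to_id = {label: i for i, label in enumerate(sorted(labels))}
id_to_label = {i: label for label, i in label_to_id.items()}

print(label_to_id["768.2"])               # -> 1 (its position in the sorted label list)
print(id_to_label[label_to_id["768.2"]])  # -> 768.2 (the mapping is reversible)

# Multi-hot encoding for instances that carry several codes at once:
# one column per label, 1 where the label applies.
def multi_hot(instance_labels, label_to_id):
    vec = [0] * len(label_to_id)
    for label in instance_labels:
        vec[label_to_id[label]] = 1
    return vec

print(multi_hot(["768.2", "599.0"], label_to_id))  # [1, 1, 0, 0]
```

<p>The multi-hot vectors are what you would feed a network with a sigmoid output per label; the reverse mapping recovers the original code strings from predicted indices.</p>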
1,231
text classification
text classification using svm
https://stackoverflow.com/questions/17700308/text-classification-using-svm
<p>I read this article: <strong>A hybrid classification method of k nearest neighbor, Bayesian methods and genetic algorithm</strong>.<br> It proposes using a genetic algorithm to improve text classification.<br> I want to replace the genetic algorithm with an SVM, but I don't know whether it would work,<br> i.e., whether the new idea and its results would be better than the article's.<br> I read somewhere that GA is better than SVM, but I don't know if that's right.</p>
<p>SVM and genetic algorithms are in fact completely different methods. SVM is basically a <strong>classification</strong> tool, while genetic algorithms are a <strong>meta-optimisation heuristic</strong>. Unfortunately, I do not have access to the cited paper, but I can hardly imagine how putting SVM in the place of GA could work.</p> <blockquote> <p>I read somewhere that GA is better than SVM, but I don't know if that's right.</p> </blockquote> <p>No, it is not true. These methods are <strong>not comparable</strong>, as they are completely different tools.</p>
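<p>To illustrate that SVM is a classification tool in its own right, here is a minimal sketch of text classification with a linear SVM using scikit-learn; the toy corpus is invented, and this is not the pipeline from the cited paper:</p>

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny made-up corpus; a real experiment needs far more data.
texts = [
    "the match ended in a draw", "the striker scored a goal",
    "stocks fell sharply today", "the market rallied on earnings",
]
labels = ["sport", "sport", "finance", "finance"]

# TF-IDF features fed directly into a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

print(model.predict(["the goal was scored late in the match"]))
```

<p>A GA, by contrast, could at most sit <em>around</em> such a classifier — e.g. searching over feature subsets or hyperparameters — which is why the two cannot be swapped for one another.</p>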
1,232
text classification
Does stemming harm precision in text classification?
https://stackoverflow.com/questions/10369479/does-stemming-harm-precision-in-text-classification
<p>I have read that stemming harms precision but improves recall in text classification. How does that happen? When you stem, you increase the number of matches between the query and the sample documents, right?</p>
<p>It's always the same: if you raise recall, you are generalising, and because of that you lose precision. Stemming merges words together.</p> <blockquote> <p>On the one hand, words which ought to be merged together (such as "adhere" and "adhesion") may remain distinct after stemming; on the other, words which are really distinct may be wrongly conflated (e.g., "experiment" and "experience"). These are known as understemming errors and overstemming errors respectively.</p> </blockquote> <p>Overstemming lowers precision and understemming lowers recall. So, since no stemming at all means no overstemming errors but the maximum of understemming errors, you get high precision and low recall there.</p> <p>By the way, precision means how many of the 'documents' you found are ones you were looking for. Recall means how many of all the correct 'documents' you actually received.</p>
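<p>Those two definitions can be made concrete with a toy retrieval example; the document IDs here are invented:</p>

```python
def precision_recall(retrieved, relevant):
    """precision: fraction of retrieved documents that are relevant;
    recall: fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    return len(hits) / len(retrieved), len(hits) / len(relevant)

# Overstemming (e.g. conflating "experiment" with "experience") pulls in
# extra, irrelevant documents: recall stays high but precision drops.
p, r = precision_recall({"d1", "d2", "d3", "d4"}, {"d1", "d2"})
print(p, r)  # 0.5 1.0
```

<p>With no stemming the opposite happens: fewer documents are retrieved, so the retrieved set stays clean (high precision) but misses relevant variants (low recall).</p>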
1,233