Dataset schema (from the dataset viewer):
- category: string (107 distinct values)
- title: string (length 15–179)
- question_link: string (length 59–147)
- question_body: string (length 53–33.8k)
- answer_html: string (length 0–28.8k)
- __index_level_0__: int64 (range 0–1.58k)
BERT model
How to download bert models and load in python?
https://stackoverflow.com/questions/67950523/how-to-download-bert-models-and-load-in-python
<p>How do I download BERT models and load them in Python?</p> <pre class="lang-py prettyprint-override"><code>from sentence_transformers import SentenceTransformer model = SentenceTransformer('bert-base-nli-mean-tokens') </code></pre> <p><strong>How do I save the pretrained model and load it in Python?</strong></p>
134
BERT model
How to add LSTM layer on top of Huggingface BERT model
https://stackoverflow.com/questions/65763465/how-to-add-lstm-layer-on-top-of-huggingface-bert-model
<p>I am working on a binary classification task and would like to try adding an LSTM layer on top of the last hidden layer of a Hugging Face BERT model; however, I couldn't reach the last hidden layer. Is it possible to combine BERT with an LSTM?</p> <pre class="lang-py prettyprint-override"><code>tokenizer = BertTokenizer.from_pretrained(model_path) train_inputs, train_labels, train_masks = data_prepare_BERT( train_file, lab2ind, tokenizer, content_col, label_col, max_seq_length) validation_inputs, validation_labels, validation_masks = data_prepare_BERT( dev_file, lab2ind, tokenizer, content_col, label_col, max_seq_length) # Load BertForSequenceClassification, the pretrained BERT model with a single linear classification layer on top. model = BertForSequenceClassification.from_pretrained( model_path, num_labels=len(lab2ind)) </code></pre>
<p>Indeed it is possible, but you need to implement it yourself. The <a href="https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1449" rel="nofollow noreferrer"><code>BertForSequenceClassification</code></a> class is a wrapper around <code>BertModel</code>. It runs the model, takes the hidden state corresponding to the <code>[CLS]</code> token, and applies a classifier on top of that.</p> <p>In your case, you can use that class as a starting point and insert an LSTM layer between the <code>BertModel</code> and the classifier. The <code>BertModel</code> returns a tuple containing both the token-level hidden states and a pooled state for classification; take the tuple member other than the one used in the original class.</p> <p>Although it is technically possible, I would not expect any performance gain compared to plain <code>BertForSequenceClassification</code>: fine-tuning the Transformer layers can learn anything an additional LSTM layer is capable of.</p>
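A minimal PyTorch sketch of the pattern the answer describes (names are illustrative; with the transformers library you would pass a `BertModel` as the encoder and read its token-level hidden states, e.g. `last_hidden_state`):

```python
import torch
import torch.nn as nn

class LstmOnEncoder(nn.Module):
    """Sketch: run an LSTM over the encoder's token-level hidden states,
    then classify from the LSTM's final hidden state."""
    def __init__(self, encoder, hidden_size, lstm_size, num_labels):
        super().__init__()
        self.encoder = encoder            # e.g. a BertModel instance
        self.lstm = nn.LSTM(hidden_size, lstm_size, batch_first=True)
        self.classifier = nn.Linear(lstm_size, num_labels)

    def forward(self, input_ids):
        # With transformers this step would be:
        #   hidden = self.encoder(input_ids, attention_mask).last_hidden_state
        hidden = self.encoder(input_ids)  # (batch, seq_len, hidden_size)
        _, (h_n, _) = self.lstm(hidden)   # h_n: (1, batch, lstm_size)
        return self.classifier(h_n[-1])   # (batch, num_labels)
```

The classifier reads the LSTM's last hidden state instead of the pooled `[CLS]` state used by `BertForSequenceClassification`.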
135
BERT model
Slow training of BERT model Hugging face
https://stackoverflow.com/questions/68871329/slow-training-of-bert-model-hugging-face
<p>I am training a binary classifier using a BERT model implemented in the Hugging Face library:</p> <pre><code>training_args = TrainingArguments( &quot;deleted_tweets_trainer&quot;, num_train_epochs = 1, #logging_steps=100, evaluation_strategy='steps', remove_unused_columns = True ) </code></pre> <p>I am using a Colab TPU, yet the training time is still very long: 38 hours for 60 hours of cleaned tweets.</p> <p>Is there any way to optimise the training?</p>
<p>You are currently evaluating every 500 steps and have a training and eval batch size of 8.</p> <p>Depending on your current memory consumption, you can increase the batch sizes (eval much more than train, since training consumes more memory):</p> <ul> <li>per_device_train_batch_size</li> <li>per_device_eval_batch_size</li> </ul> <p>If it matches your use case, you can also increase the number of steps between evaluations:</p> <ul> <li>eval_steps</li> </ul>
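Put together, the suggested changes might look like this (a sketch only: the argument names come from the transformers `TrainingArguments` API, but the batch sizes and eval interval are example values to tune against your memory budget):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    "deleted_tweets_trainer",
    num_train_epochs=1,
    evaluation_strategy="steps",
    eval_steps=2000,                 # evaluate less often than the default 500 steps
    per_device_train_batch_size=32,  # raise until you hit the memory limit
    per_device_eval_batch_size=64,   # eval can usually go higher than train
    remove_unused_columns=True,
)
```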
136
BERT model
One single-batch training on Huggingface Bert model &quot;ruins&quot; the model
https://stackoverflow.com/questions/69127607/one-single-batch-training-on-huggingface-bert-model-ruins-the-model
<p>For some reason, I need to do further (2nd-stage) pre-training on Huggingface Bert model, and I find my training outcome is very bad.</p> <p>After debugging for hours, surprisingly, I find even training one single batch after loading the base model, will cause the model to predict a very bad choice when I ask it to unmask some test sentences. I boil down my code to the minimal reproducible version here:</p> <pre class="lang-py prettyprint-override"><code>import torch from transformers import AdamW, BertTokenizer from transformers import BertForPreTraining MSK_CODE = 103 CE_IGN_IDX = -100 # CrossEntropyLoss ignore index value def sanity_check(tokenizer, inputs): print(tokenizer.decode(inputs['input_ids'][0])) print(tokenizer.convert_ids_to_tokens( inputs[&quot;labels&quot;][0] )) print('Label:', inputs[&quot;next_sentence_label&quot;][0]) def test(tokenizer, model, topk=3): test_data = &quot;She needs to [MASK] that [MASK] has only ten minutes.&quot; print('\n \033[92m', test_data, '\033[0m') test_inputs = tokenizer([test_data], padding=True, truncation=True, return_tensors=&quot;pt&quot;) def classifier_hook(module, inputs, outputs): unmask_scores, seq_rel_scores = outputs token_ids = test_inputs['input_ids'][0] masked_idx = ( token_ids == torch.tensor([MSK_CODE]) ) scores = unmask_scores[0][masked_idx] cands = torch.argsort(scores, dim=1, descending=True) for i, mask_cands in enumerate(cands): top_cands = mask_cands[:topk].detach().cpu() print(f'MASK[{i}] top candidates:', end=&quot; &quot;) print(tokenizer.convert_ids_to_tokens(top_cands)) classifier = model.cls hook = classifier.register_forward_hook(classifier_hook) model.eval() model(**test_inputs) hook.remove() print() # load model model = BertForPreTraining.from_pretrained('bert-base-uncased') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=0.01) # first test test(tokenizer, model) # our single-iteration inputs # [CLS] 1 2 3 4 5 6 
[SEP] 8 9 10 11 12 [SEP] pair = [['the man went to the store', 'penguins are flightless birds']] relation_label = 1 # construct inputs inputs = tokenizer(pair, padding=True, truncation=True, return_tensors=&quot;pt&quot;) inputs[&quot;next_sentence_label&quot;] = torch.tensor([relation_label]) mask_labels = torch.full(inputs[&quot;input_ids&quot;].shape, fill_value=CE_IGN_IDX) inputs[&quot;labels&quot;] = mask_labels # mask two words inputs[&quot;input_ids&quot;][0][4] = MSK_CODE inputs[&quot;input_ids&quot;][0][9] = MSK_CODE mask_labels[0][4] = tokenizer.convert_tokens_to_ids('to') mask_labels[0][9] = tokenizer.convert_tokens_to_ids('are') # train for one single iteration sanity_check(tokenizer, inputs) model.train() optimizer.zero_grad() outputs = model(**inputs) loss = outputs.loss loss.backward(loss) optimizer.step() # second test test(tokenizer, model) </code></pre> <p>output:</p> <pre class="lang-sh prettyprint-override"><code>Some weights of BertForPreTraining were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. She needs to [MASK] that [MASK] has only ten minutes. MASK[0] top candidates: ['know', 'understand', 'remember'] MASK[1] top candidates: ['she', 'he', 'it'] [CLS] the man went [MASK] the store [SEP] penguins [MASK] flightless birds [SEP] ['[UNK]', '[UNK]', '[UNK]', '[UNK]', 'to', '[UNK]', '[UNK]', '[UNK]', '[UNK]', 'are', '[UNK]', '[UNK]', '[UNK]', '[UNK]'] Label: tensor(1) She needs to [MASK] that [MASK] has only ten minutes. MASK[0] top candidates: ['are', 'know', 'be'] MASK[1] top candidates: ['are', 'is', 'she'] </code></pre> <p>Basically, I use <code>She needs to [MASK] that [MASK] has only ten minutes.</code> as a test sentence to test the unmasking. As you may see, at the beginning when I tested the base model, it works perfectly. 
However, after I feed the pre-trained model a single training batch:</p> <p><code>[CLS] the man went [MASK] the store [SEP] penguins [MASK] flightless birds [SEP]</code></p> <p>the updated model no longer makes sense: it unmasks <code>She needs to [MASK] that [MASK] has only ten minutes.</code> into <code>She needs to [are] that [are] has only ten minutes.</code></p> <p>I can think of two possible explanations:</p> <ol> <li>The BERT model is extremely sensitive to training batch size, and a small batch causes unacceptable bias.</li> <li>There is a bug in the code.</li> </ol> <p>So, any idea?</p>
137
BERT model
How to make BERT model converge?
https://stackoverflow.com/questions/60732018/how-to-make-bert-model-converge
<p>I am trying to use BERT for sentiment analysis, but I suspect I am doing something wrong. In my code I am fine-tuning BERT using <code>bert-for-tf2</code>, but after 1 epoch I am getting an accuracy of 42%, where a simple GRU model got around 73%. What should I be doing differently to use BERT effectively? I suspect I am training the BERT layers from the first batch, which may be an issue because the dense layer is randomly initialized. Any advice would be appreciated, thanks!</p> <pre><code>import bert  # from the bert-for-tf2 package model_name = "uncased_L-12_H-768_A-12" model_dir = bert.fetch_google_bert_model(model_name, ".models") model_ckpt = os.path.join(model_dir, "bert_model.ckpt") bert_params = bert.params_from_pretrained_ckpt(model_dir) l_bert = bert.BertModelLayer.from_params(bert_params, name="bert") max_seq_len = 100 l_input_ids = tensorflow.keras.layers.Input(shape=(max_seq_len,), dtype='int32') bertLayer = l_bert(l_input_ids) flat = Flatten()(bertLayer) output = Dense(1,activation = 'sigmoid')(flat) model = tensorflow.keras.Model(inputs=l_input_ids, outputs=output) model.build(input_shape=(None, max_seq_len)) bert.load_bert_weights(l_bert, model_ckpt) with open('../preprocessing_scripts/new_train_data.txt', 'r') as f: tweets = f.readlines() with open('../preprocessing_scripts/targets.csv', 'r') as f: targets = f.readlines() max_words = 14000 tokenizer = Tokenizer(num_words=max_words) trainX = tweets[:6000] trainY = targets[:6000] testX = tweets[6000:] testY = tweets[6000:] maxlen = 100 tokenizer.fit_on_texts(trainX) tokenized_version = tokenizer.texts_to_sequences(trainX) tokenized_version = pad_sequences(tokenized_version, maxlen=maxlen) trainY = np.array(trainY,dtype = 'int32') model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy']) history = model.fit(x=tokenized_version, y=trainY, batch_size = 32, epochs=1, validation_split = 0.2) </code></pre>
<p>I think your learning rate (Adam's default of 0.001) is too high, leading to <code>catastrophic forgetting</code>. <strong>Refer: How to Fine-Tune BERT for Text Classification?</strong> <a href="https://arxiv.org/pdf/1905.05583.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1905.05583.pdf</a></p> <p>Ideally, the LR should be on the order of e-5. Try changing the code as follows and it should work:</p> <pre><code>from keras_radam import RAdam model.compile( RAdam(lr=2e-5), loss='binary_crossentropy', metrics=['accuracy'], ) </code></pre>
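To see why the step size alone can break training, here is a toy gradient-descent example (plain Python, not BERT) on f(w) = w², whose gradient is 2w. A step size that is too large makes |w| grow on every update, the same way an overly aggressive learning rate can wreck pre-trained weights:

```python
# Plain gradient descent on f(w) = w^2 (gradient 2w).
def descend(lr, steps=20, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w  # each update multiplies w by (1 - 2*lr)
    return w

small = abs(descend(0.01))  # |1 - 0.02| < 1: shrinks toward the minimum
large = abs(descend(1.5))   # |1 - 3.0| = 2: grows without bound
```

With lr=0.01 the iterate contracts by 0.98 per step; with lr=1.5 it doubles in magnitude per step and diverges.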
138
BERT model
What does the embedding elements stand for in huggingFace bert model?
https://stackoverflow.com/questions/75491528/what-does-the-embedding-elements-stand-for-in-huggingface-bert-model
<p>Prior to passing my tokens through the encoder in a BERT model, I would like to perform some processing on their embeddings. I extracted the embedding weights using:</p> <pre><code>from transformers import TFBertModel # Load a pre-trained BERT model model = TFBertModel.from_pretrained('bert-base-uncased') # Get the embedding layer of the model embedding_layer = model.get_layer('bert').get_input_embeddings() # Extract the embedding weights embedding_weights = embedding_layer.get_weights() </code></pre> <p>I found it contains 5 elements, as shown in the <a href="https://i.sstatic.net/sSCfy.png" rel="nofollow noreferrer">figure</a>.</p> <p>In my understanding, the first three elements are the word embedding weights, token type embedding weights, and positional embedding weights. My question is: what do the last two elements stand for?</p> <p>I dug into the source code of the BERT model, but I cannot figure out the meaning of the last two elements.</p>
<p>In the BERT model, there is a post-processing step on the embedding tensor that applies layer normalization followed by dropout: <a href="https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/modeling.py#L362" rel="nofollow noreferrer">https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/modeling.py#L362</a></p> <p>I think those two arrays are the gamma and beta of the normalization layer: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/LayerNormalization" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/LayerNormalization</a> They are learned parameters and span the axes of the inputs specified in the <code>axis</code> parameter, which defaults to -1 (corresponding to 768 in the embedding tensor).</p>
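To make the role of gamma and beta concrete, here is a minimal NumPy implementation of layer normalization over the last axis (an illustration of the operation, not the library code): each of the two learned arrays has shape `(768,)` for BERT-base, matching the hidden size.

```python
import numpy as np

# Layer normalization over the last axis, with learned scale (gamma)
# and shift (beta) -- the two extra weight arrays in question.
def layer_norm(x, gamma, beta, eps=1e-12):
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    normed = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * normed + beta              # then rescale and shift

hidden = 768  # BERT-base hidden size; gamma and beta each have shape (768,)
x = np.random.randn(2, 5, hidden)             # (batch, seq_len, hidden)
out = layer_norm(x, np.ones(hidden), np.zeros(hidden))
```

With gamma initialized to ones and beta to zeros the layer starts as a pure normalization; training then learns per-dimension scales and offsets.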
139
BERT model
Does BERT model and tokenizer should be trained with same data?
https://stackoverflow.com/questions/71547846/does-bert-model-and-tokenizer-should-be-trained-with-same-data
<p>I want to train a BERT model with more data (not fine-tuning; the base model to be trained is 'bert-base-uncased'). However, do I always need to create my own tokenizer for a model? When I use the 'bert-base-uncased' tokenizer to train the model, it gives me an error.</p> <pre><code>Traceback (most recent call last): File &quot;log.py&quot;, line 10, in &lt;module&gt; print(model(**input_idx)) File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py&quot;, line 1102, in _call_impl return forward_call(*input, **kwargs) File &quot;/usr/local/lib/python3.8/dist-packages/transformers/models/bert/modeling_bert.py&quot;, line 989, in forward embedding_output = self.embeddings( File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py&quot;, line 1102, in _call_impl return forward_call(*input, **kwargs) File &quot;/usr/local/lib/python3.8/dist-packages/transformers/models/bert/modeling_bert.py&quot;, line 214, in forward inputs_embeds = self.word_embeddings(input_ids) File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py&quot;, line 1102, in _call_impl return forward_call(*input, **kwargs) File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/modules/sparse.py&quot;, line 158, in forward return F.embedding( File &quot;/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py&quot;, line 2044, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self </code></pre> <p>So does the model need its own tokenizer, trained on the same data?</p>
<p>I recommend resizing the embedding matrix to match the size of the tokenizer you want to use:</p> <pre class="lang-py prettyprint-override"><code>model.resize_token_embeddings(len(tokenizer)) </code></pre> <p>Hugging Face docs: <a href="https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.PreTrainedModel.resize_token_embeddings" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.PreTrainedModel.resize_token_embeddings</a></p>
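Conceptually, resizing keeps the existing embedding rows and appends freshly initialized rows for the extra tokens (the real `resize_token_embeddings` also re-ties the output layer; the NumPy sketch below, with a hypothetical helper name, shows only the core idea):

```python
import numpy as np

def resize_embeddings(weights, new_vocab_size, init_std=0.02, seed=0):
    """Grow (or truncate) a (vocab, dim) embedding matrix to new_vocab_size,
    preserving existing rows and drawing new rows from N(0, init_std)."""
    old_vocab, dim = weights.shape
    if new_vocab_size <= old_vocab:
        return weights[:new_vocab_size].copy()
    rng = np.random.default_rng(seed)
    extra = rng.normal(0.0, init_std, size=(new_vocab_size - old_vocab, dim))
    return np.vstack([weights, extra])
```

This is why the IndexError above appears when the tokenizer produces ids beyond the model's vocabulary: the embedding lookup has no row for them until the matrix is resized.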
140
BERT model
Can&#39;t save model in saved_model format when finetune bert model
https://stackoverflow.com/questions/72674057/cant-save-model-in-saved-model-format-when-finetune-bert-model
<p>When training the BERT model, the weights are saved fine, but the entire model is not.</p> <p>After <code>model.fit</code>, saving with <code>model.save_weights('bert_xxx.h5')</code> and loading with <code>load_weights</code> works fine, but since only the weights are saved, the model architecture must be built separately.</p> <p>So I want to save the entire model at once.</p> <hr /> <p>However, the following error occurs.</p> <p><a href="https://i.sstatic.net/tNtx4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tNtx4.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/BSZfk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BSZfk.png" alt="enter image description here" /></a></p> <hr /> <p>The TensorFlow version was <strong>2.4</strong>, and the BERT code followed <strong><a href="https://qiita.com/namakemono/items/4c779c9898028fc36ff3" rel="nofollow noreferrer">https://qiita.com/namakemono/items/4c779c9898028fc36ff3</a></strong></p> <p>Why are only the weights saved and not the entire model? And how can I save the whole model?</p>
141
BERT model
Debugging TensorFlow serving on BERT model
https://stackoverflow.com/questions/60777281/debugging-tensorflow-serving-on-bert-model
<p>I was able to deploy a NLP model using BERT embedding following this example (using TF 1.14.0 on CPU and tensorflow-model-server): <a href="https://mc.ai/how-to-ship-machine-learning-models-into-production-with-tensorflow-serving-and-kubernetes/" rel="nofollow noreferrer">https://mc.ai/how-to-ship-machine-learning-models-into-production-with-tensorflow-serving-and-kubernetes/</a></p> <p>The model description is pretty clean:</p> <pre><code>!saved_model_cli show --dir {'tf_bert_model/1'} --all MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs: signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['Input-Segment:0'] tensor_info: dtype: DT_FLOAT shape: (-1, 64) name: Input-Segment:0 inputs['Input-Token:0'] tensor_info: dtype: DT_FLOAT shape: (-1, 64) name: Input-Token:0 The given SavedModel SignatureDef contains the following output(s): outputs['dense/Softmax:0'] tensor_info: dtype: DT_FLOAT shape: (-1, 2) name: dense/Softmax:0 Method name is: tensorflow/serving/predict </code></pre> <p>And the data input formatting for the served model is a list of dictionaries:</p> <pre><code>data '{"instances": [{"Input-Token:0": [101, 101, 1962, 7770, 1069, 102, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "Input-Segment:0": [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]}]}' r = requests.post("http://127.0.0.1:8501/v1/models/tf_bert_model:predict", json=data) </code></pre> <p>I'm now trying to deploy a BERT model using TF2.1, HuggingFace transformer library and on GPU but the deployed model is returning 
either a 400 error or a 200 error and I don't know how to debug it. I suspect that it may be a data input formatting issue.</p> <p>My model description is messier:</p> <pre><code>2020-03-20 14:47:03.465762: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia 2020-03-20 14:47:03.465883: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia 2020-03-20 14:47:03.465900: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs: signature_def['__saved_model_init_op']: The given SavedModel SignatureDef contains the following input(s): The given SavedModel SignatureDef contains the following output(s): outputs['__saved_model_init_op'] tensor_info: dtype: DT_INVALID shape: unknown_rank name: NoOp Method name is: signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['attention_mask'] tensor_info: dtype: DT_INT32 shape: (-1, 128) name: serving_default_attention_mask:0 inputs['input_ids'] tensor_info: dtype: DT_INT32 shape: (-1, 128) name: serving_default_input_ids:0 inputs['labels'] tensor_info: dtype: DT_INT32 shape: (-1, 1) name: serving_default_labels:0 inputs['token_type_ids'] tensor_info: dtype: DT_INT32 shape: (-1, 128) name: serving_default_token_type_ids:0 The given SavedModel SignatureDef contains the following output(s): outputs['output_1'] tensor_info: dtype: DT_FLOAT shape: (-1, 2) name: 
StatefulPartitionedCall:0 Method name is: tensorflow/serving/predict WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. Defined Functions: Function Name: '__call__' Option #1 Callable with: Argument #1 DType: dict Value: {'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/input_ids'), 'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/attention_mask'), 'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, name='inputs/labels')} Named Argument #1 DType: str Value: ['t', 'r', 'a', 'i', 'n', 'i', 'n', 'g'] Option #2 Callable with: Argument #1 DType: dict Value: {'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='input_ids'), 'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='attention_mask'), 'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, name='labels')} Named Argument #1 DType: str Value: ['t', 'r', 'a', 'i', 'n', 'i', 'n', 'g'] Option #3 Callable with: Argument #1 DType: dict Value: {'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='input_ids'), 'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='attention_mask'), 'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, name='labels')} Named Argument #1 DType: str Value: ['t', 'r', 'a', 'i', 'n', 'i', 'n', 'g'] Option #4 Callable with: Argument #1 DType: dict Value: {'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, 
name='inputs/labels'), 'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/input_ids'), 'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/attention_mask')} Named Argument #1 DType: str Value: ['t', 'r', 'a', 'i', 'n', 'i', 'n', 'g'] Function Name: '_default_save_signature' Option #1 Callable with: Argument #1 DType: dict Value: {'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='input_ids'), 'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='attention_mask'), 'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, name='labels')} Function Name: 'call_and_return_all_conditional_losses' Option #1 Callable with: Argument #1 DType: dict Value: {'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='input_ids'), 'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='attention_mask'), 'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, name='labels')} Named Argument #1 DType: str Value: ['t', 'r', 'a', 'i', 'n', 'i', 'n', 'g'] Option #2 Callable with: Argument #1 DType: dict Value: {'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='attention_mask'), 'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, name='labels'), 'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='input_ids')} Named Argument #1 DType: str Value: ['t', 'r', 'a', 'i', 'n', 'i', 'n', 'g'] Option #3 Callable with: Argument #1 DType: dict Value: {'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, name='inputs/labels'), 'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/input_ids'), 
'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/attention_mask')} Named Argument #1 DType: str Value: ['t', 'r', 'a', 'i', 'n', 'i', 'n', 'g'] Option #4 Callable with: Argument #1 DType: dict Value: {'labels': TensorSpec(shape=(None, 1), dtype=tf.int32, name='inputs/labels'), 'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/input_ids'), 'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='inputs/attention_mask')} Named Argument #1 DType: str Value: ['t', 'r', 'a', 'i', 'n', 'i', 'n', 'g'] </code></pre> <p>I formatted my data input as a list of dictionaries as well:</p> <pre><code>data = {"instances": test_deploy_inputs2} data {'instances': [{'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'input_ids': [101, 1999, 5688, 1010, 12328, 5845, 2007, 5423, 3593, 28991, 19362, 4588, 4244, 4820, 12553, 12987, 10737, 2008, 23150, 14719, 1011, 20802, 3662, 2896, 3798, 1997, 17953, 14536, 2509, 1998, 6335, 1011, 1015, 29720, 1998, 2020, 11914, 5123, 2013, 6388, 2135, 10572, 27441, 7315, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'labels': 0, 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}]} </code></pre> <p>And when testing the deployed model I get a 200 error:</p> <pre><code>r = requests.post("http://127.0.0.1:8501/v1/models/fashion_model:predict", json=data) r &lt;Response [200]&gt; </code></pre> <p>Any idea on how I can debug this? Thanks</p>
<p>My bad! A Response [200] doesn't mean that it's not working; you can see the results with</p> <pre><code>predictions = json.loads(json_response.text)['predictions'] predictions </code></pre>
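For instance, with only the standard-library json module (the example body assumes a (batch, 2) softmax output, matching the `dense/Softmax` signature shown above):

```python
import json

# A TF Serving predict response carries a top-level "predictions" list.
body = '{"predictions": [[0.93, 0.07]]}'
predictions = json.loads(body)["predictions"]

# Pick the argmax class for the first instance in the batch.
label = max(range(len(predictions[0])), key=lambda i: predictions[0][i])
```

HTTP 200 is the success status; only 4xx/5xx responses indicate request or server errors.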
142
BERT model
Make sure BERT model does not load pretrained weights?
https://stackoverflow.com/questions/65072694/make-sure-bert-model-does-not-load-pretrained-weights
<p>I want to make sure my BertModel does not load pre-trained weights. I am using the auto classes (Hugging Face), which load the model automatically.</p> <p>My question is: how do I load a BERT model without pretrained weights?</p>
<p>Use AutoConfig instead of AutoModel:</p> <pre><code>from transformers import AutoConfig, AutoModel config = AutoConfig.from_pretrained('bert-base-uncased') model = AutoModel.from_config(config) </code></pre> <p>This should set up the model without loading the weights.</p> <p><a href="https://huggingface.co/transformers/model_doc/auto.html?highlight=from_pretrained#transformers.AutoConfig.from_pretrained" rel="nofollow noreferrer">Documentation here</a> <a href="https://huggingface.co/transformers/_modules/transformers/models/auto/modeling_auto.html#AutoModel.from_config" rel="nofollow noreferrer">and here</a></p>
143
BERT model
Persist BERT model on disk as pickle file
https://stackoverflow.com/questions/59881819/persist-bert-model-on-disk-as-pickle-file
<p>I have managed to get the BERT model to work with the johnsnowlabs spark-nlp library. I am able to save the "trained model" to disk as follows.</p> <h1>Fit Model</h1> <pre><code>df_bert_trained = bert_pipeline.fit(textRDD) df_bert = df_bert_trained.transform(textRDD) </code></pre> <h1>save model</h1> <pre><code>df_bert_trained.write().overwrite().save("/home/XX/XX/trained_model") </code></pre> <p>However:</p> <p>First, as per the docs here <a href="https://nlp.johnsnowlabs.com/docs/en/concepts" rel="nofollow noreferrer">https://nlp.johnsnowlabs.com/docs/en/concepts</a>, it's stated that one can load the model as</p> <pre><code>EmbeddingsHelper.load(path, spark, format, reference, dims, caseSensitive) </code></pre> <p>but it's unclear to me what the variable "reference" represents here.</p> <p>Second, has anyone managed to save the BERT embeddings as a pickle file in Python?</p>
<p>In Spark NLP, BERT comes as a pre-trained model. It means it is already trained, fitted, and saved in the right format.</p> <p>That being said, there is no reason to fit or save it again. You can, however, save the result once you transform your DataFrame into a new DataFrame that has BERT embeddings for each token.</p> <p>Example:</p> <p>Start a Spark session in spark-shell with the Spark NLP package:</p> <pre class="lang-sh prettyprint-override"><code>spark-shell --packages JohnSnowLabs:spark-nlp:2.4.0 </code></pre> <pre class="lang-scala prettyprint-override"><code>import com.johnsnowlabs.nlp.annotators._ import com.johnsnowlabs.nlp.base._ val documentAssembler = new DocumentAssembler() .setInputCol("text") .setOutputCol("document") val sentence = new SentenceDetector() .setInputCols("document") .setOutputCol("sentence") val tokenizer = new Tokenizer() .setInputCols(Array("sentence")) .setOutputCol("token") // Download and load the pretrained BERT model val embeddings = BertEmbeddings.pretrained(name = "bert_base_cased", lang = "en") .setInputCols("sentence", "token") .setOutputCol("embeddings") .setCaseSensitive(true) .setPoolingLayer(0) val pipeline = new Pipeline() .setStages(Array( documentAssembler, sentence, tokenizer, embeddings )) // Test and transform val testData = Seq( "I like pancakes in the summer. I hate ice cream in winter.", "If I had asked people what they wanted, they would have said faster horses" ).toDF("text") val predictionDF = pipeline.fit(testData).transform(testData) </code></pre> <p>The <code>predictionDF</code> is a DataFrame that contains BERT embeddings for each token inside your dataset. The <code>BertEmbeddings</code> pre-trained models come from TF Hub, which means they are the exact same pre-trained weights published by Google. 
All 5 models are available: </p> <ul> <li>bert_base_cased (en)</li> <li>bert_base_uncased (en)</li> <li>bert_large_cased (en)</li> <li>bert_large_uncased (en)</li> <li>bert_multi_cased (xx)</li> </ul> <p>Let me know if you have any questions or problems and I'll update my answer.</p> <p><strong>References</strong>:</p> <ul> <li><a href="https://github.com/JohnSnowLabs/spark-nlp" rel="nofollow noreferrer">https://github.com/JohnSnowLabs/spark-nlp</a></li> <li><a href="https://github.com/JohnSnowLabs/spark-nlp-models" rel="nofollow noreferrer">https://github.com/JohnSnowLabs/spark-nlp-models</a></li> <li><a href="https://github.com/JohnSnowLabs/spark-nlp-workshop" rel="nofollow noreferrer">https://github.com/JohnSnowLabs/spark-nlp-workshop</a> </li> </ul>
144
BERT model
How to Fine-tune HuggingFace BERT model for Text Classification
https://stackoverflow.com/questions/69025750/how-to-fine-tune-huggingface-bert-model-for-text-classification
<p>Is there a <em>Step by step explanation</em> on how to <strong>Fine-tune HuggingFace BERT</strong> model for text classification?</p>
<h1>Fine Tuning Approach</h1> <p>There are multiple approaches to fine-tune BERT for the target tasks.</p> <ol> <li>Further Pre-training the base BERT model</li> <li>Custom classification layer(s) on top of the base BERT model being trainable</li> <li>Custom classification layer(s) on top of the base BERT model being non-trainable (frozen)</li> </ol> <p>Note that the BERT base model has been pre-trained only for two tasks as in the original paper.</p> <ul> <li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li> </ul> <blockquote> <p>3.1 Pre-training BERT ...we pre-train BERT using two unsupervised tasks<br></p> <ul> <li>Task #1: Masked LM<br></li> <li>Task #2: Next Sentence Prediction (NSP)<br></li> </ul> </blockquote> <p>Hence, the base BERT model is only half-baked and can be fully baked for the target domain (the 1st approach). Alternatively, we can use it as part of our custom model training with the base trainable (2nd approach) or non-trainable (3rd approach).</p> <hr /> <h1>1st approach</h1> <p><a href="https://arxiv.org/abs/1905.05583" rel="noreferrer">How to Fine-Tune BERT for Text Classification?</a> demonstrated the 1st approach of Further Pre-training, and pointed out that the learning rate is the key to avoiding <strong>Catastrophic Forgetting</strong>, where the pre-trained knowledge is erased while learning new knowledge.</p> <blockquote> <p>We find that a lower learning rate, such as 2e-5, is necessary to make BERT overcome the catastrophic forgetting problem. 
With an aggressive learn rate of 4e-4, the training set fails to converge.<br> <a href="https://i.sstatic.net/pm2EV.png" rel="noreferrer"><img src="https://i.sstatic.net/pm2EV.png" alt="enter image description here" /></a></p> </blockquote> <p>Probably this is the reason why the <a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT paper</a> used 5e-5, 4e-5, 3e-5, and 2e-5 for <strong>fine-tuning</strong>.</p> <blockquote> <p>We use a batch size of 32 and fine-tune for 3 epochs over the data for all GLUE tasks. For each task, we selected the best fine-tuning learning rate (among 5e-5, 4e-5, 3e-5, and 2e-5) on the Dev set</p> </blockquote> <p>Note that the base model pre-training itself used higher learning rate.</p> <ul> <li><a href="https://huggingface.co/bert-base-uncased#pretraining" rel="noreferrer">bert-base-uncased - pretraining</a></li> </ul> <blockquote> <p>The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. 
The optimizer used is Adam with a learning rate of <code>1e-4</code>, β1=<code>0.9</code> and β2=<code>0.999</code>, a weight decay of <code>0.01</code>, learning rate warmup for 10,000 steps and linear decay of the learning rate after.</p> </blockquote> <p>The 1st approach will be described as part of the 3rd approach below.</p> <p>FYI: <a href="https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertmodel" rel="noreferrer">TFDistilBertModel</a> is the bare base model with the name <code>distilbert</code>.</p> <pre><code>Model: &quot;tf_distil_bert_model_1&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= distilbert (TFDistilBertMain multiple 66362880 ================================================================= Total params: 66,362,880 Trainable params: 66,362,880 Non-trainable params: 0 </code></pre> <hr /> <h1>2nd approach</h1> <p>Huggingface takes the 2nd approach as in <a href="https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-native-pytorch-tensorflow" rel="noreferrer">Fine-tuning with native PyTorch/TensorFlow</a> where <code>TFDistilBertForSequenceClassification</code> has added the custom classification layer <code>classifier</code> on top of the base <code>distilbert</code> model being trainable. 
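<p>For intuition, the warmup-then-linear-decay schedule quoted above (10,000 warmup steps, linear decay afterwards) can be sketched in plain Python. This is a simplified, assumed reconstruction for illustration only, not the actual optimizer implementation:</p>

```python
def lr_at_step(step, base_lr=1e-4, warmup_steps=10_000, total_steps=1_000_000):
    """Linear warmup to base_lr, then linear decay back to 0 (illustrative only)."""
    if step < warmup_steps:
        # Ramp up from 0 to base_lr during warmup
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr (end of warmup) down to 0 at total_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(lr_at_step(5_000))      # mid-warmup: half of base_lr
print(lr_at_step(10_000))     # peak: base_lr
print(lr_at_step(1_000_000))  # end of training: 0.0
```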
The small learning rate requirement will apply as well to avoid the catastrophic forgetting.</p> <pre><code>from transformers import TFDistilBertForSequenceClassification model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased') optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16) </code></pre> <pre><code>Model: &quot;tf_distil_bert_for_sequence_classification_2&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= distilbert (TFDistilBertMain multiple 66362880 _________________________________________________________________ pre_classifier (Dense) multiple 590592 _________________________________________________________________ classifier (Dense) multiple 1538 _________________________________________________________________ dropout_59 (Dropout) multiple 0 ================================================================= Total params: 66,955,010 Trainable params: 66,955,010 &lt;--- All parameters are trainable Non-trainable params: 0 </code></pre> <h3>Implementation of the 2nd approach</h3> <pre><code>import pandas as pd import tensorflow as tf from sklearn.model_selection import train_test_split from transformers import ( DistilBertTokenizerFast, TFDistilBertForSequenceClassification, ) DATA_COLUMN = 'text' LABEL_COLUMN = 'category_index' MAX_SEQUENCE_LENGTH = 512 LEARNING_RATE = 5e-5 BATCH_SIZE = 16 NUM_EPOCHS = 3 # -------------------------------------------------------------------------------- # Tokenizer # -------------------------------------------------------------------------------- tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') def tokenize(sentences, max_length=MAX_SEQUENCE_LENGTH, padding='max_length'): 
&quot;&quot;&quot;Tokenize using the Huggingface tokenizer Args: sentences: String or list of string to tokenize padding: Padding method ['do_not_pad'|'longest'|'max_length'] &quot;&quot;&quot; return tokenizer( sentences, truncation=True, padding=padding, max_length=max_length, return_tensors=&quot;tf&quot; ) # -------------------------------------------------------------------------------- # Load data # -------------------------------------------------------------------------------- raw_train = pd.read_csv(&quot;./train.csv&quot;) NUM_LABELS = raw_train[LABEL_COLUMN].nunique() # Number of target classes (used by the model below) train_data, validation_data, train_label, validation_label = train_test_split( raw_train[DATA_COLUMN].tolist(), raw_train[LABEL_COLUMN].tolist(), test_size=.2, shuffle=True ) # -------------------------------------------------------------------------------- # Prepare TF dataset # -------------------------------------------------------------------------------- train_dataset = tf.data.Dataset.from_tensor_slices(( dict(tokenize(train_data)), # Convert BatchEncoding instance to dictionary train_label )).shuffle(1000).batch(BATCH_SIZE).prefetch(1) validation_dataset = tf.data.Dataset.from_tensor_slices(( dict(tokenize(validation_data)), validation_label )).batch(BATCH_SIZE).prefetch(1) # -------------------------------------------------------------------------------- # training # -------------------------------------------------------------------------------- model = TFDistilBertForSequenceClassification.from_pretrained( 'distilbert-base-uncased', num_labels=NUM_LABELS ) optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE) model.compile( optimizer=optimizer, loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), ) model.fit( x=train_dataset, y=None, validation_data=validation_dataset, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS, ) </code></pre> <hr /> <h1>3rd approach</h1> <h2>Basics</h2> <p>Please note that the images are taken from <a href="http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/" 
rel="noreferrer">A Visual Guide to Using BERT for the First Time</a> and modified.</p> <h3>Tokenizer</h3> <p><a href="https://huggingface.co/transformers/main_classes/tokenizer.html" rel="noreferrer">Tokenizer</a> generates an instance of BatchEncoding, which can be used like a Python dictionary and serves as the input to the BERT model.</p> <ul> <li><a href="https://huggingface.co/transformers/main_classes/tokenizer.html#batchencoding" rel="noreferrer">BatchEncoding</a></li> </ul> <blockquote> <p>Holds the output of the encode_plus() and batch_encode() methods (tokens, attention_masks, etc). <br> This class is derived from a python dictionary and <strong>can be used as a dictionary</strong>. In addition, this class exposes utility methods to map from word/character space to token space.<br><br> Parameters<br></p> <ul> <li>data (dict) – Dictionary of lists/arrays/tensors returned by the encode/batch_encode methods (‘input_ids’, ‘attention_mask’, etc.).</li> </ul> </blockquote> <p>The <code>data</code> attribute of the class holds the generated tokens, which include <code>input_ids</code> and <code>attention_mask</code> elements.</p> <h3>input_ids</h3> <ul> <li><a href="https://huggingface.co/transformers/glossary.html#input-ids" rel="noreferrer">input_ids</a></li> </ul> <blockquote> <p>The input ids are often the only required parameters to be passed to the model as input. They are <strong>token indices, numerical representations of tokens</strong> building the sequences that will be used as input by the model.</p> </blockquote> <h3>attention_mask</h3> <ul> <li><a href="https://huggingface.co/transformers/glossary.html#attention-mask" rel="noreferrer">Attention mask</a></li> </ul> <blockquote> <p>This argument indicates to the model which tokens should be attended to, and which should not.</p> </blockquote> <p>If the attention_mask is <code>0</code>, the token id is ignored. 
For instance, if a sequence is padded to adjust the sequence length, the padded tokens should be ignored; hence their attention_mask is 0.</p> <h3>Special Tokens</h3> <p>BertTokenizer adds special tokens, enclosing a sequence with <code>[CLS]</code> and <code>[SEP]</code>. <code>[CLS]</code> represents <strong>Classification</strong> and <code>[SEP]</code> separates sequences. For Question Answer or Paraphrase tasks, <code>[SEP]</code> separates the two sentences to compare.</p> <p><a href="https://huggingface.co/transformers/model_doc/bert.html#berttokenizer" rel="noreferrer">BertTokenizer</a></p> <blockquote> <ul> <li>cls_token (str, optional, defaults to &quot;<strong>[CLS]</strong>&quot;)<BR>The <strong>Classifier Token which is used when doing sequence classification</strong> (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.</li> <li>sep_token (str, optional, defaults to &quot;[SEP]&quot;)<BR>The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.</li> </ul> </blockquote> <p><a href="http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/" rel="noreferrer">A Visual Guide to Using BERT for the First Time</a> shows the tokenization.</p> <p><a href="https://i.sstatic.net/zQtff.png" rel="noreferrer"><img src="https://i.sstatic.net/zQtff.png" alt="enter image description here" /></a></p> <h3>[CLS]</h3> <p>The embedding vector for <strong><code>[CLS]</code></strong> in the output from the base model's final layer represents the classification that has been learned by the base model. 
Hence feed the embedding vector of <strong><code>[CLS]</code></strong> token into the classification layer added on top of the base model.</p> <ul> <li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li> </ul> <blockquote> <p>The first token of every sequence is always <code>a special classification token ([CLS])</code>. The final hidden state corresponding to this token is <strong>used as the aggregate sequence representation for classification tasks</strong>. Sentence pairs are packed together into a single sequence. We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B.</p> </blockquote> <p>The model structure will be illustrated as below.</p> <p><a href="https://i.sstatic.net/VAq7v.jpg" rel="noreferrer"><img src="https://i.sstatic.net/VAq7v.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/tjpn4.jpg" rel="noreferrer"><img src="https://i.sstatic.net/tjpn4.jpg" alt="enter image description here" /></a></p> <h3>Vector size</h3> <p>In the model <code>distilbert-base-uncased</code>, each token is embedded into a vector of size <strong>768</strong>. The shape of the output from the base model is <code>(batch_size, max_sequence_length, embedding_vector_size=768)</code>. 
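<p>Concretely, taking the <code>[CLS]</code> embedding means slicing index 0 along the sequence axis, i.e. <code>last_hidden_state[:, 0, :]</code>. A toy sketch with nested Python lists standing in for the output tensor (tiny assumed dimensions, not the real 768-wide vectors):</p>

```python
# Toy stand-in for last_hidden_state: (batch_size=2, seq_len=3, hidden=4)
last_hidden_state = [
    [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8], [0.9, 1.0, 1.1, 1.2]],
    [[1.1, 1.2, 1.3, 1.4], [1.5, 1.6, 1.7, 1.8], [1.9, 2.0, 2.1, 2.2]],
]

# Equivalent of last_hidden_state[:, 0, :] -- the [CLS] vector of each sequence,
# which is what gets fed to the classification head
cls_embeddings = [sequence[0] for sequence in last_hidden_state]
print(cls_embeddings)  # one hidden-size vector per batch item
```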
This accords with the BERT paper about the BERT/BASE model (as indicated in distilbert-<em><strong>base</strong></em>-uncased).</p> <ul> <li><a href="https://arxiv.org/abs/1810.04805" rel="noreferrer">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a></li> </ul> <blockquote> <p>BERT/BASE (L=12, H=<strong>768</strong>, A=12, Total Parameters=110M) and BERT/LARGE (L=24, H=1024, A=16, Total Parameters=340M).</p> </blockquote> <h3>Base Model - TFDistilBertModel</h3> <ul> <li><a href="https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379" rel="noreferrer">Hugging Face Transformers: Fine-tuning DistilBERT for Binary Classification Tasks</a></li> </ul> <blockquote> <p>TFDistilBertModel class to instantiate the base DistilBERT model <strong>without any specific head on top</strong> (as opposed to other classes such as TFDistilBertForSequenceClassification that do have an added classification head). 
<br><br> We do not want any task-specific head attached because we simply want the pre-trained weights of the base model to provide a general understanding of the English language, and it will be our job to add our own classification head during the fine-tuning process in order to help the model distinguish between toxic comments.</p> </blockquote> <p><code>TFDistilBertModel</code> generates an instance of <code>TFBaseModelOutput</code> whose <code>last_hidden_state</code> parameter is the output from the model last layer.</p> <pre><code>TFBaseModelOutput([( 'last_hidden_state', &lt;tf.Tensor: shape=(batch_size, sequence_lendgth, 768), dtype=float32, numpy=array([[[...]]], dtype=float32)&gt; )]) </code></pre> <ul> <li><a href="https://huggingface.co/transformers/main_classes/output.html#tfbasemodeloutput" rel="noreferrer">TFBaseModelOutput</a></li> </ul> <blockquote> <p>Parameters<br></p> <ul> <li>last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.</li> </ul> </blockquote> <h2>Implementation</h2> <h3>Python modules</h3> <pre><code>import pandas as pd import tensorflow as tf from sklearn.model_selection import train_test_split from transformers import ( DistilBertTokenizerFast, TFDistilBertModel, ) </code></pre> <h3>Configuration</h3> <pre><code>TIMESTAMP = datetime.datetime.now().strftime(&quot;%Y%b%d%H%M&quot;).upper() DATA_COLUMN = 'text' LABEL_COLUMN = 'category_index' MAX_SEQUENCE_LENGTH = 512 # Max length allowed for BERT is 512. NUM_LABELS = len(raw_train[LABEL_COLUMN].unique()) MODEL_NAME = 'distilbert-base-uncased' NUM_BASE_MODEL_OUTPUT = 768 # Flag to freeze base model FREEZE_BASE = True # Flag to add custom classification heads USE_CUSTOM_HEAD = True if USE_CUSTOM_HEAD == False: # Make the base trainable when no classification head exists. 
FREEZE_BASE = False BATCH_SIZE = 16 LEARNING_RATE = 1e-2 if FREEZE_BASE else 5e-5 L2 = 0.01 </code></pre> <h3>Tokenizer</h3> <pre><code>tokenizer = DistilBertTokenizerFast.from_pretrained(MODEL_NAME) def tokenize(sentences, max_length=MAX_SEQUENCE_LENGTH, padding='max_length'): &quot;&quot;&quot;Tokenize using the Huggingface tokenizer Args: sentences: String or list of string to tokenize padding: Padding method ['do_not_pad'|'longest'|'max_length'] &quot;&quot;&quot; return tokenizer( sentences, truncation=True, padding=padding, max_length=max_length, return_tensors=&quot;tf&quot; ) </code></pre> <h3>Input layer</h3> <p>The base model expects <code>input_ids</code> and <code>attention_mask</code> whose shape is <code>(max_sequence_length,)</code>. Generate Keras Tensors for them with <code>Input</code> layer respectively.</p> <pre><code># Inputs for token indices and attention masks input_ids = tf.keras.layers.Input(shape=(MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='input_ids') attention_mask = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='attention_mask') </code></pre> <h3>Base model layer</h3> <p>Generate the output from the base model. The base model generates <code>TFBaseModelOutput</code>. Feed the embedding of <strong><code>[CLS]</code></strong> to the next layer.</p> <pre><code>base = TFDistilBertModel.from_pretrained( MODEL_NAME, num_labels=NUM_LABELS ) # Freeze the base model weights. 
if FREEZE_BASE: for layer in base.layers: layer.trainable = False base.summary() # [CLS] embedding is last_hidden_state[:, 0, :] output = base([input_ids, attention_mask]).last_hidden_state[:, 0, :] </code></pre> <h3>Classification layers</h3> <pre><code>if USE_CUSTOM_HEAD: # ------------------------------------------------------------------------------- # Classification layer 01 # -------------------------------------------------------------------------------- output = tf.keras.layers.Dropout( rate=0.15, name=&quot;01_dropout&quot;, )(output) output = tf.keras.layers.Dense( units=NUM_BASE_MODEL_OUTPUT, kernel_initializer='glorot_uniform', activation=None, name=&quot;01_dense_relu_no_regularizer&quot;, )(output) output = tf.keras.layers.BatchNormalization( name=&quot;01_bn&quot; )(output) output = tf.keras.layers.Activation( &quot;relu&quot;, name=&quot;01_relu&quot; )(output) # -------------------------------------------------------------------------------- # Classification layer 02 # -------------------------------------------------------------------------------- output = tf.keras.layers.Dense( units=NUM_BASE_MODEL_OUTPUT, kernel_initializer='glorot_uniform', activation=None, name=&quot;02_dense_relu_no_regularizer&quot;, )(output) output = tf.keras.layers.BatchNormalization( name=&quot;02_bn&quot; )(output) output = tf.keras.layers.Activation( &quot;relu&quot;, name=&quot;02_relu&quot; )(output) </code></pre> <h3>Softmax Layer</h3> <pre><code>output = tf.keras.layers.Dense( units=NUM_LABELS, kernel_initializer='glorot_uniform', kernel_regularizer=tf.keras.regularizers.l2(l2=L2), activation='softmax', name=&quot;softmax&quot; )(output) </code></pre> <h3>Final Custom Model</h3> <pre><code>name = f&quot;{TIMESTAMP}_{MODEL_NAME.upper()}&quot; model = tf.keras.models.Model(inputs=[input_ids, attention_mask], outputs=output, name=name) model.compile( loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), 
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE), metrics=['accuracy'] ) model.summary() --- Layer (type) Output Shape Param # Connected to ================================================================================================== input_ids (InputLayer) [(None, 256)] 0 __________________________________________________________________________________________________ attention_mask (InputLayer) [(None, 256)] 0 __________________________________________________________________________________________________ tf_distil_bert_model (TFDistilB TFBaseModelOutput(la 66362880 input_ids[0][0] attention_mask[0][0] __________________________________________________________________________________________________ tf.__operators__.getitem_1 (Sli (None, 768) 0 tf_distil_bert_model[1][0] __________________________________________________________________________________________________ 01_dropout (Dropout) (None, 768) 0 tf.__operators__.getitem_1[0][0] __________________________________________________________________________________________________ 01_dense_relu_no_regularizer (D (None, 768) 590592 01_dropout[0][0] __________________________________________________________________________________________________ 01_bn (BatchNormalization) (None, 768) 3072 01_dense_relu_no_regularizer[0][0 __________________________________________________________________________________________________ 01_relu (Activation) (None, 768) 0 01_bn[0][0] __________________________________________________________________________________________________ 02_dense_relu_no_regularizer (D (None, 768) 590592 01_relu[0][0] __________________________________________________________________________________________________ 02_bn (BatchNormalization) (None, 768) 3072 02_dense_relu_no_regularizer[0][0 __________________________________________________________________________________________________ 02_relu (Activation) (None, 768) 0 02_bn[0][0] 
__________________________________________________________________________________________________ softmax (Dense) (None, 2) 1538 02_relu[0][0] ================================================================================================== Total params: 67,551,746 Trainable params: 1,185,794 Non-trainable params: 66,365,952 &lt;--- Base BERT model is frozen </code></pre> <h3>Data allocation</h3> <pre><code># -------------------------------------------------------------------------------- # Split data into training and validation # -------------------------------------------------------------------------------- raw_train = pd.read_csv(&quot;./train.csv&quot;) train_data, validation_data, train_label, validation_label = train_test_split( raw_train[DATA_COLUMN].tolist(), raw_train[LABEL_COLUMN].tolist(), test_size=.2, shuffle=True ) # X = dict(tokenize(train_data)) # Y = tf.convert_to_tensor(train_label) X = tf.data.Dataset.from_tensor_slices(( dict(tokenize(train_data)), # Convert BatchEncoding instance to dictionary train_label )).batch(BATCH_SIZE).prefetch(1) V = tf.data.Dataset.from_tensor_slices(( dict(tokenize(validation_data)), # Convert BatchEncoding instance to dictionary validation_label )).batch(BATCH_SIZE).prefetch(1) </code></pre> <h3>Train</h3> <pre><code># -------------------------------------------------------------------------------- # Train the model # https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit # Input data x can be a dict mapping input names to the corresponding array/tensors, # if the model has named inputs. Beware of the &quot;names&quot;. y should be consistent with x # (you cannot have Numpy inputs and tensor targets, or inversely). 
# -------------------------------------------------------------------------------- history = model.fit( x=X, # dictionary # y=Y, y=None, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, validation_data=V, ) </code></pre> <p>To implement the 1st approach, change the configuration as below.</p> <pre><code>USE_CUSTOM_HEAD = False </code></pre> <p>Then <code>FREEZE_BASE</code> is changed to <code>False</code> and <code>LEARNING_RATE</code> is changed to <code>5e-5</code> which will run Further Pre-training on the base BERT model.</p> <h3>Saving the model</h3> <p>For the 3rd approach, saving the model will cause issues. The <a href="https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.save_pretrained" rel="noreferrer">save_pretrained</a> method of the Huggingface Model cannot be used as the model is not a direct subclass of the Huggingface <a href="https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel" rel="noreferrer">PreTrainedModel</a>.</p> <p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model" rel="noreferrer">Keras save_model</a> causes an error with the default <code>save_traces=True</code>, or causes a different error with <code>save_traces=False</code> when loading the model with <a href="https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model" rel="noreferrer">Keras load_model</a>.</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-71-01d66991d115&gt; in &lt;module&gt;() ----&gt; 1 tf.keras.models.load_model(MODEL_DIRECTORY) 11 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/load.py in _unable_to_call_layer_due_to_serialization_issue(layer, *unused_args, **unused_kwargs) 865 'recorded when the object is called, and used when saving. 
To manually ' 866 'specify the input shape/dtype, decorate the call function with ' --&gt; 867 '`@tf.function(input_signature=...)`.'.format(layer.name, type(layer))) 868 869 ValueError: Cannot call custom layer tf_distil_bert_model of type &lt;class 'tensorflow.python.keras.saving.saved_model.load.TFDistilBertModel'&gt;, because the call function was not serialized to the SavedModel.Please try one of the following methods to fix this issue: (1) Implement `get_config` and `from_config` in the layer/model class, and pass the object to the `custom_objects` argument when loading the model. For more details, see: https://www.tensorflow.org/guide/keras/save_and_serialize (2) Ensure that the subclassed model or layer overwrites `call` and not `__call__`. The input shape and dtype will be automatically recorded when the object is called, and used when saving. To manually specify the input shape/dtype, decorate the call function with `@tf.function(input_signature=...)`. </code></pre> <p>Only <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights" rel="noreferrer">Keras Model save_weights</a> worked as far as I tested.</p> <h1>Experiments</h1> <p>As far as I tested with <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge" rel="noreferrer">Toxic Comment Classification Challenge</a>, the 1st approach gave better recall (identify true toxic comment, true non-toxic comment). Code can be accessed as below. 
Please provide corrections or suggestions if you find anything.</p> <ul> <li><a href="https://nbviewer.jupyter.org/github/omontasama/nlp-huggingface/blob/main/fine_tuning/huggingface_fine_tuning.ipynb" rel="noreferrer">Code for 1st and 3rd approach</a></li> </ul> <hr /> <h1>Related</h1> <ul> <li><a href="https://www.youtube.com/watch?v=_eSGWNqKeeY" rel="noreferrer">BERT Document Classification Tutorial with Code</a> - Fine tuning using TFDistilBertForSequenceClassification and Pytorch</li> <li><a href="https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379" rel="noreferrer">Hugging Face Transformers: Fine-tuning DistilBERT for Binary Classification Tasks</a> - Fine tuning using TFDistilBertModel</li> </ul>
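<p>As a small addendum, the padding and attention-mask behaviour described in the Basics section can be illustrated without the library. This is a plain-Python sketch of the idea only; a real tokenizer also inserts the <code>[CLS]</code>/<code>[SEP]</code> ids and performs subword splitting:</p>

```python
def pad_with_mask(token_ids, max_length, pad_id=0):
    """Pad (or truncate) a list of token ids and build the matching attention mask.

    Real positions get mask 1; padded positions get mask 0, so the model
    ignores them (plain-Python sketch of the tokenizer's padding behaviour).
    """
    n_real = min(len(token_ids), max_length)
    ids = token_ids[:max_length] + [pad_id] * (max_length - n_real)
    mask = [1] * n_real + [0] * (max_length - n_real)
    return ids, mask

# [101, ..., 102] mimics [CLS] ... [SEP] id values for illustration
ids, mask = pad_with_mask([101, 2023, 2003, 102], max_length=8)
print(ids)   # [101, 2023, 2003, 102, 0, 0, 0, 0]
print(mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```

<p>Positions where the mask is <code>0</code> are exactly the ones the model does not attend to.</p>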
145
BERT model
Training the BERT model with pytorch
https://stackoverflow.com/questions/72753200/training-the-bert-model-with-pytorch
<p>I am unable to figure out why my BERT model doesn't get past the training command. I am using pytorch-lightning. I am running the code on AWS EC2 (p3.2xlarge) and it does show me the available GPU, but I can't really figure out the device-side error. Could someone please point me in the right direction? I really appreciate your time and consideration. PS: The results are after setting CUDA_LAUNCH_BLOCKING=1.</p> <pre><code>trainer = pl.Trainer( logger=logger, checkpoint_callback=checkpoint_callback, callbacks=[early_stopping_callback], max_epochs=N_EPOCHS, gpus=1, progress_bar_refresh_rate=30, ) GPU available: True, used: True TPU available: False, using: 0 TPU cores IPU available: False, using: 0 IPUs In [155]: trainer.fit(model, data_module) LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) &lt;ipython-input-155-7b6b8391c42e&gt; in &lt;module&gt; ----&gt; 1 trainer.fit(model, data_module) ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloaders, val_dataloaders, datamodule, train_dataloader, ckpt_path) 739 train_dataloaders = train_dataloader 740 self._call_and_handle_interrupt( --&gt; 741 self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path 742 ) 743 ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in _call_and_handle_interrupt(self, trainer_fn, *args, **kwargs) 683 &quot;&quot;&quot; 684 try: --&gt; 685 return trainer_fn(*args, **kwargs) 686 # TODO: treat KeyboardInterrupt as BaseException (delete the code below) in v1.7 687 except KeyboardInterrupt as exception: ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in _fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path) 775 # TODO: ckpt_path only in v1.7 776 ckpt_path = ckpt_path or self.resume_from_checkpoint --&gt; 777 self._run(model, 
ckpt_path=ckpt_path) 778 779 assert self.state.stopped ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model, ckpt_path) 1143 1144 self._call_configure_sharded_model() # allow user to setup in model sharded environment -&gt; 1145 self.accelerator.setup(self) 1146 1147 # ---------------------------- ~/.local/lib/python3.6/site-packages/pytorch_lightning/accelerators/gpu.py in setup(self, trainer) 44 def setup(self, trainer: &quot;pl.Trainer&quot;) -&gt; None: 45 self.set_nvidia_flags(trainer.local_rank) ---&gt; 46 return super().setup(trainer) 47 48 def on_train_start(self) -&gt; None: ~/.local/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py in setup(self, trainer) 89 trainer: the trainer instance 90 &quot;&quot;&quot; ---&gt; 91 self.setup_training_type_plugin() 92 if not self.training_type_plugin.setup_optimizers_in_pre_dispatch: 93 self.setup_optimizers(trainer) ~/.local/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py in setup_training_type_plugin(self) 361 def setup_training_type_plugin(self) -&gt; None: 362 &quot;&quot;&quot;Attaches the training type plugin to the accelerator.&quot;&quot;&quot; --&gt; 363 self.training_type_plugin.setup() 364 365 def setup_precision_plugin(self) -&gt; None: ~/.local/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/single_device.py in setup(self) 69 70 def setup(self) -&gt; None: ---&gt; 71 self.model_to_device() 72 73 @property ~/.local/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/single_device.py in model_to_device(self) 66 67 def model_to_device(self) -&gt; None: ---&gt; 68 self._model.to(self.root_device) 69 70 def setup(self) -&gt; None: ~/.local/lib/python3.6/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py in to(self, *args, **kwargs) 109 out = torch._C._nn._parse_to(*args, **kwargs) 110 self.__update_properties(device=out[0], dtype=out[1]) --&gt; 111 return 
super().to(*args, **kwargs) 112 113 def cuda(self, device: Optional[Union[torch.device, int]] = None) -&gt; &quot;DeviceDtypeModuleMixin&quot;: ~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in to(self, *args, **kwargs) 897 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) 898 --&gt; 899 return self._apply(convert) 900 901 def register_backward_hook( ~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in _apply(self, fn) 568 def _apply(self, fn): 569 for module in self.children(): --&gt; 570 module._apply(fn) 571 572 def compute_should_use_set_data(tensor, tensor_applied): ~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in _apply(self, fn) 568 def _apply(self, fn): 569 for module in self.children(): --&gt; 570 module._apply(fn) 571 572 def compute_should_use_set_data(tensor, tensor_applied): ~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in _apply(self, fn) 568 def _apply(self, fn): 569 for module in self.children(): --&gt; 570 module._apply(fn) 571 572 def compute_should_use_set_data(tensor, tensor_applied): ~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in _apply(self, fn) 591 # `with torch.no_grad():` 592 with torch.no_grad(): --&gt; 593 param_applied = fn(param) 594 should_use_set_data = compute_should_use_set_data(param, param_applied) 595 if should_use_set_data: ~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in convert(t) 895 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, 896 non_blocking, memory_format=convert_to_format) --&gt; 897 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) 898 899 return self._apply(convert) RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. 
</code></pre> <p>Restarting the machine returned this:</p> <pre><code>LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] Missing logger folder: lightning_logs/nara-comments | Name | Type | Params ----------------------------------------- 0 | bert | BertModel | 108 M 1 | classifier | Linear | 288 K 2 | criterion | BCELoss | 0 ----------------------------------------- 108 M Trainable params 0 Non-trainable params 108 M Total params 434.395 Total estimated model params size (MB) /home/ubuntu/.local/lib/python3.6/site-packages/pytorch_lightning/utilities/data.py:60: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 4540. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`. &quot;Trying to infer the `batch_size` from an ambiguous collection. The batch size we&quot; /home/ubuntu/.local/lib/python3.6/site-packages/pytorch_lightning/utilities/data.py:60: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 4374. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`. &quot;Trying to infer the `batch_size` from an ambiguous collection. The batch size we&quot; Global seed set to 42 Epoch 0: 0% 0/397 [00:00&lt;?, ?it/s] /home/ubuntu/.local/lib/python3.6/site-packages/pytorch_lightning/loops/optimization/closure.py:36: LightningDeprecationWarning: One of the returned values {'predictions', 'labels'} has a `grad_fn`. We will detach it automatically but this behaviour will change in v1.6. Please detach it manually: `return {'loss': ..., 'something': something.detach()}` f&quot;One of the returned values {set(extra.keys())} has a `grad_fn`. 
We will detach it automatically&quot; --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) &lt;ipython-input-48-7b6b8391c42e&gt; in &lt;module&gt; ----&gt; 1 trainer.fit(model, data_module) ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloaders, val_dataloaders, datamodule, train_dataloader, ckpt_path) 739 train_dataloaders = train_dataloader 740 self._call_and_handle_interrupt( --&gt; 741 self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path 742 ) 743 ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in _call_and_handle_interrupt(self, trainer_fn, *args, **kwargs) 683 &quot;&quot;&quot; 684 try: --&gt; 685 return trainer_fn(*args, **kwargs) 686 # TODO: treat KeyboardInterrupt as BaseException (delete the code below) in v1.7 687 except KeyboardInterrupt as exception: ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in _fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path) 775 # TODO: ckpt_path only in v1.7 776 ckpt_path = ckpt_path or self.resume_from_checkpoint --&gt; 777 self._run(model, ckpt_path=ckpt_path) 778 779 assert self.state.stopped ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model, ckpt_path) 1197 1198 # dispatch `start_training` or `start_evaluating` or `start_predicting` -&gt; 1199 self._dispatch() 1200 1201 # plugin will finalized fitting (e.g. 
ddp_spawn will load trained model) ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in _dispatch(self) 1277 self.training_type_plugin.start_predicting(self) 1278 else: -&gt; 1279 self.training_type_plugin.start_training(self) 1280 1281 def run_stage(self): ~/.local/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in start_training(self, trainer) 200 def start_training(self, trainer: &quot;pl.Trainer&quot;) -&gt; None: 201 # double dispatch to initiate the training loop --&gt; 202 self._results = trainer.run_stage() 203 204 def start_evaluating(self, trainer: &quot;pl.Trainer&quot;) -&gt; None: ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in run_stage(self) 1287 if self.predicting: 1288 return self._run_predict() -&gt; 1289 return self._run_train() 1290 1291 def _pre_training_routine(self): ~/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py in _run_train(self) 1317 self.fit_loop.trainer = self 1318 with torch.autograd.set_detect_anomaly(self._detect_anomaly): -&gt; 1319 self.fit_loop.run() 1320 1321 def _run_evaluate(self) -&gt; _EVALUATE_OUTPUT: ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/base.py in run(self, *args, **kwargs) 143 try: 144 self.on_advance_start(*args, **kwargs) --&gt; 145 self.advance(*args, **kwargs) 146 self.on_advance_end() 147 self.restarting = False ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/fit_loop.py in advance(self) 232 233 with self.trainer.profiler.profile(&quot;run_training_epoch&quot;): --&gt; 234 self.epoch_loop.run(data_fetcher) 235 236 # the global step is manually decreased here due to backwards compatibility with existing loggers ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/base.py in run(self, *args, **kwargs) 143 try: 144 self.on_advance_start(*args, **kwargs) --&gt; 145 self.advance(*args, **kwargs) 146 self.on_advance_end() 147 self.restarting = False 
~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py in advance(self, *args, **kwargs) 191 192 with self.trainer.profiler.profile(&quot;run_training_batch&quot;): --&gt; 193 batch_output = self.batch_loop.run(batch, batch_idx) 194 195 self.batch_progress.increment_processed() ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/base.py in run(self, *args, **kwargs) 143 try: 144 self.on_advance_start(*args, **kwargs) --&gt; 145 self.advance(*args, **kwargs) 146 self.on_advance_end() 147 self.restarting = False ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py in advance(self, batch, batch_idx) 86 if self.trainer.lightning_module.automatic_optimization: 87 optimizers = _get_active_optimizers(self.trainer.optimizers, self.trainer.optimizer_frequencies, batch_idx) ---&gt; 88 outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx) 89 else: 90 outputs = self.manual_loop.run(split_batch, batch_idx) ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/base.py in run(self, *args, **kwargs) 143 try: 144 self.on_advance_start(*args, **kwargs) --&gt; 145 self.advance(*args, **kwargs) 146 self.on_advance_end() 147 self.restarting = False ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py in advance(self, batch, *args, **kwargs) 217 self._batch_idx, 218 self._optimizers[self.optim_progress.optimizer_position], --&gt; 219 self.optimizer_idx, 220 ) 221 if result.loss is not None: ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py in _run_optimization(self, split_batch, batch_idx, optimizer, opt_idx) 264 # gradient update with accumulated gradients 265 else: --&gt; 266 self._optimizer_step(optimizer, opt_idx, batch_idx, closure) 267 268 result = closure.consume_result() ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py in _optimizer_step(self, optimizer, 
opt_idx, batch_idx, train_step_and_backward_closure) 384 on_tpu=(self.trainer._device_type == DeviceType.TPU and _TPU_AVAILABLE), 385 using_native_amp=(self.trainer.amp_backend is not None and self.trainer.amp_backend == AMPType.NATIVE), --&gt; 386 using_lbfgs=is_lbfgs, 387 ) 388 ~/.local/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py in optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure, on_tpu, using_native_amp, using_lbfgs) 1650 1651 &quot;&quot;&quot; -&gt; 1652 optimizer.step(closure=optimizer_closure) 1653 1654 def optimizer_zero_grad(self, epoch: int, batch_idx: int, optimizer: Optimizer, optimizer_idx: int): ~/.local/lib/python3.6/site-packages/pytorch_lightning/core/optimizer.py in step(self, closure, **kwargs) 162 assert trainer is not None 163 with trainer.profiler.profile(profiler_action): --&gt; 164 trainer.accelerator.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs) ~/.local/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py in optimizer_step(self, optimizer, opt_idx, closure, model, **kwargs) 337 &quot;&quot;&quot; 338 model = model or self.lightning_module --&gt; 339 self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs) 340 341 def optimizer_zero_grad(self, current_epoch: int, batch_idx: int, optimizer: Optimizer, opt_idx: int) -&gt; None: ~/.local/lib/python3.6/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py in optimizer_step(self, model, optimizer, optimizer_idx, closure, **kwargs) 161 if isinstance(model, pl.LightningModule): 162 closure = partial(self._wrap_closure, model, optimizer, optimizer_idx, closure) --&gt; 163 optimizer.step(closure=closure, **kwargs) 164 165 def _track_grad_norm(self, trainer: &quot;pl.Trainer&quot;) -&gt; None: ~/.local/lib/python3.6/site-packages/torch/optim/lr_scheduler.py in wrapper(*args, **kwargs) 63 instance._step_count += 1 64 wrapped = func.__get__(instance, cls) 
---&gt; 65 return wrapped(*args, **kwargs) 66 67 # Note that the returned function here is no longer a bound method, ~/.local/lib/python3.6/site-packages/torch/optim/optimizer.py in wrapper(*args, **kwargs) 86 profile_name = &quot;Optimizer.step#{}.step&quot;.format(obj.__class__.__name__) 87 with torch.autograd.profiler.record_function(profile_name): ---&gt; 88 return func(*args, **kwargs) 89 return wrapper 90 ~/.local/lib/python3.6/site-packages/transformers/optimization.py in step(self, closure) 330 loss = None 331 if closure is not None: --&gt; 332 loss = closure() 333 334 for group in self.param_groups: ~/.local/lib/python3.6/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py in _wrap_closure(self, model, optimizer, optimizer_idx, closure) 146 consistent with the ``PrecisionPlugin`` subclasses that cannot pass ``optimizer.step(closure)`` directly. 147 &quot;&quot;&quot; --&gt; 148 closure_result = closure() 149 self._after_closure(model, optimizer, optimizer_idx) 150 return closure_result ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py in __call__(self, *args, **kwargs) 158 159 def __call__(self, *args: Any, **kwargs: Any) -&gt; Optional[Tensor]: --&gt; 160 self._result = self.closure(*args, **kwargs) 161 return self._result.loss 162 ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py in closure(self, *args, **kwargs) 153 if self._backward_fn is not None and step_output.closure_loss is not None: 154 with self._profiler.profile(&quot;backward&quot;): --&gt; 155 self._backward_fn(step_output.closure_loss) 156 157 return step_output ~/.local/lib/python3.6/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py in backward_fn(loss) 325 326 def backward_fn(loss: Tensor) -&gt; None: --&gt; 327 self.trainer.accelerator.backward(loss, optimizer, opt_idx) 328 329 # check if model weights are nan 
~/.local/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py in backward(self, closure_loss, *args, **kwargs) 312 closure_loss = self.precision_plugin.pre_backward(self.lightning_module, closure_loss) 313 --&gt; 314 self.precision_plugin.backward(self.lightning_module, closure_loss, *args, **kwargs) 315 316 closure_loss = self.precision_plugin.post_backward(self.lightning_module, closure_loss) ~/.local/lib/python3.6/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py in backward(self, model, closure_loss, optimizer, *args, **kwargs) 89 # do backward pass 90 if model is not None and isinstance(model, pl.LightningModule): ---&gt; 91 model.backward(closure_loss, optimizer, *args, **kwargs) 92 else: 93 self._run_backward(closure_loss, *args, **kwargs) ~/.local/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py in backward(self, loss, optimizer, optimizer_idx, *args, **kwargs) 1432 loss.backward() 1433 &quot;&quot;&quot; -&gt; 1434 loss.backward(*args, **kwargs) 1435 1436 def toggle_optimizer(self, optimizer: Union[Optimizer, LightningOptimizer], optimizer_idx: int) -&gt; None: ~/.local/lib/python3.6/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs) 305 create_graph=create_graph, 306 inputs=inputs) --&gt; 307 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) 308 309 def register_hook(self, hook): ~/.local/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 154 Variable._execution_engine.run_backward( 155 tensors, grad_tensors_, retain_graph, create_graph, inputs, --&gt; 156 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag 157 158 RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. 
For debugging consider passing CUDA_LAUNCH_BLOCKING=1. </code></pre>
146
BERT model
bert model as model.ckpt-1400000
https://stackoverflow.com/questions/63484093/bert-model-as-model-ckpt-1400000
<p>It is the first time that I want to use BERT. I'm trying to execute this code.</p> <pre><code>from keras_bert import load_trained_model_from_checkpoint config_path = './model/bert_config.json' checkpoint_path = './model/model.ckpt-1400000' bert = load_trained_model_from_checkpoint(config_path, checkpoint_path) bert.summary() </code></pre> <p>I have a &quot;download-model.sh&quot; file in &quot;model&quot; folder like this:</p> <pre><code>#!/bin/bash wget &quot;https://drive.google.com/uc?export=download&amp;id=1jjZmgSo8C9xMIos8cUMhqJfNbyyqR0MY&quot; -O wiki-ja.model wget &quot;https://drive.google.com/uc?export=download&amp;id=1uzPpW38LcS4YS431GgdG0Hsj4gNgE5X1&quot; -O wiki-ja.vocab wget &quot;https://drive.google.com/uc?export=download&amp;id=1LB00MDQJjb-xLmgBMhdQE3wKDOLjgum-&quot; -O model.ckpt-1400000.index wget &quot;https://drive.google.com/uc?export=download&amp;id=1V9TIUn5wc-mB_wabYiz9ikvLsscONOKB&quot; -O model.ckpt-1400000.meta curl -sc /tmp/cookie &quot;https://drive.google.com/uc?export=download&amp;id=1F4b_u-5zzqabA6OfLxDkLh0lzqVIEZuN&quot; &gt; /dev/null CODE=&quot;$(awk '/_warning_/ {print $NF}' /tmp/cookie)&quot; curl -Lb /tmp/cookie &quot;https://drive.google.com/uc?export=download&amp;confirm=${CODE}&amp;id=1F4b_u-5zzqabA6OfLxDkLh0lzqVIEZuN&quot; -o model.ckpt-1400000.data-00000-of-00001 </code></pre> <p>when I run that code I get this error:</p> <pre><code>NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./model/model.ckpt-1400000 </code></pre> <p>what should I do?</p>
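The <code>NotFoundError</code> above usually means the checkpoint shards never landed next to the prefix: a TensorFlow checkpoint is not one file, but a prefix (`model.ckpt-1400000`) plus `.index`, `.meta`, and `.data-*` shard files. A dependency-free sketch for verifying that the download script produced all three (the `checkpoint_shards` helper and the temp-dir setup are illustrative, not part of the original post):

```python
import glob
import os
import tempfile

def checkpoint_shards(prefix):
    """Return the file names that make up a TF checkpoint with this prefix."""
    return sorted(os.path.basename(p) for p in glob.glob(prefix + ".*"))

# Fake shard files standing in for the real download:
tmp = tempfile.mkdtemp()
prefix = os.path.join(tmp, "model.ckpt-1400000")
for suffix in (".index", ".meta", ".data-00000-of-00001"):
    open(prefix + suffix, "w").close()

print(checkpoint_shards(prefix))
# ['model.ckpt-1400000.data-00000-of-00001', 'model.ckpt-1400000.index', 'model.ckpt-1400000.meta']
```

If the list is empty or a shard is missing, the Google Drive download likely failed (the confirm-cookie step is fragile), and the loader will raise exactly this error.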
147
BERT model
How many neurons (units) are there in the BERT model?
https://stackoverflow.com/questions/75844264/how-many-neurons-units-are-there-in-the-bert-model
<p>How to estimate the number of neurons (units) in the BERT model? <strong>Note</strong> this is different from the number of model parameters.</p>
<p>Depending on which field you come from, the &quot;neurons&quot; definition might differ.</p> <p>In general, people in computer science conflate <code>num_neurons = num_parameters</code>. But this might not be the case if one is interested in a more neurological/biological perspective.</p> <h3>Q: Why do computer scientists care about the no. of parameters, not neurons?</h3> <p>Because they determine the effectiveness of the model in terms of FLOPs, see <a href="https://www.lesswrong.com/posts/jJApGWG95495pYM7C/how-to-measure-flop-s-for-neural-networks-empirically" rel="nofollow noreferrer">https://www.lesswrong.com/posts/jJApGWG95495pYM7C/how-to-measure-flop-s-for-neural-networks-empirically</a></p> <h3>Q: How many neurons is one parameter, or vice versa?</h3> <p>We can only estimate that; naively, we can treat it as 1 neuron = 1000 parameters.</p> <p><strong>Reference:</strong> <a href="https://www.beren.io/2022-08-06-The-scale-of-the-brain-vs-machine-learning/" rel="nofollow noreferrer">https://www.beren.io/2022-08-06-The-scale-of-the-brain-vs-machine-learning/</a></p> <h3>Q: How many neurons does BERT have?</h3> <p>It depends on which flavor of BERT you are referring to.</p> <p>Using snippets from <a href="https://stackoverflow.com/questions/49201236/check-the-total-number-of-parameters-in-a-pytorch-model">Check the total number of parameters in a PyTorch model</a>:</p> <pre><code>from transformers import AutoModel model = AutoModel.from_pretrained(&quot;bert-base-cased&quot;) sum(p.numel() for p in model.parameters()) </code></pre> <p>[out]:</p> <pre><code>108310272 </code></pre> <p>So, working backwards with the 1000:1 ratio, 108,310,272 parameters -&gt; ~0.1M neurons.</p>
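The parameter count in the snippet above generalizes to any model: it is just the sum of the element counts of every weight tensor. The same bookkeeping can be sketched without any dependencies, using made-up layer shapes rather than BERT's real ones:

```python
from math import prod

def count_parameters(named_shapes):
    """Sum the number of elements across all weight tensors."""
    return sum(prod(shape) for shape in named_shapes.values())

# Hypothetical toy model: an embedding table plus one dense layer.
shapes = {
    "embedding.weight": (30522, 768),  # vocab x hidden
    "dense.weight": (768, 768),
    "dense.bias": (768,),
}
print(count_parameters(shapes))  # 24031488
```

This mirrors what `sum(p.numel() for p in model.parameters())` computes in PyTorch, where each `p` carries one of these shapes.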
148
BERT model
Unable to import BERT model with all packages
https://stackoverflow.com/questions/73359430/unable-to-import-bert-model-with-all-packages
<p>I am trying to learn NLP using BERT. While trying to import the BERT model and tokenizer in Colab, I am facing the error below.</p> <pre><code>ImportError: cannot import name '_LazyModule' from 'transformers.file_utils' (/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py) </code></pre> <p>Here is my code:</p> <pre><code>!pip install transformers==4.11.3 from transformers import BertModel, BertTokenizer import torch </code></pre> <p>To fix the error, I tried to upgrade both transformers and torch.</p> <p>I have tried the solution from the below link: <a href="https://stackoverflow.com/questions/66590981/transformer-error-importing-packages-importerror-cannot-import-name-save-st">This</a></p> <p>Still I am unable to move forward. Please assist.</p>
<p>Based on these links: <a href="https://github.com/Lightning-AI/lightning-flash/issues/630#issuecomment-891031269" rel="nofollow noreferrer">1</a>, <a href="https://stackoverflow.com/questions/71901851/importerror-cannot-import-name-lazymodule-from-transformers-utils">2</a></p> <p>This should help:</p> <pre><code>pip install 'lightning-flash[text]' --upgrade </code></pre> <p>Since the code you provided runs on Colab when I tried it, the code itself is not the cause of the error; an outdated package in your environment is the likely culprit.</p>
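Since errors like this `ImportError` almost always come from mismatched package versions, it can help to check what is actually installed before reinstalling. A small stdlib-only helper (hypothetical, Python 3.8+):

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("pip"))                       # e.g. '23.0.1'
print(installed_version("surely-not-installed-xyz"))  # None
```

Comparing the reported versions of `transformers`, `torch`, and `lightning-flash` against each project's compatibility notes narrows down which one to upgrade.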
149
BERT model
ValueError when pre-training BERT model using Trainer API
https://stackoverflow.com/questions/70263251/valueerror-when-pre-training-bert-model-using-trainer-api
<p>I'm trying to fine-tune/pre-train an existing BERT model for sentiment analysis by using the Trainer API in the <code>transformers</code> library. My training dataset looks like this:</p> <pre><code>Text Sentiment This was good place 1 This was bad place 0 </code></pre> <p>My goal is to be able to classify sentiments as positive/negative. Here is my code:</p> <pre><code>from datasets import load_dataset from datasets import load_dataset_builder import datasets import transformers from transformers import TrainingArguments from transformers import Trainer dataset = load_dataset('csv', data_files='my_data.csv', sep=';') tokenizer = transformers.BertTokenizer.from_pretrained(&quot;bert-base-cased&quot;) model = transformers.BertForMaskedLM.from_pretrained(&quot;bert-base-cased&quot;) print(dataset) def tokenize_function(examples): return tokenizer(examples[&quot;Text&quot;], examples[&quot;Sentiment&quot;], truncation=True) tokenized_datasets = dataset.map(tokenize_function, batched=True) training_args = TrainingArguments(&quot;test_trainer&quot;) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets ) trainer.train() </code></pre> <p>This throws the error message:</p> <pre><code>ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples). </code></pre> <p>What am I doing wrong? Any advice is highly appreciated.</p>
<p>There are several points here to which you need to pay attention in order to have your code working.</p> <p>First of all, you are working on a sequence classification task, specifically a binary classification, so you need to instantiate your model accordingly:</p> <pre><code># replace this: # model = transformers.BertForMaskedLM.from_pretrained(&quot;bert-base-cased&quot;) # by this: model = transformers.BertForSequenceClassification.from_pretrained(&quot;bert-base-cased&quot;, num_labels=2) </code></pre> <p>You shouldn't provide the labels (<code>examples[&quot;Sentiment&quot;]</code>) to the tokenizer, as they don't need to be tokenized:</p> <pre><code># use only the text as input # use padding to standardize sequence length return tokenizer(examples[&quot;Text&quot;], truncation=True, padding='max_length') </code></pre> <p>Speaking of labels, your trainer will expect them to be in a column named 'label', so you have to rename your 'Sentiment' accordingly. Note that this method doesn't operate in-place as you could expect, it returns a new dataset that you have to capture.</p> <pre><code># for example, after you tokenized the dataset: tokenized_datasets = tokenized_datasets.rename_column('Sentiment', 'label') </code></pre> <p>Finally, you need to specify the split of the dataset you actually want to use for training. Here, since you did not split the dataset, it should contain only one: 'train'</p> <pre><code>trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets['train'] # here ) </code></pre> <p>That should make your code <em>work</em>, but doesn't mean you'll get any interesting result. As you're interested in working with transformers, I strongly recommend you have a look at the <a href="https://huggingface.co/docs/transformers/notebooks" rel="nofollow noreferrer">series of notebooks by huggingface</a>.</p>
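The `ValueError` in the question is the tokenizer's own input check firing because integer labels were passed where text was expected. That check behaves roughly like this sketch (a simplification for illustration, not the actual huggingface code):

```python
def validate_text_input(batch):
    """Mimic the tokenizer's check: each item must be a str
    or a list of str (a pretokenized example)."""
    for item in batch:
        ok = isinstance(item, str) or (
            isinstance(item, list) and all(isinstance(t, str) for t in item)
        )
        if not ok:
            raise ValueError(
                "text input must of type `str` (single example), `List[str]` "
                "(batch or single pretokenized example) or `List[List[str]]` "
                "(batch of pretokenized examples)."
            )

validate_text_input(["This was good place", "This was bad place"])  # fine
try:
    validate_text_input([1, 0])  # integer labels, not text
except ValueError as e:
    print("raised:", e)
```

This is why dropping `examples["Sentiment"]` from the tokenizer call, as recommended above, makes the error go away.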
150
BERT model
CNN model and bert with text
https://stackoverflow.com/questions/71309113/cnn-model-and-bert-with-text
<p>I got an error in the linear function.</p> <pre><code>class MixModel(nn.Module): def __init__(self,pre_trained='bert-base-uncased'): super().__init__() self.bert = AutoModel.from_pretrained('distilbert-base-uncased') self.hidden_size = self.bert.config.hidden_size self.conv = nn.Conv1d(in_channels=768, out_channels=256, kernel_size=5, padding='valid', stride=1) self.relu = nn.ReLU() self.pool = nn.MaxPool1d(kernel_size= 64- 5 + 1) self.dropout = nn.Dropout(0.3) self.clf = nn.Linear(self.hidden_size*2,6) def forward(self,inputs, mask , labels): cls_hs = self.bert(input_ids=inputs,attention_mask=mask, return_dict= False) x=cls_hs[0] print(cls_hs[0]) print(len(cls_hs[0])) print(cls_hs[0].size()) #x = torch.cat(cls_hs,0) # x= [416, 64, 768] x = x.permute(0, 2, 1) x = self.conv(x) x = self.relu(x) x = self.pool(x) x = self.dropout(x) x = self.clf(x) return x </code></pre> <p>The error is:</p> <pre><code>/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1846 if has_torch_function_variadic(input, weight, bias): 1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias) -&gt; 1848 return torch._C._nn.linear(input, weight, bias) 1849 1850 RuntimeError: mat1 and mat2 shapes cannot be multiplied (65536x1 and 1536x6) </code></pre> <p>I am trying to concatenate a BERT model with a 1D CNN using PyTorch, as discussed here: <a href="https://stackoverflow.com/questions/71301279/output-from-bert-into-cnn-model/71304677?noredirect=1#comment126045460_71304677">output from bert into cnn model</a></p>
<pre><code>class MixModel(nn.Module): def __init__(self,pre_trained='bert-base-uncased'): super().__init__() self.bert = AutoModel.from_pretrained('distilbert-base-uncased') self.hidden_size = self.bert.config.hidden_size self.conv = nn.Conv1d(in_channels=768, out_channels=256, kernel_size=5, padding='valid', stride=1) self.relu = nn.ReLU() self.pool = nn.MaxPool1d(kernel_size= 64- 5 + 1) self.dropout = nn.Dropout(0.3) self.clf1 = nn.Linear(256,256) self.clf2 = nn.Linear(256,6) </code></pre> <p>Change the linear layers: after the conv/pool stack the feature size per example is 256 (the conv's <code>out_channels</code>), not <code>self.hidden_size*2</code>, so the classifier input must be 256.</p>
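To see why 256 is the right input size, trace the sequence length through the conv and pool layers with the standard 1-D output-length formula, `L_out = (L_in - kernel) // stride + 1`. A dependency-free sketch using the question's sizes (the sequence length of 64 is taken from the question's comments and is an assumption about the real batch):

```python
def conv1d_out_len(l_in, kernel, stride=1):
    """Output length of a 1-D convolution or max-pool with 'valid' padding."""
    return (l_in - kernel) // stride + 1

seq_len = 64
after_conv = conv1d_out_len(seq_len, kernel=5)                 # 60
# nn.MaxPool1d defaults stride to its kernel size (64 - 5 + 1 = 60):
after_pool = conv1d_out_len(after_conv, kernel=60, stride=60)  # 1

channels = 256
print(after_conv, after_pool, channels * after_pool)  # 60 1 256
```

So each example ends up as 256 channels of length 1, which is also why the failing matmul reported `65536x1` (256 channels x 256 batch, against a single pooled position) instead of a `hidden_size*2 = 1536` feature vector.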
151
BERT model
Fine-tuning a BERT model without answers
https://stackoverflow.com/questions/77783666/fine-tuning-a-bert-model-without-answers
<p>After many hours of googling, I come here to ask whether it is possible to fine-tune a <code>BERT</code> question-answering model with only questions and contexts (no answers). I am a beginner with the BERT model, so even after much searching I don't understand its deep mechanism. Thank you for your answers.</p>
152
BERT model
pre-trained BERT model learning wrong way
https://stackoverflow.com/questions/69757233/pre-trained-bert-model-learning-wrong-way
<p>I have trained my pre-trained BERT model from the Hugging Face library on the <code>Jigsaw Toxic Comment Classification dataset</code> to detect hateful comments. However, when I run inference on positive sentences, it gives me wrong results.</p> <p>For example, if I provide the sentence <code>You are a nice person</code>, sometimes it is predicted as toxic or insulting, but for negative sentences it gives the right result.</p> <pre class="lang-py prettyprint-override"><code>##################### Inference part ######### test_comment = &quot;You are a nice person&quot; encoding = tokenizer.encode_plus( test_comment, add_special_tokens=True, max_length=512, return_token_type_ids=False, padding=&quot;max_length&quot;, return_attention_mask=True, return_tensors='pt', ) _, test_prediction = trained_model(encoding[&quot;input_ids&quot;], encoding[&quot;attention_mask&quot;]) test_prediction = test_prediction.flatten().numpy() for label, prediction in zip(LABEL_COLUMNS, test_prediction): print(f&quot;{label}: {prediction}&quot;) </code></pre> <pre><code>Result: toxic: 0.289602130651474 severe_toxic: 0.012312621809542179 obscene: 0.26335516571998596 threat: 0.0017053773626685143 insult: 0.54698246717453 identity_hate: 0.0013856851728633046 </code></pre> <p>Here are my training details:</p> <pre><code>BERT model: bert-base-uncased epochs: 4 batch-size: 16 Learning rate: 2e-5 Loss function: BCE loss </code></pre> <p>Since the dataset is imbalanced, I have done undersampling for the majority class. I also tested with augmenting the dataset, but it didn't help much.</p> <p>Is there any reason why this model is associating positive sentences with toxicity? <a href="https://i.sstatic.net/6AVs9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6AVs9.png" alt="Classification report" /></a></p>
153
BERT model
How to reuse the BERT model using tensorflow.contrib
https://stackoverflow.com/questions/58115439/how-to-reuse-the-bert-model-using-tensorflow-contrib
<p>I have tried the code below for reusing the saved BERT model.</p> <pre><code>def serving_input_receiver_fn(): feature_spec = { "input_ids" : tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64), "input_mask" : tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64), "segment_ids" : tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64), "label_ids" : tf.FixedLenFeature([], tf.int64) } serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input_example_tensor') print(serialized_tf_example, "serialized_tf_example") print(serialized_tf_example.shape, "Shape") receiver_tensors = {'example': serialized_tf_example} print(receiver_tensors, "receiver_tensors") features = tf.parse_example(serialized_tf_example, feature_spec) return tf.estimator.export.ServingInputReceiver(features, receiver_tensors) export_path = './BERTmodel/Data/' </code></pre> <p><strong>But I am receiving the error below: 'Cannot feed value of shape () for Tensor 'input_example_tensor:0', which has shape '(?,)''</strong></p> <p>I tried the code below for prediction.</p> <p>Can someone advise me on this?</p> <pre><code>pred_sentences = ["The site is great", "I think it's not good"] def getPrediction(in_sentences): labels = ["Negative", "Positive", "Neutral"] input_examples = [run_classifier.InputExample(guid="", text_a = x, text_b = None, label = 0) for x in in_sentences] input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer) predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False) return predict_input_fn from tensorflow.contrib import predictor with tf.Session() as sess: predict_fn = predictor.from_saved_model('model_path') predictions = predict_fn({"example": getPrediction(pred_sentences)}) print(predictions) </code></pre>
154
BERT model
Adding new labels to an already trained BERT model
https://stackoverflow.com/questions/62337074/adding-new-labels-to-an-already-trained-bert-model
<p>I am using BERT for Named Entity Recognition. Initially I had only 18 labels, and I trained the model using the 18 labels and saved the model. Now I added 2 more new labels, and when I updated the previously saved model I am getting the following error:</p> <pre class="lang-sh prettyprint-override"><code>C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0 ], thread: [0,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. CUDA error: device-side assert triggered Traceback (most recent call last): File "C:\Users\jk2446\Desktop\jeril\repos\jk2446-phoenix\apps\utilities\utils.py", line 48, in catch_errors return func(*args, **kwargs) File "C:\Users\jk2446\Desktop\jeril\repos\jk2446-phoenix\apps\utilities\bert_utils.py", line 414, in start_training loss.backward() File "C:\Users\jk2446\AppData\Roaming\Python\Python36\site-packages\torch\tensor.py", line 166, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "C:\Users\jk2446\AppData\Roaming\Python\Python36\site-packages\torch\autograd\__init__.py", line 99, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: CUDA error: device-side assert triggered </code></pre> <p>The following is my code:</p> <pre class="lang-py prettyprint-override"><code>model = BertForTokenClassification.from_pretrained(model_dir) # inititalizing the model to use GPU if torch.cuda.is_available(): __ = model.cuda() torch.cuda.empty_cache() # finetuning the model FULL_FINETUNING = True if FULL_FINETUNING: param_optimizer = list(model.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any( nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any( nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] else: param_optimizer = list(model.classifier.named_parameters()) optimizer_grouped_parameters = [ {"params": [p for 
n, p in param_optimizer]}] optimizer = Adam(optimizer_grouped_parameters, lr=3e-5) model.train() tr_loss = 0 nb_tr_examples, nb_tr_steps = 0, 0 for step, batch in enumerate(train_dataloader): # add batch to gpu batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch b_input_ids, b_input_mask, b_labels = b_input_ids.long( ), b_input_mask.long(), b_labels.long() # forward pass loss, scores = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) # backward pass loss.backward() # track train loss tr_loss += loss.item() nb_tr_examples += b_input_ids.size(0) nb_tr_steps += 1 # gradient clipping torch.nn.utils.clip_grad_norm_( parameters=model.parameters(), max_norm=max_grad_norm) # update parameters optimizer.step() model.zero_grad() # print train loss per epoch train_loss = tr_loss / nb_tr_steps print("Train loss: {}".format(train_loss)) </code></pre> <p>Is there a way to update the already trained BERT model with the new labels? Kindly help.</p>
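The `Assertion t >= 0 && t < n_classes failed` message means at least one training label lies outside the range the loaded classification head supports, which is exactly what happens when a model saved with 18 output classes is fed the two new labels. A CPU-side pre-check like this hypothetical helper surfaces the offending labels with a readable error instead of an opaque CUDA assert:

```python
def find_out_of_range_labels(labels, n_classes):
    """Return (index, label) pairs that would trip the CUDA-side assert."""
    return [(i, t) for i, t in enumerate(labels) if not 0 <= t < n_classes]

# Hypothetical batch: 18 and 19 are the two newly added labels,
# but the saved model's head still has n_classes = 18.
labels = [0, 5, 17, 18, 19]
print(find_out_of_range_labels(labels, n_classes=18))  # [(3, 18), (4, 19)]
```

If this finds out-of-range labels, the model's classification head has to be rebuilt with the new `num_labels` before resuming training.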
155
BERT model
Bert model not learning using JAX. Results don&#39;t change
https://stackoverflow.com/questions/79383687/bert-model-not-learning-using-jax-results-dont-change
<p>I am training a BERT model on spam classification using JAX on TPUs. My model hasn't been learning nor its results have changed.</p> <pre><code>Epoch 0: Train Loss = 2.7961559295654297: Train Accuracy: 0.30608975887298584 Eval Loss = 3.6600053310394287: Eval Accuracy = 0.0 Epoch 1: Train Loss = 2.7961559295654297: Train Accuracy: 0.30608975887298584 Eval Loss = 3.6600053310394287: Eval Accuracy = 0.0 Epoch 2: Train Loss = 2.7961559295654297: Train Accuracy: 0.30608975887298584 Eval Loss = 3.6600053310394287: Eval Accuracy = 0.0 </code></pre> <p>The code for the training:</p> <pre class="lang-py prettyprint-override"><code>@jax.pmap def train_step(state, batch, labels): def loss_fn(params): # get everything out of the batch to the model and pass the model parameters logits = model(**batch, params = state.params).logits loss = compute_loss(logits, labels) # compute the loss return loss, logits # turn the loss function into a grad differential function grad_fn = jax.value_and_grad(loss_fn, has_aux = True) # has_aux allows the return of the logits # get the loss and grads from the grad_fn (loss, logits), grads = grad_fn(state.params) # update the model state by using the produces gradients new_state = state.apply_gradients(grads = grads) return loss, logits, new_state for epoch in range(epochs): epoch_losses, epoch_accuracies = [], [] for batch in train_dataset: batch[&quot;input_ids&quot;] = jnp.array(batch[&quot;input_ids&quot;]) batch[&quot;attention_mask&quot;] = jnp.array(batch[&quot;attention_mask&quot;]) batch[&quot;token_type_ids&quot;] = jnp.array(batch[&quot;token_type_ids&quot;]) # we will replicate the value over multiple devices (tpus) batch_inputs = {k: jax.device_put_replicated(v, jax.devices()) for k, v in batch.items() if k != &quot;Category&quot;} batch_labels = jax.device_put_replicated(batch[&quot;Category&quot;], jax.devices()) # replicate labels across devices # remove none from data batch_labels = 
safe_convert_to_jax_array(jnp.array(batch_labels)) batch_labels = batch_labels.transpose(1, 0) loss, logits, state = train_step(state, batch_inputs, batch_labels) cls_logits = logits[:, :, 0, :] classification_logits = cls_logits[:, :, :2] predicted_labels = jnp.argmax(classification_logits, axis = -1) accuracy = compute_accuracy(predicted_labels, batch_labels) </code></pre> <p>The code for initializing the state:</p> <pre class="lang-py prettyprint-override"><code>class TrainState(train_state.TrainState): pass # our model parameters params = model.params # create the intial state for our training state = TrainState.create(apply_fn = model.__call__, params = params, tx = optimizer) def safe_convert_to_jax_array(input_data, default_value = 0): # replace None values with default_value return jnp.array([default_value if x is None else x for x in input_data]) # replicate the state across tpus state = jax.device_put_replicated(state, jax.devices()) </code></pre> <p>To see the full code: <a href="https://www.kaggle.com/code/yousefr/bert-spam-classification-using-jax-and-tpus" rel="nofollow noreferrer">https://www.kaggle.com/code/yousefr/bert-spam-classification-using-jax-and-tpus</a></p> <p>Also, I tried tweaking the learning rate, which didn't help.</p>
156
BERT model
How to convert bert model output to json?
https://stackoverflow.com/questions/73830782/how-to-convert-bert-model-output-to-json
<p>I have fine-tuned a BERT model and am testing my output from different layers. I tested this in SageMaker with my own custom script (see below), and the output I get is of the BaseModelOutputWithPoolingAndCrossAttentions class. How can I convert the output of this, especially the tensor values from the last_hidden_state, to JSON?</p> <p>inference.py</p> <pre><code>from transformers import BertModel, BertConfig

def model_fn():
    config = BertConfig.from_pretrained(&quot;xxx&quot;, output_hidden_states=True)
    model = BertModel.from_pretrained(&quot;xxx&quot;, config=config)
    ....

def predict_fn():
    ....
    return model(inputs)
</code></pre> <p>model output</p> <pre><code>BaseModelOutputWithPoolingAndCrossAttentions(
    last_hidden_state=
        tensor([[[-1.6968, 1.9364, -2.1796, -0.0819, 1.8027, 0.3540, 1.3269, 0.1532],
                 [-0.4969, 0.4169, 0.5677, 1.0968, 0.0742, 1.5354, 0.9387, 0.0343]]])
        device='cuda:0', grad_fn=&lt;NativeLayerNormBackward&gt;),
    hidden_states=None,
    attentions=None,
    ...
</code></pre>
<p>Grab the output, access <code>last_hidden_state</code>, and convert it to a list.</p> <pre class="lang-py prettyprint-override"><code>import json

output = predict_fn()
tensor = output.last_hidden_state
tensor_as_list = tensor.tolist()
json_str = json.dumps(tensor_as_list)
</code></pre>
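As a self-contained illustration of the serialization step (using a plain nested list in place of a real `last_hidden_state` tensor, since `tolist()` on a real output requires loading the model):

```python
import json

# Stand-in for output.last_hidden_state.tolist(): a nested Python list
# with shape [batch, seq_len, hidden] (here 1 x 2 x 4 for brevity).
tensor_as_list = [[[-1.6968, 1.9364, -2.1796, -0.0819],
                   [-0.4969, 0.4169, 0.5677, 1.0968]]]

json_str = json.dumps(tensor_as_list)

# The round trip back recovers the same nested lists,
# since Python floats survive JSON serialization exactly.
restored = json.loads(json_str)
print(restored == tensor_as_list)  # True
```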
157
BERT model
Cannot load German BERT model in spaCy
https://stackoverflow.com/questions/61899118/cannot-load-german-bert-model-in-spacy
<p>Here is my problem: I am working on the German text classification project. I use spacy for that and decided to fine-tune its pretrained BERT model to get better results. However, when I try to load it to the code, it shows me errors.</p> <p>Here is what I've done:</p> <ol> <li>Installed spacy-transformers: <code>pip install spacy-transformers</code></li> <li>Downloaded German BERT model: <code>python -m spacy download de_trf_bertbasecased_lg</code>. It was downloaded successfully and showed me: <code>✔ Download and installation successful You can now load the model via spacy.load('de_trf_bertbasecased_lg')</code></li> <li>Wrote the following code:</li> </ol> <p><code>import spacy nlp = spacy.load('de_trf_bertbasecased_lg')</code></p> <p>And the output was:</p> <pre><code>Traceback (most recent call last): File "&lt;pyshell#1&gt;", line 1, in &lt;module&gt; nlp = spacy.load('de_trf_bertbasecased_lg') File "C:\Python\Python37\lib\site-packages\spacy\__init__.py", line 30, in load return util.load_model(name, **overrides) File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 164, in load_model return load_model_from_package(name, **overrides) File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 185, in load_model_from_package return cls.load(**overrides) File "C:\Python\Python37\lib\site-packages\de_trf_bertbasecased_lg\__init__.py", line 12, in load return load_model_from_init_py(__file__, **overrides) File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 228, in load_model_from_init_py return load_model_from_path(data_path, meta, **overrides) File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 196, in load_model_from_path cls = get_lang_class(lang) File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 70, in get_lang_class if lang in registry.languages: File "C:\Python\Python37\lib\site-packages\catalogue.py", line 56, in __contains__ has_entry_point = self.entry_points and self.get_entry_point(name) File 
"C:\Python\Python37\lib\site-packages\catalogue.py", line 140, in get_entry_point return entry_point.load() File "C:\Python\Python37\lib\site-packages\importlib_metadata\__init__.py", line 94, in load module = import_module(match.group('module')) File "C:\Python\Python37\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 1006, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 983, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 967, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 677, in _load_unlocked File "&lt;frozen importlib._bootstrap_external&gt;", line 728, in exec_module File "&lt;frozen importlib._bootstrap&gt;", line 219, in _call_with_frames_removed File "C:\Python\Python37\lib\site-packages\spacy_transformers\__init__.py", line 1, in &lt;module&gt; from .language import TransformersLanguage File "C:\Python\Python37\lib\site-packages\spacy_transformers\language.py", line 5, in &lt;module&gt; from .util import is_special_token, pkg_meta, ATTRS, PIPES, LANG_FACTORY File "C:\Python\Python37\lib\site-packages\spacy_transformers\util.py", line 2, in &lt;module&gt; import transformers File "C:\Python\Python37\lib\site-packages\transformers\__init__.py", line 20, in &lt;module&gt; from .file_utils import (TRANSFORMERS_CACHE, PYTORCH_TRANSFORMERS_CACHE, PYTORCH_PRETRAINED_BERT_CACHE, File "C:\Python\Python37\lib\site-packages\transformers\file_utils.py", line 37, in &lt;module&gt; import torch File "C:\Python\Python37\lib\site-packages\torch\__init__.py", line 81, in &lt;module&gt; ctypes.CDLL(dll) File "C:\Python\Python37\lib\ctypes\__init__.py", line 356, in __init__ self._handle = _dlopen(self._name, mode) OSError: [WinError 126] The specified module could not be found </code></pre> <p>If I run the same code in PyCharm, it also shows me these two lines before all of those above:</p> 
<pre><code>2020-05-19 18:00:55.132721: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found 2020-05-19 18:00:55.132990: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. </code></pre> <p>If I got it right, these two lines complain that I don't have a GPU. However, according to the docs, I should be able to use BERT even without GPU.</p> <p>So I am really stuck right now and looking for your help. </p> <p>I should also mention, that I used <code>de_core_news_sm</code> model before and it worked fine.</p> <p>I have also already tried several solutions, but none of them worked. I tried: <a href="https://www.kaggle.com/questions-and-answers/103976" rel="nofollow noreferrer">this</a> and <a href="https://stackoverflow.com/questions/1940578/windowserror-error-126-the-specified-module-could-not-be-found">this</a>. I have also tried to uninstall all <code>spacy</code>-related libraries and installed them again. Didn't help either.</p> <p>I am working with:</p> <blockquote> <p>Windows 10 Home</p> <p>Python: 3.7.2</p> <p>Spacy: 2.2.4</p> <p>Spacy-transformers: 0.5.1</p> </blockquote> <p>Would appreciate any help or advice!</p>
<p>It's probably a problem with your installation of <code>torch</code>. Start in a clean virtual environment and install <code>torch</code> using the instructions here with CUDA as None: <a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">https://pytorch.org/get-started/locally/</a>. Then install <code>spacy-transformers</code> with <code>pip</code>.</p>
158
BERT model
i have an error while training BERT model
https://stackoverflow.com/questions/62991456/i-have-an-error-while-training-bert-model
<p>I'm trying to train my model using GPUs.</p> <p>When I execute it I get this error below:</p> <pre><code>File &quot;main.py&quot;, line 95, in train loss.backward() File &quot;/opt/conda/lib/python3.7/site-packages/torch/tensor.py&quot;, line 198, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File &quot;/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py&quot;, line 100, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: transform: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered # Added identdation for better readability </code></pre> <p>This error occurs when i use single or multiple GPUs.</p> <p>How can i deal with this issue? Thank you</p> <p>Here is the code of the BERT model (no segment input, no pretraining) c.f. When I use another model, this issue didn't occure.</p> <pre><code>&quot;&quot;&quot;code&quot;&quot;&quot; import math import torch import torch.nn as nn import torch.nn.functional as F class HateClassification(nn.Module): def __init__(self, config): super().__init__() self.config = config self.bert = BERT(self.config) # classfier self.projection_cls = nn.Linear(self.config.d_hidn, self.config.n_output, bias=False) self.sigmoid = nn.Sigmoid() def forward(self, inputs): inputs = torch.transpose(inputs,0,1) # (bs, n_enc_seq, d_hidn), (bs, d_hidn), [(bs, n_head, n_enc_seq, n_enc_seq)] outputs, outputs_cls, attn_probs = self.bert(inputs) # (bs, n_output) logits_cls = self.sigmoid(self.projection_cls(outputs_cls)) # (bs, n_output), [(bs, n_head, n_enc_seq, n_enc_seq)] return logits_cls.squeeze() &quot;&quot;&quot; bert &quot;&quot;&quot; class BERT(nn.Module): def __init__(self, config): super().__init__() self.config = config self.encoder = Encoder(self.config) self.linear = nn.Linear(config.d_hidn, config.d_hidn) self.activation = torch.tanh def forward(self, inputs): # (bs, n_seq, d_hidn), [(bs, n_head, n_enc_seq, n_enc_seq)] outputs, 
self_attn_probs = self.encoder(inputs) # (bs, d_hidn) outputs_cls = outputs[:, 0].contiguous() outputs_cls = self.linear(outputs_cls) outputs_cls = self.activation(outputs_cls) # (bs, n_enc_seq, n_enc_vocab), (bs, d_hidn), [(bs, n_head, n_enc_seq, n_enc_seq)] return outputs, outputs_cls, self_attn_probs def save(self, epoch, loss, path): torch.save({ &quot;epoch&quot;: epoch, &quot;loss&quot;: loss, &quot;state_dict&quot;: self.state_dict() }, path) def load(self, path): save = torch.load(path) self.load_state_dict(save[&quot;state_dict&quot;]) return save[&quot;epoch&quot;], save[&quot;loss&quot;] &quot;&quot;&quot; encoder &quot;&quot;&quot; class Encoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.enc_emb = nn.Embedding(self.config.n_enc_vocab, self.config.d_hidn) self.pos_emb = nn.Embedding(self.config.n_enc_seq + 1, self.config.d_hidn) self.layers = nn.ModuleList([EncoderLayer(self.config) for _ in range(self.config.n_layer)]) def forward(self, inputs): positions = torch.arange(inputs.size(1), device=inputs.device, dtype=inputs.dtype).expand(inputs.size(0), inputs.size(1)).contiguous() + 1 pos_mask = inputs.eq(self.config.i_pad) positions.masked_fill_(pos_mask, 0) # (bs, n_enc_seq, d_hidn) outputs = self.enc_emb(inputs) + self.pos_emb(positions) # (bs, n_enc_seq, n_enc_seq) attn_mask = get_attn_pad_mask(inputs, inputs, self.config.i_pad) attn_probs = [] for layer in self.layers: # (bs, n_enc_seq, d_hidn), (bs, n_head, n_enc_seq, n_enc_seq) outputs, attn_prob = layer(outputs, attn_mask) attn_probs.append(attn_prob) # (bs, n_enc_seq, d_hidn), [(bs, n_head, n_enc_seq, n_enc_seq)] return outputs, attn_probs &quot;&quot;&quot; encoder layer &quot;&quot;&quot; class EncoderLayer(nn.Module): def __init__(self, config): super().__init__() self.config = config self.self_attn = MultiHeadAttention(self.config) self.layer_norm1 = nn.LayerNorm(self.config.d_hidn, eps=self.config.layer_norm_epsilon) self.pos_ffn = 
PoswiseFeedForwardNet(self.config) self.layer_norm2 = nn.LayerNorm(self.config.d_hidn, eps=self.config.layer_norm_epsilon) def forward(self, inputs, attn_mask): # (bs, n_enc_seq, d_hidn), (bs, n_head, n_enc_seq, n_enc_seq) att_outputs, attn_prob = self.self_attn(inputs, inputs, inputs, attn_mask) att_outputs = self.layer_norm1(inputs + att_outputs) # (bs, n_enc_seq, d_hidn) ffn_outputs = self.pos_ffn(att_outputs) ffn_outputs = self.layer_norm2(ffn_outputs + att_outputs) # (bs, n_enc_seq, d_hidn), (bs, n_head, n_enc_seq, n_enc_seq) return ffn_outputs, attn_prob &quot;&quot;&quot; attention pad mask &quot;&quot;&quot; def get_attn_pad_mask(seq_q, seq_k, i_pad): batch_size, len_q = seq_q.size() batch_size, len_k = seq_k.size() pad_attn_mask = seq_k.data.eq(i_pad) pad_attn_mask= pad_attn_mask.unsqueeze(1).expand(batch_size, len_q, len_k) return pad_attn_mask &quot;&quot;&quot; multi head attention &quot;&quot;&quot; class MultiHeadAttention(nn.Module): def __init__(self, config): super().__init__() self.config = config self.W_Q = nn.Linear(self.config.d_hidn, self.config.n_head * self.config.d_head) self.W_K = nn.Linear(self.config.d_hidn, self.config.n_head * self.config.d_head) self.W_V = nn.Linear(self.config.d_hidn, self.config.n_head * self.config.d_head) self.scaled_dot_attn = ScaledDotProductAttention(self.config) self.linear = nn.Linear(self.config.n_head * self.config.d_head, self.config.d_hidn) self.dropout = nn.Dropout(config.dropout) def forward(self, Q, K, V, attn_mask): batch_size = Q.size(0) # (bs, n_head, n_q_seq, d_head) q_s = self.W_Q(Q).view(batch_size, -1, self.config.n_head, self.config.d_head).transpose(1,2) # (bs, n_head, n_k_seq, d_head) k_s = self.W_K(K).view(batch_size, -1, self.config.n_head, self.config.d_head).transpose(1,2) # (bs, n_head, n_v_seq, d_head) v_s = self.W_V(V).view(batch_size, -1, self.config.n_head, self.config.d_head).transpose(1,2) # (bs, n_head, n_q_seq, n_k_seq) attn_mask = attn_mask.unsqueeze(1).repeat(1, 
self.config.n_head, 1, 1) # (bs, n_head, n_q_seq, d_head), (bs, n_head, n_q_seq, n_k_seq) context, attn_prob = self.scaled_dot_attn(q_s, k_s, v_s, attn_mask) # (bs, n_head, n_q_seq, h_head * d_head) context = context.transpose(1, 2).contiguous().view(batch_size, -1, self.config.n_head * self.config.d_head) # (bs, n_head, n_q_seq, e_embd) output = self.linear(context) output = self.dropout(output) # (bs, n_q_seq, d_hidn), (bs, n_head, n_q_seq, n_k_seq) return output, attn_prob &quot;&quot;&quot; feed forward &quot;&quot;&quot; class PoswiseFeedForwardNet(nn.Module): def __init__(self, config): super().__init__() self.config = config self.conv1 = nn.Conv1d(in_channels=self.config.d_hidn, out_channels=self.config.d_ff, kernel_size=1) self.conv2 = nn.Conv1d(in_channels=self.config.d_ff, out_channels=self.config.d_hidn, kernel_size=1) self.active = F.gelu self.dropout = nn.Dropout(config.dropout) def forward(self, inputs): # (bs, d_ff, n_seq) output = self.conv1(inputs.transpose(1, 2)) output = self.active(output) # (bs, n_seq, d_hidn) output = self.conv2(output).transpose(1, 2) output = self.dropout(output) # (bs, n_seq, d_hidn) return output &quot;&quot;&quot; scale dot product attention &quot;&quot;&quot; class ScaledDotProductAttention(nn.Module): def __init__(self, config): super().__init__() self.config = config self.dropout = nn.Dropout(config.dropout) self.scale = 1 / (self.config.d_head ** 0.5) def forward(self, Q, K, V, attn_mask): # (bs, n_head, n_q_seq, n_k_seq) scores = torch.matmul(Q, K.transpose(-1, -2)) scores = scores.mul_(self.scale) scores.masked_fill_(attn_mask, -1e9) # (bs, n_head, n_q_seq, n_k_seq) attn_prob = nn.Softmax(dim=-1)(scores) attn_prob = self.dropout(attn_prob) # (bs, n_head, n_q_seq, d_v) context = torch.matmul(attn_prob, V) # (bs, n_head, n_q_seq, d_v), (bs, n_head, n_q_seq, n_v_seq) return context, attn_prob </code></pre>
<p>What is <code>inputs.device</code> in the following code?</p> <pre><code>positions = torch.arange(inputs.size(1), device=inputs.device, dtype=inputs.dtype).expand(inputs.size(0), inputs.size(1)).contiguous() + 1
</code></pre> <p>Can you set a specific GPU and try:</p> <pre><code>torch.cuda.set_device(device_num)
torch.cuda.set_device(0)  # for gpu id '0'
</code></pre> <p>I also suggest running your code with the <code>CUDA_LAUNCH_BLOCKING=1</code> environment variable. <code>CUDA_LAUNCH_BLOCKING</code> makes CUDA report the error where it actually occurs.</p> <pre><code>CUDA_LAUNCH_BLOCKING=1 python your_code.py
</code></pre> <p>I suggest using the latest version of <a href="https://pytorch.org/get-started/locally/#linux-anaconda" rel="nofollow noreferrer">PyTorch</a>, if you're not doing that already. Also ensure you're using one of the latest stable CUDA <a href="https://developer.nvidia.com/cuda-downloads" rel="nofollow noreferrer">toolkits</a>.</p>
159
BERT model
Assigning weights during testing the bert model
https://stackoverflow.com/questions/65925640/assigning-weights-during-testing-the-bert-model
<p>I have a basic conceptual doubt. Say I train a BERT model on a sentence:</p> <pre><code>Train: &quot;went to get loan from bank&quot;
Test:  &quot;received education loan from bank&quot;
</code></pre> <p>How does the test sentence get weights assigned for each token? I don't pass the exact training sentence at test time, and there is a slight addition of words like &quot;education&quot; which changes the context slightly.</p> <p>Assuming such context is not trained in my model, how are the weights assigned for each token in my BERT before I fine-tune further?</p> <p>Simply put, I am trying to understand how the weights get assigned during testing when a slight variation in context occurs that was not trained on.</p>
<p>The vector representation of a token (keep in mind that token != word) is stored in an embedding layer. When we load the 'bert-base-uncased' model, we can see that it &quot;knows&quot; 30522 tokens and that the vector representation of each token consists of 768 elements:</p> <pre class="lang-py prettyprint-override"><code>from transformers import BertModel

bert = BertModel.from_pretrained('bert-base-uncased')
print(bert.embeddings.word_embeddings)
</code></pre> <p>Output:</p> <pre><code>Embedding(30522, 768, padding_idx=0)
</code></pre> <p>This embedding layer is not aware of any strings but of ids. For example, the vector representation of the id <code>101</code> is:</p> <pre class="lang-py prettyprint-override"><code>print(bert.embeddings.word_embeddings.weight[101])
</code></pre> <p>Output:</p> <pre><code>tensor([ 1.3630e-02, -2.6490e-02, -2.3503e-02, -7.7876e-03,  8.5892e-03,
        -7.6645e-03, -9.8808e-03,  6.0184e-03,  4.6921e-03, -3.0984e-02,
         1.8883e-02, -6.0093e-03, -1.6652e-02,  1.1684e-02, -3.6245e-02,
         8.3482e-03, -1.2112e-03,  1.0322e-02,  1.6692e-02, -3.0354e-02,
        ...
         5.4162e-03, -3.0037e-02,  8.6773e-03, -1.7942e-03,  6.6826e-03,
        -1.1929e-02, -1.4076e-02,  1.6709e-02,  1.6860e-03, -3.3842e-03,
         8.6805e-03,  7.1340e-03,  1.5147e-02], grad_fn=&lt;SelectBackward&gt;)
</code></pre> <p>Everything that is outside of the &quot;known&quot; ids is not processable by BERT. To answer your question we need to look at the component that maps a string to the ids. This component is called a tokenizer. There are different tokenization <a href="https://huggingface.co/transformers/tokenizer_summary.html" rel="nofollow noreferrer">approaches</a>. BERT uses a WordPiece tokenizer, which is a subword algorithm. 
This algorithm replaces everything <strong>that cannot be created</strong> from its vocabulary with an unknown token <strong>that is part</strong> of the vocabulary (<code>[UNK]</code> in the original implementation, id: 100).</p> <p>Please have a look at the following small example, in which a WordPiece tokenizer is trained from scratch, to confirm that behaviour:</p> <pre class="lang-py prettyprint-override"><code>from tokenizers import BertWordPieceTokenizer

path = 'file_with_your_trainings_sentence.txt'
tokenizer = BertWordPieceTokenizer()
tokenizer.train(files=path, vocab_size=30000,
                special_tokens=['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]'])

otrain = tokenizer.encode(&quot;went to get loan from bank&quot;)
otest = tokenizer.encode(&quot;received education loan from bank&quot;)

print('Vocabulary size: {}'.format(tokenizer.get_vocab_size()))
print('Train tokens: {}'.format(otrain.tokens))
print('Test tokens: {}'.format(otest.tokens))
</code></pre> <p>Output:</p> <pre><code>Vocabulary size: 27
Train tokens: ['w', '##e', '##n', '##t', 't', '##o', 'g', '##e', '##t', 'l', '##o', '##an', 'f', '##r', '##o', '##m', 'b', '##an', '##k']
Test tokens: ['[UNK]', '[UNK]', 'l', '##o', '##an', 'f', '##r', '##o', '##m', 'b', '##an', '##k']
</code></pre>
160
BERT model
How to solve Attribute Error after running BERT model
https://stackoverflow.com/questions/67046637/how-to-solve-attribute-error-after-running-bert-model
<p>Receiving an error once running the BERT model. Up till this point the code runs successfully. Error I receive is AttributeError: 'str' object has no attribute 'shape'. The previous step before the code was creating a custom data generator. Using this the model was created. To provide context the model I used is found on the website <a href="https://keras.io/examples/nlp/semantic_similarity_with_bert/" rel="nofollow noreferrer">https://keras.io/examples/nlp/semantic_similarity_with_bert/</a> which I used to interpret my own data.</p> <pre><code>from ipywidgets import IntProgress strategy = tf.distribute.MirroredStrategy() with strategy.scope(): # Encoded token ids from BERT tokenizer. input_ids = tf.keras.layers.Input( shape=(max_length,), dtype=tf.int32, name=&quot;input_ids&quot; ) # Attention masks indicates to the model which tokens should be attended to. attention_masks = tf.keras.layers.Input( shape=(max_length,), dtype=tf.int32, name=&quot;attention_masks&quot; ) # Token type ids are binary masks identifying different sequences in the model. token_type_ids = tf.keras.layers.Input( shape=(max_length,), dtype=tf.int32, name=&quot;token_type_ids&quot; ) # Loading pretrained BERT model. bert_model = transformers.TFBertModel.from_pretrained(&quot;bert-base-uncased&quot;) # Freeze the BERT model to reuse the pretrained features without modifying them. bert_model.trainable = False sequence_output, pooled_output = bert_model( input_ids, attention_mask=attention_masks, token_type_ids=token_type_ids ) # Add trainable layers on top of frozen layers to adapt the pretrained features on the new data. bi_lstm = tf.keras.layers.Bidirectional( tf.keras.layers.LSTM(64, return_sequences=True) )(sequence_output) # Applying hybrid pooling approach to bi_lstm sequence output. 
avg_pool = tf.keras.layers.GlobalAveragePooling1D()(bi_lstm) max_pool = tf.keras.layers.GlobalMaxPooling1D()(bi_lstm) concat = tf.keras.layers.concatenate([avg_pool, max_pool]) dropout = tf.keras.layers.Dropout(0.3)(concat) output = tf.keras.layers.Dense(3, activation=&quot;softmax&quot;)(dropout) model = tf.keras.models.Model( inputs=[input_ids, attention_masks, token_type_ids], outputs=output ) model.compile( optimizer=tf.keras.optimizers.Adam(), loss=&quot;categorical_crossentropy&quot;, metrics=[&quot;acc&quot;], ) print(f&quot;Strategy: {strategy}&quot;) model.summary() --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-35-e6d50369bfa4&gt; in &lt;module&gt; 25 ) 26 # Add trainable layers on top of frozen layers to adapt the pretrained features on the new data. ---&gt; 27 bi_lstm = tf.keras.layers.Bidirectional( 28 tf.keras.layers.LSTM(64, return_sequences=True) 29 )(sequence_output) ~\anaconda3\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\keras\layers\wrappers.py in __call__(self, inputs, initial_state, constants, **kwargs) 528 529 if initial_state is None and constants is None: --&gt; 530 return super(Bidirectional, self).__call__(inputs, **kwargs) 531 532 # Applies the same workaround as in `RNN.__call__` ~\anaconda3\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs) 980 with ops.name_scope_v2(name_scope): 981 if not self.built: --&gt; 982 self._maybe_build(inputs) 983 984 with ops.enable_auto_cast_variables(self._compute_dtype_object): ~\anaconda3\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _maybe_build(self, inputs) 2615 # Check input assumptions set before layer building, e.g. input rank. 
2616 if not self.built: -&gt; 2617 input_spec.assert_input_compatibility( 2618 self.input_spec, inputs, self.name) 2619 input_list = nest.flatten(inputs) ~\anaconda3\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\keras\engine\input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name) 164 spec.min_ndim is not None or 165 spec.max_ndim is not None): --&gt; 166 if x.shape.ndims is None: 167 raise ValueError('Input ' + str(input_index) + ' of layer ' + 168 layer_name + ' is incompatible with the layer: ' AttributeError: 'str' object has no attribute 'shape' </code></pre>
<p>The input to the model must not be a string. It should be a numpy array or a tensor. You should encode your string, converting the characters to a numpy array.</p> <p>There are also code examples for tokenization at <a href="https://keras.io/examples/nlp/semantic_similarity_with_bert/" rel="nofollow noreferrer">https://keras.io/examples/nlp/semantic_similarity_with_bert/</a>:</p> <pre class="lang-py prettyprint-override"><code>self.tokenizer = transformers.BertTokenizer.from_pretrained(
    &quot;bert-base-uncased&quot;, do_lower_case=True
)

# With BERT tokenizer's batch_encode_plus, both sentences of each pair are
# encoded together, separated by the [SEP] token.
encoded = self.tokenizer.batch_encode_plus(
    sentence_pairs.tolist(),
    add_special_tokens=True,
    max_length=max_length,
    return_attention_mask=True,
    return_token_type_ids=True,
    pad_to_max_length=True,
    return_tensors=&quot;tf&quot;,
)

# Convert batch of encoded features to numpy arrays.
input_ids = np.array(encoded[&quot;input_ids&quot;], dtype=&quot;int32&quot;)
attention_masks = np.array(encoded[&quot;attention_mask&quot;], dtype=&quot;int32&quot;)
token_type_ids = np.array(encoded[&quot;token_type_ids&quot;], dtype=&quot;int32&quot;)
</code></pre>
161
BERT model
Weights of pre-trained BERT model not initialized
https://stackoverflow.com/questions/66561880/weights-of-pre-trained-bert-model-not-initialized
<p>I am using the <a href="https://github.com/pair-code/lit" rel="nofollow noreferrer">Language Interpretability Toolkit</a> (LIT) to load and analyze a BERT model that I pre-trained on an NER task.</p> <p>However, when I'm starting the LIT script with the path to my pre-trained model passed to it, it fails to initialize the weights and tells me:</p> <pre><code> modeling_utils.py:648] loading weights file bert_remote/examples/token-classification/Data/Models/results_21_03_04_cleaned_annotations/04.03._8_16_5e-5_cleaned_annotations/04-03-2021 (15.22.23)/pytorch_model.bin modeling_utils.py:739] Weights of BertForTokenClassification not initialized from pretrained model: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias'] modeling_utils.py:745] Weights from pretrained model not used in BertForTokenClassification: ['bert.embeddings.position_ids'] </code></pre> <p>It then simply uses the <code>bert-base-german-cased</code> version of BERT, which of course doesn't have my custom labels and thus fails to predict anything. 
I think it might have to do with PyTorch, but I can't find the error.</p> <p>If relevant, here is how I load my dataset into CoNLL 2003 format (modification of the dataloader scripts found <a href="https://github.com/PAIR-code/lit/tree/main/lit_nlp/examples/datasets" rel="nofollow noreferrer">here</a>):</p> <pre><code> def __init__(self): # Read ConLL Test Files self._examples = [] data_path = &quot;lit_remote/lit_nlp/examples/datasets/NER_Data&quot; with open(os.path.join(data_path, &quot;test.txt&quot;), &quot;r&quot;, encoding=&quot;utf-8&quot;) as f: lines = f.readlines() for line in lines[:2000]: if line != &quot;\n&quot;: token, label = line.split(&quot; &quot;) self._examples.append({ 'token': token, 'label': label, }) else: self._examples.append({ 'token': &quot;\n&quot;, 'label': &quot;O&quot; }) def spec(self): return { 'token': lit_types.Tokens(), 'label': lit_types.SequenceTags(align=&quot;token&quot;), } </code></pre> <p>And this is how I initialize the model and start the LIT server (modification of the <code>simple_pytorch_demo.py</code> script found <a href="https://github.com/PAIR-code/lit/blob/main/lit_nlp/examples/simple_pytorch_demo.py" rel="nofollow noreferrer">here</a>):</p> <pre><code> def __init__(self, model_name_or_path): self.tokenizer = transformers.AutoTokenizer.from_pretrained( model_name_or_path) model_config = transformers.AutoConfig.from_pretrained( model_name_or_path, num_labels=15, # FIXME CHANGE output_hidden_states=True, output_attentions=True, ) # This is a just a regular PyTorch model. 
self.model = _from_pretrained( transformers.AutoModelForTokenClassification, model_name_or_path, config=model_config) self.model.eval() ## Some omitted snippets here def input_spec(self) -&gt; lit_types.Spec: return { &quot;token&quot;: lit_types.Tokens(), &quot;label&quot;: lit_types.SequenceTags(align=&quot;token&quot;) } def output_spec(self) -&gt; lit_types.Spec: return { &quot;tokens&quot;: lit_types.Tokens(), &quot;probas&quot;: lit_types.MulticlassPreds(parent=&quot;label&quot;, vocab=self.LABELS), &quot;cls_emb&quot;: lit_types.Embeddings() </code></pre>
<p>This actually seems to be expected behaviour. In the <a href="https://huggingface.co/docs/transformers/training" rel="nofollow noreferrer">documentation of the GPT models</a> the HuggingFace team writes:</p> <blockquote> <p>This will issue a warning about some of the pretrained weights not being used and some weights being randomly initialized. That’s because we are throwing away the pretraining head of the BERT model to replace it with a classification head which is randomly initialized.</p> </blockquote> <p>So it seems not to be a problem for fine-tuning. In my use case described above, it worked despite the warning as well.</p>
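A toy sketch of the mechanism behind the message (hypothetical key names, no transformers required): weight loaders compare the model's parameter names against the checkpoint's, keys present in the model but absent from the checkpoint are reported as "not initialized" and keep their random initialization, while checkpoint keys the model doesn't have are reported as "not used":

```python
# Hypothetical state-dict keys, illustrating how a loader classifies weights.
checkpoint_keys = {
    "bert.embeddings.position_ids",
    "bert.encoder.layer.0.attention.self.query.weight",
}
model_keys = {
    "bert.encoder.layer.0.attention.self.query.weight",
    "bert.pooler.dense.weight",
    "bert.pooler.dense.bias",
}

# In the model but not in the checkpoint: stays randomly initialized.
not_initialized = sorted(model_keys - checkpoint_keys)
# In the checkpoint but not in the model: loaded file entries that are ignored.
unused = sorted(checkpoint_keys - model_keys)

print(not_initialized)  # ['bert.pooler.dense.bias', 'bert.pooler.dense.weight']
print(unused)           # ['bert.embeddings.position_ids']
```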
162
BERT model
Error with using BERT model from Tensorflow
https://stackoverflow.com/questions/65298391/error-with-using-bert-model-from-tensorflow
<p>I have tried to follow Tensorflow instructions to use BERT model: (<a href="https://www.tensorflow.org/tutorials/text/classify_text_with_bert" rel="noreferrer">https://www.tensorflow.org/tutorials/text/classify_text_with_bert</a>)</p> <p>However, when I run these lines:</p> <pre><code>text_test = ['this is such an amazing movie!'] text_preprocessed = bert_preprocess_model(text_test) </code></pre> <p>I got the below error:</p> <pre> InvalidArgumentError: Trying to access resource using the wrong type. Expected class tensorflow::lookup::LookupInterface got class tensorflow::lookup::LookupInterface [[{{node StatefulPartitionedCall/StatefulPartitionedCall/bert_tokenizer/StatefulPartitionedCall/WordpieceTokenizeWithOffsets/WordpieceTokenizeWithOffsets/WordpieceTokenizeWithOffsets}}]] [Op:__inference_restored_function_body_72474] </pre> <p>The two classes are exactly the same: &quot;tensorflow::lookup::LookupInterface&quot;. Could anyone help with this? Thank you.</p>
<p>I found this bug report on GitHub: <a href="https://github.com/tensorflow/text/issues/476" rel="nofollow noreferrer">https://github.com/tensorflow/text/issues/476</a></p> <p>It appears that they've acknowledged it as a bug and are trying to fix it.</p>
163
BERT model
How to use the outputs of bert model?
https://stackoverflow.com/questions/63673511/how-to-use-the-outputs-of-bert-model
<p>The BERT model gives us two outputs: one is [batch, maxlen, hidden_states] and the other is the [batch, hidden_state] of the CLS token. But I did not understand when to use which output. Can anyone tell me which output should be used for which task?</p>
<p>The output is usually <code>[batch, maxlen, hidden_state]</code>; it can be narrowed down to <code>[batch, 1, hidden_state]</code> for the <code>[CLS]</code> token, as the <code>[CLS]</code> token is the 1st token in the sequence. Here, <code>[batch, 1, hidden_state]</code> can be equivalently considered as <code>[batch, hidden_state]</code>.</p> <p>Since BERT is a transformer-based contextual model, the idea is that the <code>[CLS]</code> token captures the entire context and is sufficient for simple downstream tasks such as classification. Hence, for tasks such as classification using sentence representations, you can use <code>[batch, hidden_state]</code>. You can also take <code>[batch, maxlen, hidden_state]</code> and average across the <code>maxlen</code> dimension to get averaged embeddings. However, sequential tasks, such as classification using a CNN or RNN, require a sequence of representations, in which case you have to rely on <code>[batch, maxlen, hidden_state]</code>. Also, for some training objectives such as predicting masked words, or for SQuAD 1.1 (as shown in the BERT paper), the entire sequence of embeddings <code>[batch, maxlen, hidden_state]</code> is used.</p>
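To make the two usages concrete, here is a minimal sketch with dummy tensors standing in for BERT's output (no model download needed): it takes the `[CLS]` slice for sentence-level tasks and a mask-aware mean over `maxlen` for averaged embeddings. The tensor sizes and mask values are illustrative assumptions, not taken from any particular model run.

```python
import torch

batch, maxlen, hidden = 2, 6, 768
# Stand-in for BERT's sequence output: [batch, maxlen, hidden_state]
sequence_output = torch.randn(batch, maxlen, hidden)
# Stand-in for the attention mask (1 = real token, 0 = padding)
attention_mask = torch.tensor([[1, 1, 1, 1, 0, 0],
                               [1, 1, 1, 0, 0, 0]])

# Option 1: the [CLS] representation -- position 0 of every sequence
cls_embedding = sequence_output[:, 0, :]             # [batch, hidden_state]

# Option 2: mean over the real (non-padding) tokens only
mask = attention_mask.unsqueeze(-1).float()          # [batch, maxlen, 1]
mean_embedding = (sequence_output * mask).sum(1) / mask.sum(1)  # [batch, hidden_state]

print(cls_embedding.shape, mean_embedding.shape)
```

Both options collapse the sequence to one vector per sample; for token-level tasks you would keep the full `sequence_output` instead.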
164
BERT model
Spacy&#39;s BERT model doesn&#39;t learn
https://stackoverflow.com/questions/61943409/spacys-bert-model-doesnt-learn
<p>I've been trying to use <code>spaCy</code>'s pretrained BERT model <code>de_trf_bertbasecased_lg</code> to increase accuracy in my classification project. I used to build a model from scratch using <code>de_core_news_sm</code> and everything worked fine: I had an accuracy around 70%. But now I am using BERT pretrained model instead and I'm getting 0% accuracy. I don't believe that it's working so bad, so I'm assuming that there is just a problem with my code. I might have missed something important but I can't figure out what. I used the code in <a href="https://explosion.ai/blog/spacy-transformers" rel="nofollow noreferrer">this article</a> as an example.</p> <p>Here is my code:</p> <pre><code>import spacy from spacy.util import minibatch from random import shuffle spacy.require_gpu() nlp = spacy.load('de_trf_bertbasecased_lg') data = get_data() # get_data() function returns a list with train data (I'll explain later how it looks) textcat = nlp.create_pipe("trf_textcat", config={"exclusive_classes": False}) for category in categories: # categories - a list of 21 different categories used for classification textcat.add_label(category) nlp.add_pipe(textcat) num = 0 # number used for counting batches optimizer = nlp.resume_training() for i in range(2): shuffle(data) losses = {} for batch in minibatch(data): texts, cats = zip(*batch) nlp.update(texts, cats, sgd=optimizer, losses=losses) num += 1 if num % 10000 == 0: # test model's performance every 10000 batches acc = test(nlp) # function test() will be explained later print(f'Accuracy: {acc}') nlp.to_disk('model/') </code></pre> <p>Function <code>get_data()</code> opens files with different categories, creates a tuple like this one <code>(text, {'cats' : {'category1': 0, 'category2':1, ...}})</code>, gathers all these tuples into one array, which is then being returned to the main function.</p> <p>Function <code>test(nlp)</code> opens the file with test data, predicts categories for each line in the file and 
checks, whether the prediction was correct.</p> <p>Again, everything worked just fine with <code>de_core_news_sm</code>, so I'm pretty sure that functions <code>get_data()</code> and <code>test(nlp)</code> are working fine. Code above looks like in example but still 0% accuracy.I don't understand what I'm doing wrong.</p> <p>Thanks in advance for any help!</p> <p><strong>UPDATE</strong></p> <p>Trying to understand the above problem I decided to try the model with only a few examples (just like it is advised <a href="https://github.com/explosion/spacy-transformers/issues/144" rel="nofollow noreferrer">here</a>). Here is the code:</p> <pre><code>import spacy from spacy.util import minibatch import random import torch train_data = [ ("It is realy cool", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}), ("I hate it", {"cats": {"POSITIVE": 0.0, "NEGATIVE": 1.0}}) ] is_using_gpu = spacy.prefer_gpu() if is_using_gpu: torch.set_default_tensor_type("torch.cuda.FloatTensor") nlp = spacy.load("en_trf_bertbaseuncased_lg") textcat = nlp.create_pipe("trf_textcat", config={"exclusive_classes": True}) for label in ("POSITIVE", "NEGATIVE"): textcat.add_label(label) nlp.add_pipe(textcat) optimizer = nlp.resume_training() for i in range(10): random.shuffle(train_data) losses = {} for batch in minibatch(train_data): texts, cats = zip(*batch) nlp.update(texts, cats, sgd=optimizer, losses=losses) print(i, losses) print() test_data = [ "It is really cool", "I hate it", "Great!", "I do not think this is cool" ] for line in test_data: print(line) print(nlp(line).cats) </code></pre> <p>And the output was:</p> <pre><code>0 {'trf_textcat': 0.125} 1 {'trf_textcat': 0.12423406541347504} 2 {'trf_textcat': 0.12188033014535904} 3 {'trf_textcat': 0.12363225221633911} 4 {'trf_textcat': 0.11996611207723618} 5 {'trf_textcat': 0.14696261286735535} 6 {'trf_textcat': 0.12320466339588165} 7 {'trf_textcat': 0.12096124142408371} 8 {'trf_textcat': 0.15916231274604797} 9 {'trf_textcat': 0.1238454058766365} It is 
really cool {'POSITIVE': 0.47827497124671936, 'NEGATIVE': 0.5217249989509583} I hate it {'POSITIVE': 0.47827598452568054, 'NEGATIVE': 0.5217240452766418} Great! {'POSITIVE': 0.4782750606536865, 'NEGATIVE': 0.5217249393463135} I do not think this is cool {'POSITIVE': 0.478275328874588, 'NEGATIVE': 0.5217246413230896} </code></pre> <p>Not only the model performs bad, the loss is not getting smaller and scores for all the test sentences are almost the same. And most importantly: it didn't even get those questions correct, that happened to be in the train data. So my question is: does the model even learn? And what am I doing wrong?</p> <p>Any thoughts?</p>
<p>Received an answer to my question on <a href="https://github.com/explosion/spacy-transformers/issues/180" rel="nofollow noreferrer">GitHub</a> and it looks like there must be some optimizer parameters specified, just like in <a href="https://github.com/explosion/spacy-transformers/blob/v0.6.x/examples/train_textcat.py" rel="nofollow noreferrer">this example</a>.</p>
165
BERT model
Issues calculating accuracy for custom BERT model
https://stackoverflow.com/questions/67420868/issues-calculating-accuracy-for-custom-bert-model
<p>I'm having some issues trying to calculate the accuracy of a custom BERT model which also uses the pretrained model from Huggingface. This is the code that I have :</p> <pre><code>import numpy as np import pandas as pd from sklearn import metrics, linear_model import torch from torch.utils.data import Dataset, DataLoader, RandomSampler, SequentialSampler from transformers import BertTokenizer, BertModel from torch import cuda import re import torch.nn as nn device = 'cuda' if cuda.is_available() else 'cpu' MAX_LEN = 200 TRAIN_BATCH_SIZE = 8 # 12, 64 VALID_BATCH_SIZE = 4 EPOCHS = 1 LEARNING_RATE = 1e-4 #3e-4, 1e-4, 5e-5, 3e-5 tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased') file1 = open('test.txt', 'r') list_com = [] list_label = [] for line in file1: possible_labels = 'positive|negative' label = re.findall(possible_labels, line) line = re.sub(possible_labels, ' ', line) line = re.sub('\n', ' ', line) list_com.append(line) list_label.append(label[0]) list_tuples = list(zip(list_com, list_label)) file1.close() labels = ['positive', 'negative'] df = pd.DataFrame(list_tuples, columns=['review', 'sentiment']) df['sentiment'] = df['sentiment'].map({'positive': 1, 'negative': 0}) for i in range(0,len(df['sentiment'])): list_label[i] = df['sentiment'][i] #print(df) class CustomDataset(Dataset): def __init__(self, dataframe, tokenizer, max_len): self.tokenizer = tokenizer self.data = dataframe self.comment_text = dataframe.review self.targets = self.data.sentiment self.max_len = max_len def __len__(self): return len(self.comment_text) def __getitem__(self, index): comment_text = str(self.comment_text[index]) comment_text = &quot; &quot;.join(comment_text.split()) inputs = self.tokenizer.encode_plus(comment_text,None,add_special_tokens=True,max_length=self.max_len, pad_to_max_length=True,return_token_type_ids=False,truncation=True) ids = inputs['input_ids'] mask = inputs['attention_mask'] return { 'ids': torch.tensor(ids, dtype=torch.long), 
'mask': torch.tensor(mask, dtype=torch.long), 'targets': torch.tensor(self.targets[index], dtype=torch.float) } train_size = 0.8 train_dataset=df.sample(frac=train_size,random_state=200) test_dataset=df.drop(train_dataset.index).reset_index(drop=True) train_dataset = train_dataset.reset_index(drop=True) print(&quot;FULL Dataset: {}&quot;.format(df.shape)) print(&quot;TRAIN Dataset: {}&quot;.format(train_dataset.shape)) print(&quot;TEST Dataset: {}&quot;.format(test_dataset.shape)) training_set = CustomDataset(train_dataset, tokenizer, MAX_LEN) testing_set = CustomDataset(test_dataset, tokenizer, MAX_LEN) train_params = {'batch_size': TRAIN_BATCH_SIZE,'shuffle': True,'num_workers': 0} test_params = {'batch_size': VALID_BATCH_SIZE,'shuffle': True,'num_workers': 0} training_loader = DataLoader(training_set, **train_params) testing_loader = DataLoader(testing_set, **test_params) class BERTClass(torch.nn.Module): def __init__(self): super(BERTClass, self).__init__() self.bert = BertModel.from_pretrained('bert-base-multilingual-uncased',return_dict=False,num_labels = 2) self.lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True) self.linear = nn.Linear(256*2,2) def forward(self, ids , mask): sequence_output, pooled_output = self.bert(ids, attention_mask=mask ) lstm_output, (h, c) = self.lstm(sequence_output) ## extract the 1st token's embeddings hidden = torch.cat((lstm_output[:, -1, :256], lstm_output[:, 0, 256:]), dim=-1) linear_output = self.linear(lstm_output[:, -1].view(-1, 256 * 2)) return linear_output model = BERTClass() model.to(device) #print(model) def loss_fn(outputs, targets): return torch.nn.CrossEntropyLoss()(outputs, targets) optimizer = torch.optim.Adam(params = model.parameters(), lr=LEARNING_RATE) def train(epoch): model.train() for _, data in enumerate(training_loader, 0): ids = data['ids'].to(device, dtype=torch.long) mask = data['mask'].to(device, dtype=torch.long) targets = data['targets'].to(device, dtype=torch.long) outputs = model(ids, 
mask) optimizer.zero_grad() loss = loss_fn(outputs, targets) if _ % 1000 == 0: print(f'Epoch: {epoch}, Loss: {loss.item()}') optimizer.zero_grad() loss.backward() optimizer.step() for epoch in range(EPOCHS): train(epoch) def validation(epoch): model.eval() fin_targets = [] fin_outputs = [] with torch.no_grad(): for _, data in enumerate(testing_loader, 0): ids = data['ids'].to(device, dtype=torch.long) mask = data['mask'].to(device, dtype=torch.long) targets = data['targets'].to(device, dtype=torch.float) outputs = model(ids, mask) fin_targets.extend(targets.cpu().detach().numpy().tolist()) fin_outputs.extend(torch.sigmoid(outputs).cpu().detach().numpy().tolist()) return fin_outputs, fin_targets for epoch in range(EPOCHS): outputs, targets = validation(epoch) outputs = np.array(outputs) &gt;= 0.5 accuracy = metrics.accuracy_score(targets, outputs) print(f&quot;Accuracy Score = {accuracy}&quot;) torch.save(model.state_dict, 'model.pt') print(f'Model saved!') </code></pre> <p>It should be a binary classification, positive(1) or negative(0), but when i try to compute the accuracy i get the error <code>ValueError: Classification metrics can't handle a mix of binary and multilabel-indicator targets</code> oh this line <code>accuracy = metrics.accuracy_score(targets, outputs)</code> .The outputs look like this:</p> <pre><code>[[ True False] [False False] [ True False] [ True False] [ True False] [ True False] [ True False] [False True] [ True False] [ True False] [False True]] </code></pre> <p>Can someone advise what would be the fix to this? Or if there something else that can improve this? Also, I saved the model and I want to know how can I use the saved model to classify user input in another .py file?(assuming that we enter a sentence from keyboard and we want the model to classify it).</p>
166
BERT model
How to use a different pre-trained BERT model with bert_score
https://stackoverflow.com/questions/76306997/how-to-use-a-different-pre-trained-bert-model-with-bert-score
<p>I want to use a different pretrained BERT model's embeddings for BERTScore. How can I do that? <code>P, R, F1 = score(cand, ref, lang=&quot;bn&quot;, model_type=&quot;distilbert-base-uncased&quot;, verbose=True)</code> If I pass my pretrained model in <code>model_type</code>, it gives a <code>KeyError</code>.</p>
<p>You need to pass the <code>num_layers</code> configuration parameter (if it is not given, the library will look up predefined defaults in its <code>utils.py</code> file).</p> <pre><code>bert_score.score(['Hello world'], ['Whats up'], model_type='/home/user/bart_large', num_layers=10) </code></pre>
167
BERT model
How to get intermediate layers&#39; output of pre-trained BERT model in HuggingFace Transformers library?
https://stackoverflow.com/questions/61465103/how-to-get-intermediate-layers-output-of-pre-trained-bert-model-in-huggingface
<p>(I'm following <a href="https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/" rel="noreferrer">this</a> pytorch tutorial about BERT word embeddings, and in the tutorial the author is access the intermediate layers of the BERT model.)</p> <p>What I want is to access the last, lets say, 4 last layers of a single input token of the BERT model in TensorFlow2 using HuggingFace's Transformers library. Because each layer outputs a vector of length 768, so the last 4 layers will have a shape of <code>4*768=3072</code> (for each token).</p> <p>How can I implement this in TF/keras/TF2, to get the intermediate layers of pretrained model for an input token? (later I will try to get the tokens for each token in a sentence, but for now one token is enough).</p> <p>I'm using the HuggingFace's BERT model:</p> <pre><code>!pip install transformers from transformers import (TFBertModel, BertTokenizer) bert_model = TFBertModel.from_pretrained("bert-base-uncased") # Automatically loads the config bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") sentence_marked = "hello" tokenized_text = bert_tokenizer.tokenize(sentence_marked) indexed_tokens = bert_tokenizer.convert_tokens_to_ids(tokenized_text) print (indexed_tokens) &gt;&gt; prints [7592] </code></pre> <p>The output is a token (<code>[7592]</code>), which should be the input of the for the BERT model.</p>
<p>The third element of the BERT model's output is a tuple which consists of the output of the embedding layer as well as the intermediate layers' hidden states. From the <a href="https://huggingface.co/transformers/model_doc/bert.html#tfbertmodel" rel="noreferrer">documentation</a>:</p> <blockquote> <p><strong>hidden_states (<code>tuple(tf.Tensor)</code>, optional, returned when <code>config.output_hidden_states=True</code>):</strong> tuple of <code>tf.Tensor</code> (one for the output of the embeddings + one for the output of each layer) of shape <code>(batch_size, sequence_length, hidden_size)</code>.</p> <p>Hidden-states of the model at the output of each layer plus the initial embedding outputs.</p> </blockquote> <p>For the <code>bert-base-uncased</code> model, <code>config.output_hidden_states</code> is <code>True</code> by default. Therefore, to access the hidden states of the 12 intermediate layers, you can do the following:</p> <pre><code>outputs = bert_model(input_ids, attention_mask) hidden_states = outputs[2][1:] </code></pre> <p>There are 12 elements in the <code>hidden_states</code> tuple, corresponding to all the layers from the first to the last, and each of them is an array of shape <code>(batch_size, sequence_length, hidden_size)</code>. So, for example, to access the hidden state of the third layer for the fifth token of all the samples in the batch, you can do: <code>hidden_states[2][:,4]</code>.</p> <hr> <p>Note that if the model you are loading does not return the hidden states by default, then you can load the config using the <code>BertConfig</code> class and pass the <code>output_hidden_states=True</code> argument, like this:</p> <pre><code>config = BertConfig.from_pretrained("name_or_path_of_model", output_hidden_states=True) bert_model = TFBertModel.from_pretrained("name_or_path_of_model", config=config) </code></pre>
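Following up on the question's goal of a 4*768=3072-dimensional vector per token: once you have the `hidden_states` tuple, you can concatenate the last four layers along the hidden dimension. The sketch below substitutes dummy tensors for real BERT outputs so it runs without loading a model; the shapes mirror `bert-base` but are otherwise illustrative.

```python
import torch

batch, seq_len, hidden, num_layers = 1, 5, 768, 12
# Stand-in for the 12-element hidden_states tuple (one tensor per layer)
hidden_states = tuple(torch.randn(batch, seq_len, hidden) for _ in range(num_layers))

# Concatenate the last 4 layers along the hidden dimension
last_four = torch.cat(hidden_states[-4:], dim=-1)    # [batch, seq_len, 4*768]

# Vector for a single token (here: the first token of the first sample)
token_vec = last_four[0, 0]                          # shape: [3072]
print(last_four.shape, token_vec.shape)
```

Summing the last four layers (`torch.stack(hidden_states[-4:]).sum(0)`) is a common alternative that keeps the vector at 768 dimensions.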
168
BERT model
Why is BERT model with pytorch native approach not learning?
https://stackoverflow.com/questions/68596995/why-is-bert-model-with-pytorch-native-approach-not-learning
<p>My custom BERT model's architecture:</p> <pre><code>class BertArticleClassifier(nn.Module): def __init__(self, n_classes, freeze_bert_weights=False): super(BertArticleClassifier, self).__init__() self.bert = AutoModel.from_pretrained('bert-base-uncased') if freeze_bert_weights: for param in self.bert.parameters(): param.requires_grad = False self.dropout = nn.Dropout(0.1) self.fc_1 = nn.Linear(768, 256) self.leaky_relu = nn.LeakyReLU() self.fc_out = nn.Linear(256, n_classes) def forward(self, input_ids, attention_mask): output = self.bert(input_ids, attention_mask) return self.fc_out(self.leaky_relu(self.fc_1(self.dropout(output['pooler_output'])))) </code></pre> <p><code>self.bert</code> is a model from transformers library.</p> <p>Training script:</p> <pre><code>def train_my_model(model, optimizer, criterion, scheduler, epochs, dataloader_train, dataloader_validation, device, pretrained_weights=None): if pretrained_weights: torch.save(model.state_dict(), pretrained_weights) for epoch in tqdm(range(1, epochs + 1)): model.train() loss_train_total = 0 progress_bar = tqdm(dataloader_train, desc=f'Epoch {epoch :1d}', leave=False, disable=False) for batch in progress_bar: optimizer.zero_grad() batch = tuple(batch[b].to(device) for b in batch) input_ids, mask, labels = batch predictions = model(input_ids, mask) loss = criterion(predictions, labels) loss.backward() loss_train_total += loss.item() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item() / len(batch))}) torch.save(model.state_dict(), f'models_data/bert_my_model/finetuned_BERT_epoch_{epoch}.model') tqdm.write(f'\nEpoch {epoch}') loss_train_avg = loss_train_total / len(dataloader_train) tqdm.write(f'Training loss: {loss_train_avg}') val_loss, predictions, true_vals = evaluate(model, dataloader_validation, criterion, device) val_f1 = f1_score_func(predictions, true_vals) tqdm.write(f'Validation loss: 
{val_loss}') tqdm.write(f'F1 Score (Weighted): {val_f1}') </code></pre> <p>Optimizer and Criterion:</p> <pre><code>optimizer = AdamW(model.parameters(), lr=1e-4, eps=1e-6) class_weights = torch.tensor(class_weights, dtype=torch.float).to(device) criterion = nn.CrossEntropyLoss(weight=class_weights).to(device) </code></pre> <p>After 5 epochs I get the same validation loss ~3.1. I know that my data is preprocessed in the correct way because if I train this transformers <code>BertForSequenceClassification</code> model, the model is learning, but the problem with that approach is that I cannot tweak the loss function to accept the class weights, so that is the reason for creating my own custom model.</p> <p>As you can see in the model's <code>forward</code> method, I extract the <code>output['pooler_output']</code> piece, and disregard the loss (which is returned alongside the <code>output['pooler_output']</code> element). The problem which I may deduced is that when in the training loop I call <code>loss.backward()</code>, maybe the model's weights aren't updating, because transformers BERT model's return their own loss as an output.</p> <p>What am I doing wrong?</p>
169
BERT model
BERT model does not learn new task
https://stackoverflow.com/questions/56769943/bert-model-does-not-learn-new-task
<p>I am trying to fine-tune a pretrained BERT model on amazon-review dataset. For that I extended the <code>run_classifier</code> file by the following processor:</p> <pre><code>class AmazonProcessor(DataProcessor): """Processor for the Amazon data set.""" def get_train_examples(self, data_dir): """See base class.""" return self._create_examples( self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") def get_dev_examples(self, data_dir): """See base class.""" return self._create_examples( self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") def get_test_examples(self, data_dir): """See base class.""" return self._create_examples( self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") def get_labels(self): """See base class.""" return ["0", "1", "2"] def _create_examples(self, lines, set_type): """Creates examples for the training and dev sets.""" examples = [] for (i, line) in enumerate(lines): # header if i == 0: continue guid = "%s-%s" % (set_type, i) text_a = tokenization.convert_to_unicode(line[13]) label = tokenization.convert_to_unicode(line[7]) # only train on 3 labels instead of 5 if int(label) &lt;= 2: label = "0" if int(label) == 3: label = "1" if int(label) &gt;= 4: label = "2" examples.append( InputExample(guid=guid, text_a=text_a, text_b=None, label=label)) return examples </code></pre> <p>I am training in a colab notebook on GPU, so therefore I adapted the main method for my need as well:</p> <pre class="lang-py prettyprint-override"><code>processors = { "cola": run_classifier.ColaProcessor, "mnli": run_classifier.MnliProcessor, "mrpc": run_classifier.MrpcProcessor, "xnli": run_classifier.XnliProcessor, "amazon": run_classifier.AmazonProcessor, } bert_config_file = os.path.join(BERT_FOLDER, "bert_config.json") max_seq_length = 128 output_dir = "drive/My Drive/model" task_name = "amazon" vocab_file = os.path.join(BERT_FOLDER, "vocab.txt") do_lower_case = False master = None tpu_cluster_resolver = None save_checkpoints_steps = 1000 
iterations_per_loop = 1000 use_tpu = False data_dir = "drive/My Drive/csv_dataset" learning_rate = 5e-5 warmup_proportion = 0.1 train_batch_size = 16 eval_batch_size = 1 predict_batch_size = 1 num_train_epochs = 10.0 num_train_steps = 10000 num_tpu_cores = 8 #init_checkpoint = os.path.join(BERT_FOLDER, "bert_model.ckpt") init_checkpoint = "drive/My Drive/model2/model.ckpt-41000" do_train = True do_eval = True tokenization.validate_case_matches_checkpoint(do_lower_case, init_checkpoint) bert_config = modeling.BertConfig.from_json_file(bert_config_file) print(bert_config) task_name = task_name.lower() processor = processors[task_name]() label_list = processor.get_labels() tokenizer = tokenization.FullTokenizer( vocab_file=vocab_file, do_lower_case=do_lower_case) is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2 run_config = tf.contrib.tpu.RunConfig( cluster=tpu_cluster_resolver, master=master, model_dir=output_dir, save_checkpoints_steps=save_checkpoints_steps, tpu_config=tf.contrib.tpu.TPUConfig( iterations_per_loop=iterations_per_loop, num_shards=num_tpu_cores, per_host_input_for_training=is_per_host)) train_examples = None num_train_steps = None num_warmup_steps = None if do_train: train_examples = processor.get_train_examples(data_dir) num_train_steps = int( len(train_examples) / train_batch_size * num_train_epochs) num_warmup_steps = int(num_train_steps * warmup_proportion) model_fn = run_classifier.model_fn_builder( bert_config=bert_config, num_labels=len(label_list), init_checkpoint=init_checkpoint, learning_rate=learning_rate, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, use_tpu=use_tpu, use_one_hot_embeddings=use_tpu) estimator = tf.contrib.tpu.TPUEstimator( use_tpu=use_tpu, model_fn=model_fn, config=run_config, train_batch_size=train_batch_size, eval_batch_size=eval_batch_size, predict_batch_size=predict_batch_size) if do_train: train_file = os.path.join(output_dir, "train.tf_record") 
run_classifier.file_based_convert_examples_to_features( train_examples, label_list, max_seq_length, tokenizer, train_file) tf.logging.info("***** Running training *****") tf.logging.info(" Num examples = %d", len(train_examples)) tf.logging.info(" Batch size = %d", train_batch_size) tf.logging.info(" Num steps = %d", num_train_steps) train_input_fn = run_classifier.file_based_input_fn_builder( input_file=train_file, seq_length=max_seq_length, is_training=True, drop_remainder=True) estimator.train(input_fn=train_input_fn, max_steps=num_train_steps) if do_eval: eval_examples = processor.get_test_examples(data_dir) num_actual_eval_examples = len(eval_examples) if use_tpu: # TPU requires a fixed batch size for all batches, therefore the number # of examples must be a multiple of the batch size, or else examples # will get dropped. So we pad with fake examples which are ignored # later on. These do NOT count towards the metric (all tf.metrics # support a per-instance weight, and these get a weight of 0.0). while len(eval_examples) % eval_batch_size != 0: eval_examples.append(PaddingInputExample()) eval_file = os.path.join(output_dir, "eval.tf_record") run_classifier.file_based_convert_examples_to_features( eval_examples, label_list, max_seq_length, tokenizer, eval_file) tf.logging.info("***** Running evaluation *****") tf.logging.info(" Num examples = %d (%d actual, %d padding)", len(eval_examples), num_actual_eval_examples, len(eval_examples) - num_actual_eval_examples) tf.logging.info(" Batch size = %d", eval_batch_size) # This tells the estimator to run through the entire set. eval_steps = None # However, if running eval on the TPU, you will need to specify the # number of steps. 
if use_tpu: assert len(eval_examples) % eval_batch_size == 0 eval_steps = int(len(eval_examples) // eval_batch_size) eval_drop_remainder = True if use_tpu else False eval_input_fn = run_classifier.file_based_input_fn_builder( input_file=eval_file, seq_length=max_seq_length, is_training=False, drop_remainder=eval_drop_remainder) result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps) output_eval_file = os.path.join(output_dir, "eval_results.txt") with tf.gfile.GFile(output_eval_file, "w") as writer: tf.logging.info("***** Eval results *****") for key in sorted(result.keys()): tf.logging.info(" %s = %s", key, str(result[key])) writer.write("%s = %s\n" % (key, str(result[key]))) </code></pre> <p>I know this is a lot of code but because I cannot pin point the error I want to present all of it.</p> <p>Note that most of the logging output seems perfectly reasonable:</p> <p>For example a converted example:</p> <pre><code>INFO:tensorflow:tokens: [CLS] Ich habe schon viele Klavier ##kon ##zer ##te gehört , aber was Frau Martha Ar ##geri ##ch hier spielt lässt einem ge ##wis ##ser ##ma ##ßen den At ##em stock ##en . So geni ##al habe ich diese 2 Klavier ##kon ##zer ##te von Ra ##ch ##mani ##no ##ff und T ##sch ##aik ##ov ##sky noch nie gehört . Sie ent ##fes ##selt einen regel ##rechte ##n Feuer ##stu ##rm an Vir ##tu ##osi ##tät . 
[SEP] INFO:tensorflow:input_ids: 101 21023 21404 16363 18602 48021 17423 14210 10216 16706 117 11566 10134 16783 26904 18484 68462 10269 13329 28508 25758 10745 46503 83648 12754 10369 20284 10140 11699 10451 20511 10136 119 12882 107282 10415 21404 12979 12750 123 48021 17423 14210 10216 10166 38571 10269 31124 10343 13820 10130 157 12044 106333 11024 16116 11230 11058 16706 119 11583 61047 58058 26063 10897 46578 55663 10115 68686 19987 19341 10151 106433 10991 20316 24308 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 INFO:tensorflow:label: 2 (id = 2) </code></pre> <p>Or the model loading from a checkpoint-file:</p> <pre><code>INFO:tensorflow: name = output_weights:0, shape = (3, 768), *INIT_FROM_CKPT* INFO:tensorflow: name = output_bias:0, shape = (3,), *INIT_FROM_CKPT* </code></pre> <p>But in the end the eval_accuracy always stays the same:</p> <pre><code>I0625 15:46:41.328946 eval_accuracy = 0.3338616 </code></pre> <p>The full repository can be found here: <a href="https://github.com/joroGER/bert/" rel="nofollow noreferrer">https://github.com/joroGER/bert/</a></p> <p>And a gist to the notebook here: <a href="https://colab.research.google.com/gist/joroGER/75c1c9c6383f0199bb54ce7b63d412d0/untitled4.ipynb" rel="nofollow noreferrer">https://colab.research.google.com/gist/joroGER/75c1c9c6383f0199bb54ce7b63d412d0/untitled4.ipynb</a></p>
170
BERT model
Using Pretrained BERT model to add additional words that are not recognized by the model
https://stackoverflow.com/questions/64816669/using-pretrained-bert-model-to-add-additional-words-that-are-not-recognized-by-t
<p>I want some help regarding adding additional words to an existing BERT model. I have two queries; kindly guide me:</p> <p>I am working on an NER task for a domain:</p> <p>There are a few words (not sure of the exact number) that BERT recognizes as [UNK], but those entities are required for the model to recognize. The pretrained &quot;bert-base-cased&quot; model reaches good accuracy (up to 80%) when I provide labeled data and fine-tune it, but intuitively the model would learn better if it recognized all the entities.</p> <ol> <li><p>Do I need to add those unknown entities to vocab.txt and train the model again?</p> </li> <li><p>Do I need to train the BERT model on my data from scratch?</p> </li> </ol> <p>Thanks...</p>
<p>BERT works well because it is pre-trained on a very large textual dataset of 3.3 billion words. Training BERT from scratch is resource-demanding and does not pay off in most reasonable use cases.</p> <p>BERT uses the WordPiece algorithm for input segmentation. This should in theory ensure that there is no out-of-vocabulary token that would end up as <code>[UNK]</code>. The worst-case scenario in the segmentation would be that input tokens end up segmented into individual characters. If the segmentation is done correctly, <code>[UNK]</code> should appear only if the tokenizer encounters UTF-8 characters that were not in the training data.</p> <p>The most probable sources of your problem:</p> <ol> <li><p>There is a bug in the tokenization, so it produces tokens that are not in the word-piece vocabulary. (Perhaps word tokenization instead of WordPiece tokenization?)</p> </li> <li><p>It is an encoding issue that generates invalid or weird UTF-8 characters.</p> </li> </ol>
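To see why correct WordPiece segmentation rarely produces `[UNK]`, here is a toy sketch of the greedy longest-match-first algorithm with a hypothetical mini-vocabulary (the real one has ~30k entries, and the real tokenizer adds more bookkeeping). A domain word absent from the vocabulary is split into known pieces; only when no piece matches at all does the whole word collapse to `[UNK]`.

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first segmentation, as used by BERT's WordPiece tokenizer."""
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:              # non-initial pieces carry the "##" prefix
                sub = "##" + sub
            if sub in vocab:
                cur = sub
                break
            end -= 1
        if cur is None:                # nothing matched -> whole word becomes [UNK]
            return ["[UNK]"]
        pieces.append(cur)
        start = end
    return pieces

# Hypothetical mini-vocabulary for illustration only
vocab = {"bio", "##mar", "##ker", "##m", "##a", "##r", "##k", "##e"}
print(wordpiece("biomarker", vocab))   # ['bio', '##mar', '##ker']
print(wordpiece("bio\u2603", vocab))   # ['[UNK]'] -- character not covered by the vocabulary
```

If you do want whole-word entries for frequent domain terms, the Hugging Face API supports `tokenizer.add_tokens([...])` followed by `model.resize_token_embeddings(len(tokenizer))`; the newly added embeddings are randomly initialized and must be learned during fine-tuning.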
171
BERT model
Try to run an NLP model with an Electra instead of a BERT model
https://stackoverflow.com/questions/72680932/try-to-run-an-nlp-model-with-an-electra-instead-of-a-bert-model
<p>I want to run the <a href="https://github.com/vdobrovolskii/wl-coref" rel="nofollow noreferrer">wl-coref</a> model with an Electra model instead of a Bert model. However, I get an error message with the Electra model and can't find a hint in the Huggingface documentation on how to fix it.</p> <p>I tried different BERT models, such as roberta-base, bert-base-german-cased, or SpanBERT/spanbert-base-cased. They all work. But if I try an Electra model, like google/electra-base-discriminator or german-nlp-group/electra-base-german-uncased, then it doesn't work.</p> <p>The error that is displayed:</p> <pre><code>out, _ = self.bert(subwords_batches_tensor, attention_mask=torch.tensor(attention_mask, device=self.config.device)) ValueError: not enough values to unpack (expected 2, got 1) </code></pre> <p>And this is the method where the error comes from: <a href="https://github.com/vdobrovolskii/wl-coref/blob/master/coref/coref_model.py#L332" rel="nofollow noreferrer">_bertify</a> in line 349.</p>
<p>Just remove the underscore <code>_</code>, so you no longer unpack two values. <a href="https://huggingface.co/docs/transformers/model_doc/electra#transformers.ElectraModel.forward" rel="nofollow noreferrer">ELECTRA</a> does not return a pooling output like <a href="https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertModel.forward" rel="nofollow noreferrer">BERT</a> or <a href="https://huggingface.co/docs/transformers/model_doc/roberta#transformers.RobertaModel.forward" rel="nofollow noreferrer">RoBERTa</a>:</p> <pre class="lang-py prettyprint-override"><code>from transformers import AutoTokenizer, AutoModel def bla(model_id:str): t = AutoTokenizer.from_pretrained(model_id) m = AutoModel.from_pretrained(model_id) print(m(**t(&quot;this is a test&quot;, return_tensors=&quot;pt&quot;)).keys()) bla(&quot;google/electra-base-discriminator&quot;) bla(&quot;roberta-base&quot;) </code></pre> <p>Output:</p> <pre><code>odict_keys(['last_hidden_state']) odict_keys(['last_hidden_state', 'pooler_output']) </code></pre>
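The `ValueError` itself is plain Python tuple unpacking: `out, _ = f(...)` requires the callable to return at least two values. A quick stdlib-only illustration of the failure mode and a portable fix (the two functions below are stand-ins for the models' return shapes, not the real transformers API):

```python
def bert_like():
    # BERT-style models return (last_hidden_state, pooler_output).
    return ("hidden", "pooled")

def electra_like():
    # ELECTRA-style models return only (last_hidden_state,).
    return ("hidden",)

out, _ = bert_like()          # fine: two values to unpack

try:
    out, _ = electra_like()   # fails: only one value in the tuple
except ValueError as e:
    print(e)                  # "not enough values to unpack (expected 2, got 1)"

# Indexing works for both return shapes, so it is the portable spelling:
out = electra_like()[0]
print(out)
```

Indexing with `[0]` keeps the same code working whether or not the model also returns a pooler output.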
172
BERT model
Predicting Sentiment of Raw Text using Trained BERT Model, Hugging Face
https://stackoverflow.com/questions/69820318/predicting-sentiment-of-raw-text-using-trained-bert-model-hugging-face
<p>I'm predicting sentiment analysis of Tweets with positive, negative, and neutral classes. I've trained a BERT model using Hugging Face. Now I'd like to make predictions on a dataframe of unlabeled Twitter text and I'm having difficulty.</p> <p>I've followed the following tutorial (<a href="https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/" rel="nofollow noreferrer">https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/</a>) and was able to train a BERT model using Hugging Face.</p> <p>Here's an example of predicting on raw text however it's only one sentence and I would like to use a column of Tweets. <a href="https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/#predicting-on-raw-text" rel="nofollow noreferrer">https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/#predicting-on-raw-text</a></p> <pre><code>review_text = &quot;I love completing my todos! Best app ever!!!&quot; encoded_review = tokenizer.encode_plus( review_text, max_length=MAX_LEN, add_special_tokens=True, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) input_ids = encoded_review['input_ids'].to(device) attention_mask = encoded_review['attention_mask'].to(device) output = model(input_ids, attention_mask) _, prediction = torch.max(output, dim=1) print(f'Review text: {review_text}') print(f'Sentiment : {class_names[prediction]}') Review text: I love completing my todos! Best app ever!!! Sentiment : positive </code></pre> <p>Bill's response works. 
Here's the solution.</p> <pre><code>def predictionPipeline(text): encoded_review = tokenizer.encode_plus( text, max_length=MAX_LEN, add_special_tokens=True, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) input_ids = encoded_review['input_ids'].to(device) attention_mask = encoded_review['attention_mask'].to(device) output = model(input_ids, attention_mask) _, prediction = torch.max(output, dim=1) return(class_names[prediction]) df2['prediction']=df2['cleaned_tweet'].apply(predictionPipeline) </code></pre>
<p>You can use the same code to predict texts from the dataframe column.</p> <pre><code>model = ... tokenizer = ... def predict(review_text): encoded_review = tokenizer.encode_plus( review_text, max_length=MAX_LEN, add_special_tokens=True, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) input_ids = encoded_review['input_ids'].to(device) attention_mask = encoded_review['attention_mask'].to(device) output = model(input_ids, attention_mask) _, prediction = torch.max(output, dim=1) print(f'Review text: {review_text}') print(f'Sentiment : {class_names[prediction]}') return class_names[prediction] df = pd.DataFrame({ 'texts': [&quot;text1&quot;, &quot;text2&quot;, &quot;....&quot;] }) df[&quot;sentiments&quot;] = df.apply(lambda l: predict(l.texts), axis=1) </code></pre>
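Row-by-row `apply` means one tokenizer and model call per tweet, which is slow on large dataframes. A common speed-up is to predict in batches; this stdlib-only helper sketches the chunking part (the `predict_batch` argument is a hypothetical stand-in for a batched tokenizer + model invocation, which you would supply yourself):

```python
def chunks(items, batch_size):
    """Yield successive batch_size-sized slices of a list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def predict_all(texts, predict_batch, batch_size=32):
    """Run a batched prediction function over all texts, preserving order."""
    results = []
    for batch in chunks(texts, batch_size):
        results.extend(predict_batch(batch))
    return results

# Toy stand-in for a batched model call: label by text-length parity.
fake_predict = lambda batch: ["positive" if len(t) % 2 else "negative" for t in batch]

labels = predict_all(["good", "meh", "great!!"], fake_predict, batch_size=2)
print(labels)  # ['negative', 'positive', 'positive']
```

The resulting list lines up with the input order, so it can be assigned back to a dataframe column in one step.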
173
BERT model
Retraining existing base BERT model with additional data
https://stackoverflow.com/questions/62948266/retraining-existing-base-bert-model-with-additional-data
<p>I have generated new Base BERT model(<strong>dataset1_model_cased_L-12_H-768_A-12</strong>) using <strong>cased_L-12_H-768_A-12</strong> as trained multi label classification from <a href="https://github.com/dmis-lab/biobert/blob/master/run_classifier.py" rel="nofollow noreferrer">biobert-run_classifier</a></p> <p>I need to add more additional data as <strong>dataset2</strong> and the model should be <strong>dataset2_model_cased_L-12_H-768_A-12</strong></p> <p>Is <a href="https://www.tensorflow.org/hub/tutorials/tf2_image_retraining" rel="nofollow noreferrer">tensorflow-hub</a> help this to resolve my problem?</p> <p>Model training life cycle will be like this below,</p> <blockquote> <p>cased_L-12_H-768_A-12 =&gt; dataset1 =&gt; dataset1_model_cased_L-12_H-768_A-12</p> <p>dataset1_model_cased_L-12_H-768_A-12 =&gt; dataset2 =&gt; dataset2_model_cased_L-12_H-768_A-12</p> </blockquote>
<p>Tensorflow Hub is a platform for sharing pre-trained model pieces or whole models, and an API to facilitate this sharing. In TF 1.x, this API was a stand-alone API and in TF 2.x this API (SavedModel: <a href="https://www.tensorflow.org/guide/saved_model" rel="nofollow noreferrer">https://www.tensorflow.org/guide/saved_model</a>) is part of the core TF API.</p> <p>In the proposed training life-cycle example, using SavedModel to save relevant model between the training steps could simplify pipeline architecture design. Alternatively, you could use coding examples available as part of the TF Model Garden to perform this pre-training: <a href="https://github.com/tensorflow/models/tree/master/official/nlp" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/official/nlp</a>.</p>
174
BERT model
TypeError: dropout(): argument &#39;input&#39; (position 1) must be Tensor, not str Bert Model
https://stackoverflow.com/questions/72442319/typeerror-dropout-argument-input-position-1-must-be-tensor-not-str-bert
<p>Hi I encounter this error when I was training my Bert Model for sentiment analysis, where my classes have 3 outcomes and my input data is text.</p> <p>So I got the above error when I am training the model. I have searched some of the guides and tried to set this parameter to my bert model <code>bert_model = BertModel.from_pretrained(MODEL_NAME,return_dict=False)</code> but I am still getting the same error as before. I am using 'bert-base-cased' pretrained model</p> <pre><code># Function for a single training iteration def train_epoch(model, data_loader, loss_fn, optimizer, device, scheduler, n_examples): model = model.train() losses = [] correct_predictions = 0 for d in data_loader: input_ids = d[&quot;input_ids&quot;].to(device) attention_mask = d[&quot;attention_mask&quot;].to(device) targets = d[&quot;targets&quot;].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = torch.max(outputs, dim=1) loss = loss_fn(outputs, targets) correct_predictions += torch.sum(preds == targets) losses.append(loss.item()) # Backward prop loss.backward() # Gradient Descent nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) optimizer.step() scheduler.step() optimizer.zero_grad() return correct_predictions.double() / n_examples, np.mean(losses) </code></pre> <pre><code>%%time history = defaultdict(list) best_accuracy = 0 for epoch in range(EPOCHS): # Show details print(f&quot;Epoch {epoch + 1}/{EPOCHS}&quot;) print(&quot;-&quot; * 10) train_acc, train_loss = train_epoch( model, train_data_loader, loss_fn, optimizer, device, scheduler, len(df_train) ) print(f&quot;Train loss {train_loss} accuracy {train_acc}&quot;) # Get model performance (accuracy and loss) val_acc, val_loss = eval_model( model, val_data_loader, loss_fn, device, len(df_val) ) print(f&quot;Val loss {val_loss} accuracy {val_acc}&quot;) print() history['train_acc'].append(train_acc) history['train_loss'].append(train_loss) history['val_acc'].append(val_acc) 
history['val_loss'].append(val_loss) # If we beat prev performance if val_acc &gt; best_accuracy: torch.save(model.state_dict(), 'best_model_state.bin') best_accuracy = val_acc </code></pre>
175
BERT model
How do I retrain BERT model with new data
https://stackoverflow.com/questions/72040423/how-do-i-retrain-bert-model-with-new-data
<p>I have already trained a BERT model and saved it in the .pb format, and I want to retrain the model with new, custom-made datasets. In order not to lose the previous training, how do I train the model with the new data so that it updates itself? Any approaches? This is my training code below:</p> <pre><code>optimizer = Adam(lr=1e-5, decay=1e-6) model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) history = model.fit( x={'input_ids': x['input_ids'], 'attention_mask': x['attention_mask']}, #x={'input_ids': x['input_ids']}, y={'outputs': train_y}, validation_split=0.1, batch_size=32, epochs=1) </code></pre>
176
BERT model
Retrieve the &quot;relevant tokens&quot; with a BERT model (already fine-tuned)
https://stackoverflow.com/questions/66860788/retrieve-the-relevant-tokens-with-a-bert-model-already-fine-tuned
<p>I already fine-tuned a BERT model (with the Hugging Face library) for a classification task to predict a post category of two types (1 and 0, for example). But I would need to retrieve the &quot;relevant tokens&quot; for the documents that are predicted as category 1 (for example). I know that I can use the traditional TF-IDF approach once I have all the posts labeled as 1 (for example) with my BERT model. But I have the following question: is it possible to do the same task with the architecture of the fine-tuned BERT model? I mean, access the last layer of the encoder (the prediction layer) and, with the attention mechanism, get the &quot;relevant&quot; tokens that make the prediction 1 (for example)? Is it possible to do that? Does someone know a tutorial or something similar?</p>
<p>With transformer models, you can perform some explainability analysis, which is probably what you want. I would recommend looking at the transformer section of <a href="https://github.com/slundberg/shap#natural-language-example-transformers" rel="nofollow noreferrer">SHAP</a>. You just have to wrap your model in the SHAP explainer, like this:</p> <pre class="lang-py prettyprint-override"><code>import shap explainer = shap.Explainer(model) </code></pre> <p>There is another option if you have labels on which tokens are relevant, namely training a token classification model. But that would require retraining and labels for each token.</p>
177
BERT model
Train BERT model from scratch on a different language
https://stackoverflow.com/questions/67957446/train-bert-model-from-scratch-on-a-different-language
<p>First I create a tokenizer as follows:</p> <pre><code>from tokenizers import Tokenizer from tokenizers.models import BPE,WordPiece tokenizer = Tokenizer(WordPiece(unk_token=&quot;[UNK]&quot;)) from tokenizers.trainers import BpeTrainer,WordPieceTrainer trainer = WordPieceTrainer(vocab_size=5000,min_frequency=3, special_tokens=[&quot;[UNK]&quot;, &quot;[CLS]&quot;, &quot;[SEP]&quot;, &quot;[PAD]&quot;, &quot;[MASK]&quot;]) from tokenizers.pre_tokenizers import Whitespace,WhitespaceSplit tokenizer.pre_tokenizer = WhitespaceSplit() tokenizer.train(files, trainer) from tokenizers.processors import TemplateProcessing tokenizer.token_to_id(&quot;[SEP]&quot;),tokenizer.token_to_id(&quot;[CLS]&quot;) tokenizer.post_processor = TemplateProcessing( single=&quot;[CLS] $A [SEP]&quot;, pair=&quot;[CLS] $A [SEP] $B:1 [SEP]:1&quot;, special_tokens=[ (&quot;[CLS]&quot;, tokenizer.token_to_id(&quot;[CLS]&quot;)), (&quot;[SEP]&quot;, tokenizer.token_to_id(&quot;[SEP]&quot;)), ], ) </code></pre> <p>Next, I want to train a BERT model on these tokens. I tried as follows:</p> <pre><code>from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,mlm=True, mlm_probability=0.15) </code></pre> <p>But it gives me an error: <code>AttributeError: 'tokenizers.Tokenizer' object has no attribute 'mask_token'</code> &quot;This tokenizer does not have a mask token which is necessary for masked language modeling.&quot; Though I have <code>attention_mask</code>. Is it different from the <code>mask token</code>?</p>
178
BERT model
Huggingface pre trained bert model is not working
https://stackoverflow.com/questions/64365122/huggingface-pre-trained-bert-model-is-not-working
<p>I have pre-trained a BERT model with a custom corpus and got the vocab file, checkpoints, model.bin, tfrecords, etc.</p> <p>Then I loaded the model as below:</p> <pre class="lang-py prettyprint-override"><code># Load pre-trained model (weights) model = BertModel.from_pretrained('/content/drive/My Drive/Anirban_test_pytorch') </code></pre> <p>But when I try to use the model for any task (like Q&amp;A, masked-word prediction, etc.), I get the below error:</p> <pre class="lang-py prettyprint-override"><code>from transformers import pipeline nlp = pipeline(&quot;fill-mask&quot;, model=&quot;model&quot;) nlp(f&quot;This is the best thing I've {nlp.tokenizer.mask_token} in my life.&quot;) </code></pre> <p>ERROR:</p> <p>OSError: Can't load config for 'model'. Make sure that:</p> <ul> <li><p>'model' is a correct model identifier listed on 'https://huggingface.co/models'</p> </li> <li><p>or 'model' is the correct path to a directory containing a config.json file</p> </li> </ul> <p>Can you please help me?</p>
179
BERT model
tensor type attributes in bert model returned as string
https://stackoverflow.com/questions/65461593/tensor-type-attributes-in-bert-model-returned-as-string
<p>I am new to NLP and I want to build a BERT model for sentiment analysis, so I am following this tutorial <a href="https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/" rel="nofollow noreferrer">https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/</a> but I am getting the error below.</p> <pre><code>bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME) last_hidden_state, pooled_output = bert_model( input_ids=encoding['input_ids'], attention_mask=encoding['attention_mask'] ) last_hidden_state.shape pooled_output.shape </code></pre> <p>When I execute last_hidden_state.shape I get an error:</p> <p>'str' object has no attribute 'shape'. Why does it return last_hidden_state and pooled_output as str and not tensors? Thank you.</p>
<p>A couple of changes were introduced in the switch from version 3 to version 4 of Hugging Face Transformers: the model now returns a dict-like output by default instead of a tuple. It can be solved as below:</p> <pre><code>bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False) </code></pre>
180
BERT model
I get error while downloading BERT models for summarization
https://stackoverflow.com/questions/63832094/i-get-error-while-downloading-bert-models-for-summarization
<p>I'm a novice at writing neural networks. I have just started using BERT models; while running BERT for text summarization using the examples in <a href="https://pypi.org/project/bert-extractive-summarizer/" rel="nofollow noreferrer">bert extractive summarizer</a>, I get the following error, with the pretrained model download halting at 57%.</p> <p><code> OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-pytorch_model.bin' to download pretrained weights.</code></p> <p>How do I resolve this error? Thanks.</p>
181
BERT model
BERT model : &quot;enable_padding() got an unexpected keyword argument &#39;max_length&#39;&quot;
https://stackoverflow.com/questions/66743649/bert-model-enable-padding-got-an-unexpected-keyword-argument-max-length
<p>I am trying to implement the BERT model architecture using Hugging Face and KERAS. I am learning this from the Kaggle (<a href="https://www.kaggle.com/tanulsingh077/deep-learning-for-nlp-zero-to-transformers-bert" rel="noreferrer">link</a>) and try to understand it. When I tokenized my data, I face some problems and get an error message. The error msg is:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-20-888a40c0160b&gt; in &lt;module&gt; ----&gt; 1 x_train = fast_encode(train1.comment_text.astype(str), fast_tokenizer, maxlen=MAX_LEN) 2 x_valid = fast_encode(valid.comment_text.astype(str), fast_tokenizer, maxlen=MAX_LEN) 3 x_test = fast_encode(test.content.astype(str), fast_tokenizer, maxlen=MAX_LEN ) 4 y_train = train1.toxic.values 5 y_valid = valid.toxic.values &lt;ipython-input-8-de591bf0a0b9&gt; in fast_encode(texts, tokenizer, chunk_size, maxlen) 4 &quot;&quot;&quot; 5 tokenizer.enable_truncation(max_length=maxlen) ----&gt; 6 tokenizer.enable_padding(max_length=maxlen) 7 all_ids = [] 8 TypeError: enable_padding() got an unexpected keyword argument 'max_length' </code></pre> <p>and the code is:</p> <pre><code>x_train = fast_encode(train1.comment_text.astype(str), fast_tokenizer, maxlen=192) x_valid = fast_encode(valid.comment_text.astype(str), fast_tokenizer, maxlen=192) x_test = fast_encode(test.content.astype(str), fast_tokenizer, maxlen=192 ) y_train = train1.toxic.values y_valid = valid.toxic.values </code></pre> <p>and the function fast_encode is here:</p> <pre><code>def fast_encode(texts, tokenizer, chunk_size=256, maxlen=512): &quot;&quot;&quot; Encoder for encoding the text into sequence of integers for BERT Input &quot;&quot;&quot; tokenizer.enable_truncation(max_length=maxlen) tokenizer.enable_padding(max_length=maxlen) all_ids = [] for i in tqdm(range(0, len(texts), chunk_size)): text_chunk = texts[i:i+chunk_size].tolist() encs = 
tokenizer.encode_batch(text_chunk) all_ids.extend([enc.ids for enc in encs]) return np.array(all_ids) </code></pre> <p>What should I do now?</p>
<p>The tokenizer used here is not the regular tokenizer, but the fast tokenizer provided by an older version of the Huggingface <code>tokenizers</code> library.</p> <p>If you wish to create the fast tokenizer using the older version of huggingface <code>transformers</code> from the notebook, you can do this:</p> <pre class="lang-py prettyprint-override"><code>from tokenizers import BertWordPieceTokenizer # First load the real tokenizer tokenizer = transformers.DistilBertTokenizer.from_pretrained('distilbert-base-multilingual-cased') # Save the loaded tokenizer locally tokenizer.save_pretrained('.') # Reload it with the huggingface tokenizers library fast_tokenizer = BertWordPieceTokenizer('vocab.txt', lowercase=False) fast_tokenizer </code></pre> <p>However, the process of using a fast tokenizer has become significantly simpler since I wrote this code. If you look at the <a href="https://huggingface.co/transformers/master/preprocessing.html" rel="nofollow noreferrer">Preprocessing data tutorial</a> by Huggingface, you will notice that you simply need to do:</p> <pre><code>tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') batch_sentences = [ &quot;Hello world&quot;, &quot;Some slightly longer sentence to trigger padding&quot; ] batch = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors=&quot;tf&quot;) </code></pre> <p>This is because the fast tokenizer (which is written in Rust) is automatically used whenever it's available.</p>
182
BERT model
Index out of Range in Self - BERT Model Tuning Pytorch
https://stackoverflow.com/questions/75622232/index-out-of-range-in-self-bert-model-tuning-pytorch
<p>I am working on training the BERT Model for Pytorch. I'm quite new to Pytorch as well. My code as replicated from: <a href="https://towardsdatascience.com/text-classification-with-bert-in-pytorch-887965e5820f" rel="nofollow noreferrer">https://towardsdatascience.com/text-classification-with-bert-in-pytorch-887965e5820f</a> keeps returning an error: &quot;Index out of Range in Self&quot;. The training partially executed but then abruptly stops with this exception. My code below:</p> <pre><code>tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') </code></pre> <pre><code>labels = {'prepared' : 0, 'anticipating' : 1, 'hopeful' : 2, 'proud' : 3, 'excited' : 4, 'joyful' : 5, 'content' : 6, 'caring' : 7, 'grateful' : 8, 'trusting' : 9, 'confident' : 10, 'faithful' : 11, 'impressed' : 12, 'surprised' : 13, 'terrified' : 14, 'afraid' : 15, 'apprehensive' : 16, 'anxious' : 17, 'embarrassed' : 18, 'ashamed' : 19, 'devastated' : 20, 'sad' : 21, 'disappointed' : 22, 'lonely' : 23, 'sentimental' : 24, 'nostalgic' : 25, 'guilty' : 26, 'disgusted' : 27, 'furious' : 28, 'angry' : 29, 'annoyed' : 30, 'jealous' : 31, 'agreeing' : 32, 'acknowledging' : 33, 'encouraging' : 34, 'consoling' : 35, 'sympathizing' : 36, 'suggesting' : 37, 'questioning' : 38, 'wishing' : 39, 'neutral' : 40} class Dataset(torch.utils.data.Dataset): def __init__(self, df): self.labels = [labels[label] for label in df['eb+_emot']] self.texts = [tokenizer(text, padding='max_length', max_length = 510, truncation=True, return_tensors=&quot;pt&quot;) for text in df['uttr']] def classes(self): return self.labels def __len__(self): return len(self.labels) def get_batch_labels(self, idx): # Fetch a batch of labels return np.array(self.labels[idx]) def get_batch_texts(self, idx): # Fetch a batch of inputs return self.texts[idx] def __getitem__(self, idx): batch_texts = self.get_batch_texts(idx) batch_y = self.get_batch_labels(idx) return batch_texts, batch_y </code></pre> <pre><code>np.random.seed(112) 
df_train, df_val, df_test = np.split(df.sample(frac = 1, random_state= 42), [int(.8*len(df)), int(.9*len(df))]) print(len(df_train), len(df_val), len(df_test)) </code></pre> <pre><code>from torch import nn from transformers import BertModel </code></pre> <pre><code>class BertClassifier(nn.Module): def __init__(self, dropout=0.5): super(BertClassifier, self).__init__() self.bert = BertModel.from_pretrained('bert-base-cased') self.dropout = nn.Dropout(dropout) self.linear = nn.Linear(768, 41) self.relu = nn.ReLU() def forward(self, input_id, mask): _, pooled_output = self.bert(input_ids= input_id, attention_mask=mask,return_dict=False) dropout_output = self.dropout(pooled_output) linear_output = self.linear(dropout_output) final_layer = self.relu(linear_output) return final_layer </code></pre> <pre><code>from torch.optim import Adam from tqdm import tqdm </code></pre> <pre><code>def train(model, train_data, val_data, learning_rate, epochs): train, val = Dataset(train_data), Dataset(val_data) train_dataloader = torch.utils.data.DataLoader(train, batch_size=2) val_dataloader = torch.utils.data.DataLoader(val, batch_size=2) use_cuda = torch.cuda.is_available() device = torch.device(&quot;cuda&quot; if use_cuda else &quot;cpu&quot;) criterion = nn.CrossEntropyLoss() optimizer = Adam(model.parameters(), lr= learning_rate) if use_cuda: model = model.cuda() criterion = criterion.cuda() for epoch_num in range(epochs): total_acc_train = 0 total_loss_train = 0 for train_input, train_label in tqdm(train_dataloader): train_label = train_label.to(device) mask = train_input['attention_mask'].to(device) input_id = train_input['input_ids'].squeeze(1).to(device) output = model(input_id, mask) batch_loss = criterion(output, train_label.long()) total_loss_train += batch_loss.item() acc = (output.argmax(dim=1) == train_label).sum().item() total_acc_train += acc model.zero_grad() batch_loss.backward() optimizer.step() total_acc_val = 0 total_loss_val = 0 with torch.no_grad(): for 
val_input, val_label in val_dataloader: val_label = val_label.to(device) mask = val_input['attention_mask'].to(device) input_id = val_input['input_ids'].squeeze(1).to(device) output = model(input_id, mask) batch_loss = criterion(output, val_label.long()) total_loss_val += batch_loss.item() acc = (output.argmax(dim=1) == val_label).sum().item() total_acc_val += acc print( f'Epochs: {epoch_num + 1} | Train Loss: {total_loss_train / len(train_data): .3f} \ | Train Accuracy: {total_acc_train / len(train_data): .3f} \ | Val Loss: {total_loss_val / len(val_data): .3f} \ | Val Accuracy: {total_acc_val / len(val_data): .3f}') EPOCHS = 5 model = BertClassifier() LR = 1e-6 train(model, df_train, df_val, LR, EPOCHS) </code></pre> <p>I've found some other solutions that people posed around 'embedding', but I'm quite new to this syntax so I'm not sure where to actually edit my code (hence the plethora that I've posted).</p>
183
BERT model
How can I integrate BERT model in my notebook (Python)?
https://stackoverflow.com/questions/66356324/how-can-i-integrate-bert-model-in-my-notebook-python
<p>I am doing text classification with keras model (sequential). Now, what can I do to improve the model performance (the accuracy, the val accuracy, the prediction, etc). This is my model architecture:</p> <pre><code>from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint from keras.utils.vis_utils import plot_model model = Sequential() model.add(Embedding(vocab_size, embedding_dim, input_length=train_padded.shape[1])) model.add(Conv1D(48, 5, activation='relu', padding='valid')) model.add(GlobalMaxPooling1D()) model.add(Dropout(0.5)) model.add(Flatten()) model.add(Dropout(0.5)) model.add(Dense(4, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary()) </code></pre> <p>Another question please, can I insert Bert model for text classification in order to improve the val accuracy of my model?</p> <p>Thank you !</p>
184
BERT model
BERT Model Evaluation Measure in terms of Syntax Correctness and Semantic Coherence
https://stackoverflow.com/questions/58840538/bert-model-evaluation-measure-in-terms-of-syntax-correctness-and-semantic-cohere
<p>For example I have an original sentence. The word <strong>barking</strong> corresponds to the word that is missing.</p> <pre><code>Original Sentence : The dog is barking. Incomplete Sentence : The dog is ___________. </code></pre> <p>For example, using the BERT model, it predicts the word crying instead of the word barking. How will I measure the accuracy of the BERT Model in terms of how syntactically correct and semantically coherent the predicted word is? </p> <p>(For an instance, there are a lot of incomplete sentences, and the task is to evaluate BERT accuracy based on these incomplete sentences.)Please help.</p>
<p>For <em>syntax</em>, you can use, for instance, the <a href="https://github.com/delph-in/erg" rel="nofollow noreferrer">English Resource Grammar</a> to decide if a sentence is grammatical. It is the biggest manually curated description of English grammar; you can try an <a href="http://erg.delph-in.net/logon" rel="nofollow noreferrer">online demo</a>. A grammar (given it has sufficiently large coverage, which grammars usually don't) refuses to parse ungrammatical sentences, unlike statistical/neural parsers, which happily parse everything (and usually better than grammars do).</p> <p>Estimating <em>semantic plausibility</em> is a very difficult task, and given that BERT is probably one of the best current language models, you cannot use another language model as a reference. There are some academic papers that deal with modeling semantic plausibility; you can start, e.g., with <a href="https://www.aclweb.org/anthology/N18-2049.pdf" rel="nofollow noreferrer">this one from NAACL 2018</a>.</p>
185
BERT model
&quot;gcloud ai endpoints deploy-model&quot; fails with &quot;Model server exited unexpectedly&quot; (BERT model deployment on Vertex AI)
https://stackoverflow.com/questions/79502332/gcloud-ai-endpoints-deploy-model-fails-with-model-server-exited-unexpectedly
<p>I'm encountering persistent issues deploying a custom container to Vertex AI using gcloud ai endpoints deploy-model. I'm trying to deploy a BERT model packaged in a Docker image, but I'm consistently facing errors despite providing the correct Artifact Registry image path.</p> <p>Here's a breakdown of my setup:</p> <ul> <li>I have a Docker image containing a BERT model and a Flask application for inference. The image is approximately 18GB in size.</li> <li>The image is successfully built and pushed to Google Cloud Artifact Registry.</li> <li>I'm using the image's fully qualified digest in the gcloud command.</li> <li>I have an existing Vertex AI endpoint.</li> <li>The service account has the necessary permissions.</li> </ul> <p>When I run the following command:</p> <pre><code>gcloud ai endpoints deploy-model ENDPOINT_ID \ --region=us-central1 \ --model=&quot;us-central1-docker.pkg.dev/gemini-demo-429713/bert-repo/bert-vertex-ai@sha256:bcec3e...&quot; \ --deployed-model-id=bert-model-production-v1 \ --machine-type=n1-standard-8 \ --display-name=DISPLAY_NAME \ --service-account=SERVICE_ACC_NAME \ --traffic-split=&quot;0=100&quot; </code></pre> <p>I get the following error:</p> <pre><code>ERROR: (gcloud.ai.endpoints.deploy-model) There is an error while getting the model information. Please make sure the model 'projects/gemini-demo-429713/locations/us-central1/models/us-central1-docker.pkg.dev/gemini-demo-429713/bert-repo/bert-vertex-ai@sha256:bcec3e...' exists. 
</code></pre> <p>I have also tried uploading a model with the following command:</p> <pre><code>gcloud ai models upload \ --region=us-central1 \ --display-name=bert-model-name-predict-luxure \ --container-image-uri=us-central1-docker.pkg.dev/gemini-demo-429713/bert-repo/bert-vertex-ai:latest \ --format=&quot;value(name)&quot; </code></pre> <p>and then tried to deploy the model with:</p> <pre><code>gcloud ai endpoints deploy-model ENDPOINT_ID \ --region=us-central1 \ --model=MODEL_ID \ --machine-type=n1-standard-4 \ --display-name=DISPLAY_NAME \ --service-account=SERVICE_ACC_NAME \ --verbosity=debug </code></pre> <p>The deployment process goes on for more than 30 mins and then eventually fails with no logs and a generic error:</p> <pre><code>ERROR: (gcloud.ai.endpoints.deploy-model) Model server exited unexpectedly. Model server logs can be found at &lt;Link&gt; </code></pre> <p>The logs are empty.</p> <p>What could be causing this issue?</p>
186
BERT model
Bert model train don&#39;t want to stop
https://stackoverflow.com/questions/65073823/bert-model-train-dont-want-to-stop
<p>I am using this code to train Bert for Turkish language model classification with 2 labels. But when I run the following code:</p> <pre><code>import numpy as np import pandas as pd df = pd.read_excel (r'preparedDataNoId.xlsx') df = df.sample(frac = 1) from sklearn.model_selection import train_test_split train_df, test_df = train_test_split(df, test_size=0.10) print('train shape: ',train_df.shape) print('test shape: ',test_df.shape) train_df[&quot;text&quot;]=train_df[&quot;text&quot;].apply(lambda r: str(r)) train_df['label']=train_df['label'].astype(int) from simpletransformers.classification import ClassificationModel model = ClassificationModel('bert', 'dbmdz/bert-base-turkish-uncased', use_cuda=False,num_labels=2, args={'reprocess_input_data': True, 'overwrite_output_dir': True, 'num_train_epochs': 3, &quot;train_batch_size&quot;: 64 , &quot;fp16&quot;:False, &quot;output_dir&quot;: &quot;bert_model&quot;}) model.train_model(train_df) </code></pre> <p>It takes a lot of time, it doesn't stop and the screen keeps showing:</p> <pre><code>This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... </code></pre>
<p>As the error suggests you should wrap your code with an <code>if __name__ == '__main__':</code></p> <p>So your code would be:</p> <pre><code>import numpy as np import pandas as pd if __name__ == '__main__': df = pd.read_excel(r'preparedDataNoId.xlsx') df = df.sample(frac=1) from sklearn.model_selection import train_test_split train_df, test_df = train_test_split(df, test_size=0.10) print('train shape: ', train_df.shape) print('test shape: ', test_df.shape) train_df[&quot;text&quot;] = train_df[&quot;text&quot;].apply(lambda r: str(r)) train_df['label'] = train_df['label'].astype(int) from simpletransformers.classification import ClassificationModel model = ClassificationModel('bert', 'dbmdz/bert-base-turkish-uncased', use_cuda=False, num_labels=2, args={'reprocess_input_data': True, 'overwrite_output_dir': True, 'num_train_epochs': 3, &quot;train_batch_size&quot;: 64, &quot;fp16&quot;: False, &quot;output_dir&quot;: &quot;bert_model&quot;}) model.train_model(train_df) </code></pre> <p><strong>Why is this happening?</strong></p> <blockquote> <p>On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an <code>if __name__ == '__main__':</code> guard in the main module to avoid creating subprocesses recursively.</p> </blockquote> <p>Quoted from: <a href="https://stackoverflow.com/a/18205006/6025629">https://stackoverflow.com/a/18205006/6025629</a></p>
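A stripped-down illustration of why the guard matters: the `Pool` is only created when the file is executed as a script, so on Windows (where child processes re-import the main module under the spawn start method) the workers do not recursively spawn more workers. The `square` function is a made-up stand-in for the training code.

```python
import multiprocessing as mp

def square(x):
    # Defined at module level so child processes can import it under spawn.
    return x * x

if __name__ == "__main__":
    # Without this guard, each spawned child would re-run the Pool creation
    # on import, recursing until the freeze_support error from the question.
    with mp.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```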
187
BERT model
BERT model bug encountered during training
https://stackoverflow.com/questions/67360987/bert-model-bug-encountered-during-training
<p>So, I made a custom dataset consisting of reviews form several E-learning sites. What I am trying to do is build a model that can recognize emotions based on text and for training I am using the dataset I've made via scraping. While working on BERT, I encountered this error</p> <p><code>normalize() argument 2 must be str, not float</code></p> <p>here's my code:-</p> <pre><code>import numpy as np import pandas as pd import numpy as np import tensorflow as tf print(tf.__version__) import ktrain from ktrain import text from sklearn.model_selection import train_test_split import pickle #class_names = [&quot;Frustration&quot;, &quot;Not satisfied&quot;, &quot;Satisfied&quot;, &quot;Happy&quot;, &quot;Excitement&quot;] data = pd.read_csv(&quot;Final_scraped_dataset.csv&quot;) print(data.head()) X = data['Text'] y = data['Emotions'] class_names = np.unique(data['Emotions']) print(class_names) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 42) print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) print(X_train.head(10)) encoding = { 'Frustration': 0, 'Not satisfied': 1, 'Satisfied': 2, 'Happy': 3, 'Excitement' : 4 } y_train = [encoding[x] for x in y_train] y_test = [encoding[x] for x in y_test] X_train = X_train.tolist() X_test = X_test.tolist() #print(X_train) (x_train, y_train), (x_test, y_test), preproc = text.texts_from_array(x_train=X_train, y_train=y_train, x_test=X_test, y_test=y_test, class_names=class_names, preprocess_mode='bert', maxlen=200, max_features=15000) #I've encountered the error here '''model = text.text_classifier('bert', train_data=(x_train, y_train), preproc=preproc) learner = ktrain.get_learner(model, train_data=(x_train, y_train), val_data=(x_test, y_test), batch_size=4) learner.fit_onecycle(2e-5, 3) learner.validate(val_data=(x_test, y_test)) predictor = ktrain.get_predictor(learner.model, preproc) predictor.get_classes() import time message = 'I hate you a lot' start_time = 
time.time() prediction = predictor.predict(message) print('predicted: {} ({:.2f})'.format(prediction, (time.time() - start_time))) # let's save the predictor for later use predictor.save(&quot;new_model/bert_model&quot;) print(&quot;SAVED _______&quot;)''' </code></pre> <p>here's the complete error:-</p> <pre><code> File &quot;D:\Sentiment analysis\BERT_model_new_dataset.py&quot;, line 73, in &lt;module&gt; max_features=15000) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\ktrain\text\data.py&quot;, line 373, in texts_from_array trn = preproc.preprocess_train(x_train, y_train, verbose=verbose) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\ktrain\text\preprocessor.py&quot;, line 796, in preprocess_train x = bert_tokenize(texts, self.tok, self.maxlen, verbose=verbose) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\ktrain\text\preprocessor.py&quot;, line 166, in bert_tokenize ids, segments = tokenizer.encode(doc, max_len=max_length) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\keras_bert\tokenizer.py&quot;, line 73, in encode first_tokens = self._tokenize(first) File &quot;D:\Anaconda3\envs\pythy37\lib\site-packages\keras_bert\tokenizer.py&quot;, line 103, in _tokenize text = unicodedata.normalize('NFD', text) TypeError: normalize() argument 2 must be str, not float </code></pre>
<p>It sounds like you may have a float value in your <code>data['Text']</code> column somehow.</p> <p>You can try something like this to shed more light on what's happening:</p> <pre class="lang-py prettyprint-override"><code>for i, s in enumerate(data['Text']): if not isinstance(s, str): print('Text in row %s is not a string: %s' % (i, s)) </code></pre>
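If the loop above does reveal non-string rows (pandas represents missing text as the float `NaN`), a common fix is to drop or cast those rows before calling `texts_from_array`. A minimal sketch, using a plain list in place of the `data['Text']` column; the same idea applies to a pandas Series via `dropna()` or `.astype(str)`:

```python
# A column with two bad rows: a NaN (a float in pandas) and a stray number.
raw = ["great course", float("nan"), "too fast", 3.5]

# Option 1: keep only real strings (NaN and numbers are dropped).
cleaned = [s for s in raw if isinstance(s, str)]

# Option 2: cast everything to str. Note that NaN becomes the literal
# string "nan", so dropping is usually better for missing values.
coerced = [str(s) for s in raw]

print(cleaned)  # ['great course', 'too fast']
print(coerced)  # ['great course', 'nan', 'too fast', '3.5']
```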
188
BERT model
Weird behaviour when finetuning Huggingface Bert model with Tensorflow
https://stackoverflow.com/questions/72139450/weird-behaviour-when-finetuning-huggingface-bert-model-with-tensorflow
<p>I am trying to fine tune a Huggingface Bert model using Tensorflow (on ColabPro GPU enabled) for tweets sentiment analysis. I followed step by step the guide on the Huggingface website, but I am experiencing a weird training time. This happens with all the Bert models I tried. I have two datasets of different sizes (10k and 2.5Millions) consisting of tweets that I need to classify as having a positive sentiment or a negative sentiment.</p> <p>With this piece of code I perform tokenization of my dataset:</p> <pre><code># perform tokenization of the dataset from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) def tokenize_function(sentence): return tokenizer(sentence['Phrase'], padding=True, truncation=True, max_length=30) train = train.map(tokenize_function, batched=True) test = test.map(tokenize_function, batched=True) val = val.map(tokenize_function, batched=True) </code></pre> <p>I then create tensoflow datasets:</p> <pre><code># go from 'Dataset' type to tensorflow so that our dataset can be used for training in keras from transformers import DefaultDataCollator data_collator = DefaultDataCollator(return_tensors=&quot;tf&quot;) tf_train_dataset = train.to_tf_dataset( columns=[&quot;attention_mask&quot;, &quot;input_ids&quot;, &quot;token_type_ids&quot;], label_cols=[&quot;Label&quot;], shuffle=False, collate_fn=data_collator, batch_size=256, ) tf_val_dataset = val.to_tf_dataset( columns=[&quot;attention_mask&quot;, &quot;input_ids&quot;, &quot;token_type_ids&quot;], label_cols=[&quot;Label&quot;], shuffle=False, collate_fn=data_collator, batch_size=256, ) tf_test_dataset = test.to_tf_dataset( columns=[&quot;attention_mask&quot;, &quot;input_ids&quot;, &quot;token_type_ids&quot;], shuffle=False, collate_fn=data_collator, batch_size=256, ) </code></pre> <p>Download and compile the model:</p> <pre><code>from transformers import TFAutoModelForSequenceClassification # download pre-trained model model = 
TFAutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2) model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=tf.metrics.SparseCategoricalAccuracy(), ) </code></pre> <p>And I finally train the model</p> <pre><code># Compute some variables needed to speed up training batch_size = 64 train_steps_per_epoch = int(len(tf_train_dataset) // batch_size) dev_steps_per_epoch = int(len(tf_val_dataset) // batch_size) # train model model.fit(tf_train_dataset, validation_data=tf_val_dataset, epochs=1, verbose=2, # steps_per_epoch=train_steps_per_epoch, # validation_steps=dev_steps_per_epoch, ) </code></pre> <p>I first trained this model on the 10K dataset and one epochs takes around 20mins. To me this is a lot. The training set is not that big and I am using a rather powerful GPU. I tried searching the web for some tricks to speed up the trainig time and a guy on stackoverflow suggested setting the <code>steps_per_epoch</code> parameter, which I set to what you can see in the code. Now the trainig time improves exponentially, I can train the model on the full dataset (2.5M) for 3 epochs in 30mins, but the performance actually decrease. I loked up the definition of <code>steps_per_epoch</code> and to me it is almost like batch_size.</p> <p>My questions now would be:</p> <ol> <li>is it normal for a bert model to take 20mins for one epoch on a datatset of 10k tweets?</li> <li>what does <code>steps_per_epoch</code> actually do? why does it speed up the training time so much? and why does the performance actually decrease?</li> </ol>
<p>For your first question, I haven't used the BERT model before so I can't say.</p> <p>For your second question: from what I understand, <code>steps_per_epoch</code> is the number of batches that will be used to fit the model during one epoch.</p> <p>So let's say your number of epochs was 2 instead of 1 and you set <code>steps_per_epoch</code> to 10. Once the model has been fitted with 10 batches of your training data (<em>in your case the batch size is 64, so that is 640 samples per epoch</em>), it will end that epoch and start the second epoch, which will also end after receiving 640 samples. In the end the model is trained on only about 1280 samples of the data.</p> <p>In your case you intended <code>steps_per_epoch</code> to cover your entire sample size, so at first it seems strange that it reduced the training time so much. One likely explanation: <code>len()</code> of a batched <code>tf.data.Dataset</code> returns the number of <em>batches</em> (batches of 256 here), not samples, so dividing it by 64 again sets <code>steps_per_epoch</code> far too low. The model then only sees a small fraction of the data each epoch, which both speeds up training and hurts performance.</p>
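The arithmetic here can be sketched directly. The 2.5M figure and the batch sizes (256 for the dataset, 64 in the steps computation) are taken from the question; the `len()`-counts-batches behaviour of a batched `tf.data.Dataset` is the assumed cause of the mismatch:

```python
# With steps_per_epoch set, Keras ends an "epoch" after that many batches,
# regardless of how large the dataset actually is.
def samples_seen(epochs, steps_per_epoch, batch_size):
    return epochs * steps_per_epoch * batch_size

print(samples_seen(2, 10, 64))  # the example above: 1280 samples in total

# Why the question's setting shrank each epoch: len() of a batched
# tf.data.Dataset counts batches (of 256 here), not samples, so dividing it
# by 64 again yields far fewer steps than one full pass over the data.
n_samples = 2_500_000
n_batches = -(-n_samples // 256)      # ceil division: what len(...) returns
steps = n_batches // 64               # the question's train_steps_per_epoch
print(steps * 256, "of", n_samples)   # only ~39k of 2.5M samples per "epoch"
```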
189
BERT model
Fine-tuning BERT sentence transformer model
https://stackoverflow.com/questions/69562624/fine-tuning-bert-sentence-transformer-model
<p>I am using a pre-trained BERT sentence transformer model, as described here <a href="https://www.sbert.net/docs/training/overview.html" rel="noreferrer">https://www.sbert.net/docs/training/overview.html</a> , to get embeddings for sentences.</p> <p>I want to fine-tune these pre-trained embeddings, and I am following the instructions in the tutorial i have linked above. According to the tutorial, you fine-tune the pre-trained model by feeding it sentence pairs and a label score that indicates the similarity score between two sentences in a pair. I understand this fine-tuning happens using the architecture shown in the image below:</p> <p><a href="https://i.sstatic.net/JPA53.png" rel="noreferrer"><img src="https://i.sstatic.net/JPA53.png" alt="enter image description here" /></a></p> <p>Each sentence in a pair is encoded first using the BERT model, and then the &quot;pooling&quot; layer aggregates (usually by taking the average) the word embeddings produced by Bert layer to produce a single embedding for each sentence. The cosine similarity of the two sentence embeddings is computed in the final step and compared against the label score.</p> <p>My question here is - which parameters are being optimized when fine-tuning the model using the given architecture? Is it fine-tuning only the parameters of the <em>last layer</em> in BERT model? This is not clear to me by looking at the code example shown in the tutorial for fine-tuning the model.</p>
<p>That actually depends on your requirements. If you have a lot of computational resources and you want the best possible sentence representation, then you should fine-tune all the layers (which is what was done in the original Sentence-BERT model).</p> <p>But if you are, say, a student with limited compute and a reasonably good sentence representation is enough, then you can freeze BERT and train only the non-BERT layers.</p>
190
BERT model
FastAPI return BERT model result and metrics
https://stackoverflow.com/questions/65885841/fastapi-return-bert-model-result-and-metrics
<p>I have sentiment analysis model using BERT and I want to get the result from predicting text via FastAPI but it always give negative answer (I think it is because the prediction didn't give prediction result).</p> <p>This is my code:</p> <pre><code>import uvicorn from fastapi import FastAPI import joblib # models sentiment_model = open(&quot;sentiment-analysis-model.pkl&quot;, &quot;rb&quot;) sentiment_clf = joblib.load(sentiment_model) # init app app = FastAPI() # Routes @app.get('/') async def index(): return {&quot;text&quot;: &quot;Hello World! huehue&quot;} @app.get('/predict/{text}') async def predict(text): prediction, raw_outputs = sentiment_clf.predict(text) if prediction == 0: result = &quot;neutral&quot; elif prediction == 1: result = &quot;positive&quot; else: result = &quot;negative&quot; return{&quot;text&quot;: text, &quot;prediction&quot;:result} if __name__ == '__main__': uvicorn.run(app, host=&quot;127.0.0.1&quot;, port=8000) </code></pre> <p>Also I want to print accuracy, F1 Score etc.</p> <p>I'm using this model</p> <pre><code>from simpletransformers.classification import ClassificationModel model = ClassificationModel('bert', 'bert-base-multilingual-uncased', num_labels=3, use_cuda=False, args={'reprocess_input_data': True, 'overwrite_output_dir': True, 'num_train_epochs': 1}, weight=[3, 0.5, 1]) </code></pre>
<p>You are using a <a href="https://fastapi.tiangolo.com/tutorial/path-params/" rel="nofollow noreferrer">Path parameter</a> construction, meaning that to call your API endpoint you need to make a call like <code>http://localhost:8000/predict/some_text</code>. The issue is that <code>some_text</code> contains spaces in your case. Apart from putting in explicit URL escapes like <code>%20</code> (I am not sure this would even work), this will fail to register the space and you will just get the first word.</p> <p>Instead you would be better off using a <a href="https://fastapi.tiangolo.com/tutorial/body/" rel="nofollow noreferrer">Request body</a> construction, so a POST instead of a GET. Something like this:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI from pydantic import BaseModel class Text(BaseModel): text: str app = FastAPI() @app.post(&quot;/predict/&quot;) async def predict(text: Text): text = text.text ... </code></pre>
191
BERT model
RuntimeError, working on IA tryna use a pre-trained BERT model
https://stackoverflow.com/questions/60561504/runtimeerror-working-on-ia-tryna-use-a-pre-trained-bert-model
<p>Hi here is a part of my code to use a pre-trained bert model for classification: </p> <pre><code> model = BertForSequenceClassification.from_pretrained( "bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab. num_labels = 2, # The number of output labels--2 for binary classification. # You can increase this for multi-class tasks. output_attentions = False, # Whether the model returns attentions weights. output_hidden_states = False, # Whether the model returns all hidden-states. ) </code></pre> <p>...</p> <pre><code>for step, batch in enumerate(train_dataloader): b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) </code></pre> <p>but then I receive this error message:</p> <blockquote> <p>RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long;</p> </blockquote> <p>but got torch.IntTensor instead (while checking arguments for embedding) So I think I should transform my <code>b_input_ids</code> to tensor but don't know how to do it. Thanks a lot in advance for your help everyone !</p>
<p>Finally succeeded by casting the input tensors to 64-bit integers with <code>.to(torch.int64)</code> (equivalently <code>.long()</code>), e.g. <code>b_input_ids = b_input_ids.to(torch.int64)</code>, since the embedding layer expects <code>Long</code> indices.</p>
192
BERT model
How to use pretrained checkpoints of BERT model on semantic text similarity task?
https://stackoverflow.com/questions/57461607/how-to-use-pretrained-checkpoints-of-bert-model-on-semantic-text-similarity-task
<p>I am unaware to use the derived checkpoints from pre-trained BERT model for the task of semantic text similarity.</p> <p>I have run a pre-trained BERT model with some domain of corpora from scratch. I have got the checkpoints and graph.pbtxt file from the code below. But I am unaware on how to use those files for evaluating semantic text similarity test file.</p> <pre><code>!python create_pretraining_data.py \ --input_file=/input_path/input_file.txt \ --output_file=/tf_path/tf_examples.tfrecord \ --vocab_file=/vocab_path/uncased_L-12_H-768_A-12/vocab.txt \ --do_lower_case=True \ --max_seq_length=128 \ --max_predictions_per_seq=20 \ --masked_lm_prob=0.15 \ --random_seed=12345 \ --dupe_factor=5 !python run_pretraining.py \ --input_file=/tf_path/tf_examples.tfrecord \ --output_dir=pretraining_output \ --do_train=True \ --do_eval=True \ --bert_config_file=/bert_path/uncased_L-12_H-768_A-12/bert_config.json \ --init_checkpoint=/bert_path/uncased_L-12_H-768_A-12/bert_model.ckpt\ --train_batch_size=32 \ --max_seq_length=128 \ --max_predictions_per_seq=20 \ --num_train_steps=20 \ --num_warmup_steps=10 \ --learning_rate=2e-5 </code></pre>
193
BERT model
BERT model loss function from one hot encoded labels
https://stackoverflow.com/questions/68104425/bert-model-loss-function-from-one-hot-encoded-labels
<p>For the line: loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) I have labels hot encoded such that it is a tensor of 32x17, since the batch size is 32 and there are 17 classes for the text categories. However, BERT model only takes for the label with a single dimension vector. Hence, I get the error:</p> <p>Expected input batch_size (32) to match target batch_size (544)</p> <p>The 544 is the product of 32x17. However, my question is how could I use one hot encoded labels to get the loss value in each iteration? I could use just label encoded labels, but that would not really be suitable for unordered labels.</p> <pre><code># BERT training loop for _ in trange(epochs, desc=&quot;Epoch&quot;): ## TRAINING # Set our model to training mode model.train() # Tracking variables tr_loss = 0 nb_tr_examples, nb_tr_steps = 0, 0 # Train the data for one epoch for step, batch in enumerate(train_dataloader): # Add batch to GPU batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_input_mask, b_labels = batch # Clear out the gradients (by default they accumulate) optimizer.zero_grad() # Forward pass loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) train_loss_set.append(loss.item()) # Backward pass loss.backward() # Update parameters and take a step using the computed gradient optimizer.step() # Update tracking variables tr_loss += loss.item() nb_tr_examples += b_input_ids.size(0) nb_tr_steps += 1 print(&quot;Train loss: {}&quot;.format(tr_loss/nb_tr_steps)) </code></pre>
<p>As stated in the comment, Bert for sequence classification expects the targets as a <code>[batch]</code>-sized tensor with values spanning the range <em>[0, num_labels)</em>. A one-hot encoded tensor can be converted by <code>argmax</code>ing it over the label dim, i.e. <code>labels=b_labels.argmax(dim=1)</code>.</p>
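To make the conversion concrete, here is a small sketch using NumPy in place of PyTorch (the call is the same in both libraries; torch's `dim=1` corresponds to `axis=1` here, and the 3x4 matrix stands in for the 32x17 label tensor from the question):

```python
import numpy as np

# One-hot labels with shape [batch, num_classes].
one_hot = np.array([
    [0, 0, 1, 0],  # class 2
    [1, 0, 0, 0],  # class 0
    [0, 1, 0, 0],  # class 1
])

# Collapse to a [batch]-sized vector of class indices in [0, num_classes).
labels = one_hot.argmax(axis=1)  # PyTorch equivalent: b_labels.argmax(dim=1)
print(labels)  # [2 0 1]
```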
194
BERT model
How to load a fine tuned pytorch huggingface bert model from a checkpoint file?
https://stackoverflow.com/questions/71561761/how-to-load-a-fine-tuned-pytorch-huggingface-bert-model-from-a-checkpoint-file
<p>I had fine tuned a bert model in pytorch and saved its checkpoints via <code>torch.save(model.state_dict(), 'model.pt')</code></p> <p>Now When I want to reload the model, I have to explain whole network again and reload the weights and then push to the device.</p> <p>Can anyone tell me how can I save the bert model directly and load directly to use in production/deployment?</p> <p>Following is the training code and you can try running there in colab itself! After training completion, you will notice in file system we have a checkpoint file. But I want to save the model itself.</p> <p><a href="https://colab.research.google.com/github/prateekjoshi565/Fine-Tuning-BERT/blob/master/Fine_Tuning_BERT_for_Spam_Classification.ipynb" rel="nofollow noreferrer">LINK TO COLAB NOTEBOOK FOR SAMPLE TRAINING</a></p> <p>Following is the current inferencing code I written.</p> <pre><code>import torch import torch.nn as nn from transformers import AutoModel, BertTokenizerFast import numpy as np import json tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') device = torch.device(&quot;cpu&quot;) class BERT_Arch(nn.Module): def __init__(self, bert): super(BERT_Arch, self).__init__() self.bert = bert # dropout layer self.dropout = nn.Dropout(0.1) # relu activation function self.relu = nn.ReLU() # dense layer 1 self.fc1 = nn.Linear(768, 512) # dense layer 2 (Output layer) self.fc2 = nn.Linear(512, 2) # softmax activation function self.softmax = nn.LogSoftmax(dim=1) # define the forward pass def forward(self, sent_id, mask): # pass the inputs to the model _, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False) x = self.fc1(cls_hs) x = self.relu(x) x = self.dropout(x) # output layer x = self.fc2(x) # apply softmax activation x = self.softmax(x) return x bert = AutoModel.from_pretrained('bert-base-uncased') model = BERT_Arch(bert) path = './models/saved_weights_new_data.pt' model.load_state_dict(torch.load(path, map_location=device)) model.to(device) def 
inference(comment): tokens_test = tokenizer.batch_encode_plus( list([comment]), max_length=75, pad_to_max_length=True, truncation=True, return_token_type_ids=False ) test_seq = torch.tensor(tokens_test['input_ids']) test_mask = torch.tensor(tokens_test['attention_mask']) predictions = model(test_seq.to(device), test_mask.to(device)) predictions = predictions.detach().cpu().numpy() predictions = np.argmax(predictions, axis=1) return predictions </code></pre> <p>I simply want to save a model from this notebook in a way such that I can use it for inferencing anywhere.</p>
<p>Just save your model using <code>model.save_pretrained</code>, here is an example:</p> <pre><code>model.save_pretrained(&quot;&lt;path_to_dummy_folder&gt;&quot;) </code></pre> <p>You can download the model from Colab and save it on your Google Drive or at any other location of your choice. While doing inference, you just give the path to this model (you may have to upload it first) and start inferencing.</p> <p>To load the model:</p> <pre><code>model = AutoModel.from_pretrained(&quot;&lt;path_to_saved_pretrained_model&gt;&quot;) #Note: Instead of AutoModel class, you may use the task specific class as well. </code></pre>
195
BERT model
Can&#39;t get &#39;bert&#39; model to run using ktrain and pandas dataframe
https://stackoverflow.com/questions/73791130/cant-get-bert-model-to-run-using-ktrain-and-pandas-dataframe
<p>I try to work with ktrain to finetune bert model. I'm using pandas dataframe named train_df to store my data.</p> <p><code>x_train, x_val, y_train, y_val = train_test_split(train_df['text'], train_df['target'], shuffle=True, test_size = 0.2, random_state=random_seed, stratify=train_df['target'])</code></p> <br> I'm using function texts_from_array because I'm reading the data with pandas dataframe When I want to Convert data to features for BERT I get ValueError (ValueError: x_train must be a list or NumPy array). <br> <pre><code>(x_train_bert, y_train_bert), (x_val_bert, y_val_bert), preproc = text.texts_from_array(x_train=x_train, y_train=y_train, x_test = x_val, y_test=y_val, class_names= [&quot;0&quot;, &quot;1&quot;], preprocess_mode='bert', lang = 'en', maxlen=65, max_features=35000) </code></pre> <p>What I'm missing?</p>
<p>I found the solution and now it is working correctly: <code>texts_from_array</code> expects plain lists (or NumPy arrays) rather than pandas Series, so call <code>.tolist()</code> on each one.</p> <pre><code>(x_train_bert, y_train_bert), (x_val_bert, y_val_bert), preproc = text.texts_from_array(x_train=x_train.tolist(), y_train=y_train.tolist(), x_test = x_val.tolist(), y_test=y_val.tolist(), class_names= [&quot;0&quot;, &quot;1&quot;], preprocess_mode='bert', lang = 'en', maxlen=65, max_features=35000) </code></pre>
196
BERT model
How to Extract Features from Text based on Fine-Tuned BERT Model
https://stackoverflow.com/questions/58061775/how-to-extract-features-from-text-based-on-fine-tuned-bert-model
<p>I am trying to make a binary predictor on some data which has one columns with text and some additional columns with numerical values. My first solution was to use word2vec on the text to extract 30 features and use them with the other values in a Random Forest. It produces good result. I am interested in improving the TEXT to FEATURE model.</p> <p>I then wanted to improve the feature extraction algorithm by using BERT. I managed to implement a pre-trained BERT model for feature extraction with some improvement to the word2vec.</p> <p>Now I want to know, how can i fine-tune the BERT model on my data - to improve the feature extraction model - to get better text-to-features for my Random Forest algorithm. I know how to fine-tune BERT for a binary predictor (BertForSequenceClassification), but not how to fine-tune it for a making a better BERT text-to-feature extraction model. Can I use the layers in the BertForSequenceClassification somehow?? I spent 2 days trying to find a solution, but did not manage so far...</p> <p>Kind Regards, Peter</p>
<p>I am dealing with this problem too. As far as I know, you must fine-tune the BERT language model; according to <a href="https://github.com/google-research/bert/issues/145" rel="nofollow noreferrer">this issue</a>, <a href="https://github.com/google-research/bert#pre-training-with-bert" rel="nofollow noreferrer">masked LM</a> fine-tuning is suggested. Then you can use <a href="https://bert-as-service.readthedocs.io/en/latest/section/faq.html#can-i-use-my-own-fine-tuned-bert-model" rel="nofollow noreferrer">Bert-as-service</a> to extract the features. Note that I haven't tested it yet, but I am going to. I thought it would be good to share it with you :)</p>
197
BERT model
How to Answer Subjective/descriptive types of Questions using BERT Model?
https://stackoverflow.com/questions/74654341/how-to-answer-subjective-descriptive-types-of-lquestions-using-bert-model
<p>I am trying to implement BERT Model for Question Answering tasks, but Its a little different from the existing Q&amp;A models, The Model will be given some text(3-4 pages) and will be asked questions based on the text, and the expected answer may be asked in short or descriptive subjective type</p> <p>I tried to implement BERT, for this task.</p> <p><strong>The Problems I am facing:</strong> The input token limit for BERT is 512. How to get the answer in long form, which can describe any instance, process, event, etc.</p>
<p>Try Longformer, which can take an input length of 4096 tokens, or even 16384 tokens with gradient checkpointing. See details at <a href="https://github.com/allenai/longformer" rel="nofollow noreferrer">https://github.com/allenai/longformer</a>, or on the Hugging Face model hub: <a href="https://huggingface.co/docs/transformers/model_doc/longformer" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/model_doc/longformer</a>.</p>
198
BERT model
Fine tuning BERT model for text generation (crossword solver)
https://stackoverflow.com/questions/78636736/fine-tuning-bert-model-for-text-generation-crossword-solver
<p>I need assistance in my NLP project, where the goal is to predict a list of possible answers for a given crossword clue. The idea is to <strong>fine tune a BERT model using a dataset of crossword clue - answer pairs</strong>.</p> <p>train.source looks like this : Line at an airport, Kind of omelet, Susa was its capital, Suffix with cavern ... or gorge?, Nine: Prefix, Dragon's prey, Some pyrotechnics, ...</p> <p>train.targets looks like this : LIMOS, EGGWHITE, ELAM, OUS, ENNEA, MAIDEN, FLARES, ...</p> <p>For now, what I have is a loading function for the dataset in train, test and val. (len(train) = 433034, len(val) = 72304, len(test) = 72940)</p> <p>I've loaded two models from BERT, the tokeniser and the model :</p> <pre><code>tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForMaskedLM.from_pretrained('bert-base-uncased') </code></pre> <p>Then I've create a BERTDataset class :</p> <pre><code>class BERTDataset(Dataset): def __init__(self, tokenizer, texts, targets=None, max_length=512): self.tokenizer = tokenizer self.texts = texts self.targets = targets # Optionally used if specific target responses are provided self.max_length = max_length def __len__(self): return len(self.texts) def __getitem__(self, idx): text = self.texts[idx] # Append '[MASK]' to the end of the text text_with_mask = text + &quot; ? 
[MASK].&quot; # Encoding the text with the appended MASK token encoding = self.tokenizer.encode_plus( text_with_mask, max_length=self.max_length, padding='max_length', truncation=True, return_tensors='pt' ) input_ids = encoding['input_ids'].squeeze(0) # Remove batch dimension attention_mask = encoding['attention_mask'].squeeze(0) # Labels should ideally be -100 where no prediction is needed labels = input_ids.detach().clone() # Set labels to -100 where the input IDs are not masked labels[labels != self.tokenizer.mask_token_id] = -100 return { 'input_ids': input_ids, 'attention_mask': attention_mask, 'labels': labels } train_dataset = BERTDataset(tokenizer, train_sources, train_targets, max_length=128) val_dataset = BERTDataset(tokenizer, val_sources, val_targets, max_length=128) </code></pre> <p>And finally launched the training :</p> <pre><code>from transformers import TrainingArguments,Trainer # Define training arguments training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # number of training epochs per_device_train_batch_size=8, # batch size for training per_device_eval_batch_size=16, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.005, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) # Initialize the Trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=val_dataset, #compute_metrics=lambda eval_pred: {&quot;loss&quot; : eval_pred.loss} ) # Start training trainer.train() </code></pre> <p>I've encountered some problem, and the main one is that the training loss goes to 0 very quickly and I don't understand why (starting from step 510, the training loss is 0):</p> <p><a href="https://i.sstatic.net/fMykIq6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fMykIq6t.png" alt="enter image description here" /></a></p> <p>Is there anything I did wrong 
? I really don't get why the model is not training correctly. Thank you !!</p>
199
implement regression
How to implement Poisson Regression?
https://stackoverflow.com/questions/37941881/how-to-implement-poisson-regression
<p>There are 2 types of Generalized Linear Models: <br>1. Log-Linear Regression, also known as Poisson Regression <br>2. Logistic Regression</p> <p>How to implement the Poisson Regression in Python for Price Elasticity prediction?</p>
<p>Have a look at the <a href="https://pypi.python.org/pypi/statsmodels" rel="noreferrer">statsmodels</a> package in Python.</p> <p>Here is an <a href="http://nbviewer.jupyter.org/urls/umich.box.com/shared/static/ir0bnkup9rywmqd54zvm.ipynb" rel="noreferrer">example</a>.</p> <p>A bit more input to avoid a <strong><em>link-only answer</em></strong>:</p> <p>Assuming you know Python, here is an extract of the example I mentioned earlier.</p> <pre><code>import numpy as np import pandas as pd from statsmodels.genmod.generalized_estimating_equations import GEE from statsmodels.genmod.cov_struct import (Exchangeable, Independence, Autoregressive) from statsmodels.genmod.families import Poisson </code></pre> <p><code>pandas</code> will hold the data frame with the data you want to use to feed your Poisson model. The <code>statsmodels</code> package contains a large family of statistical models such as linear, probit, Poisson etc.; from here you import the Poisson family model (hint: see the last import).</p> <p>The way you fit your model is as follows (assuming your dependent variable is called <code>y</code> and your IVs are age, trt and base):</p> <pre><code>fam = Poisson() ind = Independence() model1 = GEE.from_formula("y ~ age + trt + base", "subject", data, cov_struct=ind, family=fam) result1 = model1.fit() print(result1.summary()) </code></pre> <p>As I am not familiar with the nature of your problem, I would suggest having a look at negative binomial regression if your count data are overdispersed; with high overdispersion your Poisson assumptions may not hold.</p> <p>There is a plethora of info for Poisson regression in R - just google it.</p> <p>Hope this answer helps.</p>
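If you want to see what a Poisson GLM is actually doing under the hood, independent of statsmodels, here is a self-contained NumPy sketch that fits the log-linear model by Newton-Raphson (IRLS) on synthetic data. The covariate, coefficients, and sample size are made up for illustration (think of the covariate as log price in the elasticity setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Design matrix: intercept plus one covariate.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ true_beta))  # counts with a log-linear mean

# Newton-Raphson / IRLS updates for the Poisson log-likelihood.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)           # mean under the log link
    grad = X.T @ (y - mu)           # score vector
    H = X.T @ (mu[:, None] * X)     # Fisher information
    beta = beta + np.linalg.solve(H, grad)

print(beta)  # should be close to true_beta = [0.5, 0.3]
```

With real data you would of course use `statsmodels.api.GLM(y, X, family=Poisson())` instead, but the iteration above is essentially what its fitter runs.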
0
implement regression
Problems implementing regression neural network
https://stackoverflow.com/questions/51189147/problems-implementing-regression-neural-network
<p>I've been trying for a while to implement my first regression neural network in MATLAB, following the example from figure 5.3 in page 231 from '<a href="http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf" rel="nofollow noreferrer">Pattern Recognition and Machine Learning</a>' book from C. Bishop.</p> <p>In this example, a two-layer neural network has been used to implement several transformations, such as the <strong>sin</strong>, <strong>square</strong>, <strong>heaviside</strong> and <strong>absolute value</strong> functions. Hence, input and output layers do only have one neuron, whereas the hidden layer does have three of them. As mentioned in the image's title, hidden layers use <strong>tanh</strong> activation functions, whereas the output is linear.</p> <p>The only function for which I sometimes got """close""" to (find what the apostrophes mean in the picture attached) was the sin one. The rest of them are still so far from what I aim. Shouldn't the solution work out for all the cases as in the example?</p> <p>Please find both my picture and my code attached (it's a modified version of the one from <a href="https://es.mathworks.com/matlabcentral/fileexchange/55826-pattern-recognition-and-machine-learning-toolbox" rel="nofollow noreferrer">here</a>). </p> <pre><code>%% INITIALIZATION
h = [3];
X = linspace(-1,1,1000);
T = sin(X*pi);

%% NEURAL NETWORK
eta = 1/size(X,2);
h = [size(X,1);h(:);size(T,1)];
L = numel(h);
W = cell(L-1,1);
for l = 1:L-1
    W{l} = randn(h(l),h(l+1)); % Should I maybe initialize this differently?
end
Z = cell(L,1);
Z{1} = X;
maxiter = 10000;
mse = zeros(1,maxiter);
% forward
for iter = 1:maxiter
    for l = 2:L-1
        Z{l} = tanh(W{l-1}'*Z{l-1}); % 5.10, 5.49
    end
    Z{L} = W{L-1}'*Z{L-1}; % Linear output activation function
    % backward
    E = T-Z{L}; % E = dk
    mse(iter) = mean(dot(E,E),1);
    dW = Z{L-1}*E';
    W{L-1} = W{L-1}+eta*dW;
    for l = L-2:-1:1
        df = 1-Z{l+1}.^2; % Derivative of tanh function
        dj = df.*(W{l+1}*E);
        dW = Z{l}*dj'; % 5.67
        W{l} = W{l}+eta*dW;
        E = dj;
    end
end
mse = mse(1:maxiter);
model.W = W;

%% RESULTS
plot(mse);
disp(['T = [' num2str(T) ']']);
W = model.W;
Y = X;
for l = 1:length(W)-1
    Y = tanh(W{l}'*Y);
end
Y = W{length(W)}'*Y;
disp(['Y = [' num2str(Y) ']']);
figure
plot(X, T, 'o');
hold on
plot(X, Y);
legend('T (target output)','Y (trained output)', 'Location', 'southeast');
hold off
</code></pre> <h2><a href="https://i.sstatic.net/Q03wm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q03wm.jpg" alt="Target output versus trained output"></a> <a href="https://i.sstatic.net/4ebL7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4ebL7.jpg" alt="Variation of error with the number of iterations"></a></h2> <p>As you may have already noticed, I'm far from being an expert in this field. If you know of any online course / nice reference apart from the aforementioned book where I can find some coded examples I would gladly appreciate that. Please feel free to ask or to comment whatever you may consider.</p>
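For reference, here is a Python/NumPy sketch of the same 1-3-1 tanh architecture (not the author's MATLAB; all names are mine). One likely culprit for the square/heaviside/absolute-value failures is the lack of bias terms in the MATLAB code: without biases, a tanh network with linear output is an odd function of its input and cannot represent even functions such as x² or |x|. The sketch below includes biases.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 200).reshape(1, -1)   # 1 x N inputs
T = np.sin(np.pi * X)                        # targets

W1 = rng.normal(size=(3, 1))   # input -> hidden weights (3 tanh units)
b1 = np.zeros((3, 1))          # hidden biases (missing in the MATLAB code)
W2 = rng.normal(size=(1, 3))   # hidden -> output weights
b2 = np.zeros((1, 1))          # output bias
eta = 0.05

def forward(X):
    H = np.tanh(W1 @ X + b1)
    return H, W2 @ H + b2      # linear output activation

_, Y0 = forward(X)
mse0 = np.mean((Y0 - T) ** 2)  # error before training

for _ in range(5000):
    H, Y = forward(X)
    E = Y - T                          # output-layer error
    dW2 = E @ H.T / X.shape[1]
    db2 = E.mean(axis=1, keepdims=True)
    dH = (W2.T @ E) * (1 - H ** 2)     # backprop through tanh
    dW1 = dH @ X.T / X.shape[1]
    db1 = dH.mean(axis=1, keepdims=True)
    W2 -= eta * dW2; b2 -= eta * db2
    W1 -= eta * dW1; b1 -= eta * db1

_, Y = forward(X)
mse = np.mean((Y - T) ** 2)
print(mse0, "->", mse)
```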
1
implement regression
How to implement L1 logistic regression?
https://stackoverflow.com/questions/59881343/how-to-implement-l1-logistic-regression
<p>As part of pursuing a course, I was trying to implement L1 logistic regression using scikit-learn in Python. Unfortunately for the code</p> <pre><code>clf, pred = fit_and_plot_classifier(LogisticRegression(penalty = 'l1', C=1000000)) </code></pre> <p>I get the error message</p> <pre><code>ValueError: Solver lbfgs supports only 'l2' or 'none' penalties, got l1 penalty. </code></pre> <p>I tried setting l1_ratio</p> <pre><code>clf, pred = fit_and_plot_classifier(LogisticRegression(l1_ratio = 1)) </code></pre> <p>but got the error message</p> <pre><code>C:\Users\HP\Anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:1499: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)"(penalty={})".format(self.penalty)) </code></pre> <p>So, how to implement L1 Logistic regression?</p>
<p>You can do it like you are doing in the first code snippet, but you have to define another solver. Use either ‘liblinear’ or ‘saga’, <a href="https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression" rel="nofollow noreferrer">check more in the documentation</a>.</p>
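Under the hood, 'liblinear' and 'saga' use specialized solvers for the non-smooth L1 term. As a standard-library-only illustration of the idea (not scikit-learn's actual algorithm; all names and data here are made up), a one-feature L1-penalized logistic regression can be fit by proximal gradient descent, where the L1 penalty becomes a soft-thresholding step:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_l1_logistic(xs, ys, lam, lr=0.1, iters=2000):
    """Proximal gradient (ISTA): gradient step on the log-loss,
    then soft-threshold, which is the proximal map of the L1 penalty."""
    w = 0.0
    for _ in range(iters):
        g = sum((sigmoid(w * x) - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * g
        w = math.copysign(max(abs(w) - lr * lam, 0.0), w)  # soft-threshold
    return w

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w_weak = fit_l1_logistic(xs, ys, lam=0.001)  # weak penalty: clearly positive weight
w_strong = fit_l1_logistic(xs, ys, lam=1.0)  # strong penalty: driven to exactly 0
print(w_weak, w_strong)
```

The strong-penalty case shows the defining property of L1: coefficients are set exactly to zero, which is why it is used for feature selection.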
2
implement regression
implement ordinal regression in Theano
https://stackoverflow.com/questions/44666127/implement-ordinal-regression-in-theano
<p>I want to implement <a href="https://en.wikipedia.org/wiki/Ordinal_regression" rel="nofollow noreferrer">ordinal regression</a> in Theano. But I've no idea how to implement the middle part: threshold definition and usage.</p> <p>For example (simply put): </p> <pre><code>X = T.matrix('X', dtype='float32')  # Feature matrix
y = T.vector('y', dtype='int32')    # labels
w = T.vector('w', dtype='float32')
threshold = T.vector('threshold', dtype='float32')

p = T.nnet.sigmoid(threshold - T.dot(X, w))
p_y_x = theano.ifelse.ifelse(T.eq(y, 0), p[y], (p[y] - p[y-1]))
loss = -T.sum(T.log(p_y_x))
</code></pre> <p>I notice there would be something wrong with the definition of <code>p_y_x</code> as well as <code>p</code>. But I've no idea how to modify it. Can anyone help?</p>
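The cumulative-link arithmetic the Theano snippet is trying to express can be checked in plain Python first (names and numbers here are mine, for illustration): in an ordinal (proportional-odds) model, P(y <= k) = sigmoid(theta_k - score), and each class probability is a difference of consecutive cumulative probabilities, with the lowest and highest classes as edge cases.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(score, thresholds):
    """Class probabilities under a cumulative-link ordinal model.
    thresholds must be increasing; K thresholds give K+1 ordered classes."""
    cum = [sigmoid(t - score) for t in thresholds] + [1.0]  # P(y <= k)
    probs = [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
    return probs

# three ordered classes need two thresholds theta_0 < theta_1
probs = ordinal_probs(score=0.3, thresholds=[-1.0, 1.0])
print(probs)
```

As long as the thresholds are kept ordered, the differences are non-negative and sum to one, which is the invariant the `p_y_x` expression must preserve.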
3
implement regression
Manually implementing Regression Likelihood Ratio Test
https://stackoverflow.com/questions/49764026/manually-implementing-regression-likelihood-ratio-test
<p>I'm trying to implement my own linear regression likelihood ratio test.</p> <p>The test is where you take the sum of squares of a reduced model and the sum of squares of a full model and compare it to the F statistic.</p> <p>However, I am having some trouble implementing the function, especially when dealing with dummy variables.</p> <p><a href="https://drive.google.com/open?id=13evSuUw439WwMTMHjp--mLxUip3S1vtQ" rel="nofollow noreferrer">This</a> is the dataset I am working with and testing the function on.</p> <p>Here is the code so far. The function inputs are the setup matrix mat, the response matrix which has just one column, the indices (variables) being tested, and the alpha value the test is at.</p> <pre><code>linear_regression_likelihood &lt;- function(mat, response, indices, alpha) {
  mat &lt;- as.matrix(mat)
  reduced &lt;- mat[,c(1, indices)]

  q &lt;- 1  #set q = 1 just to test on data
  p &lt;- dim(mat)[2]
  n &lt;- dim(mat)[1]
  f_stat &lt;- qf(1-alpha, df1 = p-q, df2 = n-(p+1))

  beta_hat_full &lt;- qr.solve(t(mat)%*%mat)%*%t(mat)%*%response
  y_hat_full &lt;- mat%*%beta_hat_full
  SSRes_full &lt;- t(response - y_hat_full)%*%(response-y_hat_full)

  beta_hat_red &lt;- qr.solve(t(reduced)%*%reduced)%*%t(reduced)%*%response
  y_hat_red &lt;- reduced%*%beta_hat_red
  SSRes_red &lt;- t(response - y_hat_red)%*%(response-y_hat_red)

  s_2 &lt;- (t(response - mat%*%beta_hat_full)%*%(response - mat%*%beta_hat_full))/(n-p+1)

  critical_value &lt;- ((SSRes_red - SSRes_full)/(p-q))/s_2
  print(critical_value)

  if (critical_value &gt; f_stat) {
    return ("Reject H0")
  } else {
    return ("Fail to Reject H0")
  }
}
</code></pre> <p>Here is the setup code, where I set up the matrix in the correct format. <code>data</code> is the read-in CSV file.</p> <pre><code>data &lt;- data[, 2:5]
mat &lt;- data[, 2:4]
response &lt;- data[, 1]

library(ade4)
df &lt;- data.frame(mat$x3)
dummy &lt;- acm.disjonctif(df)
dummy
mat &lt;- cbind(1, mat[1:2], dummy)

linear_regression_likelihood(mat, response, 2:3, 0.05)
</code></pre> <p>This is the error I keep getting.</p> <pre><code>Error in solve.default(as.matrix(c)) : system is computationally singular: reciprocal condition number = 1.63035e-18
</code></pre> <p>I know it has to do with taking the inverse of the matrix after it is multiplied, but the function is unable to do so. I thought it may be due to the dummy variables having too small of values, but I am not sure of any other way to include the dummy variables.</p> <p>The test I am doing is to check whether the factor variable x3 has any affect on the response y. The actual answer which I verified using the built in functions states that we fail to reject the null hypothesis.</p>
<p>The error originates from line</p> <pre><code>beta_hat_full &lt;- qr.solve(t(mat)%*%mat)%*%t(mat)%*%response </code></pre> <p>If you go through your function step-by-step you will see an error</p> <blockquote> <p>Error in qr.solve(t(mat) %*% mat) : singular matrix 'a' in solve</p> </blockquote> <p>The problem here is that your model matrix does not have full column rank, which translates to your regression coefficients not being unique. This is a result of the way you "dummyfied" <code>x3</code>. In order to ensure full rank, you need to remove one dummy column (or manually remove the intercept). </p> <p>In the following example I remove the <code>A</code> column from <code>dummy</code> which means that resulting <code>x3</code> coefficients measure the effect of a unit-change in <code>B</code>, <code>C</code>, and <code>D</code> <em>against</em> <code>A</code>. </p> <pre><code># Read data data &lt;- read.csv("data_hw5.csv") data &lt;- data[, 2:5] # Extract predictor and response data mat &lt;- data[, 2:4] response &lt;- data[, 1] # Dummify categorical predictor x3 library(ade4) df &lt;-data.frame(mat$x3) dummy &lt;- acm.disjonctif(df) dummy &lt;- dummy[, -1] # Remove A to have A as baseline mat &lt;- cbind(1, mat[1:2], dummy) # Apply linear_regression_likelihood linear_regression_likelihood(mat, response, 2:3, 0.05); # [,1] #[1,] 8.291975 #[1] "Reject H0" </code></pre> <hr> <h2>A note</h2> <p>The error could have been avoided if you had used base R's function <code>model.matrix</code> which ensures full rank when "dummyfying" categorical variables (<code>model.matrix</code> is also implicitly called in <code>lm</code> and <code>glm</code> to deal with categorical, i.e. <code>factor</code> variables).</p> <p>Take a look at </p> <pre><code>mm &lt;- model.matrix(y ~ x1 + x2 + x3, data = data) </code></pre> <p>which by default omits the first level of <code>factor</code> variable <code>x3</code>. 
<code>mm</code> is identical to <code>mat</code> after (correct) "dummification". </p>
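The same nested-model F statistic is easy to cross-check in Python (a sketch assuming NumPy is available; the simulated data and names are mine). Using `np.linalg.lstsq` instead of explicitly inverting `t(X) %*% X` also sidesteps the singular-matrix failure that rank-deficient dummy coding triggers:

```python
import numpy as np

def lr_f_test(X_full, X_red, y):
    """F statistic for nested OLS models:
    F = ((SSR_red - SSR_full) / q) / (SSR_full / (n - p)),
    where q is the number of restrictions and p the full design's columns."""
    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # robust to rank deficiency
        r = y - X @ beta
        return r @ r
    n, p = X_full.shape
    q = p - X_red.shape[1]
    ssr_f, ssr_r = ssr(X_full), ssr(X_red)
    return ((ssr_r - ssr_f) / q) / (ssr_f / (n - p))

rng = np.random.default_rng(1)
n = 50
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.0 * x2 + rng.normal(size=n)   # x2 is truly irrelevant
X_full = np.column_stack([np.ones(n), x1, x2])
f_drop_x1 = lr_f_test(X_full, np.column_stack([np.ones(n), x2]), y)  # H0: coef(x1)=0
f_drop_x2 = lr_f_test(X_full, np.column_stack([np.ones(n), x1]), y)  # H0: coef(x2)=0
print(f_drop_x1, f_drop_x2)
```

Dropping the relevant regressor x1 should yield a much larger F than dropping the irrelevant x2.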
4
implement regression
How can I implement regression after multi-class multi-label classification?
https://stackoverflow.com/questions/78572569/how-can-i-implement-regression-after-multi-class-multi-label-classification
<p>I have a dataset where some objects (15%) belong to different classes and have a property value for each of those classes. How can I make a model that predicts multi-label or multi-class and then make a regression prediction based on the output of the classifier? I also need to output the probabilities for each class. unfortunately I can't delete this 15%. <a href="https://i.sstatic.net/TM6Pn3AJ.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I have no idea how to put it together. I have only found how to implement it separately. Any advice?</p>
5
implement regression
ms access: How to implement linear regression
https://stackoverflow.com/questions/30661809/ms-access-how-to-implement-linear-regression
<p>In excel it is possible to implement linear regression graph. But in ms access I could not find anything similar to excel. If there is nothing built in then how can I implement it.?</p>
<p>Just take the data that you generated in Excel for the linear regression and export it to Access. Then click on the chart to get the regression line! This is the basic method.</p>
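If you would rather compute the regression inside the database than round-trip through Excel, note that the least-squares line needs only four aggregates — SUM(x), SUM(y), SUM(x*y), SUM(x*x) — which an Access aggregate query can produce. A small Python sketch of the closed-form formulas (function name and data are mine, for illustration):

```python
def linreg(xs, ys):
    """Least-squares slope and intercept from the same sums a SQL/Access
    aggregate query could compute: SUM(x), SUM(y), SUM(x*y), SUM(x*x)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

slope, intercept = linreg([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # the points lie exactly on y = 2x + 1
```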
6
implement regression
Vectorized Implementation of Softmax Regression
https://stackoverflow.com/questions/8998321/vectorized-implementation-of-softmax-regression
<p>I’m implementing softmax regression in Octave. Currently I’m using a non-vectorized implementation using following cost function and derivatives.</p> <p><a href="https://i.sstatic.net/l6AQf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l6AQf.png" alt="alt text"></a> </p> <p><a href="https://i.sstatic.net/urgQh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/urgQh.png" alt="alt text"></a></p> <p>Source: <a href="http://ufldl.stanford.edu/wiki/index.php/Softmax_Regression" rel="nofollow noreferrer">Softmax Regression</a></p> <p>Now I want to implement vectorized version of it in Octave. It seems like bit hard for me to write vectorized versions for these equations. Can somebody help me to implement this ?</p> <p>Thanks</p> <p>Upul</p>
<p>This is very similar to an exercise in Andrew Ng's deep learning class, they give some hints <a href="http://ufldl.stanford.edu/wiki/index.php/Exercise:Vectorization" rel="nofollow">http://ufldl.stanford.edu/wiki/index.php/Exercise:Vectorization</a></p>
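For a concrete picture of the vectorization, here is a NumPy sketch of the softmax cost and gradient from the equations above (assuming NumPy; the Octave translation is mechanical, with `@` becoming `*` and `np.eye(k)[y]` becoming a sparse indicator matrix). The names are mine, not from the UFLDL starter code:

```python
import numpy as np

def softmax_cost_grad(theta, X, y, k):
    """Vectorized softmax-regression cost and gradient.
    theta: k x d weights, X: n x d features, y: n integer labels in {0..k-1}."""
    scores = X @ theta.T                          # n x k
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability trick
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)             # n x k class probabilities
    Y = np.eye(k)[y]                              # one-hot indicator 1{y_i = j}
    cost = -np.sum(Y * np.log(P)) / len(y)
    grad = -(Y - P).T @ X / len(y)                # k x d, matches the derivative
    return cost, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = np.array([0, 1, 2, 0, 1, 2])
theta = np.zeros((3, 3))
cost, grad = softmax_cost_grad(theta, X, y, k=3)
print(cost)  # log(3) at theta = 0, since all classes are equally likely
```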
7
implement regression
Implement Logistic Regression
https://stackoverflow.com/questions/47278604/implement-logistic-regression
<p>I am applying multiple ML algorithms to this dataset. I tried logistic regression and plotted the predictions, but they seem completely off, since the plot only shows data points from one class. Here is the data and what I attempted:</p> <pre><code>set.seed(10)
x1 &lt;- runif(500) - 0.5
x2 &lt;- runif(500) - 0.5
y &lt;- ifelse(x1 ^ 2 - x2 ^ 2 &gt; 0, 1, 0)
dat &lt;- data.frame(x1, x2, y)

#Logistic Regression
fit.glm &lt;- glm(y ~ x1 + x2, data = dat, family = "binomial")
y.hat.3 &lt;- predict(fit.glm,dat)
plot(x1,x2,col = c("red","blue")[y.hat.3 + 1])
</code></pre>
<p><code>predict</code> returns log-odds for a logistic regression by default. To get predicted classes, use <code>type = "resp"</code> to get predicted probabilities and then use a decision rule like <code>p &gt; 0.5</code> to turn them into classes:</p> <pre><code>y.hat.3 &lt;- predict(fit.glm,dat, type = "resp") &gt; 0.5 plot(x1,x2,col = c("red","blue")[y.hat.3 + 1]) </code></pre>
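The reason the original plot misbehaves is that log-odds are arbitrary real numbers, so `y.hat.3 + 1` is not a valid 1/2 color index. A pure-Python sketch (names mine) of what `type = "resp"` plus the 0.5 cutoff computes:

```python
import math

def to_class(log_odds, cutoff=0.5):
    """Convert a log-odds prediction (the default scale of R's predict()
    for a binomial glm) into a probability (type = "resp") and a 0/1 class."""
    prob = 1.0 / (1.0 + math.exp(-log_odds))  # inverse-logit
    return prob, int(prob > cutoff)

print(to_class(-1.2))  # negative log-odds -> probability below 0.5 -> class 0
print(to_class(0.7))   # positive log-odds -> probability above 0.5 -> class 1
```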
8
implement regression
How to implement regressors in a Hierarchical Series in R, with the Fable package?
https://stackoverflow.com/questions/65685672/how-to-implement-regressors-in-a-hierarchical-series-in-r-with-the-fable-packag
<p>I am new to exploring the fable package, and I was wanting to implement Regressors in a Hierarchical Time Series model. How should the Dimensionality of the data be? Should there be an additional column inside the <code>tsibble</code> object? For example, in an ARIMA model. Thank you very much in advance.</p>
<p>The approach for modelling hierarchical data with exogenous regressors is the same as modelling regular data. The exogenous regressors should be a column of the tsibble object used to estimate the model, for each node in the hierarchy.</p> <p>The code below shows how a simple hierarchy (<code>T = M + F</code>) can be modelled with a dynamic regression model (ARIMA with xreg). Note that the exogenous regressors are just white noise here, but you would use some real data.</p> <pre class="lang-r prettyprint-override"><code>library(fable)
#&gt; Loading required package: fabletools
library(dplyr)

my_data &lt;- as_tsibble(cbind(mdeaths, fdeaths)) %&gt;%
  aggregate_key(key, value = sum(value)) %&gt;%
  # Add the regressor (if you have this in your data, could aggregate it above)
  # If the data is pre-aggregated, specify which keys are &lt;aggregated&gt; with agg_vec().
  mutate(my_xreg = rnorm(nrow(.)))
my_data
#&gt; # A tsibble: 216 x 4 [1M]
#&gt; # Key:       key [3]
#&gt;       index key          value my_xreg
#&gt;       &lt;mth&gt; &lt;chr*&gt;       &lt;dbl&gt;   &lt;dbl&gt;
#&gt;  1 1974 Jan &lt;aggregated&gt;  3035  -1.87
#&gt;  2 1974 Feb &lt;aggregated&gt;  2552   1.93
#&gt;  3 1974 Mar &lt;aggregated&gt;  2704  -0.420
#&gt;  4 1974 Apr &lt;aggregated&gt;  2554   0.332
#&gt;  5 1974 May &lt;aggregated&gt;  2014  -1.10
#&gt;  6 1974 Jun &lt;aggregated&gt;  1655   1.22
#&gt;  7 1974 Jul &lt;aggregated&gt;  1721   1.68
#&gt;  8 1974 Aug &lt;aggregated&gt;  1524  -1.46
#&gt;  9 1974 Sep &lt;aggregated&gt;  1596   0.620
#&gt; 10 1974 Oct &lt;aggregated&gt;  2074  -0.505
#&gt; # … with 206 more rows

my_data %&gt;%
  model(ARIMA(value ~ my_xreg))
#&gt; # A mable: 3 x 2
#&gt; # Key:     key [3]
#&gt;   key          `ARIMA(value ~ my_xreg)`
#&gt;   &lt;chr*&gt;                        &lt;model&gt;
#&gt; 1 fdeaths      &lt;LM w/ ARIMA(0,0,0)(1,1,1)[12] errors&gt;
#&gt; 2 mdeaths      &lt;LM w/ ARIMA(0,0,2)(0,1,2)[12] errors&gt;
#&gt; 3 &lt;aggregated&gt; &lt;LM w/ ARIMA(0,0,2)(2,1,0)[12] errors&gt;
</code></pre> <p><sup>Created on 2021-01-13 by the <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex package</a> (v0.3.0)</sup></p>
9
implement regression
Problem with implement a 4-D Gaussian Processes Regression through GPML
https://stackoverflow.com/questions/69673457/problem-with-implement-a-4-d-gaussian-processes-regression-through-gpml
<p>I refer to the link <a href="https://stats.stackexchange.com/questions/105516/how-to-implement-a-2-d-gaussian-processes-regression-through-gpml-matlab">https://stats.stackexchange.com/questions/105516/how-to-implement-a-2-d-gaussian-processes-regression-through-gpml-matlab</a> and create a 2-d Gaussian Process regression. I want to create a 4-d Gaussian Process regression, however the 'meshgrid' only allows 3 inputs <code>([X,Y,Z] = meshgrid(x,y,z))</code>; how do I add another input into meshgrid?</p> <p>The 3-d code is like:</p> <pre><code>X1train = linspace(-4.5,4.5,10);
X2train = linspace(-4.5,4.5,10);
X3train = linspace(-4.5,4.5,10);
X = [X1train' X2train' X3train'];
Y = [X1train + X2train + X3train]';

%Testdata
[Xtest1, Xtest2, Xtest3] = meshgrid(-4.5:0.1:4.5, -4.5:0.1:4.5, -4.5:0.1:4.5);
Xtest = [Xtest1(:) Xtest2(:) Xtest3(:)];

% implement regression
[ymu ys2 fmu fs2] = gp(hyp, @infExact, [], covfunc, likfunc, X, Y, Xtest);
</code></pre> <p>If I create an X4train, that means I need an Xtest4, how do I add Xtest4 into meshgrid?</p> <p>The GPML code is from <a href="http://www.gaussianprocess.org/gpml/code/matlab/doc/" rel="nofollow noreferrer">http://www.gaussianprocess.org/gpml/code/matlab/doc/</a></p>
<p>You may create n- dimensional grids using <a href="https://de.mathworks.com/help/matlab/ref/ndgrid.html" rel="nofollow noreferrer">ndgrid</a>, but please keep in mind that it does not directly create the same output as meshgrid, you have to convert it first. (How to do that is also explained in the documentation)</p>
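For comparison, in Python the analogous n-dimensional grid (any number of axes) can be built with `itertools.product`. The sketch below (function name mine) produces the row-wise list of grid points that plays the role of `[X1(:) X2(:) ... Xn(:)]`; note the row ordering differs from MATLAB's column-major flattening, which usually does not matter for prediction points:

```python
from itertools import product

def grid_points(*axes):
    """Cartesian product of any number of 1-D axes: one row per grid point."""
    return [list(p) for p in product(*axes)]

axis = [-1.0, 0.0, 1.0]
pts = grid_points(axis, axis, axis, axis)  # a 4-D grid from four axes
print(len(pts))  # 3^4 = 81 points, each with 4 coordinates
```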
10
implement regression
How to implement multiple regression?
https://stackoverflow.com/questions/61335228/how-to-implement-multiple-regression
<p>I am practicing simple regression models as an intro to machine learning. I have reviewed a few sample models for multiple regression, which is, I believe, an extension of linear regression, but with more than 1 feature. From the examples I have seen, the syntax is the same for linear regression and multiple regression. I get this error when running the code below: </p> <pre><code>ValueError: x and y must be the same size.
</code></pre> <p>Why do I get this error, and how can I fix it?</p> <pre><code>import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

df = pd.read_csv(r"C:\Users\****\Desktop\data.csv")
#x.shape =(20640, 2), y=(20640,)
X = df[['total_rooms', 'median_income']]
y = df['median_house_value']

X_test, y_test, X_train, y_train = train_test_split(X, y, test_size=.2, random_state=0)

reg = LinearRegression()
reg.fit(X_train, y_train)
</code></pre> <p>Am I missing a step? Thanks for your time.</p>
<p>You have a mistake in your <code>train_test_split</code> - the order of results matters; the correct usage is:</p> <pre><code>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=0) </code></pre> <p>Check the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html" rel="nofollow noreferrer">documentation</a>.</p>
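With the swapped unpacking, the variable called `X_train` actually receives the training labels and `y_train` the test labels, so `reg.fit` sees arrays of different lengths, hence the size error. A standard-library sketch of the return order (a made-up helper that only mimics sklearn's ordering, without shuffling):

```python
def toy_train_test_split(X, y, test_size=0.2):
    """Mimics the RETURN ORDER of sklearn's train_test_split:
    X_train, X_test, y_train, y_test."""
    n_test = int(len(X) * test_size)
    return X[:-n_test], X[-n_test:], y[:-n_test], y[-n_test:]

X = [[i] for i in range(10)]
y = list(range(10))
X_train, X_test, y_train, y_test = toy_train_test_split(X, y)
print(len(X_train), len(X_test), len(y_train), len(y_test))  # 8 2 8 2
```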
11
implement regression
Logistic regression Python implementation
https://stackoverflow.com/questions/66051281/logistic-regression-python-implementation
<p>I tried to implement logistic regression only with numpy in Python, but the result is not satisfying. The predictions seem incorrect and the loss is not improving, so there is probably something wrong with the code. Does anyone know what could fix it? Thank you very much!</p> <p>Here is the algorithm:</p> <pre><code>import numpy as np

# training data and labels
X = np.concatenate((np.random.normal(0.25, 0.1, 50), np.random.normal(0.75, 0.1, 50)), axis=None)
Y = np.concatenate((np.zeros((50,), dtype=np.int32), np.ones((50,), dtype=np.int32)), axis=None)

def logistic_sigmoid(a):
    return 1 / (1 + np.exp(-a))

# forward pass
def forward_pass(w, x):
    return logistic_sigmoid(w * x)

# gradient computation
def backward_pass(x, y, y_real):
    return np.sum((y - y_real) * x)

# computing loss
def loss(y, y_real):
    return -np.sum(y_real * np.log(y) + (1 - y_real) * np.log(1 - y))

# training
def train():
    w = 0.0
    learning_rate = 0.01
    i = 200
    test_number = 0.3
    for epoch in range(i):
        y = forward_pass(w, X)
        gradient = backward_pass(X, y, Y)
        w = w - learning_rate * gradient
        print(f'epoch {epoch + 1}, x = {test_number}, y = {forward_pass(w, test_number):.3f}, loss = {loss(y, Y):.3f}')

train()
</code></pre>
<p>At first glance you are missing your intercept term (typically called b_0, or bias) and its gradient update. Also, in the <code>backward_pass</code> and <code>loss</code> calculations you are not dividing by the number of data samples.</p> <p>You can see two examples of how to implement it from scratch here:</p> <p>1: <a href="https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc" rel="nofollow noreferrer">Example based on Andrew Ng explanations in the Machine Learning course in Coursera</a></p> <p>2: <a href="https://machinelearningmastery.com/implement-logistic-regression-stochastic-gradient-descent-scratch-python/" rel="nofollow noreferrer">Implementation of Jason Brownlee from Machine Learning mastery website</a></p>
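Putting the answer's two fixes together (a bias term plus a mean rather than summed gradient), here is a standard-library sketch on a deterministic stand-in for the question's two clusters around 0.25 and 0.75 (data and names are mine, for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.5, epochs=2000):
    w, b = 0.0, 0.0                    # weight AND bias term
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)         # mean gradient, not raw sum
        b -= lr * gb / len(xs)
    return w, b

# two clusters: around 0.25 (label 0) and around 0.75 (label 1)
xs = [0.15, 0.2, 0.25, 0.3, 0.35, 0.65, 0.7, 0.75, 0.8, 0.85]
ys = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
w, b = train(xs, ys)
p25 = sigmoid(w * 0.25 + b)   # should fall below 0.5
p75 = sigmoid(w * 0.75 + b)   # should rise above 0.5
print(w, b, p25, p75)
```

Without the bias `b`, the decision boundary is pinned at x = 0, which is exactly why the original code could not separate clusters centered at 0.25 and 0.75.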
12
implement regression
How do you properly implement regression with categorical explanatory variables missing values when gtsummary tbl_uvregression?
https://stackoverflow.com/questions/63180323/how-do-you-properly-implement-regression-with-categorical-explanatory-variables
<p>I am using <code>tbl_uvregression</code> doing logistic regression but some of the categorical explanatory variables have missing values. The missing value category is being chosen as the reference category. How do I implement the function such that I only use complete cases for each variable?</p> <p>Sample data below</p> <p><code>structure(list(time = structure(c(5, 42, 23, 3, 26, 1, 7, 28, 5, 2, 1, 3, 23, 3, 11, 4, 2, 36, 2, 4, 53, 4, 5, 64, 14, 4, 5, 3, 31, 10, 26, 39, 4, 24, 4, 4, 6, 21, 15, 5, 3, 9, 3, 29, 63, 2, 1, 1, 16, 9, 3, 24, 1, 9, 23, 1, 6, 4, 6, 22, 57, 18, 11, 5, 9, 40, 3, 9, 1, 5, 6, 4, 12, 13, 19, 3, 6, 9, 1, 3, 29, 5, 4, 47, 33, 31, 1, 10, 18, 3, 9, 7, 42, 5, 16, 52, 1, 5, 1, 3, 5, 5, 9, 8, 17, 4, 21, 1, 22, 12, 3, 19, 10, 1, 10, 10, 1, 9, 1, 13, 8, 14, 2, 1, 32, 9, 17, 1, 5, 1, 6, 7, 7, 28, 5, 8, 33, 2, 1, 4, 1, 31, 1, 1, 45, 5, 4, 11, 1, 8, 1, 21, 8, 14, 1, 1, 3, 1, 12, 4, 6, 1, 2, 1, 2, 1, 1, 21, 2, 1, 3, 8, 12, 7, 1, 6, 9, 12, 3, 1, 6, 1, 8, 3, 21, 3), format.stata = &quot;%10.0g&quot;), OutcomeDischarge0Death1 = structure(c(0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), label = &quot;Outcome (Discharge = 0, Death = 1)&quot;, format.stata = &quot;%10.0g&quot;), HTN = structure(c(1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 3L, 3L, 1L, 3L, 1L, 3L, 1L, 3L, 1L, 3L, 3L, 1L, 3L, 1L, 2L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 3L, 3L, 3L, 1L, 1L, 3L, 3L, 1L, 1L, 3L, 1L, 1L, 3L, 1L, 1L, 
1L, 3L, 3L, 3L, 1L, 1L, 3L, 1L, 3L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 3L, 3L, 3L, 1L, 1L, 3L, 1L, 3L, 1L, 3L, 1L, 3L, 1L, 3L, 1L, 3L, 3L, 1L, 1L, 3L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 3L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 3L, 2L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 2L, 3L, 1L, 1L, 2L), .Label = c(&quot;N&quot;, &quot;&quot;, &quot;Y&quot;), class = &quot;factor&quot;)), row.names = c(NA, -186L), class = c(&quot;tbl_df&quot;, &quot;tbl&quot;, &quot;data.frame&quot;))</code></p> <p><code>surv$HTN &lt;- forcats::fct_relevel(neil$HTN, &quot;N&quot;)</code></p> <pre><code> tbl_uvregression( neil[c(&quot;time&quot;, &quot;OutcomeDischarge0Death1&quot;, &quot;HTN&quot;)], method = coxph, y = Surv(time, OutcomeDischarge0Death1), exponentiate = TRUE, pvalue_fun = function(x) style_pvalue(x, digits = 2) )%&gt;% </code></pre>
<p>It looks like the issue is there are no missing values in the column <code>HTN</code>. There are, however, 4 observations that are blank, but this is not missing.</p> <p><a href="https://i.sstatic.net/1cm3u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1cm3u.png" alt="enter image description here" /></a></p> <p>If you first make the empty string an NA (aka missing), then those observations will be omitted from the model.</p> <pre class="lang-r prettyprint-override"><code>library(gtsummary)
library(survival)

neil %&gt;%
  # make the missing string NA (aka missing)
  dplyr::mutate(HTN = ifelse(HTN == &quot;&quot;, NA, HTN)) %&gt;%
  # building univariate regression models
  tbl_uvregression(
    method = coxph,
    y = Surv(time, OutcomeDischarge0Death1),
    exponentiate = TRUE,
    pvalue_fun = function(x) style_pvalue(x, digits = 2)
  )
</code></pre> <p><a href="https://i.sstatic.net/0g6lt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0g6lt.png" alt="enter image description here" /></a></p> <p>Happy Programming!</p>
13
implement regression
Python Implementation of Logistic Regression as Regression (Not Classification!)
https://stackoverflow.com/questions/65268985/python-implementation-of-logistic-regression-as-regression-not-classification
<p>I have a regression problem on which I want to use logistic regression - not logistic classification - because my target variables <code>y</code> are continuous quantities between 0 and 1. However, the common implementations of logistic regression in Python seem to be exclusively logistic classification. I've also looked at GLM implementations and none seem to have implemented a sigmoid link function. Can someone point me in the direction of a Python implementation of logistic regression as a regression algorithm?</p>
<p>In statsmodels both GLM with family Binomial and discrete model Logit allow for a continuous target variable as long as the values are restricted to interval [0, 1].</p> <p>Similarly, Poisson is very useful to model non-negative valued continuous data.</p> <p>In these cases, the model is estimated by quasi maximum likelihood, QMLE, and not by MLE, because the distributional assumptions are not correct. Nevertheless, we can correctly (consistently) estimate the mean function. Inference needs to be based on misspecification robust standard errors which are available as <code>fit</code> option <code>cov_type=&quot;HC0&quot;</code></p> <p>Here is a notebook with example <a href="https://www.statsmodels.org/dev/examples/notebooks/generated/quasibinomial.html" rel="nofollow noreferrer">https://www.statsmodels.org/dev/examples/notebooks/generated/quasibinomial.html</a></p> <p>some issues with background for QMLE and fractional Logit <a href="https://www.github.com/statsmodels/statsmodels/issues/2040" rel="nofollow noreferrer">https://www.github.com/statsmodels/statsmodels/issues/2040</a> QMLE <a href="https://github.com/statsmodels/statsmodels/issues/2712" rel="nofollow noreferrer">https://github.com/statsmodels/statsmodels/issues/2712</a></p> <p>Reference</p> <p>Papke, L.E. and Wooldridge, J.M. (1996), Econometric methods for fractional response variables with an application to 401(k) plan participation rates. J. Appl. Econ., 11: 619-632. <a href="https://doi.org/10.1002/(SICI)1099-1255(199611)11:6" rel="nofollow noreferrer">https://doi.org/10.1002/(SICI)1099-1255(199611)11:6</a>&lt;619::AID-JAE418&gt;3.0.CO;2-1</p> <p><strong>update and Warning</strong></p> <p>as of statsmodels 0.12</p> <p>Investigating this some more, I found that discrete Probit does not support continuous interval data. It uses a computational shortcut that assumes that the values of the dependent variable are either 0 or 1. However, it does not raise an exception in this case. 
<a href="https://github.com/statsmodels/statsmodels/issues/7210" rel="nofollow noreferrer">https://github.com/statsmodels/statsmodels/issues/7210</a></p> <p>Discrete Logit works correctly for continuous data with optimization method &quot;newton&quot;. The loglikelihood function itself uses a similar computational shortcut as Probit, but not the derivatives and other parts of Logit.</p> <p>GLM-Binomial is designed for interval data and has no problems with it. The only numerical precision problems are currently in the Hessian of the probit link that uses numerical derivatives and is not very precise, which means that the parameters are well estimated but standard error can have numerical noise in GLM-Probit.</p> <p><strong>update</strong> Two changes in statsmodels 0.12.2:<br /> Probit now raises exception if response is not integer valued, and<br /> GLM Binomial with Probit link uses improved derivatives for Hessian with precision now similar to discrete Probit.</p>
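To see why the Bernoulli objective remains usable for fractional responses, here is a small pure-Python sketch (data and names are made up for illustration, not statsmodels internals): the quasi-log-likelihood below is finite and smooth for any y in (0, 1), not just for 0/1 labels, which is the key fact behind fractional logit QMLE.

```python
import math

def bernoulli_quasi_loglik(params, xs, ys):
    """Bernoulli (quasi-)log-likelihood for a one-predictor logit mean;
    stays well defined when the targets y are fractions in (0, 1)."""
    b0, b1 = params
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.35, 0.6, 0.9]   # continuous responses in (0, 1)
ll = bernoulli_quasi_loglik((0.0, 0.0), xs, ys)
print(ll)  # at (0, 0) every p is 0.5, so the value is 4 * log(0.5)
```

Maximizing this objective gives the QMLE described above; the distribution is misspecified, but the logit mean function is still consistently estimated.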
14
implement regression
Implementing Kernel Ridge Regression in R
https://stackoverflow.com/questions/33863234/implementing-kernel-ridge-regression-in-r
<p>I want to implement kernel ridge regression in R. My problem is that I can't figure out how to generate the kernel values and I do not know how to use them for the ridge regression. I want to use the following kernel function:</p> <pre><code>kernel.eval &lt;- function(x1,x2,ker) {
  k=0
  if (kertype == 'RBF') { # RBF kernel
    k=exp(-sum((x1-x2)*(x1-x2)/(2*kerparam^2)))
  } else { # polynomial kernel
    k=(1+sum(x1*x2))^ker$param
  }
  return(k)
}
</code></pre> <p>Furthermore, I know that the formula for ridge regression is:</p> <pre><code>myridge.fit &lt;- function(X,y,lambda) {
  w = solve((t(X) %*% X) + (lambda*diag(dim(X)[2])), (t(X) %*% y))
  return(w)
}
</code></pre> <p>Example training data:</p> <pre><code>           [,1]       [,2]
[1,] -1.3981847 -1.3358413
[2,]  0.2698321  1.0661275
[3,]  0.3429286  0.8805642
[4,]  0.5210577  1.1228635
[5,]  1.5755659  0.2230754
[6,] -1.2167197 -0.6700215
</code></pre> <p>Example testing data: (I do not know if I need these at this moment)</p> <pre><code>      [,1]   [,2]
[1,] -2.05 -2.050
[2,] -2.05 -2.009
[3,] -2.05 -1.968
[4,] -2.05 -1.927
[5,] -2.05 -1.886
[6,] -2.05 -1.845
</code></pre> <p>Is anyone able to help me with the first step(s)? I have to do Ridge Regression for a RBF kernel as well as a Polynomial kernel.</p>
<p>Following is the code for a polynomial kernel with degree 2, hope that helps!</p> <pre><code>poly.kernel &lt;- function(v1, v2=v1, p=2) {
  ((as.matrix(v1) %*% t(v2))+1)^p
}

KernelRidgeReg &lt;- function(TrainObjects,TrainLabels,TestObjects,lambda){

  X &lt;- TrainObjects
  y &lt;- TrainLabels
  kernel &lt;- poly.kernel(X)

  design.mat &lt;- cbind(1, kernel)
  I &lt;- rbind(0, cbind(0, kernel))
  M &lt;- crossprod(design.mat) + lambda*I
  # crossprod is just x times transpose of x, just looks neater in my opinion
  M.inv &lt;- solve(M)  # inverse of M

  k &lt;- as.matrix(diag(poly.kernel(cbind(TrainObjects,TrainLabels))))
  # Removing diag still gives the same MSE, but will output a vector of predictions.

  Labels &lt;- rbind(0,as.matrix(TrainLabels))
  y.hat &lt;- t(Labels) %*% M.inv %*% rbind(0,k)
  y.true &lt;- Y.test
  MSE &lt;- mean((y.hat - y.true)^2)
  return(list(MSE=MSE,y.hat=y.hat))
}
</code></pre> <p>The built-in R function <code>solve</code> sometimes returns a singular-matrix error. You may want to write your own function to avoid that.</p>
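For comparison, the standard dual-form solution can be sketched compactly in Python (assuming NumPy; names and data are mine, and this deliberately drops the intercept-padding trick from the R answer): fit alpha = (K + lambda*I)^-1 y on the training kernel matrix, then predict with the test-versus-train kernel.

```python
import numpy as np

def poly_kernel(A, B, degree=2):
    # polynomial kernel (x . z + 1)^degree, computed for all row pairs
    return (A @ B.T + 1.0) ** degree

def kernel_ridge_fit_predict(X, y, X_test, lam=1e-2, degree=2):
    """Dual-form kernel ridge regression:
    alpha = (K + lam*I)^-1 y, prediction = K(test, train) @ alpha."""
    K = poly_kernel(X, X, degree)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return poly_kernel(X_test, X, degree) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
y = X[:, 0] ** 2 + X[:, 1]   # a quadratic target suits a degree-2 kernel
pred = kernel_ridge_fit_predict(X, y, X, lam=1e-6)
err = np.max(np.abs(pred - y))
print(err)  # near-interpolation of the training data with a tiny lambda
```

Note the regularizer `lam * np.eye(...)` is exactly what keeps the linear system nonsingular, unlike an unregularized normal-equations solve.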
15
implement regression
Random Forest Regression Implementation in PySpark
https://stackoverflow.com/questions/57587475/random-forest-regression-implementation-in-pyspark
<p>I want to implement Random forest regression in pyspark after all data preparation. I want sample code for implementation.</p>
<p>From the doc (<a href="https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.RandomForestRegressor" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.RandomForestRegressor</a>) :</p> <pre><code>&gt;&gt;&gt; from numpy import allclose &gt;&gt;&gt; from pyspark.ml.linalg import Vectors &gt;&gt;&gt; from pyspark.ml.regression import RandomForestRegressor &gt;&gt;&gt; df = spark.createDataFrame([ ... (1.0, Vectors.dense(1.0)), ... (0.0, Vectors.sparse(1, [], []))], ["label", "features"]) &gt;&gt;&gt; rf = RandomForestRegressor(numTrees=2, maxDepth=2, seed=42) &gt;&gt;&gt; model = rf.fit(df) &gt;&gt;&gt; model.featureImportances SparseVector(1, {0: 1.0}) &gt;&gt;&gt; allclose(model.treeWeights, [1.0, 1.0]) True &gt;&gt;&gt; test0 = spark.createDataFrame([(Vectors.dense(-1.0),)], ["features"]) &gt;&gt;&gt; model.transform(test0).head().prediction 0.0 &gt;&gt;&gt; model.numFeatures 1 &gt;&gt;&gt; model.trees [DecisionTreeRegressionModel (uid=...) of depth..., DecisionTreeRegressionModel...] &gt;&gt;&gt; model.getNumTrees 2 &gt;&gt;&gt; test1 = spark.createDataFrame([(Vectors.sparse(1, [0], [1.0]),)], ["features"]) &gt;&gt;&gt; model.transform(test1).head().prediction 0.5 &gt;&gt;&gt; rfr_path = temp_path + "/rfr" &gt;&gt;&gt; rf.save(rfr_path) &gt;&gt;&gt; rf2 = RandomForestRegressor.load(rfr_path) &gt;&gt;&gt; rf2.getNumTrees() 2 &gt;&gt;&gt; model_path = temp_path + "/rfr_model" &gt;&gt;&gt; model.save(model_path) &gt;&gt;&gt; model2 = RandomForestRegressionModel.load(model_path) &gt;&gt;&gt; model.featureImportances == model2.featureImportances True </code></pre>
16
implement regression
What is the easiest to implement linear regression algorithm?
https://stackoverflow.com/questions/31735595/what-is-the-easiest-to-implement-linear-regression-algorithm
<p>I want to implement single variable regression using ordinary least squares. I have no access to linear algebra or calculus libraries, so any matrix operations or differentiation methods need to be implemented by me. What is the least complex method?</p>
<p>John D. Cook has an <a href="http://www.johndcook.com/blog/running_regression/" rel="nofollow">excellent post</a> on the subject with a simple C++ implementation. His implementation uses constant memory and can be parallelized with little effort. </p> <p>I wrote a simple Python version of it. Use with caution, there may be bugs:</p> <pre class="lang-py prettyprint-override"><code>class Regression: def __init__(self): self.n = 0.0 self.sXY = 0.0 self.xM1 = 0.0 self.xM2 = 0.0 self.yM1 = 0.0 self.yM2 = 0.0 def add(self, x, y): self.sXY += (self.xM1 - x) * (self.yM1 - y) * self.n / (self.n + 1.0); n1 = self.n; self.n+=1; xdelta = x - self.xM1; xdelta_n = xdelta / self.n; self.xM1 += xdelta_n; self.xM2 += xdelta * xdelta_n * n1; ydelta = y - self.yM1; ydelta_n = ydelta / self.n; self.yM1 += ydelta_n; self.yM2 += ydelta * ydelta_n * n1; def count(self): return self.n def slope(self): return self.sXY / self.xM2 def intercept(self): return self.yM1 - (self.sXY / self.xM2) * self.xM1 def correlation(self): return self.sXY / (self.xM2**0.5 * self.yM2**0.5) def covariance(self): return self.sXY / self.n r = Regression() r.add(1, 2) r.add(4, 9) r.add(16, 17) r.add(17, 13) r.add(21, 11) print('Count:', r.count()) print('Slope:', r.slope()) print('Intercept:', r.intercept()) print('Correlation:', r.correlation()) print('Covariance:', r.covariance()) </code></pre>
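If even the running estimator above is more machinery than needed, the closed-form least-squares solution for a single variable is arguably the least complex implementation: slope = sum((x - mean_x) * (y - mean_y)) / sum((x - mean_x)^2) and intercept = mean_y - slope * mean_x. A stdlib-only sketch:

```python
def simple_ols(xs, ys):
    # Closed-form ordinary least squares for y = intercept + slope * x.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = simple_ols([1, 2, 3, 4], [3, 5, 7, 9])  # exactly y = 2x + 1
# -> (2.0, 1.0)
```

No matrices, no derivatives at run time: the differentiation has already been done on paper to produce the two closed-form sums.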
17
implement regression
How to implement multivariate regularized linear regression in Dlib?
https://stackoverflow.com/questions/54633033/how-to-implement-multivariate-regularized-linear-regression-in-dlib
<p>I implemented simple linear regression in <a href="http://dlib.net/" rel="nofollow noreferrer">Dlib</a> as a single-layer perceptron with MSE-loss with a single output. The network type is:</p> <pre><code>dlib::loss_mean_squared&lt;dlib::fc&lt;1,dlib::input&lt;dlib::matrix&lt;float&gt;&gt;&gt;&gt; </code></pre> <p>Now I want to modify the loss function to add L1 regularizer. Is there a simple way to do it? As I understand, there's no ready-to-use loss layer for that in Dlib? </p> <p>Also, is it ok to implement linear regression in Dlib this way, or are there more appropriate primitives in Dlib that would calculate regression coefficients analytically, e.g. like <code>LinearRegression</code> or <code>Lasso</code> from scikit-learn?</p> <p><strong>UPD:</strong> Seems like <a href="http://dlib.net/ml.html#rr_trainer" rel="nofollow noreferrer">dlib::rr_trainer</a> (ridge linear regression, L2 regularization) is roughly what I need. But I can't find out how to make it predict multiple outputs. If I supply a vector of samples as a second argument to <code>train()</code> function, where each sample is a column-matrix <code>dlib::matrix&lt;N,1&gt;</code> of target outputs, it segfaults. So far it worked only for a single output, i.e. each output sample is a <code>float</code>.</p>
18
implement regression
method for implementing regression tree on raster data - python
https://stackoverflow.com/questions/26104434/method-for-implementing-regression-tree-on-raster-data-python
<p>I'm trying to build and implement a regression tree algorithm on some raster data in python, and can't seem to find the best way to do so. I will attempt to explain what I'm trying to do:</p> <p>My desired output is a raster image, whose values represent lake depth, call it depth.tif. I have a series of raster images, each representing the reflectance values in different Landsat bands, say [B1.tif, B2.tif, ..., B7.tif] that I want to use as my independent variables to predict lake depth.</p> <p>For my training data, I have a shapefile of ~6000 points of known lake depth. To create a tree, I extracted the corresponding reflectance values for each of those points, then exported that to a table. I then used that table in weka, a machine-learning software, to create a 600-branch regression tree that would predict depth values based on the set of reflectance values. But because the tree is so large, I can't write it in python manually. I ran into the python-weka-wrapper module so I can use weka in python, but have gotten stuck with the whole raster part. Since my data has an extra dimension (if converted to array, each independent variable is actually a set of ncolumns x nrows values instead of just a row of values, like in all of the examples), I don't know if it can do what I want. In all the examples for the weka-python-wrapper, I can't find one that deals with spatial data, and I think this is what's throwing me off.</p> <p>To clarify, I want to use the training data (which is a point shapefile/table right now but can- if necessary- be converted into a raster of the same size as the reflectance rasters, with no data in all cells except for the few points I have known depth data at), to build a regression tree that will use the reflectance rasters to predict depth. 
Then I want to apply that tree to the same set of reflectance rasters, in order to obtain a raster of predicted depth values <em>everywhere</em>.</p> <p>I realize this is confusing and I may not be doing the best job at explaining. I am open to other options besides just trying to implement weka in python, such as sklearn, as long as they are open source. My question is, can what I described be done? I'm pretty sure it can, as it's very similar to image classification, with the exception that the target values (depth) are continuous and not discrete classes but so far I have failed. If so, what is the best/most straight-forward method and/or are there any examples that might help?</p> <p>Thanks</p>
<p>I have had some experience using LandSat Data for the prediction of environmental properties of soil, which seems to be somewhat related to the problem that you have described above. Although I developed my own models at the time, I could describe the general process that I went through in order to map the predicted data.</p> <p>For the training data, I was able to extract the LandSat values (in addition to other properties) for the spatial points where known soil samples were taken. This way, I could use the LandSat data as inputs for predicting the environmental data. A part of this data would also be reserved for testing to confirm that the trained models were not overfitting to training data and that it predicted the outputs well.</p> <p>Once this process was completed, it would be possible to map the desired area by getting the spatial information at each point of the desired area (matching the resolution of the desired image). From there, you should be able to input these LandSat factors into the model for prediction and the output used to map the predicted depth. You could likely just use Weka in this case to predict all of the cases, then use another tool to build the map from your estimates.</p> <p>I believe I whipped up some code long ago to extract each of my required factors in ArcGIS, but it's been a while since I did this. There should be some good tutorials out there that could help you in that direction.</p> <p>I hope this helps in your particular situation.</p>
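As a concrete illustration of the mapping step described above, the per-pixel prediction is mostly a reshaping exercise: stack the band rasters into one feature row per pixel, run the trained model on every row, and reshape the predictions back onto the grid. This stdlib-only Python sketch uses two tiny hand-made "band" grids in place of real rasters and a made-up linear `predict_depth` stand-in where you would call your trained regression tree (from Weka, scikit-learn, etc.):

```python
# Two tiny 2x3 "band" rasters; real data would come from B1.tif ... B7.tif.
band1 = [[0.1, 0.2, 0.3],
         [0.4, 0.5, 0.6]]
band2 = [[1.0, 0.9, 0.8],
         [0.7, 0.6, 0.5]]

def predict_depth(features):
    # Stand-in for model.predict(row); any trained regressor fits here.
    b1, b2 = features
    return 10.0 * b1 + 2.0 * b2

n_rows, n_cols = len(band1), len(band1[0])

# 1. Flatten the band stack into one feature row per pixel.
pixel_table = [(band1[r][c], band2[r][c])
               for r in range(n_rows) for c in range(n_cols)]

# 2. Predict depth for every pixel.
flat_pred = [predict_depth(p) for p in pixel_table]

# 3. Reshape the predictions back into the raster grid.
depth_raster = [flat_pred[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]
```

The same three steps carry over directly to numpy arrays (stack the bands, reshape to `(n_pixels, n_bands)`, predict, reshape back to `(n_rows, n_cols)`), which is the usual way a scikit-learn regressor is applied to a whole raster.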
19
implement regression
How to implement a weighted logistic regression in JAGS?
https://stackoverflow.com/questions/67762244/how-to-implement-a-weighted-logistic-regression-in-jags
<p>I am performing a resource selection function using use and availability locations for a set of animals. For this type of analysis, an infinitely weighted logistic regression is suggested (Fithian and Hastie 2013) and is done by setting weights of used locations to 1 and available locations to some large number (e.g. 10,000). I know that implementing this approach using the glm function in R would be relatively simple</p> <pre><code>model1 &lt;- glm(used ~ covariates , family=binomial, weights=weights) </code></pre> <p>I am attempting to implement this as part of a larger hierarchical bayesian model, and thus need to figure out how to incorporate weights in JAGS. In my searching online, I have not been able to find a clear example of how to use weights in specifically a logistic regression. For a poisson model, I have seen suggestions to just multiply the weights by lambda <a href="https://stats.stackexchange.com/questions/66832/weighted-generalized-regression-in-bugs-jags">such as described here.</a> I was uncertain if this logic would hold for weights in a logistic regression. Below is an excerpt of JAGS code for the logistic regression in my model.</p> <pre><code> alpha_NSel ~ dbeta(1,1) intercept_NSel &lt;- logit(alpha_NSel) beta_SC_NSel ~ dnorm(0, tau_NSel) tau_NSel &lt;- 1/(pow(sigma_NSel,2)) sigma_NSel ~ dunif(0,50) for(n in 1:N_NSel){ logit(piN[n]) &lt;- intercept_NSel + beta_SC_NSel*cov_NSel[n] yN[n] ~ dbern(piN[n]) } </code></pre> <p>To implement weights, would I simply change the bernoulli trial to the below? In this case, I assume I would need to adjust weights so that they are between 0 and 1. So weights for used are 1/10,000 and available are 1?</p> <pre><code>yN[n] ~ dbern(piN[n]*weights[n]) </code></pre>
20
implement regression
GridSearch implementation for Keras Regression
https://stackoverflow.com/questions/52551511/gridsearch-implementation-for-keras-regression
<p>Trying to understand and implement the GridSearch method for Keras regression. Here is my simple reproducible regression application. </p> <pre><code>import pandas as pd import numpy as np import sklearn from sklearn.model_selection import train_test_split from sklearn import metrics from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.callbacks import EarlyStopping from keras.callbacks import ModelCheckpoint df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/concrete/slump/slump_test.data") df.drop(['No','FLOW(cm)','Compressive Strength (28-day)(Mpa)'],1,inplace=True) # Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. 
if target_type in (np.int64, np.int32): # Classification dummies = pd.get_dummies(df[target]) return df.as_matrix(result).astype(np.float32), dummies.as_matrix().astype(np.float32) else: # Regression return df.as_matrix(result).astype(np.float32), df.as_matrix([target]).astype(np.float32) x,y = to_xy(df,'SLUMP(cm)') x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) #Create Model model = Sequential() model.add(Dense(128, input_dim=x.shape[1], activation='relu')) model.add(Dense(64, activation='relu')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-5, patience=5, mode='auto') checkpointer = ModelCheckpoint(filepath="best_weights.hdf5",save_best_only=True) # save best model model.fit(x_train,y_train,callbacks=[monitor,checkpointer],verbose=0,epochs=1000) #model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor,checkpointer],verbose=0,epochs=1000) pred = model.predict(x_test) score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("(RMSE): {}".format(score)) </code></pre> <p>If you run the code, you can see that the <code>loss</code> values are not too large. </p> <p><a href="https://i.sstatic.net/Y1f2X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y1f2X.png" alt="Regression Result"></a></p> <p>And here is my reproducible GridSearch implementation. First of all, I simply searched the web and found a GridSearch application for <code>KerasClassifier</code>, then tried to revise it for <code>KerasRegressor</code>. I am not sure if my revision is correct. If I assume the general concept is correct, there must be a problem in this code, because the loss function's output does not make sense. The loss function is <code>MSE</code>, but the output is negative; unfortunately, I could not figure out where I am going wrong. 
</p> <pre><code>from keras.wrappers.scikit_learn import KerasRegressor import pandas as pd import numpy as np import sklearn from sklearn.model_selection import train_test_split from sklearn import metrics from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.callbacks import EarlyStopping from keras.callbacks import ModelCheckpoint from sklearn.model_selection import GridSearchCV df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/concrete/slump/slump_test.data") df.drop(['No','FLOW(cm)','Compressive Strength (28-day)(Mpa)'],1,inplace=True) #Convert a Pandas dataframe to the x,y inputs that TensorFlow needs def to_xy(df, target): result = [] for x in df.columns: if x != target: result.append(x) # find out the type of the target column. Is it really this hard? :( target_type = df[target].dtypes target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type # Encode to int for classification, float otherwise. TensorFlow likes 32 bits. 
if target_type in (np.int64, np.int32): #Classification dummies = pd.get_dummies(df[target]) return df.as_matrix(result).astype(np.float32), dummies.as_matrix().astype(np.float32) else: #Regression return df.as_matrix(result).astype(np.float32), df.as_matrix([target]).astype(np.float32) x,y = to_xy(df,'SLUMP(cm)') x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) def create_model(optimizer='adam'): # create model model = Sequential() model.add(Dense(128, input_dim=x.shape[1], activation='relu')) model.add(Dense(64, activation='relu')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer=optimizer,metrics=['mse']) return model model = KerasRegressor(build_fn=create_model, epochs=100, batch_size=10, verbose=0) optimizer = ['SGD', 'RMSprop', 'Adagrad'] param_grid = dict(optimizer=optimizer) grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1) grid_result = grid.fit(x_train, y_train) #summarize results print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) </code></pre> <p><a href="https://i.sstatic.net/1QSQN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1QSQN.png" alt="Grid Search Result"></a></p>
<p>I have tested your code, and I have seen that you are not using a scoring function in GridSearchCV, so according to the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV" rel="nofollow noreferrer">scikit-learn documentation</a>:</p> <blockquote> <p>If None, the estimator’s default scorer (if available) is used.</p> </blockquote> <p>It seems like it is using <code>'neg_mean_absolute_error'</code> (or one of <a href="https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values" rel="nofollow noreferrer">these scoring functions for regression</a>) by default for scoring the models.</p> <p>That is probably why it says that the best model is:</p> <pre><code>-75.820078 using {'optimizer':'Adagrad'} </code></pre>
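One detail worth spelling out about the negative scores: GridSearchCV always maximizes its score, so error metrics are exposed in negated form (hence the `neg_` prefix), and a `best_score_` of about -75.8 simply means an error of about 75.8. A tiny stdlib-only illustration of selecting a model by maximizing a negated error, with made-up predictions:

```python
def neg_mean_squared_error(y_true, y_pred):
    # Negated MSE: larger (closer to zero) is better, as GridSearchCV expects.
    return -sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
candidates = {
    "model_a": [1.5, 2.5, 3.5],  # off by 0.5 everywhere -> MSE 0.25
    "model_b": [1.1, 2.1, 3.1],  # off by 0.1 everywhere -> MSE 0.01
}
scores = {name: neg_mean_squared_error(y_true, pred)
          for name, pred in candidates.items()}
best = max(scores, key=scores.get)  # maximizing the negated error picks model_b
```

Passing `scoring='neg_mean_squared_error'` explicitly to GridSearchCV makes the sign convention visible in the code instead of relying on the estimator's default scorer.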
21
implement regression
How to Implement GridSearch for Models other than Regression Models?
https://stackoverflow.com/questions/73031371/how-to-implement-gridsearch-for-models-other-than-regression-models
<p>Is it possible to use GridSearchCV for models that don't deal with regression? I have been able to implement HyperOpt and Bayes_Opt for the model in question.</p>
22
implement regression
Getting NaN while implementing Linear regression
https://stackoverflow.com/questions/49600778/getting-nan-while-implementing-linear-regression
<p>I am trying to implement linear regression in R. Below is my code:</p> <pre><code>library(ggplot2) df &lt;- data.frame() df&lt;-cbind(c(10000,20000,5000,5123,5345,5454,11000,23000,6000,6100,6300), c(5600,21000,1000,2000,2300,3000,7000,21400,3200,3250,3300)) df &lt;- as.data.frame(df) colnames(df)&lt;-c("Population","Profit") plot(df,df$Population,df$Profit) X&lt;-df$Population Y&lt;-df$Profit X&lt;-cbind(1,X) theta&lt;-c(0,0) m&lt;-nrow(X) cost=sum(((X %*% theta)-Y)^2)/(2*m) alpha&lt;-0.001 iterations&lt;-1500 for(i in 1:iterations){ temp1 &lt;- theta[1] - alpha * (1/m) * sum(((X%*%theta)- Y)) temp2 &lt;- theta[2] &lt;- theta[2] - alpha * (1/m) * sum(((X%*%theta)- Y)*X[,2]) theta[1] = temp1 theta[2] = temp2 } </code></pre> <p>But I am getting theta values of NaN. I need help understanding why I am getting NaN.</p>
<p>If we <code>print</code> one of the 'temp' values, we can see that they grow to infinity at a certain point, and from the next iteration onwards they become NaN:</p> <pre><code>iterations &lt;- 62 for(i in 1:iterations){ temp1 &lt;- theta[1] - alpha * (1/m) * sum(((X%*%theta)- Y)) temp2 &lt;- theta[2] &lt;- theta[2] - alpha * (1/m) * sum(((X%*%theta)- Y)*X[,2]) print(temp1) #print(temp2) theta[1] = temp1 theta[2] = temp2 } </code></pre> <p>print output:</p> <pre><code>#[1] 6.640909 #[1] -981047.5 #[1] 122403140248 #[1] -1.527201e+16 #[1] 1.90546e+21 #[1] -2.377406e+26 #[1] 2.966245e+31 #[1] -3.700928e+36 #[1] 4.617578e+41 #... #... #[1] 1.894035e+286 #[1] -2.363151e+291 #[1] 2.948459e+296 #[1] -3.678737e+301 #[1] Inf #[1] NaN </code></pre>
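The underlying cause is the learning rate relative to the feature scale: with Population values around 10^4, the slope gradient involves sums of x^2 around 10^8, so alpha = 0.001 overshoots and the iterates grow by orders of magnitude each step until they hit Inf and then NaN (Inf - Inf). Rescaling the feature lets the very same loop converge. A stdlib-only Python sketch of both behaviours, reusing the numbers from the question:

```python
def gradient_descent(xs, ys, alpha, iterations):
    # Plain batch gradient descent for y ~ t0 + t1 * x (mirrors the R loop).
    t0 = t1 = 0.0
    m = len(xs)
    for _ in range(iterations):
        errs = [t0 + t1 * x - y for x, y in zip(xs, ys)]
        g0 = sum(errs) / m
        g1 = sum(e * x for e, x in zip(errs, xs)) / m
        t0, t1 = t0 - alpha * g0, t1 - alpha * g1
    return t0, t1

population = [10000, 20000, 5000, 5123, 5345, 5454, 11000, 23000, 6000, 6100, 6300]
profit = [5600, 21000, 1000, 2000, 2300, 3000, 7000, 21400, 3200, 3250, 3300]

# Raw features: alpha = 0.001 diverges to Inf/NaN, exactly as in the R output.
t0_raw, t1_raw = gradient_descent(population, profit, alpha=0.001, iterations=100)

# Scaled features: the very same alpha and loop converge.
scale = max(population)
scaled = [x / scale for x in population]
t0_s, t1_s = gradient_descent(scaled, profit, alpha=0.001, iterations=100)
```

Equivalently, you can keep the raw feature and shrink alpha by roughly the square of the feature scale, but scaling the inputs is the more robust fix.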
23
implement regression
Multiclass logistic regression - implementation question
https://stackoverflow.com/questions/57911771/multiclass-logistic-regression-implementation-question
<p>This is my try to implement multi-class logistic regression in python using softmax as activation function and mnist digit data set as training and test set.</p> <pre><code>import numpy as np def softmax(z): return np.array([(np.exp(el)/np.sum(np.exp(el))) for el in z]) def cost(W,F,L): m = F.shape[0] #get number of rows mul = np.dot(F, W) sm_mul_T = softmax(mul) return -(1/m) * np.sum(L * np.log(sm_mul_T)) def gradient(W,F,L): m = F.shape[0] # get number of rows mul = np.dot(F, W) sm_mul_T = softmax(mul) return -(1 / m) * np.dot(F.T , (L - sm_mul_T)) from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("./datasets/MNIST_data/", one_hot=True) W = np.zeros((785, 10)) #784 features + 1 bias for _ in range(10000): F, L = mnist.train.next_batch(100) F = np.insert(F,0, values=1, axis=1) total_cost = cost(W,F,L) print("Total cost is {}".format(total_cost)) gradients = gradient(W,F,L) W = W - (0.1 * gradients) FU = mnist.test.images FU = np.insert(FU,0, values=1, axis=1) LU = mnist.test.labels mulU = np.dot(FU, W) sm_mulU = softmax(mulU) OK=0 NOK=0 for i in range(10000): a1 = np.argmax(sm_mulU[i]) a2 = np.argmax(LU[i]) if a1 == a2: OK = OK + 1 else: NOK = NOK + 1 print("{} OK vs {} NOK".format(OK, NOK)) print("accur {}%".format((OK/(NOK+OK))*100)) </code></pre> <p>What I wanted to do is basically to try to implement it myself and try to get similar results as using Tensor flow. But the problem is that tensor flow implementation gets in the end 91% accuracy, while I can get only around 70%. Also, it seems like my model diverges and cost starts to increase pretty fast.</p> <p>Is my implementation wrong, or is it due to more advanced algorithm inside TensorFlow implementation?</p>
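One likely culprit in a from-scratch softmax implementation like the one above is numerical overflow: exp() of large logits overflows, which corrupts the cost and gradients and can make training diverge as the weights grow. The standard fix is to subtract the per-row maximum before exponentiating, which cancels out mathematically. A small stdlib-only sketch (in the question's NumPy code the analogous change would be exponentiating `z - z.max()` per row; this is offered as a suggestion to check, not a confirmed diagnosis):

```python
import math

def softmax_stable(logits):
    # Subtracting the max leaves the result unchanged mathematically,
    # but keeps every exp() argument <= 0, so nothing overflows.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A naive math.exp(1000) would overflow; the stable version is fine.
probs = softmax_stable([1000.0, 1001.0, 1002.0])
```

The probabilities still sum to one and preserve the ordering of the logits, so nothing downstream of the softmax needs to change.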
24
implement regression
Trying to Implement Linear Regression with Stochastic Gradient Descent
https://stackoverflow.com/questions/66756559/trying-to-implement-linear-regression-with-stochastic-gradient-descent
<p>[<a href="https://docs.google.com/spreadsheets/d/1AVNrWBwn22c1QWc6X9zG8FkvTMXHXZGuZH2sPAT9a00/edit?usp=sharing" rel="nofollow noreferrer">Dataset</a>] I'm attempting to implement linear regression with stochastic gradient descent using Python. I have the code to enable me to do this, but for some reason it's triggering an error at &quot;row[column] = float(row[column].strip())&quot;: &quot;could not convert string to float: 'C'&quot;. Any help troubleshooting this error will be greatly appreciated.</p> <pre><code> # Linear Regression With Stochastic Gradient Descent for Pima-Indians-Diabetes from random import seed from random import randrange from csv import reader from math import sqrt filename = 'C:/Users/Vince/Desktop/University of Wyoming PHD/Year 2/Machine Learning/Homeworks/Solutions/HW4/pima-indians-diabetes-training.csv' # Load a CSV file def load_csv(filename): dataset = list() with open(filename, 'r') as file: csv_reader = reader(filename) for row in csv_reader: if not row: continue dataset.append(row) return dataset # Convert string column to float def str_column_to_float(dataset, column): for row in dataset: row[column] = float(row[column].strip()) # Find the min and max values for each column def dataset_minmax(dataset): minmax = list() for i in range(len(dataset[0])): col_values = [row[i] for row in dataset] value_min = min(col_values) value_max = max(col_values) minmax.append([value_min, value_max]) return minmax # Rescale dataset columns to the range 0-1 def normalize_dataset(dataset, minmax): for row in dataset: for i in range(len(row)): row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0]) # Split a dataset into k folds def cross_validation_split(dataset, n_folds): dataset_split = list() dataset_copy = list(dataset) fold_size = int(len(dataset) / n_folds) for i in range(n_folds): fold = 
list() while len(fold) &lt; fold_size: index = randrange(len(dataset_copy)) fold.append(dataset_copy.pop(index)) dataset_split.append(fold) return dataset_split # Calculate root mean squared error def rmse_metric(actual, predicted): sum_error = 0.0 for i in range(len(actual)): prediction_error = predicted[i] - actual[i] sum_error += (prediction_error ** 2) mean_error = sum_error / float(len(actual)) return sqrt(mean_error) # Evaluate an algorithm using a cross validation split def evaluate_algorithm(dataset, algorithm, n_folds, *args): folds = cross_validation_split(dataset, n_folds) scores = list() for fold in folds: train_set = list(folds) train_set.remove(fold) train_set = sum(train_set, []) test_set = list() for row in fold: row_copy = list(row) test_set.append(row_copy) row_copy[-1] = None predicted = algorithm(train_set, test_set, *args) actual = [row[-1] for row in fold] rmse = rmse_metric(actual, predicted) scores.append(rmse) return scores # Make a prediction with coefficients def predict(row, coefficients): yhat = coefficients[0] for i in range(len(row)-1): yhat += coefficients[i + 1] * row[i] return yhat # Estimate linear regression coefficients using stochastic gradient descent def coefficients_sgd(train, l_rate, n_epoch): coef = [0.0 for i in range(len(train[0]))] for epoch in range(n_epoch): for row in train: yhat = predict(row, coef) error = yhat - row[-1] coef[0] = coef[0] - l_rate * error for i in range(len(row)-1): coef[i + 1] = coef[i + 1] - l_rate * error * row[i] # print(l_rate, n_epoch, error) return coef # Linear Regression Algorithm With Stochastic Gradient Descent def linear_regression_sgd(train, test, l_rate, n_epoch): predictions = list() coef = coefficients_sgd(train, l_rate, n_epoch) for row in test: yhat = predict(row, coef) predictions.append(yhat) return(predictions) # Linear Regression on Indians Pima Database seed(1) # load and prepare data filename = 'C:/Users/Vince/Desktop/University of Wyoming PHD/Year 2/Machine 
Learning/Homeworks/Solutions/HW4/pima-indians-diabetes-training.csv' dataset = load_csv(filename) for i in range(len(dataset[0])): str_column_to_float(dataset, i) # normalize minmax = dataset_minmax(dataset) normalize_dataset(dataset, minmax) # evaluate algorithm n_folds = 5 l_rate = 0.01 n_epoch = 50 scores = evaluate_algorithm(dataset, linear_regression_sgd, n_folds, l_rate, n_epoch) print('Scores: %s' % scores) print('Mean RMSE: %.3f' % (sum(scores)/float(len(scores)))) </code></pre>
<p>Adding on to the answer from @Agni</p> <p>The CSV file that you are reading has a header line</p> <p><code>num_preg PlGlcConc BloodP tricept insulin BMI ped_func Age HasDiabetes</code></p> <p>When you use <code>reader(file)</code> to read the file and then iterate over it, the first line also gets added to <code>dataset</code>. Hence, the first element of the <code>dataset</code> list is:</p> <pre><code>&gt;&gt;&gt; dataset [['num_preg', 'PlGlcConc', 'BloodP', 'tricept', 'insulin', 'BMI', 'ped_func', 'Age', 'HasDiabetes'], ...] </code></pre> <p>So when you try to convert it into float, it throws the error <code>could not convert string to float: 'num_preg'</code></p> <p>Here is the final edited code:</p> <pre><code>def load_csv(filename): dataset = list() with open(filename, 'r') as file: csv_reader = reader(file) fieldnames = next(csv_reader) # Skip the first row and store in case you need it dataset = list(csv_reader) # You can convert an iterator to list directly return dataset </code></pre>
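The header-skipping pattern can be checked without touching the disk by feeding `csv.reader` an in-memory buffer; the columns below are a simplified, comma-separated stand-in for the Pima dataset, not the real file:

```python
import csv
import io

# In-memory stand-in for the CSV file, header line included.
raw = io.StringIO(
    "num_preg,PlGlcConc,BloodP\n"
    "6,148,72\n"
    "1,85,66\n"
)

reader = csv.reader(raw)
header = next(reader)  # consume the header row once, before the data rows
dataset = [[float(v) for v in row] for row in reader]
```

Because `next()` advances the iterator past the header, the float conversion only ever sees numeric rows, which is exactly what the fixed `load_csv` above relies on.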
25
implement regression
How to use regression in python-weka-wrapper?
https://stackoverflow.com/questions/66135163/how-to-use-regression-in-python-weka-wrapper
<p>I would like to implement regression algorithms using python-weka-wrapper in Jupyter Notebook. However, I couldn't find the correct function in <a href="https://fracpete.github.io/python-weka-wrapper/api.html#classifiers" rel="nofollow noreferrer">https://fracpete.github.io/python-weka-wrapper/api.html#classifiers</a></p> <p>Does someone know how to implement it?</p>
<p>In the olden days, Weka distinguished between classification and regression algorithms, but that got dropped in favor of just a single super class.</p> <p>The <em>capabilities</em> of a <code>weka.classifiers.Classifier</code> determine what types of attributes and class attributes an algorithm can handle. Some algorithms, like <code>RandomForest</code>, can do both classification and regression.</p> <p>All regression algorithms implement the <code>Classifier</code> interface, so just pick a regression algorithm, like <code>LinearRegression</code> or <code>M5P</code>, and use the same Python wrapper which you would use for classification.</p> <p>The example code that you referenced uses the <code>classify_instance</code> method, which you would use for regression schemes to get the numeric prediction. In the case of classification algorithms, this method returns the index of the predicted class label.</p> <p><strong>BTW</strong> The Python 2.7-based <code>python-weka-wrapper</code> library is no longer maintained and you should use <code>python-weka-wrapper3</code> instead. Here is the same link, but for pww3:</p> <p><a href="https://fracpete.github.io/python-weka-wrapper3/api.html#classifiers" rel="nofollow noreferrer">https://fracpete.github.io/python-weka-wrapper3/api.html#classifiers</a></p>
26
implement regression
How does one implement a conditional poisson regression in R?
https://stackoverflow.com/questions/49024652/how-does-one-implement-a-conditional-poisson-regression-in-r
<p>I am keen to implement a conditional (bivariate?) poisson regression in R to assess the change in rates of a variable (stratified by treatment condition) pre- / post- an intervention. Is anyone familiar with a package that runs this type of analysis?</p>
<p>Check out the <a href="https://cran.r-project.org/web/packages/gnm/gnm.pdf" rel="nofollow noreferrer">&quot;gnm&quot;</a> package in R. It has a function gnm() in which you can specify your model formula, family=poisson(), offset, dataset, and the strata id in &quot;eliminate&quot;. Please read the documentation.</p>
27
implement regression
Trying to implement linear regression in python
https://stackoverflow.com/questions/26678708/trying-to-implement-linear-regression-in-python
<p>I am implementing linear regression in Python, and I think I am doing something wrong while converting matrix to numpy array, but cannot seem to figure it out. Any help will be appreciated.</p> <p>I am loading data from a csv file that has 100 columns. y is the last column. I am not using col 1 and 2 for regression.</p> <pre><code>communities=np.genfromtxt("communities.csv", delimiter = ",", dtype=float) xdata = communities[1:,2:99] x = np.array([np.concatenate((v,[1]))for v in xdata]) y = communities[1:,99] </code></pre> <p>Function definition</p> <pre><code>def standRegress(xArr, yArr): xMat = mat(xArr); yMat = mat(yArr).T xTx = xMat.T*xMat if linalg.det(xTx)==0.0: print"singular matrix" return ws = xTx.I*(xMat.T*yMat) return ws </code></pre> <p>calling the function</p> <pre><code>w = standRegress(x,y) xMat = mat(x) #shape(1994L,98L) yMat = mat(y) #shape (1L, 1994L) yhat = xMat*w #shape (1994L, 1L) </code></pre> <p>Next I am trying to calculate RMSE and this is where I am having problem</p> <pre><code>yMatT = yMat.T #shape(1994L, 1L) err = yhat - yMatT #shape(1994L, 1L) error = np.array(err) total_error = np.dot(error,error) rmse = np.sqrt(total_error/len(p)) </code></pre> <p>I get an error while I am doing the dot product and thus not able to calculate rmse. I will appreciate if someone can help me find my mistake.</p> <pre><code>Error: ---&gt; 11 np.dot(error,error) 12 #test = (error)**2 13 #test.sum()/len(y) ValueError: matrices are not aligned </code></pre>
<p>I'm not quite sure what the last <code>dot</code> is supposed to do. But you can't multiply <code>error</code> with itself this way. <code>dot</code> does a matrix multiplication, thus the dimensions have to align.</p> <p>See, e.g., the following example:</p> <pre><code>import numpy as np A = np.ones((3, 4)) B = np.ones((3, 4)) print(np.dot(A, B)) </code></pre> <p>This yields the error <code>ValueError: matrices are not aligned</code>.</p> <p>What is possible, however, is:</p> <pre><code>print(np.dot(A.T, B)) </code></pre> <p>Output:</p> <pre><code>[[ 3. 3. 3. 3.] [ 3. 3. 3. 3.] [ 3. 3. 3. 3.] [ 3. 3. 3. 3.]] </code></pre> <hr> <p>In your example <code>error</code> is just a column vector - but stored as a 2D array:</p> <pre><code>A = np.ones((3, 1)) B = np.ones((3, 1)) print(np.dot(A, B)) </code></pre> <p>Same error.</p> <p>So you can either transpose one argument - as shown above - or extract one column as a 1D array:</p> <pre><code>print(np.dot(A[:, 0], B[:, 0])) </code></pre> <p>Output:</p> <pre><code>3.0 </code></pre>
28
implement regression
Problem in the linear regression implementation
https://stackoverflow.com/questions/60133065/problem-in-the-linear-regression-implementation
<p>I am new to Machine learning and I was trying to implement vectorized linear regression from scratch using numpy. I tried testing out the implementation using y=x. But my loss is increasing and I am unable to understand why. It will be great if someone could point out why this is happening. Thanks in advance!</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

class LinearRegressor(object):
    def __init__(self, num_features):
        self.num_features = num_features
        self.w = np.random.randn(num_features, 1).astype(np.float32)
        self.b = np.array(0.0).astype(np.float32)

    def forward(self, x):
        return np.dot(x, self.w) + self.b

    @staticmethod
    def loss(y_pred, y_true):
        l = np.average(np.power(y_pred - y_true, 2)) / 2
        return l

    def calculate_gradients(self, x, y_pred, y_true):
        self.dl_dw = np.dot(x.T, y_pred - y_true) / len(x)
        self.dl_db = np.mean(y_pred - y_true)

    def optimize(self, step_size):
        self.w -= step_size*self.dl_dw
        self.b -= step_size*self.dl_db

    def train(self, x, y, step_size=1.0):
        y_pred = self.forward(x)
        l = self.loss(y_pred=y_pred, y_true=y)
        self.calculate_gradients(x=x, y_pred=y_pred, y_true=y)
        self.optimize(step_size=step_size)
        return l

    def evaluate(self, x, y):
        return self.loss(self.forward(x), y_true)

check_reg = LinearRegressor(num_features=1)
x = np.array(list(range(1000))).reshape(-1, 1)
y = x

losses = []
for iteration in range(100):
    loss = check_reg.train(x=x,y=y, step_size=0.001)
    losses.append(loss)
    if iteration % 1 == 0:
        print("Iteration: {}".format(iteration))
        print(loss)
</code></pre> <p>Output</p> <pre><code>Iteration: 0 612601.7859402705 Iteration: 1 67456013215.98818 Iteration: 2 7427849474110884.0 Iteration: 3 8.179099502901393e+20 Iteration: 4 9.006330707513148e+25 Iteration: 5 9.917228672922966e+30 Iteration: 6 1.0920254505132042e+36 Iteration: 7 1.2024725981084638e+41 Iteration: 8 1.324090295064888e+46 Iteration: 9 1.4580083421516024e+51 Iteration: 10 1.60547085025467e+56 Iteration: 11 1.7678478362285333e+61 Iteration: 12 
1.946647415292399e+66 Iteration: 13 2.1435307416407376e+71 Iteration: 14 2.3603265498975516e+76 Iteration: 15 2.599049318486855e+81 Iteration: 16 nan Iteration: 17 nan Iteration: 18 nan Iteration: 19 nan Iteration: 20 nan Iteration: 21 nan Iteration: 22 nan Iteration: 23 nan Iteration: 24 nan Iteration: 25 nan Iteration: 26 nan Iteration: 27 nan Iteration: 28 nan Iteration: 29 nan Iteration: 30 nan Iteration: 31 nan Iteration: 32 nan Iteration: 33 nan Iteration: 34 nan Iteration: 35 nan Iteration: 36 nan Iteration: 37 nan Iteration: 38 nan Iteration: 39 nan Iteration: 40 nan Iteration: 41 nan Iteration: 42 nan Iteration: 43 nan Iteration: 44 nan Iteration: 45 nan Iteration: 46 nan Iteration: 47 nan Iteration: 48 nan Iteration: 49 nan Iteration: 50 nan Iteration: 51 nan Iteration: 52 nan Iteration: 53 nan Iteration: 54 nan Iteration: 55 nan Iteration: 56 nan Iteration: 57 nan Iteration: 58 nan Iteration: 59 nan Iteration: 60 nan Iteration: 61 nan Iteration: 62 nan Iteration: 63 nan Iteration: 64 nan Iteration: 65 nan Iteration: 66 nan Iteration: 67 nan Iteration: 68 nan Iteration: 69 nan Iteration: 70 nan Iteration: 71 nan Iteration: 72 nan Iteration: 73 nan Iteration: 74 nan Iteration: 75 nan Iteration: 76 nan Iteration: 77 nan Iteration: 78 nan Iteration: 79 nan Iteration: 80 nan Iteration: 81 nan Iteration: 82 nan Iteration: 83 nan Iteration: 84 nan Iteration: 85 nan Iteration: 86 nan Iteration: 87 nan Iteration: 88 nan Iteration: 89 nan Iteration: 90 nan Iteration: 91 nan Iteration: 92 nan Iteration: 93 nan Iteration: 94 nan Iteration: 95 nan Iteration: 96 nan Iteration: 97 nan Iteration: 98 nan Iteration: 99 nan </code></pre>
<p>Nothing is wrong with your implementation. Your step size is just too high to converge. You are bouncing around the optimization crest to higher and higher error. <a href="https://i.sstatic.net/cSd1H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cSd1H.png" alt="enter image description here" /></a> edit your step size for this:</p> <pre class="lang-py prettyprint-override"><code>loss = check_reg.train(x=x,y=y, step_size=0.000001) </code></pre> <p>and you will get:</p> <pre><code>Iteration: 0 58305.102166924036 Iteration: 1 25952.192344178206 Iteration: 2 11551.585414406314 Iteration: 3 5141.729521746186 Iteration: 4 2288.6353484460747 Iteration: 5 1018.6952280352172 Iteration: 6 453.4320214875039 Iteration: 7 201.82728832044089 Iteration: 8 89.83519431606754 Iteration: 9 39.98665864625944 Iteration: 10 17.798416262435936 Iteration: 11 7.92229454258205 Iteration: 12 3.526272890501929 Iteration: 13 1.5696002444816197 Iteration: 14 0.6986516574778796 Iteration: 15 0.3109875219688626 Iteration: 16 0.13843156434074647 Iteration: 17 0.061616235257299326 Iteration: 18 0.027424318402401473 Iteration: 19 0.012205888201891543 Iteration: 20 0.005434012356344396 Iteration: 21 0.0024188644277583476 Iteration: 22 0.0010770380211645404 Iteration: 23 0.0004796730257022216 Iteration: 24 0.00021339295719587025 Iteration: 25 9.499628306355218e-05 Iteration: 26 4.244764386691682e-05 Iteration: 27 1.8965112443214162e-05 Iteration: 28 8.56069334821767e-06 Iteration: 29 3.848135476439999e-06 Iteration: 30 1.7367004907528985e-06 Iteration: 31 8.07976330965736e-07 Iteration: 32 4.0167090640020525e-07 Iteration: 33 2.253979336583221e-07 Iteration: 34 1.5365746125585947e-07 Iteration: 35 1.2480275459766612e-07 Iteration: 36 1.1147859663321005e-07 Iteration: 37 1.0288427880059631e-07 Iteration: 38 1.0036079530613815e-07 Iteration: 39 9.901975516098116e-08 Iteration: 40 9.901971962009025e-08 Iteration: 41 9.901968407922984e-08 Iteration: 42 9.901964853839991e-08 Iteration: 
43 9.901961299760048e-08 Iteration: 44 9.901957745683155e-08 Iteration: 45 9.90195419160931e-08 Iteration: 46 9.901950637538515e-08 Iteration: 47 9.90194708347077e-08 Iteration: 48 9.901943529406073e-08 Iteration: 49 9.901939975344426e-08 Iteration: 50 9.901936421285829e-08 Iteration: 51 9.90193286723028e-08 Iteration: 52 9.901929313177781e-08 Iteration: 53 9.901925759128331e-08 Iteration: 54 9.901922205081931e-08 Iteration: 55 9.90191865103858e-08 Iteration: 56 9.901915096998278e-08 Iteration: 57 9.901911542961026e-08 Iteration: 58 9.901907988926822e-08 Iteration: 59 9.901904434895669e-08 Iteration: 60 9.901900880867564e-08 Iteration: 61 9.901897326842509e-08 Iteration: 62 9.901893772820503e-08 Iteration: 63 9.901890218801546e-08 Iteration: 64 9.901886664785639e-08 Iteration: 65 9.901883110772781e-08 Iteration: 66 9.901879556762973e-08 Iteration: 67 9.901876002756213e-08 Iteration: 68 9.901872448752503e-08 Iteration: 69 9.901868894751843e-08 Iteration: 70 9.901865340754231e-08 Iteration: 71 9.901861786759669e-08 Iteration: 72 9.901858232768157e-08 Iteration: 73 9.901854678779693e-08 Iteration: 74 9.901851124794279e-08 Iteration: 75 9.901847570811914e-08 Iteration: 76 9.901844016832599e-08 Iteration: 77 9.901840462856333e-08 Iteration: 78 9.901836908883116e-08 Iteration: 79 9.901833354912948e-08 Iteration: 80 9.90182980094583e-08 Iteration: 81 9.901826246981762e-08 Iteration: 82 9.901822693020742e-08 Iteration: 83 9.901819139062772e-08 Iteration: 84 9.901815585107851e-08 Iteration: 85 9.90181203115598e-08 Iteration: 86 9.901808477207157e-08 Iteration: 87 9.901804923261384e-08 Iteration: 88 9.90180136931866e-08 Iteration: 89 9.901797815378986e-08 Iteration: 90 9.901794261442361e-08 Iteration: 91 9.901790707508786e-08 Iteration: 92 9.901787153578259e-08 Iteration: 93 9.901783599650782e-08 Iteration: 94 9.901780045726355e-08 Iteration: 95 9.901776491804976e-08 Iteration: 96 9.901772937886647e-08 Iteration: 97 9.901769383971367e-08 Iteration: 98 9.901765830059137e-08 
Iteration: 99 9.901762276149956e-08 </code></pre> <p>Hope it helps!</p>
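The step-size effect is easy to reproduce in isolation. Below is a minimal sketch (not the poster's class, but the same kind of data y = x and the same averaged gradient) contrasting a too-large and a small step size:

```python
import numpy as np

def run(step_size, iters=20):
    # One feature, targets y = x, averaged gradient x.T @ (y_pred - y) / n,
    # mirroring the question's setup on a smaller scale.
    rng = np.random.default_rng(0)
    x = np.arange(100, dtype=float).reshape(-1, 1)
    y = x.copy()
    w = rng.standard_normal((1, 1))
    losses = []
    for _ in range(iters):
        y_pred = x @ w
        losses.append(float(np.mean((y_pred - y) ** 2) / 2))
        w -= step_size * (x.T @ (y_pred - y) / len(x))
    return losses

big = run(step_size=0.001)   # too large: the iterates overshoot and diverge
small = run(step_size=1e-5)  # small enough: the loss shrinks every step
print(big[0], big[-1])
print(small[0], small[-1])
```

With x up to 99 here, mean(x**2) is about 3300, so only steps below roughly 2/3300 ≈ 6e-4 shrink the error; the question's x runs up to 999, which needs a far smaller step and is why 1e-6 converges in the answer above.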
29
implement regression
Linear Regression using fminunc Implementation
https://stackoverflow.com/questions/44848279/linear-regression-using-fminunc-implementation
<p>I'm trying to implement linear regression with only one feature using <code>fminunc</code> in Octave.</p> <p>Here is my code.</p> <pre class="lang-matlab prettyprint-override"><code>x = load('/home/battousai/Downloads/ex2Data/ex2x.dat');
y = load('/home/battousai/Downloads/ex2Data/ex2y.dat');

m = length(y);
x = [ones(m , 1) , x];
theta = [0 , 0]';
X0 = [x , y , theta];

options = optimset('GradObj' , 'on' , 'MaxIter' , 1500);
[x , val] = fminunc(@computeCost , X0 , options)
</code></pre> <p>And here is the cost function which returns the gradient as well as the value of the cost function.</p> <pre class="lang-matlab prettyprint-override"><code>function [J , gradient] = computeCost(x , y , theta)
    m = length(y);
    J = (0.5 / m) .* (x * theta - y )' * (x * theta - y );
    gradient = (1/m) .* x' * (x * theta - y);
end
</code></pre> <p>The length of the data set is <code>50</code>, i.e., the dimensions are <code>50 x 1</code>. I don't understand how I should pass <code>X0</code> to <code>fminunc</code>.</p> <p>Updated Driver Code:</p> <pre class="lang-matlab prettyprint-override"><code>x = load('/home/battousai/Downloads/ex2Data/ex2x.dat');
y = load('/home/battousai/Downloads/ex2Data/ex2y.dat');

m = length(y);
x = [ones(m , 1) x];
theta_initial = [0 , 0];

options = optimset('Display','iter','GradObj','on' , 'MaxIter' , 100);
[X , Cost] = fminunc(@(t)(computeCost(x , y , theta)), theta_initial , options)
</code></pre> <p>Updated Code for Cost function:</p> <pre class="lang-matlab prettyprint-override"><code>function [J , gradient] = computeCost(x , y , theta)
    m = length(y);
    J = (1/(2*m)) * ((x * theta) - y )' * ((x * theta) - y) ;
    gradient = (1 / m) .* x' * ((x * theta) - y);
end
</code></pre> <p>Now I'm getting values of <code>theta</code> to be <code>[0,0]</code> but when I used normal equation, values of <code>theta</code> turned out to be <code>[0.750163 , 0.063881]</code>.</p>
<p>From the documentation for fminunc:</p> <blockquote> <p>FCN should accept a vector (array) defining the unknown variables</p> </blockquote> <p>and</p> <blockquote> <p>X0 determines a starting guess. </p> </blockquote> <p>Since your input is a <em>cost</em> function (i.e. it associates your choice of parameter vector with a cost), the input argument to your cost function that needs to be optimised via <code>fminunc</code> is only theta, since <code>x</code> and <code>y</code> (i.e. your observations and your targets) are considered 'given' aspects of the problem and are not things you're trying to optimise. So you either declare <code>x</code> and <code>y</code> global and access them from your function like so:</p> <pre><code>function [J , gradient] = computeCost(theta_0)
    global x;
    global y;
    % ...
</code></pre> <p>and then call fminunc as: <code>fminunc (@computeCost, t_0, options)</code></p> <p><em>or</em>, keep your computeCost function as <code>computeCost(x, y, theta)</code>, and change your <code>fminunc</code> call to something like this:</p> <pre><code>[x , val] = fminunc(@ (t) computeCost(x, y, t) , t0 , options)
</code></pre> <hr> <p><strong>UPDATE</strong> Not sure what you were doing wrong. Here is the full code and an octave session running it. Seems fine.</p> <pre><code>%% in file myscript.m
x = load('ex2x.dat');
y = load('ex2y.dat');

m = length(y);
x = [ones(m , 1) , x];
theta_0 = [0 , 0]';

options = optimset('GradObj' , 'on' , 'MaxIter' , 1500);
[theta_opt, cost] = fminunc(@ (t) computeCost(x,y,t) , theta_0 , options)
</code></pre> <pre><code>%% in file computeCost.m
function [J , gradient] = computeCost(x , y , theta)
    m = length(y);
    J = (0.5 / m) .* (x * theta - y )' * (x * theta - y );
    gradient = (1/m) .* x' * (x * theta - y);
end
</code></pre> <pre><code>%% in the octave terminal:
&gt;&gt; myscript
theta_opt =

   0.750163
   0.063881

cost = 9.8707e-04
</code></pre>
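The same "fix the data, optimise only over the parameters" pattern can be sketched outside Octave as well. Here is a Python/NumPy version (hypothetical data standing in for ex2x.dat/ex2y.dat, and a plain gradient-descent loop standing in for fminunc); the cost and gradient mirror computeCost:

```python
import numpy as np

# The optimiser loop only ever sees theta, while x and y are captured
# by the closure -- exactly like the anonymous @(t) computeCost(x, y, t).
def make_cost(x, y):
    m = len(y)
    def cost_and_grad(theta):
        r = x @ theta - y
        return 0.5 / m * (r @ r), x.T @ r / m
    return cost_and_grad

# Made-up noiseless data so the true parameters are recoverable exactly.
x = np.column_stack([np.ones(50), np.linspace(0.0, 1.0, 50)])
true_theta = np.array([0.75, 0.064])
y = x @ true_theta

cost_and_grad = make_cost(x, y)   # data fixed once, up front
theta = np.zeros(2)
for _ in range(5000):             # gradient descent stands in for fminunc
    J, g = cost_and_grad(theta)
    theta -= 0.5 * g
print(theta)   # recovers [0.75, 0.064]
```

The key design point is the closure: the inner function's signature is exactly what the optimiser expects, and the data never has to be packed into the parameter vector.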
30
implement regression
Scratch implementation of ridge regression
https://stackoverflow.com/questions/79299410/scratch-implementation-of-ridge-regression
<p>I am trying to implement Ridge regression but I feel like I am missing something with the Python operators. Here is my code:</p> <pre><code>import numpy as np

x = np.random.rand(10, 2)
y = np.random.rand(10, 1)

lambda_reg = 0.1
alpha = 0.1
num_iterations = 100000

X_train = np.hstack((np.ones((x.shape[0], 1)), x))

def ridge_regression_gradient_descent(X, y, lambda_reg, alpha, num_iterations):
    n, p = X.shape
    B = np.zeros(p)

    # Gradient descent loop
    for _ in range(num_iterations):
        y_pred = X.dot(B).reshape(-1, 1)
        gradient_B0 = - (1/n) * np.sum(y - y_pred)
        # Gradients for B1 to Bp
        gradient_B = - (1/n) * (X[:, 1:].T @ (y - y_pred)) + lambda_reg * B[1:].reshape(-1, 1)
        B[0] -= alpha * gradient_B0
        B[1:] -= alpha * gradient_B.reshape(-1)

    return B

B = ridge_regression_gradient_descent(X_train, y, lambda_reg, alpha, num_iterations)
print(B)
</code></pre> <p>Is anyone able to see what I am doing wrong?</p> <p>I tried multiple changes in the code on the matrix multiplications and also reshaping everything into the right format. I do get 3 Betas, so that part is OK, but they aren't anything close to what I get with the formula: <code>B = (X.T X + lambda * I)^-1 * X.T Y</code></p>
<p>Some observations about your optimization function:</p> <ol> <li>Why are you dividing by N? You're making your gradient smaller than necessary, and you're not applying that scaling to the second term in your gradient, changing the relationship between the components of your gradient (making the residual-driven term much smaller than the coefficient-driven term).</li> <li>Why are you calculating your <code>B0</code> and <code>B1</code> gradients separately? You can take the gradient in terms of all of <code>B</code> using all of <code>X</code> instead of fragmenting your solution. You've applied regularization to <code>B[1:]</code> but not to <code>B[0]</code>.</li> </ol> <p>Here is an amended solution for you, without dividing the residual-driven term by <code>n</code>, and adding regularization to <code>B[0]</code> as well:</p> <pre><code>def ridge_regression_gradient_descent(X, y, lambda_reg, alpha, num_iterations):
    n, p = X.shape
    B = np.zeros(p)

    # Gradient descent loop
    for _ in range(num_iterations):
        y_pred = X.dot(B).reshape(-1, 1)
        gradient_B0 = -np.sum(y - y_pred) + lambda_reg * B[0]
        # Gradients for B1 to Bp
        gradient_B = -(X[:, 1:].T @ (y - y_pred)) + lambda_reg * B[1:].reshape(-1, 1)
        B[0] -= alpha * gradient_B0
        B[1:] -= alpha * gradient_B.reshape(-1)

    return B

B = ridge_regression_gradient_descent(X_train, y, lambda_reg, alpha=.001, num_iterations=10000)
print(B)

&gt; [ 0.53250528 -0.21478985 -0.15051757]
</code></pre> <p>Indeed you can check the output of this function by calculating B explicitly as you've described.</p> <pre><code>beta = np.linalg.inv(X_train.T@X_train+np.identity(X_train.shape[1])*lambda_reg)@(X_train.T@y)
print(beta)

&gt; [[ 0.53256638]
   [-0.21485239]
   [-0.15058472]]
</code></pre> <p>Here is a separate solution calculating the gradient in terms of all of <code>B</code> using all of <code>X</code>, it accomplishes the same thing, but it is more straightforward and efficient:</p> <pre><code>def descent(X,y,lambda_reg,alpha,num_iterations):
    B = np.zeros((X.shape[1],1))
    for _ in range(num_iterations):
        grad = (X.T@((X@B).reshape(y.shape) - y) + lambda_reg*B)
        B = B - alpha*grad
    return B

B = descent(X_train, y, lambda_reg, alpha=.1, num_iterations=10000)
print(B)

&gt; [[ 0.53256638]
   [-0.21485239]
   [-0.15058472]]
</code></pre>
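The agreement between gradient descent and the closed form can also be checked on fresh synthetic data. This is a small sketch (made-up data, not the question's random draw) that runs the full-gradient version to convergence and compares it to B = (X'X + lambda*I)^-1 X'y:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.hstack([np.ones((20, 1)), rng.random((20, 2))])  # intercept + 2 features
y = rng.random((20, 1))
lam = 0.1

# Closed form: solve (X'X + lambda*I) B = X'y
closed = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Full-gradient descent on the same everywhere-regularized objective
B = np.zeros((3, 1))
for _ in range(50000):
    B -= 0.01 * (X.T @ (X @ B - y) + lam * B)

print(np.abs(B - closed).max())   # agreement to high precision
```

The step size has to stay below 2 divided by the largest eigenvalue of X'X + lambda*I for the iteration to be stable, which is why a small alpha and many iterations are used here.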
31
implement regression
Logistic Regression, Gradient Descent Octave implementation
https://stackoverflow.com/questions/63046676/logistic-regression-gradient-descent-octave-implementation
<p>I'm taking the Machine Learning class by Prof. Ng. There is a homework assignment that requires implementing logistic regression gradient descent. And here is my code:</p> <pre><code>function [J, grad] = costFunction(theta, X, y)
%COSTFUNCTION Compute cost and gradient for logistic regression
%   J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
%   parameter for logistic regression and the gradient of the cost
%   w.r.t. to the parameters.

% Initialize some useful values
m = length(y); % number of training examples
[~,n] = size(X);

% You need to return the following variables correctly
J = 0;
grad = zeros(size(theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Note: grad should have the same dimensions as theta
%

J = ((-y'*log(sigmoid(X*theta)))-((1-y)'*log(1-sigmoid(X*theta))))/m;

for j = 1:n
    temp_sum = 0;
    for i = 1:m
        temp_sum += (sigmoid(X(i,:)*theta)-y(i))*X(i,j);
    endfor
    grad(j) = theta(j)-temp_sum;
endfor

% =============================================================

end
</code></pre> <p>This is the formula that I'm trying to implement: <a href="https://i.sstatic.net/B9k58.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B9k58.png" alt="enter image description here" /></a></p> <p>where h of x represents the sigmoid function. I have checked that the sigmoid function is correct, but I still can't see what is wrong in this algorithm. Please let me know if you find anything wrong.</p>
<p>I believe you were supposed to get the average of the gradient <code>grad = grad / m</code> as well, just like for the cost <code>J</code>. But it has been a while since I last did Andrew Ng's course, so I might be wrong.</p>
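For reference, the pictured cost and gradient, including the 1/m averaging the answer mentions, can be written in fully vectorized form. This is a sketch in Python/NumPy with a tiny made-up dataset, not the course's data:

```python
import numpy as np

# Vectorized logistic-regression cost/gradient; the gradient is averaged
# over m, which is the grad / m factor discussed above.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_function(theta, X, y):
    m = len(y)
    h = sigmoid(X @ theta)
    J = (-(y @ np.log(h)) - (1 - y) @ np.log(1 - h)) / m
    grad = X.T @ (h - y) / m
    return J, grad

# Tiny made-up dataset: intercept column plus one feature.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

J, grad = cost_function(np.zeros(2), X, y)
print(J, grad)   # J = log(2) and grad = [0, -0.5] at theta = 0
```

Note also that the gradient is just the derivative of the cost, not the update itself; subtracting it from theta happens in the separate gradient-descent step with a learning rate.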
32
implement regression
Logistic regression implementation not working
https://stackoverflow.com/questions/56041493/logistic-regression-implementation-not-working
<p>Recently, I have been reading about machine learning methods, of which logistic regression is one. After reading, to test my understanding, I tried to implement LR in Java. When I tested it on Logical OR and Logical AND, it seemed to work. But when I tried it on marks to decide accepted or rejected job applicants, it failed to learn to classify them. Can you spot what is wrong in this code?</p> <pre><code>public class LogisticRegression {
    int featureLength;
    ArrayList&lt;Double&gt; inputs = new ArrayList();
    int targetOutput;
    ArrayList&lt;Double&gt; weights = new ArrayList();
    double bias;
    static double learningRate = 0.1;

    LogisticRegression(int fs) {
        featureLength = fs;
        for (int i = 0; i &lt; featureLength; i++) {
            weights.add(Math.random());
        }
        bias = Math.random();
    }

    double sigmoidFunction(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    double weightedSum() {
        if (inputs.size() != featureLength) {
            System.out.println("Error: input does not match feature length");
            System.exit(0);
        }
        double sum = 0;
        for (int i = 0; i &lt; featureLength; i++) {
            double inp = inputs.get(i);
            double wh = weights.get(i);
            sum += inp * wh;
        }
        sum += bias;
        double out = sigmoidFunction(sum);
        return out;
    }

    void learn() {
        double inp, wh, out, gradient;
        out = weightedSum();
        for (int i = 0; i &lt; featureLength; i++) {
            inp = inputs.get(i);
            wh = weights.get(i);
            gradient = (out - (double) targetOutput) * inp;
            wh -= learningRate * gradient;
            weights.set(i, wh);
        }
        //update bias
        gradient = (out - targetOutput) * 1;
        bias -= learningRate * gradient;
    }
</code></pre> <p>I tested it on <a href="https://github.com/animesh-agarwal/Machine-Learning/blob/master/LogisticRegression/data/marks.txt" rel="nofollow noreferrer">this dataset</a></p>
<p>The problem you are having with Logistic Regression is called underfitting; it is a very common problem for simple machine learning models. By this I mean that the model does not adjust correctly to the data. There are different reasons for this to happen:</p> <ul> <li><p>The model is too simple (or the dataset is too complicated)</p></li> <li><p>Your weights aren't correctly approximated</p></li> </ul> <p>The first problem can be solved by increasing the capacity/complexity of your model (with LR this is not possible), or by choosing a more complex one. One problem that LR has is that it can only correctly handle linearly separable data; otherwise it will have problems giving correct predictions (XOR for example is not linearly separable).</p> <p>To solve the second problem you might want to use a method other than gradient descent to calculate the value of the weights. If you do want to use gradient descent, you have to adjust some hyperparameters. Gradient descent works by trying to find the global minimum of the loss/cost function; it takes small steps in the direction of steepest descent. To better approximate the weights you can lower the learning rate (this will require more iterations). You can also change the type of initialization of the weights: a better starting point means faster convergence. Finally, you can change your loss function.</p> <p>Hope that helps!</p>
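The hyper-parameter point can be sketched concretely. The snippet below uses the same per-sample update rule as the Java <code>learn()</code> method, but on made-up exam-score-like data rescaled to [0, 1] (one plausible reading of why raw marks in the 30-100 range are hard for a fixed rate of 0.1), and shows the mean log-loss dropping during training:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two classes of one rescaled "score" feature, separable with a gap.
x = np.concatenate([rng.uniform(0.2, 0.45, 50), rng.uniform(0.55, 0.8, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b, lr = 0.0, 0.0, 0.5

def mean_loss():
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the logs numerically
    return float(np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p)))

before = mean_loss()
for _ in range(200):                    # epochs
    for xi, yi in zip(x, y):
        out = 1.0 / (1.0 + np.exp(-(w * xi + b)))
        w -= lr * (out - yi) * xi       # same gradient as the Java learn()
        b -= lr * (out - yi)
after = mean_loss()
print(before, after)   # the loss drops once the rate suits the data scale
```

This is only an illustrative sketch; on the linked marks dataset you would additionally want to scale the two score features before applying the same update.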
33