diff --git "a/data/dataset.json" "b/data/dataset.json" --- "a/data/dataset.json" +++ "b/data/dataset.json" @@ -27910,5 +27910,1655 @@ "input": "", "output": "

Since v0.4 you can deploy detectron2 models to torchscript and ONNX. There is more information about it in the documentation (and also example code).

\n", "system": "" + }, + { + "instruction": "Keras: what does class_weight actually try to balance?", + "input": "", + "output": "

Keras uses the class weights during training, but the reported accuracy does not reflect them. Accuracy is calculated across all samples regardless of the class weights, because you're using the plain 'accuracy' metric in compile(). You can define a more representative custom weighted-accuracy metric and use that, or use the sklearn metrics (e.g. f1_score(), which supports average='binary', 'weighted', etc.).

\n\n

Example:

\n\n
from sklearn.metrics import f1_score\n\ndef macro_f1(y_true, y_pred):\n    return f1_score(y_true, y_pred, average='macro')\n\n\nmodel.compile(loss=custom_loss(extreme_weights),\n        metrics=['accuracy', macro_f1],\n        optimizer={{choice(['rmsprop', 'adam', 'sgd', 'Adagrad', 'Adadelta'])}}\n        )\n
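To see why plain accuracy misleads on imbalanced data, here is a hand-computed sketch of a class-weighted accuracy (the data and the inverse-frequency weights are made up for illustration):

```python
import numpy as np

y_true = np.array([0, 0, 0, 0, 1])  # imbalanced: 4 negatives, 1 positive
y_pred = np.array([0, 0, 0, 0, 0])  # a majority-class predictor

plain_acc = (y_true == y_pred).mean()
# Weight each sample inversely to its class frequency, as class_weight would.
w = np.where(y_true == 1, 4.0, 1.0)
weighted_acc = np.sum(w * (y_true == y_pred)) / np.sum(w)

print(plain_acc)     # 0.8 -- looks good, but the positive class is ignored
print(weighted_acc)  # 0.5 -- the weighted view exposes the problem
```

The majority-class predictor scores 80% plain accuracy while being useless on the minority class, which is exactly what a weighted metric reveals.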
\n", + "system": "" + }, + { + "instruction": "TypeError: ('Keyword argument not understood:', 'data_format')", + "input": "", + "output": "

You might be running different Keras versions on the two platforms. Upgrading Keras to the same (ideally the latest) version on both should resolve the issue, with pip:

\n\n
pip install --upgrade keras\n
\n", + "system": "" + }, + { + "instruction": "ImportError: No module named 'tensorflow.python' with tensorflow-gpu", + "input": "", + "output": "

This solution worked for me:

\n\n

Uninstalling both CPU and GPU versions of TensorFlow and then installing only the GPU version of TensorFlow.

\n\n
pip uninstall tensorflow\npip uninstall tensorflow-gpu\n\npip install tensorflow-gpu\n
\n", + "system": "" + }, + { + "instruction": "Keras: what does class_weight actually try to balance?", + "input": "", + "output": "

Keras uses the class weights during training but the accuracy is not reflective of that. Accuracy is calculated across all samples irrelevant of the weight between classes. This is because you're using the metric 'accuracy' in the compile(). You can define a custom and more accurate weighted accuracy and use that or use the sklearn metrics (e.g. f1_score() which can be 'binary', 'weighted' etc).

\n\n

Example:

\n\n
def macro_f1(y_true, y_pred):\n     return f1_score(y_true, y_pred, average='macro')\n\n\nmodel.compile(loss=custom_loss(extreme_weights),\n        metrics=['accuracy', macro_f1],\n        optimizer={{choice(['rmsprop', 'adam', 'sgd','Adagrad','Adadelta'])}}\n        )\n
\n", + "system": "" + }, + { + "instruction": "TypeError: ('Keyword argument not understood:', 'data_format')", + "input": "", + "output": "

You might be using a different version between platforms. Updating Keras to the same / latest should resolve the issue, with pip:

\n\n
pip install --upgrade keras\n
\n", + "system": "" + }, + { + "instruction": "ImportError: No module named 'tensorflow.python' with tensorflow-gpu", + "input": "", + "output": "

This solution worked for me:

\n\n

Uninstalling both CPU and GPU versions of TensorFlow and then installing only the GPU version of TensorFlow.

\n\n
pip uninstall tensorflow\npip uninstall tensorflow-gpu\n\npip install tensorflow-gpu\n
\n", + "system": "" + }, + { + "instruction": "Jupyter can't find keras' module", + "input": "", + "output": "

Please try the following:

\n\n

Run these in the jupyter notebook cell:

\n\n
import sys\n\nprint(sys.path)\nprint(sys.executable)\n
\n\n

It may be pointing not to your virtual environment but to the system root.

\n\n

The fix is to install the Jupyter notebook from inside your virtual environment:

\n\n
$ . your_env/bin/activate\n\n(your_env)$ python -m pip install jupyter\n
\n\n

Now you can import tensorflow or keras

\n", + "system": "" + }, + { + "instruction": "Keras: Exception: Received unknown keyword arguments: {'epochs': 100}", + "input": "", + "output": "

The argument was renamed in Keras 2: what is now epochs was called nb_epoch in Keras 1. If you are running Keras 1.x, use nb_epoch instead:

\n\n
model.fit(trainX, trainY, nb_epoch=100, batch_size=1, verbose=2)\n
\n\n

To check your Keras version:

\n\n
import keras\nprint(keras.__version__)\n
\n", + "system": "" + }, + { + "instruction": "Python loop taking more time at each iteration", + "input": "", + "output": "

I've seen this quite a few times when preprocessing data. Typically, in my experience, memory usage creeps up after each iteration, with each subsequent iteration slowing down slightly.

\n

I find that the easiest way to solve this is to separate the tasks into different processes and then use an orchestration process to manage the program flow.

\n

When each task is completed, the associated process is culled and your resources can continue to be allocated to the next task in the flow. This is most helpful for keeping long-running processes crisp.

\n

You could structure the process in this way:

\n
Parent Process\n     |_ Pickle Input to Child Proc\n     |_ Trigger Child Proc\n            |_ Collect Input\n            |_ Complete Task\n            |_ Pickle Output\n     |_ Collect Output\n\n\n\nParent Process -> pickle input -> Child Process\n      ^                              |\n      |                              |\n      ----------------pickle output <-\n
\n

One of the things you can do to manage the task flow is to create an id and use it to create an empty file, then pass that id to the child process; once the work is complete, the child process deletes the file. This is a simple and convenient way for the parent process to know that a child process has finished.
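The orchestration described above can be sketched with Python's multiprocessing module. This is a minimal sketch: run_task is a hypothetical stand-in for one preprocessing task, and each task runs in a short-lived worker process whose memory is reclaimed when the worker exits.

```python
import multiprocessing as mp

def run_task(chunk):
    # Hypothetical per-chunk work; all memory it allocates is freed
    # when the worker process is culled.
    return sum(x * x for x in chunk)

def orchestrate(chunks):
    # The parent process only coordinates: inputs and outputs are pickled
    # automatically on their way to and from the worker.
    results = []
    for chunk in chunks:
        with mp.Pool(processes=1) as pool:
            results.append(pool.apply(run_task, (chunk,)))
    return results

if __name__ == "__main__":
    print(orchestrate([[1, 2], [3, 4]]))  # [5, 25]
```

Spawning one pool per task trades some startup overhead for a guarantee that no task's memory leaks into the next iteration.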

\n", + "system": "" + }, + { + "instruction": "Defining an AUC metric for Keras to support evaluation of validation dataset", + "input": "", + "output": "

Here are the tricks that I often use. Basically, this allows you to use whatever existing metrics in sklearn

\n\n
from sklearn.metrics import roc_auc_score\nimport tensorflow as tf\ndef auc( y_true, y_pred ) :\n    score = tf.py_func( lambda y_true, y_pred : roc_auc_score( y_true, y_pred, average='macro', sample_weight=None).astype('float32'),\n                        [y_true, y_pred],\n                        'float32',\n                        stateful=False,\n                        name='sklearnAUC' )\n    return score\n
\n\n

Now we can create a simple model to verify this metric.

\n\n
import numpy as np\nfrom keras.layers import Input, Dense\nfrom keras.models import Model\n\nx = Input(shape=(100,))\ny = Dense(10, activation='sigmoid')(x)\nmodel = Model(inputs=x, outputs=y)\nmodel.compile( 'sgd', loss='binary_crossentropy', metrics=[auc] )\nprint(model.summary())\n\n\na = np.random.randn(1000,100)\nb = np.random.randint(low=0,high=2,size=(1000,10))\nmodel.fit( a, b )\n
\n", + "system": "" + }, + { + "instruction": "Many to many sequence prediction with different sequence length", + "input": "", + "output": "

After asking this question on the Keras Github page, I got an answer, which I post here for completeness.

\n\n

The solution is to use a second LSTM layer, after shaping the output with RepeatVector to the desired number of output steps.

\n\n
model = Sequential()  \nmodel.add(LSTM(input_dim=1, output_dim=hidden_neurons, return_sequences=False))  \nmodel.add(RepeatVector(10))\nmodel.add(LSTM(output_dim=hidden_neurons, return_sequences=True))  \nmodel.add(TimeDistributed(Dense(1)))\nmodel.add(Activation('linear'))   \nmodel.compile(loss='mean_squared_error', optimizer='rmsprop')  \n
\n\n

The predictions are looking better now.

\n", + "system": "" + }, + { + "instruction": "How to use OpenCV functions in Keras Lambda Layer?", + "input": "", + "output": "

You are confusing the symbolic operations in the Lambda layer with the numerical operations in a plain Python function.

\n\n

Basically, your custom operation accepts numerical inputs but not symbolic ones. To fix this, you need something like py_func in TensorFlow.

\n\n

In addition, you have not considered the backpropagation. In short, although this layer is non-parametric and non-learnable, you need to take care of its gradient as well.

\n\n
import numpy as np\nimport tensorflow as tf\nfrom keras.layers import Input, Conv2D, Lambda, Layer\nfrom keras.models import Model\nfrom keras import backend as K\nimport cv2\n\ndef image_func(img):\n    img=cv2.cvtColor(img,cv2.COLOR_BGR2YUV) \n    img=cv2.resize(img,(200,66))\n    return img.astype('float32')\n\ndef image_tensor_func(img4d) :\n    results = []\n    for img3d in img4d :\n        rimg3d = image_func(img3d )\n        results.append( np.expand_dims( rimg3d, axis=0 ) )\n    return np.concatenate( results, axis = 0 )\n\nclass CustomLayer( Layer ) :\n    def call( self, xin )  :\n        xout = tf.py_func( image_tensor_func, \n                           [xin],\n                           'float32',\n                           stateful=False,\n                           name='cvOpt')\n        xout = K.stop_gradient( xout ) # explicitly set no grad\n        xout.set_shape( [xin.shape[0], 66, 200, xin.shape[-1]] ) # explicitly set output shape\n        return xout\n    def compute_output_shape( self, sin ) :\n        return ( sin[0], 66, 200, sin[-1] )\n\nx = Input(shape=(None,None,3))\nf = CustomLayer(name='custom')(x)\ny = Conv2D(1,(1,1), padding='same')(f)\n\nmodel = Model( inputs=x, outputs=y )\nprint(model.summary())\n
\n\n

Now you can test this layer with some dummy data.

\n\n
a = np.random.randn(2,100,200,3)\nb = model.predict(a)\nprint(b.shape)\n\nmodel.compile('sgd',loss='mse')\nmodel.fit(a,b)\n
\n", + "system": "" + }, + { + "instruction": "keras model.fit_generator() several times slower than model.fit()", + "input": "", + "output": "

You may want to check out the workers and max_queue_size parameters of fit_generator() in the documentation. Essentially, more workers create more threads for loading the data into the queue that feeds data to your network. There is a chance that filling the queue might cause memory problems, though, so you might want to decrease max_queue_size to avoid this.

\n", + "system": "" + }, + { + "instruction": "Keras: Tokenizer with fit_generator() on text data", + "input": "", + "output": "

So basically you can define a text generator and feed it to the fit_on_texts method in the following way:

\n\n
    \n
  1. Assuming that you have texts_generator, which partially reads your data from disk and returns an iterable collection of texts, you may define:

    \n\n
    def text_generator(texts_generator):\n        for texts in texts_generator:\n            for text in texts:\n                yield text\n
    \n\n

    Please note that this generator should stop after reading the whole of the data from disk, which may require you to change the original generator you want to use in model.fit_generator.

  2. \n
  3. Once you have the generator from step 1, you can simply apply the tokenizer.fit_on_texts method:

    \n\n
    tokenizer.fit_on_texts(text_generator)\n
  4. \n
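The flattening generator from step 1 can be sanity-checked with a small in-memory stand-in for texts_generator:

```python
def text_generator(texts_generator):
    # Flattens batches of texts into a stream of individual texts.
    for texts in texts_generator:
        for text in texts:
            yield text

# An in-memory stand-in for a generator that reads batches from disk.
batches = [["hello world", "foo"], ["bar"]]
print(list(text_generator(iter(batches))))  # ['hello world', 'foo', 'bar']
```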
\n", + "system": "" + }, + { + "instruction": "Keras intermediate layers output", + "input": "", + "output": "

The issue arises from the fact, as the OP suggested, that the layer with index 0 (i.e. model.layers[0]) corresponds to the input layer: \"when using the functional API layer 0 is the input itself. And so everything is shifted one position forward.\"

\n\n

Note: this answer is posted as community wiki as suggested in accepted answer of \"Question with no answers, but issue solved in the comments (or extended in chat)\".

\n", + "system": "" + }, + { + "instruction": "TypeError: 'Tensor' object is not callable", + "input": "", + "output": "

Both the get_output and get_input methods return either a Theano or a TensorFlow tensor. It's not callable because of the nature of these objects.

\n\n

In order to compile a function you should provide only layer tensors plus a special Keras tensor called learning_phase, which indicates the mode in which your model should be called.

\n\n

Following this answer your function should look like this:

\n\n
convout1_f = K.function([model.input, K.learning_phase()], convout1.get_output)\n
\n\n

Remember that you need to pass either True or False when calling your function, in order to run your model's computations in training or test mode respectively.

\n", + "system": "" + }, + { + "instruction": "Keras Implementation of Customized Loss Function that need internal layer output as label", + "input": "", + "output": "

I have figured out a way out, in case anyone is searching for the same, I posted here (based on the network given in this post):

\n\n

The idea is to define the customized loss function and use it as the output of the network. (Notation: A is the true label of variable A, and A' is the predicted value of variable A)

\n\n
def customized_loss(args):\n    #A is from the training data\n    #S is the internal state\n    #(A_p and S_p stand for the predicted values A' and S')\n    A, A_p, S, S_p = args\n    #customize your own loss components\n    loss1 = K.mean(K.square(A - A_p), axis=-1)\n    loss2 = K.mean(K.square(S - S_p), axis=-1)\n    #adjust the weight between loss components\n    return 0.5 * loss1 + 0.5 * loss2\n\ndef model():\n    #define other inputs\n    A = Input(...) # define input A\n    #construct your model\n    cnn_model = Sequential()\n    ...\n    # get true internal state\n    S = cnn_model(prev_layer_output0)\n    # get predicted internal state output\n    S_p = Dense(...)(prev_layer_output1)\n    # get predicted A output\n    A_p = Dense(...)(prev_layer_output2)\n    # customized loss function\n    loss_out = Lambda(customized_loss, output_shape=(1,), name='joint_loss')([A, A_p, S, S_p])\n    model = Model(input=[...], output=[loss_out])\n    return model\n\ndef train():\n    m = model()\n    opt = 'adam'\n    m.compile(loss={'joint_loss': lambda y_true, y_pred: y_pred}, optimizer=opt)\n    # train the model \n    ....\n
\n", + "system": "" + }, + { + "instruction": "Keras ImageDataGenerator Slow", + "input": "", + "output": "

I assume you already might have solved this, but nevertheless...

\n\n

Keras image preprocessing has the option of saving the results by setting the save_to_dir argument in the flow() or flow_from_directory() function:

\n\n

https://keras.io/preprocessing/image/

\n", + "system": "" + }, + { + "instruction": "Center Loss in Keras", + "input": "", + "output": "

In my opinion, you can implement this by following these steps:

\n\n
    \n
  1. write a custom layer ComputeCenter that

    \n\n
  2. \n
  3. To compute the center loss, you need to

    \n\n
  4. \n
  5. use model.add_loss() to compute this loss. Note, don't add this loss in model.compile( loss = ... ).

  6. \n
\n\n

Finally, you may add some loss coefficient to the center-loss if needed.

\n", + "system": "" + }, + { + "instruction": "Do you need to standardize inputs if you are using Batch Normalization?", + "input": "", + "output": "

While you can certainly use it for that, batch normalization is not designed to do that and you will most likely introduce sampling error in your normalization due to the limited sample size (sample size is your batch size).

\n\n

Another factor for why I would not recommend using batch normalization for normalizing your inputs is that it introduces the correction terms gamma and beta (trained parameters) which will skew your training data if not disabled.

\n\n

For normalization of your test data I would recommend using z-score normalization on the complete training set (e.g., via sklearn's StandardScaler) or some appropriate alternative, but not batch normalization.
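A minimal sketch of the recommended approach, computing z-scores from the full training set only (this is the same computation sklearn's StandardScaler performs; the arrays here are made up):

```python
import numpy as np

train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
test = np.array([[2.0, 20.0]])

mean = train.mean(axis=0)  # per-feature mean, from training data only
std = train.std(axis=0)    # per-feature std, from training data only

train_norm = (train - mean) / std
test_norm = (test - mean) / std  # test data reuses the training statistics

print(train_norm.mean(axis=0))  # ~[0, 0]
print(test_norm)                # [[0, 0]] here, since the test row equals the mean
```

The key point, in contrast to batch normalization, is that the statistics come from the whole training set and are frozen before being applied to validation or test data.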

\n", + "system": "" + }, + { + "instruction": "In Keras when does LSTM state reset in the call to model.predict?", + "input": "", + "output": "

I appreciate this is an old question, but hope that this answer can help other Keras beginners like me.

\n\n

I ran this example on my machine and observed that the hidden states and cell states of the LSTM were indeed changing with a call to model.predict.

\n\n
import numpy as np\nimport keras.backend as K\nfrom keras.models import Model\nfrom keras.layers import Input, LSTM\n\nbatch_size = 1\ntimestep_size = 2\nnum_features = 4\n\ninputs = Input(batch_shape=(batch_size, timestep_size, num_features))\nx = LSTM(num_features, stateful=True)(inputs)\n\nmodel = Model(inputs=inputs, outputs=x)\nmodel.compile(loss=\"mse\",\n              optimizer=\"rmsprop\",\n              metrics=[\"accuracy\"])\n\nx = np.random.random((10, timestep_size, num_features))\ny = np.ones((10, num_features))\nmodel.fit(x, y, epochs=100, batch_size=1)\n\ndef get_internal_state(model):\n    # get the internal state of the LSTM\n    # see https://github.com/fchollet/keras/issues/218\n    h, c = [K.get_value(s) for s, _ in model.state_updates]\n    return h, c\n\nprint(\"After fitting:\", get_internal_state(model))\n\nfor i in range(3):\n    x = np.random.random((10, timestep_size, num_features))\n    model.predict(x)\n    print(\"After predict:\", get_internal_state(model))\n
\n\n

Here's a sample of the output of the calls to get_internal_state after training:

\n\n
After_fitting: (array([[ 1.,  1.,  1.,  1.]], dtype=float32), array([[  11.33725166,   11.8036108 ,  181.75688171,   25.50110626]], dtype=float32))\nAfter predict (array([[ 1.        ,  0.99999994,  1.        ,  1.        ]], dtype=float32), array([[   9.26870918,    8.83847237,  179.92633057,   28.89341927]], dtype=float32))\nAfter predict (array([[ 0.99999571,  0.9992013 ,  1.        ,  0.9915328 ]], dtype=float32), array([[   6.5174489 ,    8.55165958,  171.42166138,   25.49199104]], dtype=float32))\nAfter predict (array([[ 1.,  1.,  1.,  1.]], dtype=float32), array([[   9.78496075,    9.27927303,  169.95401001,   28.74017715]], dtype=float32))\n
\n", + "system": "" + }, + { + "instruction": "Implementing a Siamese NN in Keras", + "input": "", + "output": "

As mentioned by Matias Valdenegro, Keras already has an example of Siamese network. The example uses only dense layers, though.

\n\n

Your problem is that you need to add a Flatten layer between the convolutional layers and the dense layers to have a correct shape, see this Keras CNN example

\n\n

These 2 examples should help you build your Siamese network.

\n", + "system": "" + }, + { + "instruction": "Obtaining a prediction in Keras", + "input": "", + "output": "

Softmax might yield \"one-hot\" like output. Consider the following example:

\n\n
# Input; Exponent; Softmax value \n20    485165195  0.99994\n 9         8103  0.00002\n 5          148  0.00000\n10        22026  0.00005\n------------------------\n# Sum 485195473  1\n
\n\n

Since the exponential function grows very fast, softmax starts yielding one-hot-like output once the inputs are on the order of magnitude of 10, as in the example above. In the Keras implementation of the softmax function the maximum value is subtracted from the input, but in the case stated above that won't make any difference.

\n\n

Possible ways to fix this:

\n\n
    \n
  1. Make sure that input images are rescaled, so that pixel values are between 0 and 1.

  2. \n
  3. Add some regularizers to your model.

  4. \n
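The one-hot effect described above is easy to reproduce with a plain NumPy softmax, using the same inputs as the table:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability (as Keras does);
    # it does not change the result.
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([20.0, 9.0, 5.0, 10.0]))
print(probs)  # nearly one-hot: the first entry is ~0.99994
```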
\n", + "system": "" + }, + { + "instruction": "keras: how to predict classes in order?", + "input": "", + "output": "

Look at the source code of flow_from_directory. In my case, I had to rename all images. They were named 1.jpg .. 1000.jpg, but to be in order, they had to be named 0001.jpg .. 1000.jpg. The sorting is important here.

\n\n

flow_from_directory uses sorted(os.listdir(directory)), thus the sorting is not always intuitive.
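The lexicographic-vs-numeric ordering problem, and the zero-padding fix, can be seen directly (the filenames here are made up):

```python
import os

names = ["1.jpg", "10.jpg", "2.jpg"]
print(sorted(names))  # ['1.jpg', '10.jpg', '2.jpg'] -- not numeric order

# Zero-pad the numeric part so lexicographic sort matches numeric order.
padded = ["{:04d}.jpg".format(int(os.path.splitext(n)[0])) for n in names]
print(sorted(padded))  # ['0001.jpg', '0002.jpg', '0010.jpg']
```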

\n", + "system": "" + }, + { + "instruction": "nvcc fatal : Cannot find compiler 'cl.exe' in PATH although Visual Studio 12.0 is added to PATH", + "input": "", + "output": "

I had the same problem. I'm using 64 bit Windows 8.1 and I had to add the following to my path and now it works fine:

\n
C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin\\amd64\n\nC:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin\\amd64\\cl.exe\n
\n

Hope this helps

\n", + "system": "" + }, + { + "instruction": "Python keras how to transform a dense layer into a convolutional layer", + "input": "", + "output": "

Still looking for a solution? Here it is:

\n\n
new_conv_weights = dense_weights.transpose(1,0).reshape(new_conv_shape)[:,:,::-1,::-1]\n
\n\n

in your case:

\n\n
weights[0] = weights[0].transpose(1,0).reshape((4096,512,7,7))[:,:,::-1,::-1]\n
\n\n

The tricky part is the flipping of the conv filters: [:,:,::-1,::-1]. Theano performs convolution, not correlation (unlike e.g. caffe). Hence, in Keras a filter like:

\n\n
1 0\n0 0\n
\n\n

applied to matrix:

\n\n
1 2 3 4 5\n6 7 8 9 0\n1 2 3 4 5\n
\n\n

results in matrix:

\n\n
7 8 9 0 \n2 3 4 5\n
\n\n

not this, as one would expect with correlation:

\n\n
1 2 3 4\n6 7 8 9\n
\n\n

In order to make things work as expected, you need to rotate the filters by 180 degrees. I just solved this problem myself; hopefully this will be of help to you or to others. Cheers.
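The correlation-vs-convolution difference above can be verified with plain NumPy (correlate2d_valid is a naive helper written just for this check):

```python
import numpy as np

def correlate2d_valid(img, k):
    # Naive 'valid' 2D cross-correlation: slide the kernel, no flipping.
    kh, kw = k.shape
    return np.array([[np.sum(img[i:i + kh, j:j + kw] * k)
                      for j in range(img.shape[1] - kw + 1)]
                     for i in range(img.shape[0] - kh + 1)])

img = np.array([[1, 2, 3, 4, 5],
                [6, 7, 8, 9, 0],
                [1, 2, 3, 4, 5]])
k = np.array([[1, 0],
              [0, 0]])

print(correlate2d_valid(img, k))             # correlation: [[1 2 3 4], [6 7 8 9]]
print(correlate2d_valid(img, k[::-1, ::-1])) # convolution = correlation with a 180-deg flipped kernel
```

The second print reproduces the convolution result from the answer ([[7 8 9 0], [2 3 4 5]]), confirming that flipping the kernel is exactly what separates the two operations.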

\n", + "system": "" + }, + { + "instruction": "Keras: ImportError: No module named data_utils", + "input": "", + "output": "

This answer is correct but not complete. Thanks to Ben J.'s answer, but Tadhg McDonald-Jensen was the first one offering me the answers here.

\n\n

To summarize:

\n\n

I was using pip install keras to install Keras, but it did not install the latest version of Keras according to this. That is why I could do things like from keras.models import Sequential, from keras.layers.core import Dense, Activation, Dropout, and from keras.layers.recurrent import LSTM, but not from keras.utils.data_utils import get_file, because it is not in the earlier versions.

\n\n

So, just clone Keras from their GitHub, cd into it, and run sudo python setup.py install; that will solve this problem.

\n\n

Remember, if you already ran pip install keras, make sure you clear all installed Keras versions by running pip uninstall keras repeatedly until no Keras installation remains, then run sudo python setup.py install.

\n", + "system": "" + }, + { + "instruction": "Type ERROR when upgrading to tensorflow 2.9", + "input": "", + "output": "

This should be a comment instead of an answer, but I don't have enough reputation for that. I have seen the same type of error message appear in the output of the TensorFlow guides here https://www.tensorflow.org/guide/migrate/evaluator and here https://www.tensorflow.org/guide/migrate/migrating_feature_columns . You can easily find the lines with the error by searching for "type inference failed" after following the links.

\n", + "system": "" + }, + { + "instruction": "AssertionError: Tried to export a function which references untracked resource", + "input": "", + "output": "

Your issue is not related to 'transformer_transducer/transducer_encoder/inputs_embedding/ convolution_stack/conv2d/kernel:0'.
\nThe error message tells you that this element is referring to a non-trackable element. It seems the non-trackable object is not directly assigned to an attribute of this conv2d/kernel:0.

\n

To solve your issue, we need to localize Tensor("77040:0", shape=(), dtype=resource) from this error code:

\n
AssertionError: Tried to export a function which references untracked resource\\\nTensor("77040:0", shape=(), dtype=resource). \nTensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.\n
\n

Edit:

\n

Thanks to your comments, we found that "ConvolutionStack" seems to reproduce the error.

\n
\n

The problem only occurs if I use the ConvolutionStack layer in InputsEmbedding but I can save both of them successfully in a standalone model.

\n
\n

I understand you cannot share the config of this layer, and that's why I suggest you try to localize this Tensor("77040:0") from the ConvolutionStack.

\n

This untrackable tensor must be an artifact or a temporary tensor created by a process of a function of ConvolutionStack.

\n

Try to find a tensor that could be passed from one function to another instead of being assigned to an attribute of a layer's class.

\n", + "system": "" + }, + { + "instruction": "In Keras what is the difference between Conv2DTranspose and Conv2D", + "input": "", + "output": "

Conv2D applies a convolution operation to the input. Conv2DTranspose, on the contrary, applies a transposed convolution (often called a deconvolution) to the input.

\n\n
x = tf.random.uniform((1,3,3,1))\nconv2d = tf.keras.layers.Conv2D(1,2)(x)\nprint(conv2d.shape)\n# (1, 2, 2, 1)\nconv2dTranspose = tf.keras.layers.Conv2DTranspose(1,2)(x)\nprint(conv2dTranspose.shape)\n# (1, 4, 4, 1)\n
\n

To sum up:

\n\n

In short: Conv2D shrinks the spatial dimensions of its input (downsampling), while Conv2DTranspose enlarges them (upsampling).

\n

And if you want to know how Conv2DTranspose enlarges its input: conceptually, it pads the input (and, for strides greater than 1, interleaves zeros into it) and then applies a regular convolution.

\n

For example:

\n
kernel = tf.constant_initializer(1.)\nx = tf.ones((1,3,3,1))\nconv = tf.keras.layers.Conv2D(1,2, kernel_initializer=kernel)\ny = tf.ones((1,2,2,1))\nde_conv = tf.keras.layers.Conv2DTranspose(1,2, kernel_initializer=kernel)\n\nconv_output = conv(x)\nprint("Convolution\\n---------")\nprint("input  shape:",x.shape)\nprint("output shape:",conv_output.shape)\nprint("input  tensor:",np.squeeze(x.numpy()).tolist())\nprint("output tensor:",np.around(np.squeeze(conv_output.numpy())).tolist())\n'''\nConvolution\n---------\ninput  shape: (1, 3, 3, 1)\noutput shape: (1, 2, 2, 1)\ninput  tensor: [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]\noutput tensor: [[4.0, 4.0], [4.0, 4.0]]\n'''\nde_conv_output = de_conv(y)\nprint("De-Convolution\\n------------")\nprint("input  shape:",y.shape)\nprint("output shape:",de_conv_output.shape)\nprint("input  tensor:",np.squeeze(y.numpy()).tolist())\nprint("output tensor:",np.around(np.squeeze(de_conv_output.numpy())).tolist())\n'''\nDe-Convolution\n------------\ninput  shape: (1, 2, 2, 1)\noutput shape: (1, 3, 3, 1)\ninput  tensor: [[1.0, 1.0], [1.0, 1.0]]\noutput tensor: [[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]]\n'''\n
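For intuition, the 3x3 de-convolution output above can be reproduced with a naive NumPy implementation of stride-1 transposed convolution (conv2d_transpose_valid is written just for this check):

```python
import numpy as np

def conv2d_transpose_valid(x, k):
    # Transposed convolution with stride 1: each input element scatters a copy
    # of the kernel, scaled by that element, into the output.
    xh, xw = x.shape
    kh, kw = k.shape
    out = np.zeros((xh + kh - 1, xw + kw - 1))
    for i in range(xh):
        for j in range(xw):
            out[i:i + kh, j:j + kw] += x[i, j] * k
    return out

x = np.ones((2, 2))
k = np.ones((2, 2))
print(conv2d_transpose_valid(x, k))
# [[1. 2. 1.]
#  [2. 4. 2.]
#  [1. 2. 1.]]
```

This matches the Conv2DTranspose output tensor in the example above: the centre receives contributions from all four input elements, the edges from two, and the corners from one.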
\n", + "system": "" + }, + { + "instruction": "How to implement Grad-CAM on a trained network", + "input": "", + "output": "

One thing I don't get: if you have your own classifier (2), why then use imagenet_utils.decode_predictions? I'm not sure whether my following answer will satisfy you, but here are some pointers.

\n

DataSet

\n
import tensorflow as tf\nimport numpy as np \n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()\n\n# train set / data \nx_train = x_train.astype('float32') / 255\n# train set / target \ny_train = tf.keras.utils.to_categorical(y_train, num_classes=10)\n\n# validation set / data \nx_test = x_test.astype('float32') / 255\n# validation set / target \ny_test = tf.keras.utils.to_categorical(y_test, num_classes=10)\n\nprint(x_train.shape, y_train.shape)\nprint(x_test.shape, y_test.shape)  \n# (50000, 32, 32, 3) (50000, 10)\n# (10000, 32, 32, 3) (10000, 10)\n
\n

Model

\n
input = tf.keras.Input(shape=(32,32,3))\nefnet = tf.keras.applications.EfficientNetB0(weights='imagenet',\n                                             include_top = False, \n                                             input_tensor = input)\n# Now that we apply global max pooling.\ngap = tf.keras.layers.GlobalMaxPooling2D()(efnet.output)\n\n# Finally, we add a classification layer.\noutput = tf.keras.layers.Dense(10, activation='softmax')(gap)\n\n# bind all\nfunc_model = tf.keras.Model(efnet.input, output)\n
\n

Compile and Run

\n
func_model.compile(\n          loss      = tf.keras.losses.CategoricalCrossentropy(),\n          metrics   = tf.keras.metrics.CategoricalAccuracy(),\n          optimizer = tf.keras.optimizers.Adam())\n# fit \nfunc_model.fit(x_train, y_train, batch_size=128, epochs=15, verbose = 2)\n\nEpoch 14/15\n391/391 - 13s - loss: 0.1479 - categorical_accuracy: 0.9491\nEpoch 15/15\n391/391 - 13s - loss: 0.1505 - categorical_accuracy: 0.9481\n
\n

Grad CAM

\n

Same as your set up.

\n
from tensorflow.keras.models import Model\nimport tensorflow as tf\nimport numpy as np\nimport cv2\n\nclass GradCAM:\n    def __init__(self, model, classIdx, layerName=None):\n        # store the model, the class index used to measure the class\n        # activation map, and the layer to be used when visualizing\n        # the class activation map\n        self.model = model\n        self.classIdx = classIdx\n        self.layerName = layerName\n        # if the layer name is None, attempt to automatically find\n        # the target output layer\n        if self.layerName is None:\n            self.layerName = self.find_target_layer()\n\n    def find_target_layer(self):\n        # attempt to find the final convolutional layer in the network\n        # by looping over the layers of the network in reverse order\n        for layer in reversed(self.model.layers):\n            # check to see if the layer has a 4D output\n            if len(layer.output_shape) == 4:\n                return layer.name\n        # otherwise, we could not find a 4D layer so the GradCAM\n        # algorithm cannot be applied\n        raise ValueError("Could not find 4D layer. Cannot apply GradCAM.")\n\n    def compute_heatmap(self, image, eps=1e-8):\n        # construct our gradient model by supplying (1) the inputs\n        # to our pre-trained model, (2) the output of the (presumably)\n        # final 4D layer in the network, and (3) the output of the\n        # softmax activations from the model\n        gradModel = Model(\n            inputs=[self.model.inputs],\n            outputs=[self.model.get_layer(self.layerName).output, self.model.output])\n\n        # record operations for automatic differentiation\n        with tf.GradientTape() as tape:\n            # cast the image tensor to a float-32 data type, pass the\n            # image through the gradient model, and grab the loss\n            # associated with the specific class index\n            inputs = tf.cast(image, tf.float32)\n            (convOutputs, predictions) = gradModel(inputs)\n            loss = predictions[:, tf.argmax(predictions[0])]\n\n        # use automatic differentiation to compute the gradients\n        grads = tape.gradient(loss, convOutputs)\n\n        # compute the guided gradients\n        castConvOutputs = tf.cast(convOutputs > 0, "float32")\n        castGrads = tf.cast(grads > 0, "float32")\n        guidedGrads = castConvOutputs * castGrads * grads\n        # the convolution and guided gradients have a batch dimension\n        # (which we don't need) so let's grab the volume itself and\n        # discard the batch\n        convOutputs = convOutputs[0]\n        guidedGrads = guidedGrads[0]\n\n        # compute the average of the gradient values, and using them\n        # as weights, compute the ponderation of the filters with\n        # respect to the weights\n        weights = tf.reduce_mean(guidedGrads, axis=(0, 1))\n        cam = tf.reduce_sum(tf.multiply(weights, convOutputs), axis=-1)\n\n        # grab the spatial dimensions of the input image and resize\n        # the output class activation map to match the input image\n        # dimensions\n        (w, h) = (image.shape[2], image.shape[1])\n        heatmap = cv2.resize(cam.numpy(), (w, h))\n        # normalize the heatmap such that all values lie in the range\n        # [0, 1], scale the resulting values to the range [0, 255],\n        # and then convert to an unsigned 8-bit integer\n        numer = heatmap - np.min(heatmap)\n        denom = (heatmap.max() - heatmap.min()) + eps\n        heatmap = numer / denom\n        heatmap = (heatmap * 255).astype("uint8")\n        # return the resulting heatmap to the calling function\n        return heatmap\n\n    def overlay_heatmap(self, heatmap, image, alpha=0.5,\n                        colormap=cv2.COLORMAP_VIRIDIS):\n        # apply the supplied color map to the heatmap and then\n        # overlay the heatmap on the input image\n        heatmap = cv2.applyColorMap(heatmap, colormap)\n        output = cv2.addWeighted(image, alpha, heatmap, 1 - alpha, 0)\n        # return a 2-tuple of the color mapped heatmap and the output,\n        # overlaid image\n        return (heatmap, output)\n
\n

Prediction

\n
image = cv2.imread('/content/dog.jpg')\nimage = cv2.resize(image, (32, 32))\nimage = image.astype('float32') / 255\nimage = np.expand_dims(image, axis=0)\n\npreds = func_model.predict(image) \ni = np.argmax(preds[0])\n
\n

To get the layer names of the model

\n
for idx in range(len(func_model.layers)):\n  print(func_model.get_layer(index = idx).name)\n\n# we picked the `block5c_project_conv` layer\n
\n

Passing to GradCAM class

\n
icam = GradCAM(func_model, i, 'block5c_project_conv') \nheatmap = icam.compute_heatmap(image)\nheatmap = cv2.resize(heatmap, (32, 32))\n\nimage = cv2.imread('/content/dog.jpg')\nimage = cv2.resize(image, (32, 32))\nprint(heatmap.shape, image.shape)\n\n(heatmap, output) = icam.overlay_heatmap(heatmap, image, alpha=0.5)\n
\n

Visualization

\n
fig, ax = plt.subplots(1, 3)\n\nax[0].imshow(heatmap)\nax[1].imshow(image)\nax[2].imshow(output)\n
\n

\"enter

\n

Ref. Grad-CAM class activation visualization

\n", + "system": "" + }, + { + "instruction": "Speed up the initial TensorFlow startup", + "input": "", + "output": "

For your inference problem, you'll probably want a longer-lived process that you can request inference results from, maybe over HTTP, gRPC, XML-RPC, named pipes, reading files from a directory...?

\n
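A minimal sketch of the long-lived-process idea, using only the standard library (the XML-RPC transport, the port number, and the toy model stand-in are illustrative assumptions, not part of the original setup):

```python
# Long-lived inference process: pay the heavy startup cost (imports, model
# loading) once, then answer many cheap requests.
from xmlrpc.server import SimpleXMLRPCServer
import threading
import xmlrpc.client

# Stand-in for an expensive one-time setup, e.g. `import keras` + load_model().
model = lambda xs: [v * 2 for v in xs]

server = SimpleXMLRPCServer(("127.0.0.1", 8736), logRequests=False)
server.register_function(model, "predict")
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client only pays the per-request cost, not the startup cost.
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8736")
print(proxy.predict([1, 2, 3]))  # [2, 4, 6]
```

The same pattern works with HTTP, gRPC, or named pipes; the point is that the interpreter and the framework stay resident.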

Failing that, get a faster machine or disk. On my machine, starting a new Python process and importing Keras takes about 2 seconds:

\n
$ pip install tensorflow\nCollecting tensorflow\n  Downloading tensorflow-2.3.1-cp38-cp38-macosx_10_14_x86_64.whl (165.2 MB)\n[...]\nSuccessfully installed absl-py-0.11.0 astunparse-1.6.3 cachetools-4.1.1 chardet-3.0.4 gast-0.3.3 google-auth-1.23.0 google-auth-oauthlib-0.4.2 google-pasta-0.2.0 grpcio-1.33.2 h5py-2.10.0 idna-2.10 keras-preprocessing-1.1.2 markdown-3.3.3 numpy-1.18.5 oauthlib-3.1.0 opt-einsum-3.3.0 packaging-20.4 protobuf-3.13.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.24.0 requests-oauthlib-1.3.0 rsa-4.6 tensorboard-2.3.0 tensorboard-plugin-wit-1.7.0 tensorflow-2.3.1 tensorflow-estimator-2.3.0 termcolor-1.1.0 werkzeug-1.0.1 wrapt-1.12.1\n$ time python -c 'import tensorflow.keras as keras'\n\n________________________________________________________\nExecuted in    2.02 secs   fish           external\n   usr time    2.85 secs  118.00 micros    2.85 secs\n   sys time    0.62 secs  946.00 micros    0.62 secs\n
\n", + "system": "" + }, + { + "instruction": "Reset all weights of Keras model", + "input": "", + "output": "

I wrote a function that reinitializes weights in tensorflow 2.

\n
def reinitialize(model):\n    for l in model.layers:\n        if hasattr(l,"kernel_initializer"):\n            l.kernel.assign(l.kernel_initializer(tf.shape(l.kernel)))\n        if hasattr(l,"bias_initializer"):\n            l.bias.assign(l.bias_initializer(tf.shape(l.bias)))\n        if hasattr(l,"recurrent_initializer"):\n            l.recurrent_kernel.assign(l.recurrent_initializer(tf.shape(l.recurrent_kernel)))\n
\n

It took me way longer than it should have to come up with this, and I tried many things that failed in my specific use case. IMO this should be a standard TF feature.

\n", + "system": "" + }, + { + "instruction": "How to save Keras model as frozen graph?", + "input": "", + "output": "

Freeze_Graph is now gone in Tensorflow 2.0.
You can check it here Tensorflow 2.0 : frozen graph support.

\n\n

Apart from that, the .save method that you already have in your code works:
\n.save is already saving a .pb that is ready for inference.\nAs an alternative, you can also use the code below.

\n\n

You can also use convert_variables_to_constants_v2

\n\n

Below is the sample code.

\n\n
\nimport tensorflow as tf\nimport os\nfrom tensorflow.python.tools import freeze_graph\nfrom tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2\n\nmodel = tf.keras.Sequential()\nmodel.add(tf.keras.layers.Dense(64, input_shape=(1,)))\nmodel.add(tf.keras.layers.Dense(32, activation='relu'))\nmodel.add(tf.keras.layers.Dense(16, activation='relu'))\nmodel.add(tf.keras.layers.Dense(1, activation='softmax'))\nmodel.compile(optimizer='adam', loss='mse')\nmodel.summary()\n\n# Convert Keras model to ConcreteFunction\nfull_model = tf.function(lambda x: model(x))\nfull_model = full_model.get_concrete_function(\n    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype, name=\"yourInputName\"))\n# Get frozen ConcreteFunction\nfrozen_func = convert_variables_to_constants_v2(full_model)\nfrozen_func.graph.as_graph_def()\nlayers = [op.name for op in frozen_func.graph.get_operations()]\nprint(\"-\" * 50)\nprint(\"Frozen model layers: \")\nfor layer in layers:\n    print(layer)\nprint(\"-\" * 50)\nprint(\"Frozen model inputs: \")\nprint(frozen_func.inputs)\nprint(\"Frozen model outputs: \")\nprint(frozen_func.outputs)\n# Save frozen graph from frozen ConcreteFunction to hard drive\ntf.io.write_graph(graph_or_graph_def=frozen_func.graph,\n                  logdir=\"./frozen_models\",\n                  name=\"frozen_graph.pb\",\n                  as_text=False)\n\n### USAGE ##\ndef wrap_frozen_graph(graph_def, inputs, outputs, print_graph=False):\n    def _imports_graph_def():\n        tf.compat.v1.import_graph_def(graph_def, name=\"\")\n\n    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])\n    import_graph = wrapped_import.graph\n\n    print(\"-\" * 50)\n    print(\"Frozen model layers: \")\n    layers = [op.name for op in import_graph.get_operations()]\n    if print_graph == True:\n        for layer in layers:\n            print(layer)\n    print(\"-\" * 50)\n\n    return wrapped_import.prune(\n        
tf.nest.map_structure(import_graph.as_graph_element, inputs),\n        tf.nest.map_structure(import_graph.as_graph_element, outputs))\n\n## Example Usage ###\n# Load frozen graph using TensorFlow 1.x functions\nwith tf.io.gfile.GFile(\"./frozen_models/frozen_graph.pb\", \"rb\") as f:\n    graph_def = tf.compat.v1.GraphDef()\n    loaded = graph_def.ParseFromString(f.read())\n\n# Wrap frozen graph to ConcreteFunctions\nfrozen_func = wrap_frozen_graph(graph_def=graph_def,\n                                inputs=[\"yourInputName:0\"],\n                                outputs=[\"Identity:0\"],\n                                print_graph=True)\nprint(\"-\" * 50)\nprint(\"Frozen model inputs: \")\nprint(frozen_func.inputs)\nprint(\"Frozen model outputs: \")\nprint(frozen_func.outputs)\n# Get predictions for test images\npredictions = frozen_func(yourInputName=tf.constant([[3.]]))\n# Print the prediction for the first image\nprint(\"-\" * 50)\nprint(\"Example prediction reference:\")\nprint(predictions[0].numpy())\n
\n", + "system": "" + }, + { + "instruction": "How to output the second layer of a network?", + "input": "", + "output": "

Looks like you are mixing old keras (before tensorflow 2.0: import keras) and new keras (from tensorflow import keras).

\n\n

Try not to use old keras alongside tensorflow>=2.0 (and do not refer to the old documentation as in your first link), as it is easily confused with the new one (even though nothing about it is strictly illogical):

\n\n
from tensorflow import keras\nfrom keras.models import Model\nprint(Model.__module__) #outputs 'keras.engine.training'\n\nfrom tensorflow.keras.models import Model\nprint(Model.__module__) #outputs 'tensorflow.python.keras.engine.training'\n
\n\n

Behaviour will be highly unstable mixing those two libraries.

\n\n

Once this is done, using an answer from what you tried, with m being your model and my_input_shape being the shape of your model's input, i.e. the shape of one picture (here (28, 28), or (1, 28, 28) if you have batches):

\n\n
from tensorflow import keras as K\nmy_input_data = np.random.rand(*my_input_shape) \nnew_temp_model = K.Model(m.input, m.layers[3].output) #replace 3 with index of desired layer\noutput_of_3rd_layer = new_temp_model.predict(my_input_data) #this is what you want\n
\n\n

If you have one image img you can directly write new_temp_model.predict(img)

\n", + "system": "" + }, + { + "instruction": "Decay parameter of Adam optimizer in Keras", + "input": "", + "output": "

From source code, decay adjusts lr per iterations according to

\n\n
lr = lr * (1. / (1. + decay * iterations))  # simplified\n
\n\n
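To see what that formula does in practice, here is a small sketch (the lr0 and decay values are made up for illustration, not taken from the question):

```python
# Illustrative values; lr0 and decay are assumptions, not the OP's settings.
lr0, decay = 0.001, 1e-4

def effective_lr(iteration):
    # Keras' legacy time-based decay, applied per batch/iteration
    return lr0 * (1.0 / (1.0 + decay * iteration))

for it in (0, 100, 1000, 10000):
    print(it, effective_lr(it))
```

After 10000 iterations with decay=1e-4, the effective learning rate has halved.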

see image below. This is epoch-independent. iterations is incremented by 1 on each batch fit (e.g. each time train_on_batch is called, or for however many batches are in x when calling model.fit(x) - usually len(x) // batch_size batches).

\n\n

To implement what you've described, you can use a callback as below:

\n\n
from keras.callbacks import LearningRateScheduler\ndef decay_schedule(epoch, lr):\n    # decay by 0.1 every 5 epochs; use `% 1` to decay after each epoch\n    if (epoch % 5 == 0) and (epoch != 0):\n        lr = lr * 0.1\n    return lr\n\nlr_scheduler = LearningRateScheduler(decay_schedule)\nmodel.fit(x, y, epochs=50, callbacks=[lr_scheduler])\n
\n\n

The LearningRateScheduler takes a function as an argument, and the function is fed the epoch index and lr at the beginning of each epoch by .fit. It then updates lr according to that function - so on next epoch, the function is fed the updated lr.

\n\n

Also, there is a Keras implementation of AdamW, NadamW, and SGDW, by me - Keras AdamW.

\n\n
\n\n

\n\n
\n\n

Clarification: the very first call to .fit() invokes on_epoch_begin with epoch = 0 - if we don't wish lr to be decayed immediately, we should add a epoch != 0 check in decay_schedule. Then, epoch denotes how many epochs have already passed - so when epoch = 5, the decay is applied.

\n", + "system": "" + }, + { + "instruction": "Cannot clone object <tensorflow.python.keras.wrappers.scikit_learn.KerasClassifier object", + "input": "", + "output": "

This is a scikit-learn bug. You should downgrade scikit-learn:

\n\n
conda install scikit-learn==0.21.2\n
\n\n

It's OK!

\n", + "system": "" + }, + { + "instruction": "How to speed up Tensorflow 2 keras model for inference?", + "input": "", + "output": "

One way to go about it is to optimize your model using Tensorflow with TensorRT (TF-TRT) (https://github.com/tensorflow/tensorrt). However, in Tensorflow 2, models are saved in a folder instead of a single .pb file. This is also the case for TF-TRT optimized models, they are stored in a folder. You can convert your model to TF-TRT as:

\n
from tensorflow.python.compiler.tensorrt import trt_convert as trt\nconverter = tf.experimental.tensorrt.Converter(input_saved_model_dir=saved_model_dir)\nconverter.convert() \nconverter.save("trt_optimized_model") # save it to a dir\n
\n

If you have a requirement that the model needs to be contained in a single file (and do not care about the optimization offered by TF-TRT) you can convert the SavedModel to ONNX. And use ONNX runtime for inference. You can even go one step further here and convert the ONNX file into TensorRT (https://developer.nvidia.com/Tensorrt). This will give you a single optimized file that you can run using TensorRT (note that you cannot run the resulting file with Tensorflow anymore).

\n", + "system": "" + }, + { + "instruction": "Why would the loss decrease while the accuracy stays the same?", + "input": "", + "output": "

Loss and accuracy are indeed connected, but the relationship is not so simple.

\n

Loss drops but accuracy is about the same

\n

Let's say we have 6 samples, our y_true could be:

\n
[0, 0, 0, 1, 1, 1]\n
\n

Furthermore, let's assume our network predicts following probabilities:

\n
[0.9, 0.9, 0.9, 0.1, 0.1, 0.1]\n
\n

This gives us loss equal to ~24.86 and accuracy equal to zero as every sample is wrong.

\n

Now, after parameter updates via backprop, let's say new predictions would be:

\n
[0.6, 0.6, 0.6, 0.4, 0.4, 0.4]\n
\n

One can see those are better estimates of true distribution (loss for this example is 16.58), while accuracy didn't change and is still zero.

\n
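This effect is easy to reproduce in a few lines of NumPy (the exact loss magnitude depends on how the cross-entropy is reduced; the sketch below uses the mean rather than the sums quoted above):

```python
import numpy as np

y_true = np.array([0, 0, 0, 1, 1, 1])

def bce(y, p):
    # mean binary cross-entropy
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def acc(y, p):
    # accuracy with a 0.5 decision threshold
    return np.mean((p > 0.5) == y)

before = np.array([0.9, 0.9, 0.9, 0.1, 0.1, 0.1])
after = np.array([0.6, 0.6, 0.6, 0.4, 0.4, 0.4])

print(bce(y_true, before), acc(y_true, before))  # high loss, accuracy 0.0
print(bce(y_true, after), acc(y_true, after))    # lower loss, accuracy still 0.0
```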

All in all, the relation is more complicated: the network could improve its predictions for some examples while worsening them for others, which keeps accuracy about the same.

\n

Why my network is unable to fit to the data?

\n

Such a situation usually occurs when your data is really complicated (or incomplete) and/or your model is too weak. Here both are the case: financial data prediction has a lot of hidden variables which your model cannot infer. Furthermore, dense layers are not well suited for this task; since each day depends on the previous values, it is a perfect fit for Recurrent Neural Networks. You can find an article about LSTMs and how to use them here (and tons of others over the web).

\n", + "system": "" + }, + { + "instruction": "On fit_generator() / fit() and thread-safety", + "input": "", + "output": "

During my research on this I came across some information answering my questions.

\n

Note: As updated in the question, in newer tensorflow/keras versions (tf > 2) fit_generator() is deprecated. Instead, it is recommended to use fit() with the generator. However, the answer still applies to fit() using a generator as well.\n

\n
\n
\n

1. Does Keras emit this warning only because the generator is not inheriting Sequences, or does Keras also check if a generator is threadsafe in general?

\n
\n

Taken from Keras' gitRepo (training_generators.py) I found in lines 46-52 the following:

\n
use_sequence_api = is_sequence(generator)\nif not use_sequence_api and use_multiprocessing and workers > 1:\n    warnings.warn(\n        UserWarning('Using a generator with `use_multiprocessing=True`'\n                    ' and multiple workers may duplicate your data.'\n                    ' Please consider using the `keras.utils.Sequence'\n                    ' class.'))\n
\n

The definition of is_sequence() taken from training_utils.py in lines 624-635 is:

\n
def is_sequence(seq):\n    """Determine if an object follows the Sequence API.\n    # Arguments\n        seq: a possible Sequence object\n    # Returns\n        boolean, whether the object follows the Sequence API.\n    """\n    # TODO Dref360: Decide which pattern to follow. First needs a new TF Version.\n    return (getattr(seq, 'use_sequence_api', False)\n            or set(dir(Sequence())).issubset(set(dir(seq) + ['use_sequence_api'])))\n
\n

Judging from this piece of code, Keras only checks whether a passed generator is a Keras sequence (or rather uses Keras' Sequence API) and does not check whether a generator is thread-safe in general.\n

\n
\n
\n

2. Is using the approach I choosed as threadsafe as using the generatorClass(Sequence)-version from the Keras-docs?

\n
\n

As Omer Zohar has shown on GitHub, his decorator is thread-safe - I don't see any reason why it shouldn't be as thread-safe for Keras (even though Keras will warn as shown in 1.).\nThe implementation of thread.Lock() can be considered thread-safe according to the docs:

\n
\n

A factory function that returns a new primitive lock object. Once a thread has acquired it, subsequent attempts to acquire it block, until it is released; any thread may release it.

\n
\n

The generator is also picklable, which can be tested like (see this SO-Q&A here for further information):

\n
#Dump yielded data in order to check if picklable\nwith open("test.pickle", "wb") as outfile:\n    for yielded_data in generator(data):\n        pickle.dump(yielded_data, outfile, protocol=pickle.HIGHEST_PROTOCOL)\n
\n

Summing up, I would even suggest implementing threading.Lock() when you extend Keras' Sequence() like:

\n
import threading\n\nclass generatorClass(Sequence):\n\n    def __init__(self, x_set, y_set, batch_size):\n        self.x, self.y = x_set, y_set\n        self.batch_size = batch_size\n        self.lock = threading.Lock()   #Set self.lock\n\n    def __len__(self):\n        return int(np.ceil(len(self.x) / float(self.batch_size)))\n\n    def __getitem__(self, idx):\n        with self.lock:                #Use self.lock\n            batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]\n            batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]\n\n            return ...\n
\n

Edit 24/04/2020:

\n

By using self.lock = threading.Lock() you might run into the following error:

\n
\n

TypeError: can't pickle _thread.lock objects

\n
\n

In case this happens, try replacing with self.lock: inside __getitem__ by with threading.Lock():, and comment out / delete the self.lock = threading.Lock() inside __init__.

\n

It seems there are some problems when storing the lock-object inside a class (see for example this Q&A).\n

\n
\n
\n

3. Are there any other approaches leading to a thread-safe-generator Keras can deal with which are different from these two examples?

\n
\n

During my research I did not encounter any other method.\nOf course I cannot say this with 100% certainty.

\n", + "system": "" + }, + { + "instruction": "Tensorflow/keras: "logits and labels must have the same first dimension" How to squeeze logits or expand labels?", + "input": "", + "output": "

No, you got the cause all wrong. You are giving one-hot encoded labels, but sparse_categorical_crossentropy expects integer labels, as it does the one-hot encoding itself (hence, sparse).

\n\n

An easy solution would be to change loss to categorical_crossentropy, not the sparse version. Also note that y_true with shape (7,) is incorrect, it should be (1, 7).

\n", + "system": "" + }, + { + "instruction": "Resume training with different loss function", + "input": "", + "output": "
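A small sketch of the two possible fixes (the 7-class shapes mirror the question; the label value is made up for illustration):

```python
import numpy as np

num_classes = 7
y_onehot = np.array([[0, 0, 0, 1, 0, 0, 0]])  # shape (1, 7): one-hot, batch of 1
y_sparse = np.argmax(y_onehot, axis=-1)       # shape (1,): integer label, here [3]

# Option A: keep one-hot labels  -> loss='categorical_crossentropy'
# Option B: use y_sparse instead -> loss='sparse_categorical_crossentropy'
print(y_onehot.shape, y_sparse)  # (1, 7) [3]
```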

My answers:\na) Yes, and you should probably make your own learning rate scheduler in order to keep control of it:

\n\n
keras.callbacks.LearningRateScheduler(schedule, verbose=0)\n
\n\n

b) Yes, you can create your own loss function, including one that fluctuates between two different loss methods. See \"Advanced Keras\u200a\u2014\u200aConstructing Complex Custom Losses and Metrics\":\nhttps://towardsdatascience.com/advanced-keras-constructing-complex-custom-losses-and-metrics-c07ca130a618

\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'keras'", + "input": "", + "output": "

I think you are using an old version of TensorFlow. Try upgrading it:

\n\n
! pip install tensorflow --upgrade\n
\n", + "system": "" + }, + { + "instruction": "Keras - How to use argmax for predictions", + "input": "", + "output": "

You just have to index categories with the result of np.argmax:

\n\n
pred_name = CATEGORIES[np.argmax(prediction)]\nprint(pred_name)\n
\n", + "system": "" + }, + { + "instruction": "How to create a sparse layer in Keras (i.e. not all neurons are connected to each other)?", + "input": "", + "output": "

Have you tried adding dropout? This will randomly zero out a subset of a layer's activations during each training update, which sounds like what you want. It is one of many decent methods for combating overfitting.

\n

https://keras.io/api/layers/regularization_layers/dropout/

\n", + "system": "" + }, + { + "instruction": "Is deep learning bad at fitting simple non linear functions outside training scope (extrapolating)?", + "input": "", + "output": "
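For intuition, here is what dropout does to a layer's outputs, sketched in plain NumPy (the rate and shapes are arbitrary; Keras' Dropout layer does the equivalent of this "inverted dropout" during training):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.5                     # fraction of units to drop
activations = np.ones((4, 8))  # stand-in for a layer's output

# Inverted dropout: zero a random subset, scale survivors by 1/(1-rate)
# so the expected activation magnitude stays the same.
mask = rng.random(activations.shape) >= rate
dropped = activations * mask / (1.0 - rate)

print(np.unique(dropped))  # only 0.0 and 2.0 appear
```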
\n
    \n
  1. Is my analysis correct?
  2. \n
\n
\n\n

Given my remarks in the comments that your network is certainly not deep, let's accept that your analysis is indeed correct (after all, your model does seem to do a good job inside its training scope), in order to get to your 2nd question, which is the interesting one.

\n\n
\n
    \n
  1. If the answer to 1 is yes, then isn't the prediction scope of deep learning very limited?
  2. \n
\n
\n\n

Well, this is the kind of questions not exactly suitable for SO, since the exact meaning of \"very limited\" is arguably unclear...

\n\n

So, let's try to rephrase it: should we expect DL models to predict such numerical functions outside the numeric domain on which they have been trained?

\n\n

An example from a different domain may be enlightening here: suppose we have built a model able to detect & recognize animals in photos with very high accuracy (it is not hypothetical; such models do exist indeed); should we complain when the very same model cannot detect and recognize airplanes (or trees, refrigerators etc - you name it) in these same photos?

\n\n

Put like that, the answer is a clear & obvious no - we should not complain, and in fact we are certainly not even surprised by such a behavior in the first place.

\n\n

It is tempting for us humans to think that such models should be able to extrapolate, especially in the numeric domain, since this is something we do very \"easily\" ourselves; but ML models, while exceptionally good at interpolating, they fail miserably in extrapolation tasks, such as the one you present here.

\n\n

Trying to make it more intuitive, think that the whole \"world\" of such models is confined in the domain of their training sets: my example model above would be able to generalize and recognize animals in unseen photos as long as these animals are \"between\" (mind the quotes) the ones it has seen during training; in a similar manner, your model does a good job predicting the function value for arguments between the sample you have used for training. But in neither case these models are expected to go beyond their training domain (i.e. extrapolate). There is no \"world\" for my example model beyond animals, and similarly for your model beyond [-500, 500]...

\n\n

For corroboration, consider the very recent paper Neural Arithmetic Logic Units, by DeepMind; quoting from the abstract:

\n\n
\n

Neural networks can learn to represent and manipulate numerical information, but they seldom generalize well outside of the range of numerical values encountered during training.

\n
\n\n

See also a relevant tweet of a prominent practitioner:

\n\n

\"enter

\n\n

On to your third question:

\n\n
\n
    \n
  1. Is there a better algorithm for predicting functions like y = x**2 both inside and outside the scope of training data?
  2. \n
\n
\n\n

As it should be clear by now, this is a (hot) area of current research; see the above paper for starters...

\n\n
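As a toy illustration of why representation matters for extrapolation: a model that is linear in the hand-crafted features [x, x**2] extrapolates y = x**2 perfectly, because the target function lies in its hypothesis space (pure NumPy sketch; the training range matches the question's [-500, 500]):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-500, 500, 1000)
y_train = x_train ** 2

# Linear regression on hand-crafted features [x, x**2]
X = np.stack([x_train, x_train ** 2], axis=1)
w, *_ = np.linalg.lstsq(X, y_train, rcond=None)

x_test = 1000.0  # far outside the training range
pred = w @ np.array([x_test, x_test ** 2])
print(pred)  # ~1000000.0: extrapolates because x**2 is in the hypothesis space
```

A plain MLP on raw x has no such structure built in, which is the point the NALU paper addresses.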
\n\n

So, are DL models limited? Definitely - forget the scary tales about AGI for the foreseeable future. Are they very limited, as you put it? Well, I don't know... But, given their limitation in extrapolating, are they useful?

\n\n

This is arguably the real question of interest, and the answer is obviously - hell, yeah!

\n", + "system": "" + }, + { + "instruction": "Stop Keras Training when the network has fully converge", + "input": "", + "output": "

Use an EarlyStopping callback. You may freely choose which loss/metric to observe and when to stop.

\n\n

Usually, you would look at the \"validation loss\" (val_loss), as this is the most important variable that tells you whether your model is still learning to generalize.

\n\n

But since you said you want to overfit, then you may look at the \"training loss\" (loss).

\n\n

The callback works with \"deltas\", not with absolute values, which is good, because the loss doesn't necessarily have \"zero\" as its goal. But you can use the baseline argument for setting absolute values.

\n\n

So, usually, a callback that looks at the validation loss:

\n\n
from keras.callbacks import EarlyStopping\nusualCallback = EarlyStopping()\n
\n\n

This is the same as EarlyStopping(monitor='val_loss', min_delta=0, patience=0)

\n\n

One that will overfit:

\n\n
overfitCallback = EarlyStopping(monitor='loss', min_delta=0, patience = 20)\n
\n\n

Watch out for the patience argument, it's important as the loss value doesn't always decrease at every epoch. Let the model keep trying for a few more epochs before ending.

\n\n

Finally, just pass the callback to fit along with a huge number of epochs:

\n\n
model.fit(X, Y, epochs=100000000, callbacks=[overfitCallback])\n
\n", + "system": "" + }, + { + "instruction": "Keras give input to intermediate layer and get final output", + "input": "", + "output": "

First you must learn that in Keras, when you apply a layer on an input, a new node is created inside this layer which connects the input and output tensors. Each layer may have multiple nodes connecting different input tensors to their corresponding output tensors. To build a model, these nodes are traversed and a new graph of the model is created, which consists of all the nodes needed to reach the output tensors from the input tensors (i.e. the ones you specify when creating a model: model = Model(inputs=[...], outputs=[...])).

\n\n

Now you would like to feed an intermediate layer of a model and get the output of the model. Since this is a new data-flow path, we need to create new nodes for each layer corresponding to this new computational graph. We can do it like this:

\n\n
idx = 3  # index of desired layer\ninput_shape = model.layers[idx].get_input_shape_at(0) # get the input shape of desired layer\nlayer_input = Input(shape=input_shape) # a new input tensor to be able to feed the desired layer\n\n# create the new nodes for each layer in the path\nx = layer_input\nfor layer in model.layers[idx:]:\n    x = layer(x)\n\n# create the model\nnew_model = Model(layer_input, x)\n
\n\n

Fortunately, your model consists of one branch, so we could simply use a for loop to construct the new model. However, for more complex models it may not be easy to do so, and you may need to write more code to construct the new model.

\n", + "system": "" + }, + { + "instruction": "Tensorflow keras with tf dataset input", + "input": "", + "output": "

To your original question as to why you're getting the error:

\n\n
Error when checking input: expected input_1 to have 2 dimensions, but got array with shape (32,)\n
\n\n

The reason your code breaks is that you haven't assigned the result of .batch() back to the dataset variable, like so:

\n\n
dataset = dataset.batch(10)\n
\n\n

You simply called dataset.batch().

\n\n

This breaks because without the batch() the output tensors are not batched, i.e. you get shape (32,) instead of (1,32).

\n", + "system": "" + }, + { + "instruction": "Keras: stop gradient after a certain layer", + "input": "", + "output": "
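The underlying pitfall is that tf.data transformations return new datasets instead of mutating the one you call them on. The same pattern in a plain-Python stand-in (the batch helper here is just an illustration, not the tf.data implementation):

```python
def batch(dataset, size):
    # Returns a NEW batched sequence; the input is left untouched,
    # just like tf.data.Dataset.batch().
    return [dataset[i:i + size] for i in range(0, len(dataset), size)]

data = list(range(32))
batch(data, 10)                  # return value discarded: `data` is unchanged
data_batched = batch(data, 10)   # keep it, as in `dataset = dataset.batch(10)`
print(len(data_batched), len(data_batched[0]))  # 4 10
```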

Since the gradient is flowing backwards through the network, you need to add the gradient stop layer directly after the layer, where no gradient should arrive.

\n

I.e.

\n
from keras import ops\n# weights in x should not be updated by gradients from x_1\nx = Convolution2D(...)(input_layer) \nx_1_stop_grad = Lambda(lambda x: ops.stop_gradient(x))(x)\nx_1 = Dense(64)(x_1_stop_grad)\nx_1 = Dense(32)(x_1)\n
\n", + "system": "" + }, + { + "instruction": "Keras TimeDistributed Not Masking CNN Model", + "input": "", + "output": "

Not entirely sure this will work, but based on the comment made here, with a newer version of tensorflow + keras it should work:

\n
final_model = TimeDistributed(Flatten())(final_input)\nfinal_model = Masking(mask_value = -2.)(final_model)\nfinal_model = TimeDistributed(Reshape(IMG_SIZE))(final_model)\nfinal_model = TimeDistributed(base_model)(final_model)\nfinal_model = Model(final_input,final_model)\n
\n

I took a look at the source code of masking, and I noticed Keras creates a mask tensor that only reduces the last axis. As long as you're dealing with 5D tensors, it will cause no problem, but when you reduce the dimensions for the LSTM, this masking tensor becomes incompatible.

\n

Doing the first flatten step, before masking, will assure that the masking tensor works properly for 3D tensors. Then you expand the image again to its original size.

\n
\n

I'll probably try to install newer versions soon to test it myself, but these installing procedures have caused too much trouble and I'm in the middle of something important here.

\n

On my machine, this code compiles, but that strange error appears in prediction time (see link at the first line of this answer).

\n
\n

Creating a model for predicting the intermediate layers

\n

From the code I've seen, I'm not sure that the masking function is kept internally in tensors. I don't know exactly how it works, but it seems to be managed separately from the building of the functions inside the layers.

\n

So, try using a keras standard model to make the predictions:

\n
inp = final_model.input                                           # input placeholder\noutputs = [layer.output for layer in final_model.layers]          # all layer outputs\n\nfullModel = Model(inp,outputs)\nlayerPredictions = fullModel.predict(np.expand_dims(TEST_SAMPLE,0))\n\nprint(layerPredictions[-2])\n
\n", + "system": "" + }, + { + "instruction": "CNN with keras, accuracy not improving", + "input": "", + "output": "

The issue is caused by a mis-match between the number of output classes (three) and your choice of final layer activation (sigmoid) and loss-function (binary cross entropy).

\n\n

The sigmoid function 'squashes' real values into a value between [0, 1] but it is designed for binary (two class) problems only. For multiple classes you need to use something like the softmax function. Softmax is a generalised version of sigmoid (the two should be equivalent when you have two classes).

\n\n
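The two-class equivalence is easy to check numerically: a softmax over the logits [0, z] gives the same class-1 probability as sigmoid(z) (z here is an arbitrary example value):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(zs):
    # subtract the max for numerical stability
    e = np.exp(zs - np.max(zs))
    return e / e.sum()

z = 1.7
# P(class 1) from a two-class softmax over logits [0, z] equals sigmoid(z)
print(softmax(np.array([0.0, z]))[1], sigmoid(z))
```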

The loss value also needs to be updated to one that can handle multiple classes - categorical cross entropy will work in this case.

\n\n

In terms of code, if you modify the model definition and compilation code to the version below it should work.

\n\n
model = Sequential()\nmodel.add(Conv2D(32, (3, 3), input_shape=input_shape))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n\nmodel.add(Conv2D(32, (3, 3)))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n\nmodel.add(Conv2D(64, (3, 3)))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n\nmodel.add(Flatten())\nmodel.add(Dense(64))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(3))\nmodel.add(Activation('softmax'))\n\nmodel.compile(loss='categorical_crossentropy',\n              optimizer='rmsprop',\n              metrics=['accuracy'])\n
\n\n

Finally you need to specify class_mode='categorical' in your data generators. That will ensure that the output targets are formatted as a categorical 3-column matrix that has a one in the column corresponding to the correct value and zeroes elsewhere. This response format is needed by the categorical_cross_entropy loss function.

\n", + "system": "" + }, + { + "instruction": "is it possible to implement dynamic class weights in keras?", + "input": "", + "output": "

Option 1:

\n\n

Make a manual loop for epochs and batches, use the method train_on_batch, which also accepts class_weight:

\n\n
for epoch in range(epochs):\n    for batchX,batchY in batches: #adapt this loop to your way of creating/getting batches\n\n        weights = calculateOrGetTheWeights(batch)\n        model.train_on_batch(batchX,batchY,...,class_weight=weights)\n
\n\n

Option 2:

\n\n

Create a custom loss. This may be trickier, and it depends on the data format, the number of classes, the type of loss function, etc.

\n\n

Assuming 2D data (samples, classes) and a multiclass problem:

\n\n
import keras.backend as K\n\ndef customLoss(yTrue,yPred):\n\n    classes = K.argmax(yTrue)\n    classCount = K.sum(yTrue,axis=0)\n\n    loss = K.some_loss_function(yTrue,yPred)\n\n    return loss / K.gather(classCount, classes)\n
\n\n

Assuming a binary classification (1 class only) with 1D or 2D data:

\n\n
import keras.backend as K\n\ndef binaryCustomLoss(yTrue,yPred):\n\n    positives = yTrue\n    negatives = 1 - yTrue\n\n    positiveRatio = K.mean(positives)\n    negativeRatio = 1 - positiveRatio #or K.mean(negatives)\n\n    weights = (positives / positiveRatio) + (negatives / negativeRatio)\n\n    #you may need K.squeeze(weights) here\n\n    return weights * K.some_loss_function(yTrue,yPred)\n
\n\n

Warning: both loss functions will return Nan (or infinity) if any class count is zero.

\n", + "system": "" + }, + { + "instruction": "Does the TensorFlow backend of Keras rely on the eager execution?", + "input": "", + "output": "
\n

It is for a research purpose which I can't present here.

\n
\n\n

That makes it really difficult to answer your question. It would be better if you could find a toy example -- unrelated with your research -- of what you want and we try to build something from there.

\n\n
\n

Does the TensorFlow backend of Keras rely on the eager execution?

\n
\n\n

No, it doesn't. Keras was built before eager execution was introduced. Keras (the one inside tf) can, however, work in eager execution mode (see fchollet's answer).

\n\n
\n

can I build a TensorFlow graph and combine it with a Keras model then train them jointly using Keras high-level API?

\n
\n\n

I'm not sure what you mean by \"build a TensorFlow graph\", because a graph already exists whenever you use keras. If you are talking about adding a bunch of operations to the existing graph, then it's definitely possible. You just need to wrap it up with a Lambda layer, just like you'd do if using Keras on symbolic mode:

\n\n
import tensorflow as tf\nfrom sacred import Experiment\n\nex = Experiment('test-18')\n\ntf.enable_eager_execution()\n\n\n@ex.config\ndef my_config():\n    pass\n\n\n@ex.automain\ndef main():\n    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\n\n    x_train, x_test = (e.reshape(e.shape[0], -1) for e in (x_train, x_test))\n    y_train, y_test = (tf.keras.utils.to_categorical(e) for e in (y_train, y_test))\n\n    def complex_tf_fn(x):\n        u, v = tf.nn.moments(x, axes=[1], keep_dims=True)\n        return (x - u) / tf.sqrt(v)\n\n    with tf.device('/cpu:0'):\n        model = tf.keras.Sequential([\n            tf.keras.layers.Lambda(complex_tf_fn, input_shape=[784]),\n            tf.keras.layers.Dense(1024, activation='relu'),\n            tf.keras.layers.Lambda(complex_tf_fn),\n            tf.keras.layers.Dense(10, activation='softmax')\n        ])\n        model.compile(optimizer=tf.train.AdamOptimizer(),\n                      loss='categorical_crossentropy')\n\n        model.fit(x_train, y_train,\n                  epochs=10,\n                  validation_data=(x_test, y_test),\n                  batch_size=1024,\n                  verbose=2)\n
\n\n
python test-18.py with seed=21\n\nINFO - test-18 - Running command 'main'\nINFO - test-18 - Started\nTrain on 60000 samples, validate on 10000 samples\nEpoch 1/10\n - 9s - loss: 3.4012 - val_loss: 1.3575\nEpoch 2/10\n - 9s - loss: 0.9870 - val_loss: 0.7270\nEpoch 3/10\n - 9s - loss: 0.6097 - val_loss: 0.6071\nEpoch 4/10\n - 9s - loss: 0.4459 - val_loss: 0.4824\nEpoch 5/10\n - 9s - loss: 0.3352 - val_loss: 0.4436\nEpoch 6/10\n - 9s - loss: 0.2661 - val_loss: 0.3997\nEpoch 7/10\n - 9s - loss: 0.2205 - val_loss: 0.4048\nEpoch 8/10\n - 9s - loss: 0.1877 - val_loss: 0.3788\nEpoch 9/10\n - 9s - loss: 0.1511 - val_loss: 0.3506\nEpoch 10/10\n - 9s - loss: 0.1304 - val_loss: 0.3330\nINFO - test-18 - Completed after 0:01:31\n\nProcess finished with exit code 0\n
\n", + "system": "" + }, + { + "instruction": "'Sequential' object has no attribute '_is_graph_network' when exporting Keras model to TensorFlow", + "input": "", + "output": "

You need this: \nfrom tensorflow.python.keras import Sequential \nYou should use the Keras API implemented in TensorFlow instead of using the standalone Keras API directly.

\n", + "system": "" + }, + { + "instruction": "UserWarning: Discrepancy between trainable weights and collected trainable weights error", + "input": "", + "output": "

The error message says

\n\n
\n

did you set model.trainable without calling model.compile after ?

\n
\n\n

In your \"vgg16_model\" you compile the model first and then start changing the trainable flag of the contained layers. Instead, compile your model after the trainability changes and see whether this resolves your issue.

\n", + "system": "" + }, + { + "instruction": "Siamese Network with LSTM for sentence similarity in Keras gives periodically the same result", + "input": "", + "output": "

You're seeing consecutive equal values because the output shape of the function cosine_distance is wrong. When you take K.mean(...) without the axis argument, the result is a scalar. To fix it, just use K.mean(..., axis=-1) in cosine_distance to replace K.mean(...).
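A small NumPy illustration of the shape bug described above (the array values are made up): without an axis argument the mean collapses the whole batch to one scalar, so every sample in the batch ends up with the same "prediction"; with axis=-1 you get one value per sample.

```python
import numpy as np

# Hypothetical batch of 3 samples with 2 features each.
batch = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

scalar = batch.mean()             # shape () -- one value for the whole batch
per_sample = batch.mean(axis=-1)  # shape (3,) -- one value per sample

print(scalar.shape, per_sample.shape)  # () (3,)
```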

\n\n

More Detailed Explanation:

\n\n

When model.predict() is called, the output array pred is first pre-allocated, and then filled with the batch predictions. From the source code training.py:

\n\n
if batch_index == 0:\n    # Pre-allocate the results arrays.\n    for batch_out in batch_outs:\n        shape = (num_samples,) + batch_out.shape[1:]\n        outs.append(np.zeros(shape, dtype=batch_out.dtype))\nfor i, batch_out in enumerate(batch_outs):\n    outs[i][batch_start:batch_end] = batch_out\n
\n\n

In your case you only have a single output, so pred is just outs[0] in the code above. When batch_out is a scalar (for example, 0.847576 as seen in your results), the code above is equivalent to pred[batch_start:batch_end] = 0.847576. As the default batch size is 32 for model.predict(), you see 32 consecutive 0.847576 values in your posted result.

\n\n
\n\n

Another possibly bigger problem is that the labels are wrong. You convert the relatedness score to labels by tr_y = 1- data['relatedness_score']/5. Now if two sentences are \"very similar\", the relatedness score is 5, so tr_y is 0 for these two sentences.

\n\n

However, in the contrastive loss, when y_true is zero, the term K.maximum(margin - y_pred, 0) actually means that \"these two sentences should have a cosine distance >= margin\". That's the opposite of what you want your model to learn (also I don't think you need K.square in the loss).

\n", + "system": "" + }, + { + "instruction": "LSTM Keras API predicting multiple outputs", + "input": "", + "output": "

The output of every layer is based on how many cells/units/filters it has.

\n\n

Your output has 1 feature because Dense(1...) has only one cell.

\n\n

Just making it a Dense(3...) would solve your problem.

\n\n
\n\n

Now, if you want the output to have the same number of time steps as the input, then you need to turn on return_sequences = True in all your LSTM layers.

\n\n

The output of an LSTM is:

\n\n\n\n

Then you use a TimeDistributed layer wrapper in your following layers to work as if they also had time steps (it will basically preserve the dimension in the middle).

\n\n
def build_model():\n    model = Sequential()\n\n    model.add(LSTM(\n        input_shape=(50,3),\n        return_sequences=True, units=50))\n    model.add(Dropout(0.2))\n\n    model.add(LSTM(\n        250,\n        return_sequences=True))\n    model.add(Dropout(0.2))\n\n    model.add(TimeDistributed(Dense(3)))\n    model.add(Activation(\"linear\"))\n\n    model.compile(loss=\"mse\", optimizer=\"rmsprop\")\n    return model\n
\n", + "system": "" + }, + { + "instruction": "Binary Keras LSTM model does not output binary predictions", + "input": "", + "output": "

It's normal behavior.

\n\n

There is no \"binary\" in neural networks, but a continuous function within limits.

\n\n

Only with continuous functions can a model train and learn using \"stochastic gradient descent\".

\n\n

For trying to achieve binary results, we use the sigmoid function, which goes from 0 to 1. But initially your model is not trained; all its \"weights\" are initialized more or less randomly. The result is outputs tending toward the mean value, which is 0.5 for a sigmoid.

\n\n

All you need is to train your model with enough data for enough epochs, so the results will gradually approach (but never hit) 0 or 1 (or whatever targets \"y\" you have in your training data)
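If you do need hard 0/1 labels from a trained sigmoid model, the usual approach is to threshold the continuous outputs at 0.5. A minimal plain-Python sketch (the raw output values are made up):

```python
import math

def sigmoid(x):
    # Maps any real number into (0, 1), never reaching 0 or 1 exactly.
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical raw (pre-activation) model outputs.
raw_outputs = [-2.0, 0.1, 3.0]
probs = [sigmoid(z) for z in raw_outputs]

# Hard labels via thresholding at 0.5.
labels = [1 if p >= 0.5 else 0 for p in probs]
print(labels)  # [0, 1, 1]
```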

\n", + "system": "" + }, + { + "instruction": "Keras/TF: Time Distributed CNN+LSTM for visual recognition", + "input": "", + "output": "

[Edited]
\nSorry, a link-only answer was bad form, so I'll answer the questions one by one.

\n\n
\n

if I should include the TimeDistributed function just for my Convolutional & Pooling Layers or also for the LSTMs?

\n
\n\n

Use the TimeDistributed wrapper only for the Conv and Pooling layers; there's no need for it on LSTMs.

\n\n
\n

Is there a way to run the CNN Layers in parallel?

\n
\n\n

No, not on a CPU. It's possible if you utilize a GPU.
\nTransparent Multi-GPU Training on TensorFlow with Keras

\n\n
\n

what is the best suitable input dimension?

\n
\n\n

Five. (batch, time, width, height, channel).

\n\n
\n

Is there a way to limit the number of CNNs in // to for example 4

\n
\n\n

You can do this during preprocessing by manually grouping frames into a specific number, not in the network. In other words, the \"time\" dimension should be 4 if you want output after every 4 frames are processed.

\n\n
model = Sequential()\n\nmodel.add(\n    TimeDistributed(\n        Conv2D(64, (3, 3), activation='relu'), \n        input_shape=(data.num_frames, data.width, data.height, 1)\n    )\n)\nmodel.add(TimeDistributed(MaxPooling2D((2, 2), strides=(1, 1))))\n\nmodel.add(TimeDistributed(Conv2D(128, (4,4), activation='relu')))\nmodel.add(TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2))))\n\nmodel.add(TimeDistributed(Conv2D(256, (4,4), activation='relu')))\nmodel.add(TimeDistributed(MaxPooling2D((2, 2), strides=(2, 2))))\n\n# extract features and dropout \nmodel.add(TimeDistributed(Flatten()))\nmodel.add(Dropout(0.5))\n\n# input to LSTM\nmodel.add(LSTM(256, return_sequences=False, dropout=0.5))\n\n# classifier with sigmoid activation for multilabel\nmodel.add(Dense(data.num_classes, activation='sigmoid'))\n
\n\n

Reference:
\nPRI-MATRIX FACTORIZATION - BENCHMARK

\n", + "system": "" + }, + { + "instruction": "Keras: real amount of GPU memory used", + "input": "", + "output": "

It can be done using Timeline, which can give you a full trace, including memory usage. Similar to the code below:

\n
from keras import backend as K\nfrom tensorflow.python.client import timeline\nimport tensorflow as tf\n\n\nwith K.get_session()  as s:\n    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\n    run_metadata = tf.RunMetadata()\n     \n    # your fitting code and s run with run_options \n\n    to = timeline.Timeline(run_metadata.step_stats)\n    trace = to.generate_chrome_trace_format()\n    with open('full_trace.json', 'w') as out:\n            out.write(trace)\n
\n

If you want to limit the GPU memory usage, it can also be done via gpu_options, like in the following code:

\n
import tensorflow as tf\nfrom keras.backend.tensorflow_backend import set_session\nconfig = tf.ConfigProto()\nconfig.gpu_options.per_process_gpu_memory_fraction = 0.2\nset_session(tf.Session(config=config))\n
\n

Check the following documentation about the Timeline object

\n

As you use TensorFlow in the backend, you can use tfprof profiling tool

\n", + "system": "" + }, + { + "instruction": "Keras embedding layer masking. Why does input_dim need to be |vocabulary| + 2?", + "input": "", + "output": "

I believe the docs are a bit misleading there. In the normal case you are mapping your n input data indices [0, 1, 2, ..., n-1] to vectors, so your input_dim should be the number of elements you have:

\n\n
input_dim = len(vocabulary_indices)\n
\n\n

An equivalent (but slightly confusing) way to say this, and the way the docs do, is to say

\n\n
\n

1 + maximum integer index occurring in the input data.

\n
\n\n
input_dim = max(vocabulary_indices) + 1\n
\n\n

If you enable masking, value 0 is treated differently, so you increment your n indices by one: [0, 1, 2, ..., n-1, n], thus you need

\n\n
input_dim = len(vocabulary_indices) + 1\n
\n\n

or alternatively

\n\n
input_dim = max(vocabulary_indices) + 2\n
\n\n

The docs become especially confusing here as they say

\n\n
\n

(input_dim should equal |vocabulary| + 2)

\n
\n\n

where I would interpret |x| as the cardinality of a set (equivalent to len(x)), but the authors seem to mean

\n\n
\n

2 + maximum integer index occurring in the input data.

\n
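The four equivalent sizing rules above can be checked with a tiny plain-Python sketch, assuming a contiguous vocabulary indexed [0..n-1]:

```python
# Hypothetical vocabulary of n = 5 words, indexed 0..4.
vocabulary_indices = [0, 1, 2, 3, 4]

# Without masking: both formulations give the same answer.
input_dim_no_mask = len(vocabulary_indices)          # 5
input_dim_no_mask_alt = max(vocabulary_indices) + 1  # 5

# With masking: index 0 is reserved, so everything shifts by one.
input_dim_masked = len(vocabulary_indices) + 1       # 6
input_dim_masked_alt = max(vocabulary_indices) + 2   # 6

print(input_dim_no_mask, input_dim_masked)  # 5 6
```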
\n", + "system": "" + }, + { + "instruction": "How to use a tensorflow model extracted from a trained keras model", + "input": "", + "output": "

You need to get the input and output tensors from the Keras model definition and then the current TensorFlow session. Then you can evaluate it using TensorFlow only. Assuming model is your loaded_model and x is your training data.

\n\n
sess = K.get_session()\ninput_tensor = model.input\noutput_tensor = model.output\n\noutput_tensor.eval(feed_dict={input_tensor: x}, session=sess)\n
\n", + "system": "" + }, + { + "instruction": "How can a neural network architecture be visualized with Keras?", + "input": "", + "output": "

The problem is also referenced on the issues page of the keras project.\nYou need to install a version of pydot <= 1.1.0 because the function find_graphviz was removed in version 1.2.0. Alternatively you could install pydot-ng instead, which is recommended by the keras developers.

\n", + "system": "" + }, + { + "instruction": "Problem with inputs when building a model with TFBertModel and AutoTokenizer from HuggingFace's transformers", + "input": "", + "output": "

For now I solved it by taking the tokenization step out of the model:

\n
def tokenize(sentences, tokenizer):\n    input_ids, input_masks, input_segments = [],[],[]\n    for sentence in sentences:\n        inputs = tokenizer.encode_plus(sentence, add_special_tokens=True, max_length=128, pad_to_max_length=True, return_attention_mask=True, return_token_type_ids=True)\n        input_ids.append(inputs['input_ids'])\n        input_masks.append(inputs['attention_mask'])\n        input_segments.append(inputs['token_type_ids'])        \n        \n    return np.asarray(input_ids, dtype='int32'), np.asarray(input_masks, dtype='int32'), np.asarray(input_segments, dtype='int32')\n
\n

The model takes two inputs, which are the first two values returned by the tokenize function.

\n
def build_classifier_model():\n   input_ids_in = tf.keras.layers.Input(shape=(128,), name='input_token', dtype='int32')\n   input_masks_in = tf.keras.layers.Input(shape=(128,), name='masked_token', dtype='int32') \n\n   embedding_layer = bert(input_ids_in, attention_mask=input_masks_in)[0]\n...\n   model = tf.keras.Model(inputs=[input_ids_in, input_masks_in], outputs = X)\n\n   for layer in model.layers[:3]:\n     layer.trainable = False\n   return model\n
\n

I'd still like to know if someone has a solution that integrates the tokenization step inside the model-building context, so that a user of the model can simply feed phrases to it to get a prediction or to train the model.

\n", + "system": "" + }, + { + "instruction": "tf.Keras learning rate schedules\u2014pass to optimizer or callbacks?", + "input": "", + "output": "

Both tf.keras.callbacks.LearningRateScheduler() and tf.keras.optimizers.schedules.LearningRateSchedule() provide the same functionality, i.e. implementing a learning rate decay while training the model.

\n

A visible difference could be that tf.keras.callbacks.LearningRateScheduler takes in a function in its constructor, as mentioned in the docs,

\n
tf.keras.callbacks.LearningRateScheduler(schedule, verbose=0)\n
\n
\n

schedule: a function that takes an epoch index (integer, indexed from 0) and current learning rate (float) as inputs and returns a new learning rate as output (float).

\n
\n

The schedule function will return a learning rate given the current epoch index. To implement various types of LR decay, like exponential decay or polynomial decay, you need to code them in this schedule function yourself.
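A minimal sketch of such a hand-coded schedule function (the decay rate and starting epoch below are made-up values): it has the (epoch, lr) signature the callback expects and applies an exponential decay after a warm-up period.

```python
import math

def schedule(epoch, lr, decay_rate=0.1, start_epoch=5):
    # Keep the learning rate flat for the first few epochs,
    # then decay it exponentially; this is the kind of function
    # you could pass to tf.keras.callbacks.LearningRateScheduler.
    if epoch < start_epoch:
        return lr
    return lr * math.exp(-decay_rate)

# Simulate what the callback would do across 7 epochs.
lr = 0.01
for epoch in range(7):
    lr = schedule(epoch, lr)
print(round(lr, 6))  # 0.008187
```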

\n

On the other hand, tf.keras.optimizers.schedules.LearningRateSchedule() is a higher-level class. Other types of decay included in tf.keras.optimizers.schedules.*, like PolynomialDecay or InverseTimeDecay, inherit from this class. Hence this module offers built-in LR decay methods that are commonly used in ML. Moreover, to implement a custom LR decay, your class needs to inherit from tf.keras.optimizers.schedules.LearningRateSchedule() and override methods like __call__ and __init__, as mentioned in the docs,

\n
\n

To implement your own schedule object, you should implement the __call__ method, which takes a step argument (a scalar integer tensor, the current training step count).

\n
\n

Conclusion:

\n\n", + "system": "" + }, + { + "instruction": "tensorflow - how to use 16 bit precision float", + "input": "", + "output": "

Use:

\n
tf.keras.backend.set_floatx('float16')\n
\n

You'll see that the default dtype of everything will be tf.float16. For instance:

\n
import tensorflow as tf\n\ntf.keras.backend.set_floatx('float16')\n\ndense_layer = tf.keras.layers.Dense(1)\n\ndense_layer.build((4,))\n\ndense_layer.weights\n
\n
[<tf.Variable 'kernel:0' shape=(4, 1) dtype=float16, numpy=\n array([[-0.4214],\n        [-1.031 ],\n        [ 1.041 ],\n        [-0.6313]], dtype=float16)>,\n <tf.Variable 'bias:0' shape=(1,) dtype=float16, numpy=array([0.], dtype=float16)>]\n
\n

But this isn't recommended:

\n
\n

Note: It is not recommended to set this to float16 for training, as this will likely cause numeric stability issues. Instead, mixed precision, which is using a mix of float16 and float32, can be used by calling tf.keras.mixed_precision.experimental.set_policy('mixed_float16'). See the mixed precision guide for details.

\n
\n

Read the docs.

\n", + "system": "" + }, + { + "instruction": "SHAP DeepExplainer with TensorFlow 2.4+ error", + "input": "", + "output": "

TL;DR

\n
\n\n
\n

Fully reproducible example:

\n
import shap\nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\nimport tensorflow as tf    \ntf.compat.v1.disable_v2_behavior() # <-- HERE !\n\nimport tensorflow.keras.backend as K\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.python.keras.layers import Dense\nfrom tensorflow.python.keras import Sequential\nfrom tensorflow.keras import optimizers\n\nprint("SHAP version is:", shap.__version__)\nprint("Tensorflow version is:", tf.__version__)\n\nX_train, X_test, Y_train, Y_test = train_test_split(\n    *shap.datasets.iris(), test_size=0.2, random_state=0\n)\n\nY_train = to_categorical(Y_train, num_classes=3)\nY_test = to_categorical(Y_test, num_classes=3)\n\n# Define baseline model\nmodel = tf.keras.models.Sequential()\nmodel.add(tf.keras.layers.Dense(8, input_dim=len(X_train.columns), activation="relu"))\nmodel.add(tf.keras.layers.Dense(3, activation="softmax"))\n# model.summary()\n\n# compile the model\nmodel.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])\n\nhist = model.fit(X_train, Y_train, batch_size=5, epochs=200, verbose=0)\n\n# select a set of background examples to take an expectation over\nbackground = X_train.iloc[np.random.choice(X_train.shape[0], 100, replace=False)]\n\nexplainer = shap.DeepExplainer(\n    (model.layers[0].input, model.layers[-1].output), background\n)\nshap_values = explainer.shap_values(X_test[:3].values) # <-- HERE !\n\n# print the JS visualization code to the notebook\nshap.initjs()\nshap.force_plot(\n    explainer.expected_value[0], shap_values[0][0], feature_names=X_train.columns\n)\n
\n
\n
SHAP version is: 0.39.0\nTensorflow version is: 2.5.0\n
\n

\"shap

\n", + "system": "" + }, + { + "instruction": "What exactly is Keras's CategoricalCrossEntropy doing?", + "input": "", + "output": "

Here are some things that I noticed in your code.

\n

First, your predictions show two data instances, [0.0, 1.0] and [0.0, 1.0].

\n
pred = np.array([[0.0, 1.0], [0.0, 1.0]])\n
\n

They should indicate probabilities, but the values after softmax typically are not exactly 0.0 and 1.0. Try 0.01 and 0.99 instead.

\n

Second, the arguments to the CategoricalCrossentropy() call should be true, pred, not pred, true.

\n

So this is what I get:

\n
import tensorflow as tf\nfrom tensorflow.keras import backend as K\nimport numpy as np\n\ntrue = np.array([[0.0, 1.0], [1.0, 0.0]])\npred = np.array([[0.01, 0.99], [0.01, 0.99]])\n\nloss = tf.keras.losses.CategoricalCrossentropy()\nprint(loss(true, pred).numpy())\n# 2.307610273361206\n
\n

For completeness, let's try what you did, using pred, true:

\n
print(loss(pred, true).numpy())\n# 8.05904769897461\n
\n

That's where your mysterious 8.05 came from.

\n

Is my answer 2.307610273361206 correct? Let's compute the loss by hand. Following the explanation in this StackOverflow post, we can compute the loss of each of the two data instances and then compute their average.

\n
loss1 = -(0.0 * np.log(0.01) + 1.0 * np.log(0.99))\nprint(loss1) # 0.01005033585350145\n\nloss2 = -(1.0 * np.log(0.01) + 0.0 * np.log(0.99))\nprint(loss2) # 4.605170185988091\n\n# Total loss is the average of the per-instance losses.\nloss = (loss1 + loss2) / 2\nprint(loss) # 2.307610260920796\n
\n

So it looks like CategoricalCrossEntropy() is producing the right answer.

\n", + "system": "" + }, + { + "instruction": "Resnet50 produces different prediction when image loading and resizing is done with OpenCV", + "input": "", + "output": "
# Keras prediction\nimg = image.load_img(img_path, target_size=(224, 224))\n\n   # OpenCV prediction\nimgcv = cv2.imread(img_path)\ndim = (224, 224)\nimgcv_resized = cv2.resize(imgcv, dim, interpolation=cv2.INTER_LINEAR)\n
\n
    \n
  1. If you look attentively, the interpolation you specify in the case\nof cv2 is cv2.INTER_LINEAR (bilinear interpolation); however, by default,\nimage.load_img() uses an INTER_NEAREST interpolation method.

    \n
  2. \n
  3. img_to_array(img). The dtype argument here is: None

    \n
  4. \n
\n
\n

Default to None, in which case the global setting\ntf.keras.backend.floatx() is used (unless you changed it, it defaults\nto "float32")

\n
\n

Therefore, in img_to_array(img) you have an image that consists of float32 values, while the cv2.imread(img) returns a numpy array of uint8 values.

\n
    \n
  1. Ensure you convert to RGB from BGR, as OpenCV loads directly into BGR format. You can use image = image[:,:,::-1] or image = cv2.cvtColor(image,cv2.COLOR_BGR2RGB); otherwise you will have the R and B channels reversed resulting in an incorrect comparison.
  2. \n
\n
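The channel-reversal trick mentioned in the list above can be illustrated with a tiny NumPy example (the pixel value is made up): reversing the last axis turns a BGR pixel into an RGB one.

```python
import numpy as np

# A hypothetical 1x1 image holding a pure-blue pixel in BGR order,
# as cv2.imread would load it (uint8).
bgr = np.zeros((1, 1, 3), dtype=np.uint8)
bgr[0, 0] = [255, 0, 0]  # B=255, G=0, R=0

# Same trick as image[:, :, ::-1] above: reverse the channel axis.
rgb = bgr[:, :, ::-1]
print(rgb[0, 0].tolist())  # [0, 0, 255] -- blue now sits in the B slot
```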

Since the preprocessing that you apply is the same in both cases, the only differences are the ones that I mentioned above; adapting those changes should ensure reproducibility.

\n

One more observation: since a library like cv2 loads images as ints (uint8) rather than floats, the only correct way to compare is to cast the first prediction array (Keras) to uint8; casting the cv2 array to float32 cannot recover information that is already lost. For example, with cv2 you load to uint8, and by casting you get 233.0 instead of 233; however, the initial pixel value may have been 233.3, and that was lost in the first conversion.

\n", + "system": "" + }, + { + "instruction": "tensorboard: error: invalid choice: 'code' (choose from 'serve', 'dev') - while trying to run tensorboard", + "input": "", + "output": "

From Comments

\n
\n

The problem is the spaces in the path, try with\n--logdir="D:\\Documents\\Vs code python\\my_log_dir"(paraphrased from Dr. Snoopy)

\n
\n", + "system": "" + }, + { + "instruction": "Problems understanding linear regression model tuning in tf.keras", + "input": "", + "output": "

Foundation

\n

Problem statement

\n

Let's consider a linear regression model for a set of samples X where each sample is represented by one feature x. As part of model training, we search for the line w.x + b such that ((w.x+b) - y)^2 (the squared loss) is minimal. For a set of data points, we take the mean of the squared loss over all samples, the so-called mean squared error (MSE). The w and b, which stand for weight and bias, are together referred to as the weights.

\n

Fitting the line/Training the model

\n
    \n
  1. We have a closed form solution for solving the linear regression problem and is (X^T.X)^-1.X^T.y
  2. \n
  3. We can also use the gradient descent method to search for weights which minimize the squared loss. Frameworks like TensorFlow and PyTorch use gradient descent to search for the weights (this is called training).
  4. \n
\n
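The closed-form solution from point 1 can be verified with a short NumPy sketch on the same toy data used later in this answer (a bias column is appended to X so that b is learned too):

```python
import numpy as np

# Toy data matching the answer: y = 0.5*x + 0.2, x in 0..15.
X = np.arange(16, dtype=float).reshape(-1, 1)
y = 0.5 * X[:, 0] + 0.2

# Append a column of ones so the bias b is part of the weight vector.
Xb = np.hstack([X, np.ones((len(X), 1))])

# Closed form: w = (X^T X)^{-1} X^T y
w, b = np.linalg.inv(Xb.T @ Xb) @ Xb.T @ y
print(round(w, 4), round(b, 4))  # 0.5 0.2 (up to float error)
```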

Gradient descent

\n

A gradient descent algorithm for learning the regression looks like the one below:

\n
w, b = some initial value\nWhile model has not converged:\n    y_hat = w.X + b\n    error = MSE(y, y_hat) \n    back propagate (BPP) error and adjust weights\n
\n

Each run of the above loop is called an epoch. However, due to resource constraints, the calculation of y_hat, error and BPP is not performed on the full dataset; instead, the data is divided into smaller batches and the above operations are performed on one batch at a time. Also, we normally fix the number of epochs and monitor whether the model has converged.

\n
w, b = some initial value\nfor i in range(number_of_epochs)\n    for X_batch,y_batch in get_next_batch(X, y)\n        y_hat = w.X_batch + b\n        error = MSE(y_batch, y_hat) \n    back propagate (BPP) error and adjust weights\n
\n

Keras implementation of batches

\n

Let's say we would like to add root mean squared error for tracking the model's performance while it is training. The way Keras implements this is shown below:

\n
w, b = some initial value\nfor i in range(number_of_epochs)\n    all_y_hats = []\n    all_ys = []\n    for X_batch,y_batch in get_next_batch(X, y)\n        y_hat = w.X_batch + b\n        error = MSE(y_batch, y_hat)\n\n        all_y_hats.extend(y_hat) \n        all_ys.extend(y_batch)\n\n        batch_rms_error = RMSE(all_ys, all_y_hats)\n\n    back propagate (BPP) error and adjust weights\n
\n

As you can see above, the predictions are accumulated and RMSE is calculated on the accumulated predictions, rather than taking the mean of all the previous batch RMSEs.

\n

Implementation in keras

\n

Now that our foundation is clear, let's see how we can implement this tracking in Keras. Keras has callbacks, so we can hook into the on_batch_begin callback and accumulate all_y_hats and all_ys. In the on_batch_end callback, Keras gives us the calculated RMSE. We will manually calculate RMSE using our accumulated all_y_hats and all_ys and verify that it matches what Keras calculated. We will also save the weights so that we can later plot the line being learned.

\n
import numpy as np\nfrom sklearn.metrics import mean_squared_error\nimport keras\nimport matplotlib.pyplot as plt\n\n# Some training data\nX = np.arange(16)\ny = 0.5*X +0.2\n\nbatch_size = 8\nall_y_hats = []\nlearned_weights = [] \n\nclass CustomCallback(keras.callbacks.Callback):\n  def on_batch_begin(self, batch, logs={}):    \n    w = self.model.layers[0].weights[0].numpy()[0][0]\n    b = self.model.layers[0].weights[1].numpy()[0]    \n    s = batch*batch_size\n    all_y_hats.extend(b + w*X[s:s+batch_size])    \n    learned_weights.append([w,b])\n\n  def on_batch_end(self, batch, logs={}):    \n    calculated_error = np.sqrt(mean_squared_error(all_y_hats, y[:len(all_y_hats)]))\n    print (f"\\n Calculated: {calculated_error},  Actual: {logs['root_mean_squared_error']}")\n    assert np.isclose(calculated_error, logs['root_mean_squared_error'])\n\n  def on_epoch_end(self, batch, logs={}):\n    del all_y_hats[:]    \n\n\nmodel = keras.models.Sequential()\nmodel.add(keras.layers.Dense(1, input_shape=(1,)))\nmodel.compile(optimizer=keras.optimizers.RMSprop(lr=0.01), loss="mean_squared_error",  metrics=[keras.metrics.RootMeanSquaredError()])\n# We should set shuffle=False so that we know how baches are divided\nhistory = model.fit(X,y, epochs=100, callbacks=[CustomCallback()], batch_size=batch_size, shuffle=False) \n
\n

Output:

\n
Epoch 1/100\n 8/16 [==============>...............] - ETA: 0s - loss: 16.5132 - root_mean_squared_error: 4.0636\n Calculated: 4.063645694548688,  Actual: 4.063645839691162\n\n Calculated: 8.10112834945773,  Actual: 8.101128578186035\n16/16 [==============================] - 0s 3ms/step - loss: 65.6283 - root_mean_squared_error: 8.1011\nEpoch 2/100\n 8/16 [==============>...............] - ETA: 0s - loss: 14.0454 - root_mean_squared_error: 3.7477\n Calculated: 3.7477213352845675,  Actual: 3.7477214336395264\n-------------- truncated -----------------------\n
\n

Ta-da! The assertion np.isclose(calculated_error, logs['root_mean_squared_error']) never failed, so our calculation/understanding is correct.

\n

The line

\n

Finally, let's plot the line being adjusted by the BPP algorithm based on the mean squared error loss. We can use the code below to create a PNG image of the line being learned at each batch, along with the training data.

\n
for i, (w,b) in enumerate(learned_weights):\n  plt.close()\n  plt.axis([-1, 18, -1, 10])\n  plt.scatter(X, y)\n  plt.plot([-1,17], [-1*w+b, 17*w+b], color='green')\n  plt.savefig(f'img{i+1}.png')\n
\n

Below is the gif animation of the above images in the order they are learned.

\n

\"enter

\n

The hyperplane (line in this case) being learned when y = 0.5*X +5.2

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string", + "input": "", + "output": "

The simplest way I found is to create a subfolder and copy the files to that subfolder.\ni.e., let's assume your files are 0.jpg, 1.jpg, 2.jpg, ..., 2000.jpg, and they sit in a directory named "patterns".

\n

It seems the Keras API does not accept the files because they are named with bare numbers, which Keras reads as float32.

\n

To overcome this issue, either you can rename the files as one answer suggests, or you can simply create a subfolder under "patterns" (i.e. "patterndir"). So now your image files are under ...\\patterns\\patterndir

\n

Keras is possibly using the subdirectory name internally and may be attaching it in front of the image file name, thus making it a string (something like patterndir_01.jpg, patterndir_02.jpg). [Note: this is my interpretation; it may not be what actually happens.]

\n

When you compile it this time, you will see that it works, and you will get a message like:

\n
Found 2001 files belonging to 1 classes.\nUsing 1601 files for training.\nFound 2001 files belonging to 1 classes.\nUsing 400 files for validation.\n
\n

My code looks like this

\n
import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\n#Generate a dataset\n\nimage_size = (28, 28)\nbatch_size = 32\n\ntrain_ds = tf.keras.preprocessing.image_dataset_from_directory(\n    "patterns",\n    validation_split=0.2,\n    subset="training",\n    seed=1337,\n    image_size=image_size,\n    batch_size=batch_size,\n)\nval_ds = tf.keras.preprocessing.image_dataset_from_directory(\n    "patterns",\n    validation_split=0.2,\n    subset="validation",\n    seed=1337,\n    image_size=image_size,\n    batch_size=batch_size,\n)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow Keras RMSE metric returns different results than my own built RMSE loss function", + "input": "", + "output": "

Two key differences, from source code:

\n\n
    \n
  1. RMSE is a stateful metric (it keeps memory) - yours is stateless
  2. \n
  3. Square root is applied after taking a global mean, not before an axis=-1 mean like MSE does\n\n
  4. \n
\n\n

The raw formula fix is easy, but integrating statefulness requires work beyond the scope of this question; refer to the source code to see how it's done. A fix for point 2, with a comparison, is below.
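To make the "stateful" part of point 1 concrete, here is a minimal NumPy sketch (not the Keras implementation itself) of a running RMSE that keeps a sum of squared errors and a count across calls, instead of recomputing from one batch alone:

```python
import numpy as np

class RunningRMSE:
    """A toy stateful RMSE: accumulates across update() calls."""
    def __init__(self):
        self.total = 0.0  # running sum of squared errors
        self.count = 0    # running element count

    def update(self, y_true, y_pred):
        err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
        self.total += float(np.sum(err ** 2))
        self.count += err.size
        return float(np.sqrt(self.total / self.count))

rmse = RunningRMSE()
first = rmse.update([1.0, 2.0], [1.0, 2.0])   # perfect batch
second = rmse.update([0.0], [2.0])            # state carries over
print(first, second)  # 0.0, then sqrt(4/3) ~ 1.1547
```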

\n\n
\n\n
import numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.metrics import RootMeanSquaredError as RMSE\n\ndef root_mean_squared_error_loss(y_true, y_pred):\n    return tf.sqrt(tf.reduce_mean(tf.math.squared_difference(y_true, y_pred)))\n\nnp.random.seed(0)\n\n#%%###########################################################################\nrmse = RMSE(dtype='float64')\nrmsel = root_mean_squared_error_loss\n\nx1 = np.random.randn(32, 10)\ny1 = np.random.randn(32, 10)\nx2 = np.random.randn(32, 10)\ny2 = np.random.randn(32, 10)\n\n#%%###########################################################################\nprint(\"TensorFlow RMSE:\")\nprint(rmse(x1, y1))\nprint(rmse(x2, y2))\nprint(\"=\" * 46)\nprint(rmse(x1, y1))\nprint(rmse(x2, y2))\n\nprint(\"\\nMy RMSE:\")\nprint(rmsel(x1, y1))\nprint(rmsel(x2, y2))\n
\n\n
TensorFlow RMSE:\ntf.Tensor(1.4132492562096124, shape=(), dtype=float64)\ntf.Tensor(1.3875944990740972, shape=(), dtype=float64)\n==============================================\ntf.Tensor(1.3961984634354354, shape=(), dtype=float64)  # same inputs, different result\ntf.Tensor(1.3875944990740972, shape=(), dtype=float64)  # same inputs, different result\n\nMy RMSE:\ntf.Tensor(1.4132492562096124, shape=(), dtype=float64)  # first result agrees\ntf.Tensor(1.3614563994283353, shape=(), dtype=float64)  # second differs since stateless\n
\n", + "system": "" + }, + { + "instruction": "SystemError: unknown opcode when loading model with Keras", + "input": "", + "output": "

The opcodes in the model are not recognised by your Python interpreter. When loading the model, ensure you are running the same version of Python that was used to create the model.

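A minimal sanity-check sketch (the helper name and the idea of recording the saving interpreter's version are illustrative, not from the original answer):

```python
import sys

def matches_saved_python(saved_with):
    """Return True if the running interpreter's major.minor version matches
    the (recorded) Python version the model was saved with."""
    return sys.version_info[:2] == tuple(saved_with)

# Comparing against the current interpreter itself trivially matches
print(matches_saved_python(sys.version_info[:2]))  # True
```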
\n", + "system": "" + }, + { + "instruction": "How to replace loss function during training tensorflow.keras", + "input": "", + "output": "

I'm currently working on Google Colab with TensorFlow and Keras, and I was not able to recompile a model while maintaining the weights; every time I recompiled a model like this:

\n
with strategy.scope():\n  model = hd_unet_model(INPUT_SIZE)\n  model.compile(optimizer=Adam(lr=0.01), \n                loss=tf.keras.losses.MeanSquaredError() ,\n                metrics=[tf.keras.metrics.MeanSquaredError()]) \n
\n

the weights got reset.\nSo I found another solution; all you need to do is:

\n
    \n
  1. Get the model with the weights you want (load it or obtain it some other way)
  2. \n
  3. get the weights of the model like this:
  4. \n
\n
weights = model.get_weights()\n
\n
    \n
  1. recompile the model (to change the loss function)
  2. \n
  3. set the weights of the recompiled model again, like this:
  4. \n
\n
model.set_weights(weights)\n
\n
    \n
  1. launch the training
  2. \n
\n

I tested this method and it seems to work.

\n

So, to change the loss mid-training, you can:

\n
    \n
  1. Compile with the first loss.
  2. \n
  3. Train on the first loss.
  4. \n
  5. Save the weights.
  6. \n
  7. Recompile with the second loss.
  8. \n
  9. Load the weights.
  10. \n
  11. Train on the second loss.
  12. \n
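Those steps can be sketched end to end with a toy model (the model and data here are made up purely for illustration; assumes TensorFlow 2.x):

```python
import numpy as np
import tensorflow as tf

# Toy model and data, only to illustrate the weight-preserving recompile
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(16, 4).astype("float32")
y = np.random.rand(16, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)          # steps 1-2: train on the first loss

weights = model.get_weights()                 # step 3: save the weights
model.compile(optimizer="adam", loss="mae")   # step 4: recompile with the second loss
model.set_weights(weights)                    # step 5: restore the weights
model.fit(x, y, epochs=1, verbose=0)          # step 6: train on the second loss
```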
\n", + "system": "" + }, + { + "instruction": "WARNING:tensorflow with constraint is deprecated and will be removed in a future version", + "input": "", + "output": "

This is an internal TensorFlow message; you can safely ignore it. It will be gone in future versions of TensorFlow, and no action is needed on your side.

\n", + "system": "" + }, + { + "instruction": "correct order for SpatialDropout2D, BatchNormalization and activation function?", + "input": "", + "output": "

Dropout vs BatchNormalization - Standard deviation issue

\n

There is a big problem that appears when you mix these layers, especially when BatchNormalization is right after Dropout.

\n

Dropouts try to keep the same mean of the outputs without dropouts, but it does change the standard deviation, which will cause a huge difference in the BatchNormalization between training and validation. (During training, the BatchNormalization receives changed standard deviations, accumulates and stores them. During validation, the dropouts are turned off, the standard deviation is not a changed one anymore, but the original. But BatchNormalization, because it's in validation, will not use the batch statistics, but the stored statistics, which will be very different from the batch statistics)

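This effect is easy to reproduce in plain NumPy (a sketch simulating inverted dropout, the scheme Keras uses at training time): the rescaling preserves the mean but not the standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000) + 3.0               # activations: mean ~3, std ~1
rate = 0.5
mask = (rng.random(x.shape) > rate) / (1.0 - rate)   # inverted-dropout scaling
dropped = x * mask

print(x.mean(), dropped.mean())   # means agree (both ~3.0)
print(x.std(), dropped.std())     # stds differ a lot (~1 vs ~3.3)
```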
\n

So, the first and most important rule is: don't place a BatchNormalization after a Dropout (or a SpatialDropout).

\n

Usually, I try to leave at least two convolutional/dense layers without any dropout before applying a batch normalization, to avoid this.

\n

Dropout vs BatchNormalization - Changing the zeros to another value

\n

Also important: the role of the Dropout is to "zero" the influence of some of the weights of the next layer. If you apply a normalization after the dropout, you will not have "zeros" anymore, but a certain value that will be repeated for many units. And this value will vary from batch to batch. So, although there is noise added, you are not killing units as a pure dropout is supposed to do.

\n

Dropout vs MaxPooling

\n

The problem of using a regular Dropout before a MaxPooling is that you will zero some pixels, and then the MaxPooling will take the maximum value, sort of ignoring part of your dropout. If your dropout happens to hit a maximum pixel, then the pooling will result in the second maximum, not in zero.

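A tiny NumPy illustration of one pooling window:

```python
import numpy as np

patch = np.array([[1.0, 9.0],
                  [4.0, 6.0]])   # one 2x2 max-pooling window

dropped = patch.copy()
dropped[0, 1] = 0.0              # dropout happens to hit the maximum pixel

print(patch.max())    # 9.0 -- pooled value without dropout
print(dropped.max())  # 6.0 -- pooled value becomes the runner-up, not zero
```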
\n

So, Dropout before MaxPooling reduces the effectiveness of the dropout.

\n

SpatialDropout vs MaxPooling

\n

But, a SpatialDropout never hits "pixels", it only hits channels. When it hits a channel, it will zero all pixels for that channel, thus, the MaxPooling will effectively result in zero too.

\n

So, there is no difference between spatial dropout before or after the pooling. An entire "channel" will be zero in both orders.

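A NumPy sketch of the channel-wise case: zeroing a whole channel means max-pooling over that channel yields zero regardless of order, while other channels are unaffected.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 4, 4, 2))     # (batch, rows, cols, channels)
x[..., 0] = 0.0                  # SpatialDropout zeroes every pixel of channel 0

# 2x2 max pooling via reshape: dims become (batch, rb, ri, cb, ci, ch)
pooled = x.reshape(1, 2, 2, 2, 2, 2).max(axis=(2, 4))

print(pooled[..., 0].max())      # 0.0 -- the dropped channel pools to zero
print(pooled[..., 1].max() > 0)  # True -- the other channel is untouched
```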
\n

BatchNormalization vs Activation

\n

Depending on the activation function, using a batch normalization before it can be a good advantage.

\n

For a 'relu' activation, the normalization makes the model fail-safe against a bad luck case of "all zeros freeze a relu layer". It will also tend to guarantee that half of the units will be zero and the other half linear.

\n

For a 'sigmoid' or a 'tanh', the BatchNormalization will guarantee that the values are within a healthy range, avoiding saturation and vanishing gradients (values that are too far from zero will hit an almost flat region of these functions, causing vanishing gradients).

\n

Some people say there are other advantages if you do the contrary; I'm not fully aware of those advantages, but I like the ones I mentioned very much.

\n

Dropout vs Activation

\n

With 'relu' there is no difference; it can be proved that the results are exactly the same.

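That claim can be checked numerically: with the same dropout mask, applying ReLU before or after gives identical results (a NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(10_000)
keep = rng.random(x.shape) > 0.5       # the same dropout mask either way
relu = lambda a: np.maximum(a, 0.0)

# relu(drop(x)) == drop(relu(x)): zeroing and ReLU commute
print(np.allclose(relu(x * keep), relu(x) * keep))  # True
```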
\n

With activations that are not centered at zero, such as 'sigmoid', putting a dropout before the activation will not result in "zeros", but in other values. For a sigmoid, the final result of a dropout before it would be 0.5.

\n

If you add a 'tanh' after a dropout, for instance, you will have the zeros, but the scaling that dropout applies to keep the same mean will be distorted by the tanh. (I don't know if this is a big problem, but it might be.)

\n

MaxPooling vs Activation

\n

There is not much to see here. As long as the activation is monotonically increasing (as the common ones are), the final result is the same.

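To make the claim precise: for any monotonically increasing activation, max-pooling and the activation commute. A NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 4))
relu = lambda a: np.maximum(a, 0.0)

def maxpool2x2(a):
    # 2x2 max pooling on a 4x4 array via reshape into pooling blocks
    return a.reshape(2, 2, 2, 2).max(axis=(1, 3))

# activation(pool(x)) == pool(activation(x)) for monotone activations
print(np.allclose(relu(maxpool2x2(x)), maxpool2x2(relu(x))))  # True
```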
\n

Conclusions?

\n

There are many possibilities, but some are troublesome. I find the following order a good one and often use it.

\n

I would do something like

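As an illustration only (this is a sketch of an ordering consistent with the rules above, not necessarily the author's exact recipe): BatchNormalization before the activation, SpatialDropout before the pooling, and no BatchNormalization right after a dropout.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, padding="same"),
    layers.BatchNormalization(),          # before the activation
    layers.Activation("relu"),
    layers.SpatialDropout2D(0.2),         # before the pooling
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same"), # no BatchNorm right after the dropout
    layers.Activation("relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.build(input_shape=(None, 32, 32, 3))
print(model.output_shape)  # (None, 10)
```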
\n\n", + "system": "" + }, + { + "instruction": "How to add a new class to an existing classifier in deep learning?", + "input": "", + "output": "
\n

I tried to check out "Online learning", but it seems to train on new data for existing classes.

\n
\n

Online learning is a term used to refer to a model which takes a continual or sequential stream of input data while training, in contrast to offline learning (also called batch learning), where the model is pre-trained on a static predefined dataset.

\n

Continual learning (also called incremental, continuous, lifelong learning) refers to a branch of ML working in an online learning context where models are designed to learn new tasks while maintaining performance on historic tasks. It can be applied to multiple problem paradigms (including Class-incremental learning, where each new task presents new class labels for an ever expanding super-classification problem).

\n
\n

Do I need to train my whole model again on all four classes or is there any way I can just train my model on new class?

\n
\n

Naively re-training the model on the updated dataset is indeed a solution. Continual learning seeks to address contexts where access to historic data (i.e. the original 3 classes) is not possible, or when retraining on an increasingly large dataset is impractical (for efficiency, space, privacy etc concerns). Multiple such models using different underlying architectures have been proposed, but almost all examples exclusively deal with image classification problems.

\n
\n\n

Related questions:

\n\n", + "system": "" + }, + { + "instruction": "Dictionary of tensors input for Keras Functional API TensorFlow 2.0", + "input": "", + "output": "
import tensorflow as tf\n\nprint(tf.version.VERSION)\n\ntoy_data = {'movie': [[0], [1], [0], [1]], 'user': [[10], [12], [12], [10]]}\ndataset = tf.data.Dataset.from_tensor_slices(toy_data).batch(2)\n\nfor x in dataset:\n    print(x)\n\ndef make_model():\n    inp_movie = tf.keras.Input(shape=(1,))\n    inp_user = tf.keras.Input(shape=(1,))\n    movie_embedding = tf.keras.layers.Dense(\n            units=40, activation=tf.keras.layers.Activation(\"relu\"))(inp_movie)\n    user_embedding = tf.keras.layers.Dense(\n            units=40, activation=tf.keras.layers.Activation(\"relu\"))(inp_user)\n    combined = tf.concat([movie_embedding, user_embedding], 1)\n    output = tf.keras.layers.Dense(\n            units=1, activation=tf.keras.layers.Activation(\"sigmoid\"))(combined)\n    model = tf.keras.Model(inputs=[inp_movie, inp_user], outputs=output)\n    return model\n\nmodel = make_model()\n\nfor x in dataset:\n    print(model(x))\n
\n\n

This works. Beware that the iterable you pass to the inputs argument of the tf.keras.Model call has to be sorted in the same order as the dictionary you will use, which is sorted by its keys, movie then user. So using inputs={'a': inp_movie, 'b': inp_user} or inputs={'movie': inp_movie, 'user': inp_user} also works, while inputs=[inp_user, inp_movie] won't.

\n\n

You can use this code to test this kind of interaction:

\n\n
def make_test_model():\n    inp_movie = tf.keras.Input(shape=(1,))\n    inp_user = tf.keras.Input(shape=(1,))\n    model = tf.keras.Model(inputs={'a': inp_movie, 'b': inp_user}, outputs=inp_movie)\n    return model\n\ndef make_test_model_2():\n    inp_movie = tf.keras.Input(shape=(1,))\n    inp_user = tf.keras.Input(shape=(1,))\n    model = tf.keras.Model(inputs=[inp_user, inp_movie], outputs=inp_movie)\n    return model\n\nmodel_test = make_test_model()\nmodel_test_2 = make_test_model_2()\n\nfor x in dataset:\n    print(model_test(x))\nfor x in dataset:\n    print(model_test_2(x))\n
\n\n

You can also name the Input layers using the keys of your dictionary, and pass as the inputs argument a list of Input layers sorted by the layer names. This allows you to add or remove inputs in your model without having to worry about rewriting your inputs argument each time. So this is what I would do:

\n\n
def make_model_2():\n    input_list = []\n    inp_movie = tf.keras.Input(shape=(1,), name='movie')\n    input_list.append(inp_movie)\n    inp_user = tf.keras.Input(shape=(1,), name='user')\n    input_list.append(inp_user)\n    movie_embedding = tf.keras.layers.Dense(\n            units=40, activation=tf.keras.layers.Activation(\"relu\"))(inp_movie)\n    user_embedding = tf.keras.layers.Dense(\n            units=40, activation=tf.keras.layers.Activation(\"relu\"))(inp_user)\n    combined = tf.concat([movie_embedding, user_embedding], 1)\n    output = tf.keras.layers.Dense(\n            units=1, activation=tf.keras.layers.Activation(\"sigmoid\"))(combined)\n    input_list.sort(key=lambda inp: inp._keras_history.layer.name)\n    model = tf.keras.Model(inputs=input_list, outputs=output)\n    return model\n
\n\n

Here is a way to test that it works:

\n\n
def make_test_model_3(boolean):\n    input_list = []\n    inp_movie = tf.keras.Input(shape=(1,), name='movie')\n    inp_user = tf.keras.Input(shape=(1,), name='user')\n    if boolean:\n        input_list.append(inp_movie)\n        input_list.append(inp_user)\n    else:\n        input_list.append(inp_user)\n        input_list.append(inp_movie)\n    input_list.sort(key=lambda inp: inp._keras_history.layer.name)\n    model = tf.keras.Model(inputs=input_list, outputs=inp_movie)\n    return model\n\nmodel_test_3_0= make_test_model_3(True)\nmodel_test_3_1= make_test_model_3(False)\n\nfor x in dataset:\n    print(model_test_3_0(x))\nfor x in dataset:\n    print(model_test_3_1(x))\n
\n\n

Edit 2020-02-20:

\n\n

make_model does not work with tf2.1.0, but make_model_2 still does. I have opened an issue on GitHub about this backward incompatibility. Here is the link if you are interested. Recall that both functions work if you plan to stay on tf2.0.0.

\n", + "system": "" + }, + { + "instruction": "Load weights from checkpoint not working in keras model", + "input": "", + "output": "

When I encountered this issue I was using only Python (not C/C++); the actual problem was that I passed the .index file to the .load_weights() function instead of the checkpoint stem:

\n

WRONG:

\n
model = make_some_model()\nmodel.load_weights("output/20220801-pretrain_test/checkpoints/checkpoint_weights_e10.ckpt.index")\n
\n

RIGHT:

\n
model = make_some_model()\nmodel.load_weights("output/20220801-pretrain_test/checkpoints/checkpoint_weights_e10.ckpt")\n
\n", + "system": "" + }, + { + "instruction": "Saving and loading multiple models with the same graph in TensorFlow Functional API", + "input": "", + "output": "

That is a cool question. The encoder and autoencoder no longer share the same graph because they are being saved as disjoint models. In fact, encoder is being saved twice, as it is also embedded in autoencoder.

\n\n

To restore both models while still sharing the same graph, I would suggest the following approach:

\n\n
    \n
  1. Name the encoder's output layer. For example:

    \n\n
    encoder_output = layers.GlobalMaxPooling2D(name='encoder_output')(x)\n
  2. \n
  3. Save only the autoencoder:

    \n\n
    autoencoder.save('autoencoder.h5')\n
  4. \n
  5. Restore the autoencoder:

    \n\n
    new_autoencoder = keras.models.load_model('autoencoder.h5')\n
  6. \n
  7. Reconstruct the encoder's graph from the restored autoencoder so that they share the common layers:

    \n\n
    encoder_input = new_autoencoder.get_layer('img').input\nencoder_output = new_autoencoder.get_layer('encoder_output').output\nnew_encoder = keras.Model(encoder_input, encoder_output)\n
  8. \n
\n\n

Alternatively, you could also save/load the weights and reconstruct the graphs manually.

\n", + "system": "" + }, + { + "instruction": "Custom loss function involving gradients in Keras/Tensorflow", + "input": "", + "output": "

I came across a package on GitHub which inspired me to customize the training loop (as described here). I have attached an example which customizes the Sequential class and adds the mean of the loss function gradient (w.r.t. the input) as an additional penalty.

\n
import tensorflow as tf\nfrom tensorflow import keras\n\nclass Custom(keras.Sequential):\n    \n    def train_step(self, data):\n        # Unpack the data. Its structure depends on your model and\n        # on what you pass to `fit()`.\n        x, y = data        \n        \n        with tf.GradientTape(persistent=True) as tape:\n            tape.watch(x)\n            y_pred = self(x, training=True)  # Forward pass\n            # Compute the loss value\n            # (the loss function is configured in `compile()`)\n            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)\n\n            loss_grad = tape.gradient(loss, x)\n        \n            loss = loss + tf.math.reduce_mean(loss_grad)\n        \n        # Compute gradients\n        trainable_vars = self.trainable_variables\n        gradients = tape.gradient(loss, trainable_vars)\n        # Update weights\n        self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n        del tape\n        # Update metrics (includes the metric that tracks the loss)\n        self.compiled_metrics.update_state(y, y_pred)\n        # Return a dict mapping metric names to current value\n        return {m.name: m.result() for m in self.metrics}\n
\n", + "system": "" + }, + { + "instruction": "Keras' convolution layer on images coming from circular/cyclic domain", + "input": "", + "output": "

I'm implementing something like this, so I thought I'd add code. I think the simplest way to actually implement the wrapped padding is to use the NumPy pad function with the \"wrap\" option. For example, with

\n\n
input = np.array([[1,2,3],[4,5,6],[7,8,9]])\nkernel = [1,1]\n#We want symmetrical padding (same top and bottom)\n# and np.pad format ((before_1, after_1), \u2026 (before_N, after_N))\npad = [[i,i] for i in kernel]\npadded_input = np.pad(input, pad, \"wrap\")\n
\n\n

which gives,

\n\n
array([[9, 7, 8, 9, 7],\n       [3, 1, 2, 3, 1],\n       [6, 4, 5, 6, 4],\n       [9, 7, 8, 9, 7],\n       [3, 1, 2, 3, 1]])\n
\n\n

It looks like creating a custom layer similar to ZeroPadding2D called something like CyclicPadding2D may then be the best idea to minimise changes to Keras code, like so,

\n\n
kernel = [7,7]\nmodel = Sequential()\nmodel.add(CyclicPadding2D(kernel, input_shape=(224, 224, 3)))\nmodel.add(Conv2D(32, kernel_size=kernel, padding=\"valid\"))\nmodel.build()\n
\n\n

You can also use this between both pooling and conv layers. The code in CyclicPadding2D would probably need to consider input format (channels, batch, etc) with something like,

\n\n
if self.data_format == \"channels_last\":\n    # (batch, rows, cols, channels)\n    pad = [[0, 0]] + [[i, i] for i in self.kernel] + [[0, 0]]\nelif self.data_format == \"channels_first\":\n    # (batch, channels, rows, cols)\n    pad = [[0, 0], [0, 0]] + [[i, i] for i in self.kernel]\ninputs = np.pad(inputs, pad, \"wrap\")\n
\n\n

This is similar to what the Keras NumPy backend does, with the \"constant\" option hardwired, while the TensorFlow backend supplies no option and so defaults to constant (although, interestingly, tf.pad provides a reflect option).

\n\n

Looking at the Keras source, perhaps something like this could be added as a feature, by simply putting the code above in the call function of the _conv when a padding option is something like \"periodic\". That said, simply adding a new padding layer is probably the most flexible solution.

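For concreteness, here is a minimal sketch of what such a CyclicPadding2D layer could look like (the name and implementation are illustrative, assuming channels-last input and TensorFlow 2):

```python
import tensorflow as tf

class CyclicPadding2D(tf.keras.layers.Layer):
    """Wrap-pad rows and columns so a following 'valid' conv sees a cyclic domain."""
    def __init__(self, padding, **kwargs):
        super().__init__(**kwargs)
        self.padding = padding  # (pad_rows, pad_cols)

    def call(self, x):  # x: (batch, rows, cols, channels)
        pr, pc = self.padding
        x = tf.concat([x[:, -pr:], x, x[:, :pr]], axis=1)        # wrap rows
        x = tf.concat([x[:, :, -pc:], x, x[:, :, :pc]], axis=2)  # wrap cols
        return x

# Same 3x3 example as the np.pad one above, now through the layer
inp = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=tf.float32)
out = CyclicPadding2D((1, 1))(tf.reshape(inp, (1, 3, 3, 1)))
print(out.shape)  # (1, 5, 5, 1)
```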
\n", + "system": "" + }, + { + "instruction": "KERAS: How to set weights of Conv2D Layer explicitly using a tensor of same shape as required by weights?", + "input": "", + "output": "

In TensorFlow 2.0 with eager execution, you might be able to use one of the two options below:

\n\n

1) You can call the build method on Oconv1 before using the set_weights method. You are getting the ValueError as the weights Variables in the layer are not yet initialized, hence the layer cannot take in any weights via set_weights before building.

\n\n
Oconv1= Conv2D(512, (7, 7), activation='relu', padding='valid',use_bias=False)\ninput_shape = tf.TensorShape([None, h, w, c])  # to define h, w, c based on shape of layer input\nOconv1.build(input_shape)\nOconv1.set_weights([K])\n
\n\n

2) You can also pass in a weights kwarg into the Conv2D constructor.

\n\n
Oconv1= Conv2D(512, (7, 7), activation='relu', padding='valid',use_bias=False,weights=[K])\n
\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'model_to_dot'", + "input": "", + "output": "

model_to_dot can be imported if you change line 2 with:

\n\n
from keras.utils.vis_utils import model_to_dot\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Unknown layer:name when loading a keras model", + "input": "", + "output": "

If you are using a custom layer, you can load a keras model with such a layer as follows:

\n
model = keras.models.load_model(model_path, custom_objects={'MyCustomLayer': MyCustomLayer})\n
\n", + "system": "" + }, + { + "instruction": "How to Setup Adaptive Learning Rate in Keras", + "input": "", + "output": "

You don't need to recompile the model as the other answer suggested. Keras comes with callbacks which can be used for this task. More precisely, you can use LearningRateScheduler callback and pass it some function that will adapt the learning rate based on the current epoch index.

\n\n

Suppose that you want your learning rate to be some number times the epoch index (probably not the best idea but easy to comprehend)

\n\n
def adapt_learning_rate(epoch):\n    return 0.001 * epoch\n
\n\n

Now that we have our function we can create a learning scheduler that is responsible for calculating the learning rate at the beginning of each epoch.

\n\n
my_lr_scheduler = keras.callbacks.LearningRateScheduler(adapt_learning_rate)\n
\n\n

Last thing to do is to pass this callback to the fit method.

\n\n
model.fit(X, y, ..., callbacks=[my_lr_scheduler])\n
\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name 'transpose_shape'", + "input": "", + "output": "

Try uninstalling tensorflow and keras, then installing keras with pip; it also installs tensorflow as a dependency. It worked for me.

\n", + "system": "" + }, + { + "instruction": "Memory usage of neural network, Keras", + "input": "", + "output": "

You are correct, this is due to the number of filters in conv1. What you must compute is the memory required to store the activations:

\n\n

As shown by your model.summary(), the output size of this layer is (None, 1751, 480, 1024). For a single image, this is a total of 1751*480*1024 pixels. As your image is likely in float32, each pixel takes 4 bytes to store. So the output of this layer requires 1751*480*1024*4 bytes, which is around 3.2 GB per image just for this layer.

\n\n

If you were to change the number of filters to, say, 64, you would only need around 200 MB per image.

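The arithmetic, spelled out:

```python
# Activation memory for one conv layer's output; float32 = 4 bytes per value
h, w, filters = 1751, 480, 1024
per_image = h * w * filters * 4
print(round(per_image / 2**30, 1), "GiB")   # 3.2 GiB per image

# With 64 filters instead of 1024:
print(round(h * w * 64 * 4 / 2**20), "MiB") # 205 MiB per image
```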
\n\n

Either change the number of filters or change the batch size to 1.

\n", + "system": "" + }, + { + "instruction": "Error in load a model saved by callbakcs.ModelCheckpoint() in Keras", + "input": "", + "output": "

I hit a similar problem that yields the same error message, but the cause might be different from yours:

\n\n

Code: (Tensorflow 1.11 and tf.keras.version: 2.1.6-tf)

\n\n
 if load_model_path.endswith('.h5'):\n        model = tf.keras.models.load_model(load_model_path)\n
\n\n

Error message:

\n\n
  File \"...../lib/python3.6/site-packages/tensorflow/python/keras/engine/saving.py\", line 251, in load_model\n    training_config['weighted_metrics'])\nKeyError: 'weighted_metrics'\n
\n\n

I found out it's because the model was saved with an older Keras version.\nI had to comment out the code related to weighted_metrics to be able to load the model. However, it's just a workaround until I can find a sustainable solution to the version-mismatch problem. Interestingly, @fchollet added weighted_metrics to the latest Keras version only recently (Oct 2018).
\nhttps://github.com/keras-team/keras/blob/master/keras/engine/saving.py#L136\nI hope this will help the people who hit the same problem as I did.

\n", + "system": "" + }, + { + "instruction": "InvalidArgumentError: input_1:0 is both fed and fetched", + "input": "", + "output": "

As I posted on the thread Keras, How to get the output of each layer?, the way to solve this is to replace the line

\n\n
outputs = [\n    layer.output\n    for layer in model.layers\n    if layer.name == layer_name or layer_name is None\n]\n
\n\n

with

\n\n
outputs = [\n    layer.output\n    for layer in model.layers\n    if layer.name == layer_name or layer_name is None\n][1:]\n
\n\n

...in order to skip the input layer.

\n", + "system": "" + }, + { + "instruction": "Handle invalid/corrupted image files in ImageDataGenerator.flow_from_directory in Keras", + "input": "", + "output": "

Well, one solution is to modify the ImageDataGenerator code and put an error-handling mechanism (i.e. a try/except) in it.

\n\n

However, one alternative is to wrap your generator inside another generator and use try/except there. The disadvantage of this solution is that it throws away the whole generated batch even if one single image in that batch is corrupted (meaning some of the samples may not be used for training at all):

\n\n
data_gen = ImageDataGenerator(...)\n\ntrain_gen = data_gen.flow_from_directory(...)\n\ndef my_gen(gen):\n    while True:\n        try:\n            data, labels = next(gen)\n            yield data, labels\n        except:\n            pass\n\n# ... define your model and compile it\n\n# fit the model\nmodel.fit_generator(my_gen(train_gen), ...)\n
\n\n

Another disadvantage of this solution is that since you need to specify the number of steps of generator (i.e. steps_per_epoch) and considering that a batch may be thrown away in a step and a new batch is fetched instead in the same step, you may end up training on some of the samples more than once in an epoch. This may or may not have significant effects depending on how many batches include corrupted images (i.e. if there are a few, then there is nothing to be worried about that much).

\n\n

Finally, note that you may want to use the newer Keras data generator, i.e. the Sequence class, to read images one by one in the __getitem__ method of each batch and discard corrupted ones. However, the problem of the previous approach, i.e. training on some of the images more than once, is still present here as well, since you also need to implement the __len__ method, which is essentially equivalent to the steps_per_epoch argument. Although, in my opinion, this approach (i.e. subclassing the Sequence class) is superior to the above approach (if you put aside the fact that you may need to write more code) and has fewer side effects (since you can discard a single image rather than the whole batch).

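A rough sketch of that Sequence approach (all names are hypothetical; loader stands for any function mapping a file path to an image array, which may raise on a corrupted file; it assumes at least one readable image per batch):

```python
import numpy as np
from tensorflow.keras.utils import Sequence

class RobustImageSequence(Sequence):
    def __init__(self, paths, labels, batch_size, loader):
        self.paths, self.labels = paths, labels
        self.batch_size = batch_size
        self.loader = loader  # path -> ndarray; raises for corrupted files

    def __len__(self):
        return int(np.ceil(len(self.paths) / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        xs, ys = [], []
        for path, label in zip(self.paths[sl], self.labels[sl]):
            try:
                xs.append(self.loader(path))  # discard only the bad image,
                ys.append(label)              # not the whole batch
            except Exception:
                continue
        return np.stack(xs), np.array(ys)
```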
\n", + "system": "" + }, + { + "instruction": "Plot loss evolution during a single epoch in Keras", + "input": "", + "output": "

You can use a callback for this purpose.

\n\n

Using the Keras MNIST CNN example (not copying the whole code here), with the following changes/additions:

\n\n
from keras.callbacks import Callback\n\nclass TestCallback(Callback):\n    def __init__(self, test_data):\n        self.test_data = test_data\n\n    def on_batch_end(self, batch, logs={}):\n        x, y = self.test_data\n        loss, acc = self.model.evaluate(x, y, verbose=0)\n        print('\\nTesting loss: {}, acc: {}\\n'.format(loss, acc))\n\nmodel.fit(x_train, y_train,\n          batch_size=batch_size,\n          epochs=1,\n          verbose=1,\n          validation_data=(x_test, y_test),\n          callbacks=[TestCallback((x_test, y_test))]\n         )\n
\n\n

for evaluating the test/validation set on each batch end, we get this:

\n\n
Train on 60000 samples, validate on 10000 samples\nEpoch 1/1\n\nTesting loss: 0.0672039743446745, acc: 0.9781\n\n  128/60000 [..............................] - ETA: 7484s - loss: 0.1450 - acc: 0.9531\n\n/var/venv/DSTL/lib/python3.4/site-packages/keras/callbacks.py:120: UserWarning: Method on_batch_end() is slow compared to the batch update (15.416976). Check your callbacks.\n  % delta_t_median)\n\n\nTesting loss: 0.06644540682602673, acc: 0.9781\n\n  256/60000 [..............................] - ETA: 7476s - loss: 0.1187 - acc: 0.9570\n\n/var/venv/DSTL/lib/python3.4/site-packages/keras/callbacks.py:120: UserWarning: Method on_batch_end() is slow compared to the batch update (15.450395). Check your callbacks.\n  % delta_t_median)\n\n\nTesting loss: 0.06575664376271889, acc: 0.9782\n
\n\n

However, as you will probably see for yourself, this has the severe drawback of slowing down the code significantly (and duly producing some relevant warnings). As a compromise, if you are OK with getting only the training performance at the end of each batch, you could use a slightly different callback:

\n\n
class TestCallback2(Callback):\n    def __init__(self, test_data):\n        self.test_data = test_data\n\n    def on_batch_end(self, batch, logs={}):\n        print()  # just a dummy print command\n
\n\n

The results now (replacing the callback with callbacks=[TestCallback2((x_test, y_test))] in model.fit()) are much faster, but give only the training metrics at the end of each batch:

\n\n
Train on 60000 samples, validate on 10000 samples\nEpoch 1/1\n\n  128/60000 [..............................] - ETA: 346s - loss: 0.8503 - acc: 0.7188\n  256/60000 [..............................] - ETA: 355s - loss: 0.8496 - acc: 0.7109\n  384/60000 [..............................] - ETA: 339s - loss: 0.7718 - acc: 0.7396\n  [...]\n
\n\n

UPDATE

\n\n

All the above may be fine, but the resulting losses & accuracies are not stored anywhere, and hence they cannot be plotted; so, here is another callback solution that actually stores the metrics on the training set:

\n\n
from keras.callbacks import Callback\n\nclass Histories(Callback):\n\n    def on_train_begin(self,logs={}):\n        self.losses = []\n        self.accuracies = []\n\n    def on_batch_end(self, batch, logs={}):\n        self.losses.append(logs.get('loss'))\n        self.accuracies.append(logs.get('acc'))\n\n\nhistories = Histories()\n\nmodel.fit(x_train, y_train,\n          batch_size=batch_size,\n          epochs=1,\n          verbose=1,\n          validation_data=(x_test, y_test),\n          callbacks=[histories]\n         )\n
\n\n

which results in the metrics at the end of each batch during training being stored in histories.losses and histories.accuracies, respectively - here are the first 5 entries of each:

\n\n
histories.losses[:5]\n# [2.3115866, 2.3008101, 2.2479887, 2.1895032, 2.1491694]\n\nhistories.accuracies[:5]\n# [0.0703125, 0.1484375, 0.1875, 0.296875, 0.359375]\n
\n", + "system": "" + }, + { + "instruction": "Keras Applications and Preprocessing Versions for TensorFlow", + "input": "", + "output": "

It seems that you'll need Keras-2.1.6 instead of Keras-2.2.2. So, use

\n\n

sudo -H pip uninstall Keras

\n\n

to uninstall the current 2.2.2 version, then

\n\n

sudo pip install Keras==2.1.6

\n\n

Hopefully this can fix the issue you have.

\n\n

Regarding the reason why this happens: I think it is because TensorFlow requires Keras-Applications>=1.0.5 and Keras-Preprocessing>=1.0.3. The package management algorithm always goes with the latest available package, which brings us Keras-2.2.2. But the latest Keras has an odd dependency requirement, which specifically requires Keras-Applications==1.0.4 and Keras-Preprocessing==1.0.2. My fix is to roll Keras back to a slightly older version that has >= requirements, to make pip happy.

\n\n

One step further, I think it is either a bug in Keras 2.2.2's dependency specification, or it is intentional because Keras 2.2.2 is somehow incompatible with the latest Keras-Applications or Keras-Preprocessing.

\n", + "system": "" + }, + { + "instruction": "How to interpret clearly the meaning of the units parameter in Keras?", + "input": "", + "output": "

You can (sort of) think of it exactly as you think of fully connected layers. Units are neurons.

\n

The dimension of the output is the number of neurons, as with most of the well known layer types.

\n

The difference is that in LSTMs, these neurons will not be completely independent of each other, they will intercommunicate due to the mathematical operations lying under the cover.

\n

Before going further, it might be interesting to take a look at this very complete explanation about LSTMs, their inputs/outputs and the usage of stateful=True/False: Understanding Keras LSTMs. Notice that your input shape should be input_shape=(look_back, 1). The input shape format is (time_steps, features).

\n

While this is a series of fully connected layers:

\n\n

This is a series of LSTM layers:

\n

Where input_shape = (batch_size, arbitrary_steps, 3)

\n

\"enter

\n

Each LSTM layer will keep reusing the same units/neurons over and over until all the arbitrary timesteps in the input are processed.

\n\n

To be really precise, there will be two groups of units, one working on the raw inputs, the other working on already processed inputs coming from the last step. Due to the internal structure, each group will have a number of parameters 4 times bigger than the number of units (this 4 is not related to the image, it's fixed).

\n
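    As a rough sanity check of that factor of 4, the parameter count of a standard LSTM layer (hypothetical sizes, not tied to the figure) can be computed like this:

    ```python
    # Each of the 4 LSTM gates has a kernel (input_dim x units),
    # a recurrent kernel (units x units) and a bias (units).
    def lstm_param_count(input_dim, units):
        return 4 * (units * input_dim + units * units + units)

    print(lstm_param_count(3, 5))  # 180
    ```
    
    
    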

Flow:

\n\n", + "system": "" + }, + { + "instruction": "Error when checking target: expected dense_3 to have shape (2,) but got array with shape (1,)", + "input": "", + "output": "

Your numpy arrays (both for inputs and outputs) should contain a batch dimension. If your labels are currently of shape (2,), you can reshape them to include a batch dimension as follows:

\n\n
label_array = label_array.reshape(1, -1)\n
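    For instance, with a hypothetical pair of labels, the reshape adds the leading batch axis:

    ```python
    import numpy as np

    label_array = np.array([0.3, 0.7])        # shape (2,): what the error complains about
    label_array = label_array.reshape(1, -1)  # shape (1, 2): one sample with two values
    print(label_array.shape)
    ```
    
    
    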
\n", + "system": "" + }, + { + "instruction": "Keras plot_model not showing the input layer appropriately", + "input": "", + "output": "

    It happened to me after upgrading Keras.
    

\n\n

check this link: https://github.com/keras-team/keras/issues/10638

\n\n

In keras/engine/sequential.py

\n\n

Comment this out:

\n\n
@property\ndef layers(self):\n    # Historically, `sequential.layers` only returns layers that were added\n    # via `add`, and omits the auto-generated `InputLayer`\n    # that comes at the bottom of the stack.\n    if self._layers and isinstance(self._layers[0], InputLayer):\n        return self._layers[1:]\n    return self._layers\n
\n", + "system": "" + }, + { + "instruction": "What embedding-layer output_dim is really needed for a dictionary of just 10000 words?", + "input": "", + "output": "

    This is a good question that does not have a good answer. You should surely use an embedding layer and not just go straight to an LSTM/GRU. However, the latent dimension of the embedding layer should be \"as large as possible while maintaining peak validation performance\". For a dictionary around your size, 128 or 256 should be a reasonable decision. I doubt you will see drastically different performance.
    

\n\n

    However, something that will really affect your results on a small data set is not using pre-trained word embeddings. This will cause your embeddings to brutally overfit to your training data. I recommend using GloVe word embeddings. After downloading the GloVe data, you can use them to initialize the weights of your embedding layer, and then the embedding layer will fine-tune the weights to your use case. Here is some code I use for the GloVe embeddings with Keras. It lets you load different sizes of them and also caches the matrix so that it is fast to run the second time around.
    

\n\n
class GloVeSize(Enum):\n\n    tiny = 50\n    small = 100\n    medium = 200\n    large = 300\n\n\n__DEFAULT_SIZE = GloVeSize.small\n\n\ndef get_pretrained_embedding_matrix(word_to_index,\n                                    vocab_size=10000,\n                                    glove_dir=\"./bin/GloVe\",\n                                    use_cache_if_present=True,\n                                    cache_if_computed=True,\n                                    cache_dir='./bin/cache',\n                                    size=__DEFAULT_SIZE,\n                                    verbose=1):\n\n    \"\"\"\n    get pre-trained word embeddings from GloVe: https://github.com/stanfordnlp/GloVe\n    :param word_to_index: a word to index map of the corpus\n    :param vocab_size: the vocab size\n    :param glove_dir: the dir of glove\n    :param use_cache_if_present: whether to use a cached weight file if present\n    :param cache_if_computed: whether to cache the result if re-computed\n    :param cache_dir: the directory of the project's cache\n    :param size: an enumerated choice of GloVeSize\n    :param verbose: the verbosity level of logging\n    :return: a matrix of the embeddings\n    \"\"\"\n    def vprint(*args, with_arrow=True):\n        if verbose > 0:\n            if with_arrow:\n                print(\">>\", *args)\n            else:\n                print(*args)\n\n    if not os.path.exists(cache_dir):\n        os.makedirs(cache_dir)\n\n    cache_path = os.path.join(cache_dir, 'glove_%d_embedding_matrix.npy' % size.value)\n    if use_cache_if_present and os.path.isfile(cache_path):\n        return np.load(cache_path)\n    else:\n        vprint('computing embeddings', with_arrow=True)\n        embeddings_index = {}\n        size_value = size.value\n        f = open(os.path.join(glove_dir, 'glove.6B.' 
    + str(size_value) + 'd.txt'),\n                 encoding=\"ascii\", errors='ignore')\n\n        for line in f:\n            values = line.split()\n            word = values[0]\n            coefs = np.asarray(values[1:], dtype='float32')\n            embeddings_index[word] = coefs\n\n        f.close()\n        vprint('Found', len(embeddings_index), 'word vectors.')\n\n        embedding_matrix = np.random.normal(size=(vocab_size, size.value))\n\n        non = 0\n        for word, index in word_to_index.items():\n            embedding_vector = embeddings_index.get(word)\n            if embedding_vector is not None:\n                embedding_matrix[index] = embedding_vector\n            else:\n                non += 1\n\n        vprint(non, \"words did not have mappings\")\n        vprint(with_arrow=False)\n\n        if cache_if_computed:\n            np.save(cache_path, embedding_matrix)\n\n    return embedding_matrix\n

    
\n\n

then instantiate your embedding layer with that weight matrix:

\n\n
    embedding_size = GloVeSize.small\nembedding_matrix = get_pretrained_embedding_matrix(data.word_to_index,\n                                                   size=embedding_size)\n\nembedding = Embedding(\n     output_dim=embedding_size.value,\n     input_dim=vocabulary_size + 1,\n     input_length=input_length,\n     mask_zero=True,\n     weights=[np.vstack((np.zeros((1, embedding_size.value)),\n                         embedding_matrix))],\n     name='embedding'\n)(input_layer)\n

    
\n", + "system": "" + }, + { + "instruction": "Keras: Weighted Binary Crossentropy Implementation", + "input": "", + "output": "

    On top of the true-vs-predicted loss, Keras train and validation loss includes regularization losses. A simple testing scheme, along with a working implementation of binary_crossentropy and l2 weight (not 'activity') loss, is given below.
    

\n

    Update: a more complete implementation of the weight loss.
    

\n\n
\n

WORKING IMPLEMENTATION: (numerically stable version)

\n
def binary_crossentropy(y_true, y_pred, sample_weight=1):\n    if len(y_pred.shape)==1:\n        y_pred = np.atleast_2d(y_pred).T\n    y_pred = [max(min(pred[0], 1-K.epsilon()), K.epsilon()) for pred in y_pred]\n    y_true,y_pred,sample_weight = force_2d_shape([y_true,y_pred,sample_weight])\n\n    logits = np.log(y_pred) - np.log(1-y_pred) # sigmoid inverse\n    neg_abs_logits = -np.abs(logits)\n    relu_logits    = (logits > 0)*logits\n\n    loss_vec = relu_logits - logits*y_true + np.log(1 + np.exp(neg_abs_logits))\n    return np.mean(sample_weight*loss_vec)\n\ndef force_2d_shape(arr_list):\n    for arr_idx, arr in enumerate(arr_list):\n        if len(np.array(arr).shape) != 2:\n            arr_list[arr_idx] = np.atleast_2d(arr).T\n    return arr_list\n
\n
def l1l2_weight_loss(model):\n    l1l2_loss = 0\n    for layer in model.layers:\n        if 'layer' in layer.__dict__ or 'cell' in layer.__dict__:\n            l1l2_loss += _l1l2_rnn_loss(layer)\n            continue\n            \n        if 'kernel_regularizer' in layer.__dict__ or \\\n           'bias_regularizer'   in layer.__dict__:\n            l1l2_lambda_k, l1l2_lambda_b = [0,0], [0,0] # defaults\n            if layer.__dict__['kernel_regularizer'] is not None:\n                l1l2_lambda_k = list(layer.kernel_regularizer.__dict__.values())\n            if layer.__dict__['bias_regularizer']   is not None:\n                l1l2_lambda_b = list(layer.bias_regularizer.__dict__.values())\n                \n            if any([(_lambda != 0) for _lambda in (l1l2_lambda_k + l1l2_lambda_b)]):\n                W = layer.get_weights()\n    \n                for idx,_lambda in enumerate(l1l2_lambda_k + l1l2_lambda_b):\n                    if _lambda != 0:\n                        _pow = 2**(idx % 2) # 1 if idx is even (l1), 2 if odd (l2)\n                        l1l2_loss += _lambda*np.sum(np.abs(W[idx//2])**_pow)\n    return l1l2_loss\n
\n
def _l1l2_rnn_loss(layer):\n    l1l2_loss = 0\n    if 'backward_layer' in layer.__dict__:\n        bidirectional = True\n        _layer = layer.layer\n    else:\n        _layer = layer\n        bidirectional = False\n    ldict = _layer.cell.__dict__\n        \n    if 'kernel_regularizer'    in ldict or \\\n       'recurrent_regularizer' in ldict or \\\n       'bias_regularizer'      in ldict:\n        l1l2_lambda_k, l1l2_lambda_r, l1l2_lambda_b = [0,0], [0,0], [0,0]\n        if ldict['kernel_regularizer']    is not None:\n            l1l2_lambda_k = list(_layer.kernel_regularizer.__dict__.values())\n        if ldict['recurrent_regularizer'] is not None:\n            l1l2_lambda_r = list(_layer.recurrent_regularizer.__dict__.values())\n        if ldict['bias_regularizer']      is not None:\n            l1l2_lambda_b = list(_layer.bias_regularizer.__dict__.values())\n        \n        all_lambda = l1l2_lambda_k + l1l2_lambda_r + l1l2_lambda_b\n        if any([(_lambda != 0) for _lambda in all_lambda]):\n            W = layer.get_weights()\n            idx_incr = len(W)//2 # accounts for 'use_bias'\n            \n            for idx,_lambda in enumerate(all_lambda):\n                if _lambda != 0:\n                    _pow = 2**(idx % 2) # 1 if idx is even (l1), 2 if odd (l2)\n                    l1l2_loss += _lambda*np.sum(np.abs(W[idx//2])**_pow)\n                    if bidirectional:\n                        l1l2_loss += _lambda*np.sum(\n                                    np.abs(W[idx//2 + idx_incr])**_pow)\n        return l1l2_loss  \n
\n
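    As a quick sanity check that the stable form above matches the textbook definition, with hypothetical values for p and y:

    ```python
    import numpy as np

    p, y = 0.7, 1.0                      # hypothetical prediction and label
    z = np.log(p) - np.log(1 - p)        # sigmoid inverse (logit), as in the code above
    # numerically stable form used above: relu(z) - z*y + log(1 + exp(-|z|))
    stable = max(z, 0.0) - z * y + np.log1p(np.exp(-abs(z)))
    # textbook binary cross-entropy: -(y*log(p) + (1-y)*log(1-p))
    naive = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    print(np.isclose(stable, naive))
    ```
    
    
    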
\n

TESTING IMPLEMENTATION:

\n
from keras.layers import Input, Dense, LSTM, GRU, Bidirectional\nfrom keras.models import Model\nfrom keras.regularizers import l1, l2, l1_l2\nimport numpy as np \n\nipt   = Input(shape=(1200,16))\nx     = LSTM(60, activation='relu', return_sequences=True,\n                                                 recurrent_regularizer=l2(1e-3),)(ipt)\nx     = Bidirectional(GRU(60, activation='relu', bias_regularizer     =l1(1e-4)))(x)\nout   = Dense(1,  activation='sigmoid',          kernel_regularizer   =l1_l2(2e-4))(x)\nmodel = Model(ipt,out)\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam')\n
\n
X = np.random.rand(10,1200,16) # (batch_size, timesteps, input_dim)\nY = np.random.randint(0,2,(10,1))\nclass_weights = {'0':1, '1': 6}\nsample_weights = np.array([class_weights[str(label[0])] for label in Y])\n
\n
keras_loss   = model.evaluate(X,Y,sample_weight=sample_weights)\ncustom_loss  = binary_crossentropy(Y, model.predict(X))\ncustom_loss += l1l2_weight_loss(model)\n\nprint('%.6f'%keras_loss  + ' -- keras_loss')\nprint('%.6f'%custom_loss + ' -- custom_loss') \n
\n\n0.763822 -- keras_loss
\n0.763822 -- custom_loss\n
\n", + "system": "" + }, + { + "instruction": "Connecting Keras models / replacing input but keeping layers", + "input": "", + "output": "

Ok, what I could come up with is to really manually go through each layer of the model and reconnect them one by one again like this:

\n\n
l = model.layers[1](decoded)  # layer 0 is the input layer, which we're replacing\nfor i in range(2, len(model.layers)):\n    l = model.layers[i](l)\nstacked_model = Model(ae_input, l)\nstacked_model.compile(...)\n
\n\n

    While this works and produces the correct plot and no errors, it does not seem like the most elegant solution...
    

\n\n

    (By the way, the copying of the model actually seems to be unnecessary, as I'm not retraining anything.)
    

\n", + "system": "" + }, + { + "instruction": "Keras LSTM: dropout vs recurrent_dropout", + "input": "", + "output": "

Keras LSTM documentation contains high-level explanation:

\n\n
\n

dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.

\n \n

recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation\n of the recurrent state.

\n
\n\n

But this totally corresponds to the answer you refer to:

\n\n
\n

Regular dropout is applied on the inputs and/or the outputs, meaning the vertical arrows from x_t and to h_t. ...

\n \n

Recurrent dropout masks (or \"drops\") the connections between the recurrent units; that would be the horizontal arrows in your picture.

\n
\n\n

If you're interested in details on the formula level, the best way is to inspect the source code: keras/layers/recurrent.py, look for rec_dp_mask (recurrent dropout mask) and dp_mask. One is affecting the h_tm1 (the previous memory cell), the other affects the inputs.

\n", + "system": "" + }, + { + "instruction": "how to install pydot & graphviz on google colab?", + "input": "", + "output": "

To install pydot, run:

\n\n
!pip install -q pydot\n
\n\n

Then, restart your VM to reload keras which should then detect pydot's existence. (Runtime menu -> Restart runtime...)

\n", + "system": "" + }, + { + "instruction": "How to use Keras with GPU?", + "input": "", + "output": "

    You don't have to explicitly tell Keras to use the GPU. If a GPU is available (and from your output I can see it's the case), it will be used.
    

\n\n

You could also check this empirically by looking at the usage of the GPU during the model training: if you're on Windows 10 you only need to open the task manager and look under the 'Performance' tab (see here).

\n", + "system": "" + }, + { + "instruction": "Unable to transform string column to categorical matrix using Keras and Sklearn", + "input": "", + "output": "

    It's because np_utils.to_categorical takes y of datatype int, but you have strings. Either convert them into ints by giving each category an integer key, i.e.:
    

\n\n
cats = data.PriceRange.values.categories\ndi = dict(zip(cats,np.arange(len(cats))))\n#{'0 - 50000': 0,\n# '10000001 - 10050000': 200,\n# '1000001 - 1050000': 20,\n# '100001 - 150000': 2,\n# '10050001 - 10100000': 201,\n# '10100001 - 10150000': 202,\n\ntarget = np_utils.to_categorical(data.PriceRange.map(di))\n
\n\n
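    The mapping idea from the snippet above, on a hypothetical miniature of the categories:

    ```python
    # Hypothetical subset of the price-range categories
    cats = ['0 - 50000', '100001 - 150000', '1000001 - 1050000']
    di = dict(zip(cats, range(len(cats))))
    codes = [di[c] for c in ['100001 - 150000', '0 - 50000']]
    print(codes)  # [1, 0] -- integers that to_categorical accepts
    ```
    
    
    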

or since you are using pandas you can use pd.get_dummies to get one hot encoding.

\n\n
onehot = pd.get_dummies(data.PriceRange)\ntarget_labels = onehot.columns\ntarget = onehot.as_matrix()\n\narray([[ 1.,  0.,  0., ...,  0.,  0.,  0.],\n       [ 0.,  0.,  0., ...,  0.,  0.,  0.],\n       [ 0.,  0.,  0., ...,  0.,  0.,  0.],\n       ..., \n       [ 0.,  0.,  0., ...,  0.,  0.,  0.],\n       [ 1.,  0.,  0., ...,  0.,  0.,  0.],\n       [ 0.,  0.,  0., ...,  0.,  0.,  0.]])\n
\n", + "system": "" + }, + { + "instruction": "Unable to load and use multiple keras models", + "input": "", + "output": "

    The OP is correct here. There is a serious bug when you try to load multiple weight files in the same script. The above answer doesn't solve this. If you actually inspect the weights when loading weights for multiple models in the same script, you will notice that the weights are different than when you just load weights for one model on its own. This is where the randomness the OP observes is coming from.
    

\n\n

    EDIT: To solve this problem, you have to encapsulate the model.load_weights command within a function, and the randomness that you are experiencing should go away. The problem is that something weird gets messed up when you have multiple load_weights commands in the same script like you have above. If you load those model weights within a function, your issues should go away.
    

\n", + "system": "" + }, + { + "instruction": "Using Keras, How can I load weights generated from CuDNNLSTM into LSTM Model?", + "input": "", + "output": "

The reason is that the CuDNNLSTM layer has a bias twice as large as that of LSTM. It's because of the underlying implementation of cuDNN API. You can compare the following equations (copied from cuDNN user's guide) to the usual LSTM equations:

\n\n

\"cuDNN

\n\n

CuDNN uses two bias terms, so the number of bias weights is doubled. To convert it back to what LSTM uses, the two bias terms need to be summed.

\n\n

I've submitted a PR to do the conversion and it's merged. You can install the latest Keras from GitHub and the problem in weight loading should be solved.

\n", + "system": "" + }, + { + "instruction": "Keras Dropout with noise_shape", + "input": "", + "output": "

Question 1:

\n\n

It's kind of like a numpy broadcast I think.

\n\n

    Imagine you have 2 batches with 3 timesteps and 4 features (it's a small example to make it easier to show):\n(2, 3, 4)
    

\n\n

If you use a noise shape of (2, 1, 4), each batch will have its own\ndropout mask that will be applied to all timesteps.

\n\n

So let's say these are the weights of shape (2, 3, 4):

\n\n
array([[[  1,   2,   3,   4],\n        [  5,   6,   7,   8],\n        [ 10,  11,  12,  13]],\n\n       [[ 14,  15,  16,  17],\n        [ 18,  19,  20,  21],\n        [ 22,  23,  24,  25]]])\n
\n\n

And this would be the random noise_shape (2, 1, 4)\n(1 is like keep and 0 is like turn it off):

\n\n
array([[[ 1,  1,  1,  0]],\n\n       [[ 1,  0,  0,  1]]])\n
\n\n

    So you have these two masks (one for each batch).\nThey are then broadcast along the timestep axis.
    

\n\n
array([[[ 1,  1,  1,  0],\n        [ 1,  1,  1,  0],\n        [ 1,  1,  1,  0]],\n\n       [[ 1,  0,  0,  1],\n        [ 1,  0,  0,  1],\n        [ 1,  0,  0,  1]]])\n
\n\n

and applied to the weights:

\n\n
array([[[  1,   2,   3,   0],\n        [  5,   6,   7,   0],\n        [ 10,  11,  12,   0]],\n\n       [[ 14,   0,   0,  17],\n        [ 18,   0,   0,  21],\n        [ 22,   0,   0,  25]]])\n
\n\n
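    The broadcasting step above can be checked directly with numpy (same toy numbers):

    ```python
    import numpy as np

    weights = np.array([[[ 1,  2,  3,  4],
                         [ 5,  6,  7,  8],
                         [10, 11, 12, 13]],
                        [[14, 15, 16, 17],
                         [18, 19, 20, 21],
                         [22, 23, 24, 25]]])   # shape (2, 3, 4)
    mask = np.array([[[1, 1, 1, 0]],
                     [[1, 0, 0, 1]]])          # noise shape (2, 1, 4)
    dropped = weights * mask                   # mask broadcasts over the timestep axis
    print(dropped)
    ```
    
    
    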

Question 2:

\n\n

I'm not sure about your second question to be honest.

\n\n

Edit:\nWhat you can do is take the first dimension of the shape of the input,\nwhich should be the batch_size, as proposed in this github issue:

\n\n
    import tensorflow as tf\n\n...\n\nbatch_size = tf.shape(inp)[0]\ndrop1 = Dropout(0.1, noise_shape=[batch_size, max1._keras_shape[1], 1, 1])\n

    
\n\n

    As you can see, I'm on the TensorFlow backend. I don't know if Theano also\nhas these problems, and if it does, you might just be able to solve it with\nthe Theano shape equivalent.
    

\n", + "system": "" + }, + { + "instruction": "Tensor indexing in custom loss function", + "input": "", + "output": "

Often you work just with backend functions, and you never try to know the actual values of the tensors.

\n\n
    from keras import backend as K\nfrom keras.losses import mean_squared_error\n\ndef new_mse(y_true,y_pred): \n\n    #swapping elements 1 and 3 - concatenate slices of the original tensor\n    swapped = K.concatenate([y_pred[:1],y_pred[3:],y_pred[2:3],y_pred[1:2]])\n    #actually, if the tensors are shaped like (batchSize,4), use this:\n    #swapped = K.concatenate([y_pred[:,:1],y_pred[:,3:],y_pred[:,2:3],y_pred[:,1:2]])\n\n    #losses\n    regularLoss = mean_squared_error(y_true,y_pred)\n    swappedLoss = mean_squared_error(y_true,swapped)\n\n    #concat them for taking a min value\n    concat = K.concatenate([regularLoss,swappedLoss])\n\n    #take the minimum\n    return K.min(concat)\n

    
\n\n
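    The slice-and-concatenate trick is easiest to see with numpy (toy values):

    ```python
    import numpy as np

    y_pred = np.array([10., 11., 12., 13.])
    # swap elements 1 and 3 by concatenating slices, as in the loss above
    swapped = np.concatenate([y_pred[:1], y_pred[3:], y_pred[2:3], y_pred[1:2]])
    print(swapped)  # [10. 13. 12. 11.]
    ```
    
    
    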
\n\n

So, for your items:

\n\n
    \n
  1. You're totally right. Avoid numpy at all costs in tensor operations (loss functions, activations, custom layers, etc.)

  2. \n
  3. A K.shape() is also a tensor. It probably has shape (2,) because it has two values, one value will be 7032, the other value will be 6. But you can only see these values when you eval this tensor. Doing this inside loss functions is often a bad idea.

  4. \n
\n", + "system": "" + }, + { + "instruction": "keras combining two losses with adjustable weights", + "input": "", + "output": "

    It seems that propagating the \"same loss\" into both branches will not take effect, unless alpha depends on both branches. If alpha does not vary with both branches, then part of the loss will just be a constant for one branch.
    

\n\n

So, in this case, just compile the model with the two losses separate and add the weights to the compile method:

\n\n
    model.compile(optimizer='someOptimizer',loss=[loss1,loss2],loss_weights=[alpha,1-alpha])\n

    
\n\n

Compile again when you need alpha to change.

\n\n
\n\n

But if indeed alpha is dependent on both branches, then you need to concatenate the results and calculate alpha's value:

\n\n
singleOut = Concatenate()([x1,x2])\n
\n\n

And a custom loss function:

\n\n
    def weightedLoss(yTrue,yPred):\n    x1True = yTrue[0]\n    x2True = yTrue[1:]\n\n    x1Pred = yPred[0]\n    x2Pred = yPred[1:]\n\n    #calculate alpha somehow with keras backend functions\n\n    return alpha*someLoss(x1True,x1Pred) + (1-alpha)*someLoss(x2True,x2Pred)\n

    
\n\n

Compile with this function:

\n\n
model.compile(loss=weightedLoss, optimizer=....)\n
\n", + "system": "" + }, + { + "instruction": "Keras Extremely High Loss", + "input": "", + "output": "

    As @Yu-Yang said, you are using mean squared error as the loss function. I had this same problem before, where the loss value was very large; on changing the loss function to mean_squared_logarithmic_error, I got the desired result.
    

\n\n
model %>% compile(\noptimizer = optimizer_rmsprop(lr=0.0001),\nloss = loss_mean_squared_logarithmic_error,\nmetrics = c(\"accuracy\")\n)\n
\n\n

The loss value changed to

\n\n
\n

Epoch 1/10
\n 326981/326981 [==============================] - 17s - loss: 0.0048 - acc: 0.9896

\n
\n\n

    Hope you find this useful!
    

\n", + "system": "" + }, + { + "instruction": "Using binary_crossentropy loss in Keras (Tensorflow backend)", + "input": "", + "output": "

You're right, that's exactly what's happening. I believe this is due to historical reasons.

\n\n

Keras was created before tensorflow, as a wrapper around theano. And in theano, one has to compute sigmoid/softmax manually and then apply cross-entropy loss function. Tensorflow does everything in one fused op, but the API with sigmoid/softmax layer was already adopted by the community.

\n\n

If you want to avoid unnecessary logit <-> probability conversions, call binary_crossentropy loss withfrom_logits=True and don't add the sigmoid layer.

\n", + "system": "" + }, + { + "instruction": "How to dynamically freeze weights after compiling model in Keras?", + "input": "", + "output": "

I've tried this example code a couple months ago and it worked:\nhttps://github.com/fchollet/keras/blob/master/examples/mnist_acgan.py

\n\n

    It's not the simplest form of GAN, but as far as I remember, it's not too difficult to remove the classification loss and turn the model into a GAN.
    

\n\n

You don't need to turn on/off the discriminator's trainable property and recompile. Simply create and compile two model objects, one with trainable=True (discriminator in the code) and another one with trainable=False (combined in the code).

\n\n

When you're updating the discriminator, call discriminator.train_on_batch(). When you're updating the generator, call combined.train_on_batch().

\n", + "system": "" + }, + { + "instruction": "In Neural Networks: accuracy improvement after each epoch is GREATER than accuracy improvement after each batch. Why?", + "input": "", + "output": "

This has nothing to do with your model or your dataset; the reason for this \"jump\" lies in how metrics are calculated and displayed in Keras.

\n\n

As Keras processes batch after batch, it saves accuracies at each one of them, and what it displays to you is not the accuracy on the latest processed batch, but the average over all batches in the current epoch. And, as the model is being trained, accuracies over successive batches tend to improve.

\n\n

    Now consider: in the first epoch there are, let's say, 50 batches, and the network went from 0% to 90% during these 50 batches. Then at the end of the epoch Keras will show an accuracy of, e.g., (0 + 0.1 + 0.5 + ... + 90) / 50 %, which is, obviously, much less than 90%! But, because your actual accuracy is 90%, the first batch of the second epoch will show 90%, giving the impression of a sudden \"jump\" in quality. The same, obviously, goes for loss or any other metric.
    

\n\n
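    A tiny sketch of that displayed running average (made-up per-batch accuracies):

    ```python
    # Hypothetical per-batch accuracies within one epoch
    batch_acc = [0.0, 0.3, 0.6, 0.9]
    # What Keras displays after each batch: the mean over all batches so far
    shown = [sum(batch_acc[:i + 1]) / (i + 1) for i in range(len(batch_acc))]
    print(shown[-1])  # 0.45 at the end of the epoch, though the model is already near 0.9
    ```
    
    
    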

Now, if you want more realistic and trustworthy calculation of accuracy, loss, or any other metric you may find yourself using, I would suggest using validation_data parameter in model.fit[_generator] to provide validation data, which will not be used for training, but will be used only to evaluate the network at the end of each epoch, without averaging over various points in time.

\n", + "system": "" + }, + { + "instruction": "How can I handle TensorFlow sessions to train multiple Keras models at the same time?", + "input": "", + "output": "

You are right, Keras automatically works with the default session.\nYou could use tf.compat.v1.keras.backend.get_session() or tf.compat.v1.keras.backend.set_session(sess) to manually set the global Keras session (see documentation).

\n

For instance:

\n
    sess1 = tf.compat.v1.Session()\ntf.compat.v1.keras.backend.set_session(sess1)\n# Train your first Keras model here ...\n\nsess2 = tf.compat.v1.Session()\ntf.compat.v1.keras.backend.set_session(sess2)\n# Train your second Keras model here ...\n

    
\n", + "system": "" + }, + { + "instruction": "Combining the outputs of multiple models into one model", + "input": "", + "output": "

Yes, you can create such models using Multi-input and multi-output models, refer keras documentation for more details. Here I am sharing code sample, hope this helps

\n\n
    import numpy as np\nimport keras\nfrom keras.optimizers import SGD\nfrom keras.models import Sequential, Model\nfrom keras.layers import Activation, Dense, Dropout, Flatten, Input, Merge, Conv2D, MaxPooling2D\n\n# Generate dummy data\ntrain1 = np.random.random((100, 100, 100, 3))\ntrain2 = np.random.random((100, 100, 100, 3))\ntrain3 = np.random.random((100, 100, 100, 3))\ntrain4 = np.random.random((100, 100, 100, 3))\n\ny_train = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)\n\n# parallel inputs for different sections of the image\ninp1 = Input(shape=train1.shape[1:])\ninp2 = Input(shape=train2.shape[1:])\ninp3 = Input(shape=train3.shape[1:])\ninp4 = Input(shape=train4.shape[1:])\n\n# parallel conv and pool layers which process each section of the input independently\nconv1 = Conv2D(64, (3, 3), activation='relu')(inp1)\nconv2 = Conv2D(64, (3, 3), activation='relu')(inp2)\nconv3 = Conv2D(64, (3, 3), activation='relu')(inp3)\nconv4 = Conv2D(64, (3, 3), activation='relu')(inp4)\n\nmaxp1 = MaxPooling2D((3, 3))(conv1)\nmaxp2 = MaxPooling2D((3, 3))(conv2)\nmaxp3 = MaxPooling2D((3, 3))(conv3)\nmaxp4 = MaxPooling2D((3, 3))(conv4)\n\n# can add multiple parallel conv, pool layers to reduce size\n\nflt1 = Flatten()(maxp1)\nflt2 = Flatten()(maxp2)\nflt3 = Flatten()(maxp3)\nflt4 = Flatten()(maxp4)\n\nmrg = Merge(mode='concat')([flt1,flt2,flt3,flt4])\n\ndense = Dense(256, activation='relu')(mrg)\n\nop = Dense(10, activation='softmax')(dense)\n\nmodel = Model(input=[inp1, inp2, inp3, inp4], output=op)\nmodel.compile(optimizer='rmsprop',\n              loss='categorical_crossentropy',\n              metrics=['accuracy'])\nmodel.fit([train1,train2,train3,train4], y_train,\n          nb_epoch=10, batch_size=28)\n

    
\n", + "system": "" + }, + { + "instruction": "Input LSTM on multivariate time series", + "input": "", + "output": "

As you can read in the Keras documentation :

\n
\n

Input shapes

\n

3D tensor with shape (batch_size, timesteps, input_dim).

\n
\n

So the 'time' dimension is first. Since your time dimension is 10, your input shape will be (50000,10,15)

\n

I hope this helps :-)

\n", + "system": "" + }, + { + "instruction": "One to many LSTM in Keras", + "input": "", + "output": "

It's possible with a RepeatVector layer. For example:

\n\n
    model = Sequential()\nmodel.add(Dense(10, input_shape=(1,)))\nmodel.add(RepeatVector(10))\nmodel.add(LSTM(1, return_sequences=True))\n

    
\n\n
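    Shape-wise, RepeatVector simply tiles its input vector along a new time axis; a numpy analogue (illustrative sizes only):

    ```python
    import numpy as np

    v = np.array([0.2, 0.5])                      # a Dense output of shape (2,)
    repeated = np.repeat(v[None, :], 10, axis=0)  # shape (10, 2), like RepeatVector(10)
    print(repeated.shape)
    ```
    
    
    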

    Then the input shape is (1,) and the output is (10, 1).
    

\n", + "system": "" + }, + { + "instruction": "loss function design to incorporate different weight for false positive and false negative", + "input": "", + "output": "

You can use the class_weight parameter of model.fit to weight your classes and, as such, punish misclassifications differently depending on the class.

\n\n
\n

class_weight: optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to \"pay more attention\" to samples from an under-represented class.

\n
\n\n

For example:

\n\n
out = Dense(2, activation='softmax')\nmodel = Model(input=..., output=out)\nmodel.fit(X, Y, class_weight={0: 1, 1: 0.5})\n
\n\n
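    Conceptually, this is equivalent to scaling each sample's loss term by its class's weight (hypothetical labels):

    ```python
    class_weight = {0: 1, 1: 0.5}
    labels = [0, 1, 1, 0]
    # the loss contribution of each sample is multiplied by its class's weight
    sample_weights = [class_weight[y] for y in labels]
    print(sample_weights)  # [1, 0.5, 0.5, 1]
    ```
    
    
    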

This would punish the second class less than the first.

\n", + "system": "" + }, + { + "instruction": "Keras LSTM training data format", + "input": "", + "output": "

The input format for the LSTM should have a shape (sequence_length, input_dim).\nSo in your case, numpy arrays of shape (4,3) should do it.

\n\n

What you will feed to the model will then be a numpy array of shape (number_of_train_examples, sequence_length, input_dim).\nIn other words, you will feed number_of_train_examples tables of shape (4,3). \nBuild a list of :

\n\n
1,0,0\n0,1,0\n0,1,0\n0,0,1\n
\n\n

and then do np.array(list_of_train_example).

\n\n
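    In numpy terms (using the toy table above):

    ```python
    import numpy as np

    one_example = np.array([[1, 0, 0],
                            [0, 1, 0],
                            [0, 1, 0],
                            [0, 0, 1]])       # shape (4, 3): one sequence
    X = np.array([one_example, one_example])  # shape (2, 4, 3): two training examples
    print(X.shape)
    ```
    
    
    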

However, I don't understand why you return the whole sequence for the second LSTM? It will output you something with the shape (4,4), the Dense layer will probably fail on that. Return sequence means that you will return the whole sequence, so every hidden output at each step of LSTM. I would set this to False for the second LSTM to only get a \"summary\" vector of shape (4,) that your Dense layer can read.\nAnyway, even for the first LSTM it means that with an input of shape (4,3), you output something which has shape (4,4), so you will have more parameters than input data for this layer... Can't be really good.

\n\n

    Regarding the activations, I would also use softmax, but only on the last layer; softmax is used to get probabilities as the output of the layer. It doesn't really make sense to use a softmax out of the LSTMs or in the Dense before the last one. Go for some other non-linearity like \"sigmoid\" or \"tanh\".
    

\n\n

This is what I would do model-wise

\n\n
def createNet(summary=False):\n    print(\"Start Initialzing Neural Network!\")\n    model = Sequential()\n    model.add(LSTM(4,input_dim=input_dim,input_length=input_length,\n            return_sequences=True,activation='tanh'))\n    model.add(Dropout(0.1))\n    # output shape : (4,4)\n    model.add(LSTM(4,\n            return_sequences=False,activation='tanh'))\n    model.add(Dropout(0.1))\n    # output shape : (4,)\n    model.add(Dense(3,activation='tanh'))\n    model.add(Dropout(0.1))\n    # output shape : (3,)\n    model.add(Dense(3,activation='softmax'))\n    # output shape : (3,)\n    model.compile(loss='categorical_crossentropy',optimizer='Adam',metrics=['accuracy'])\n    if summary:\n        print(model.summary())\n    return model\n
\n", + "system": "" + }, + { + "instruction": "Max over time pooling in Keras", + "input": "", + "output": "

Assuming that your data shape is (batch_size, seq_len, features) you may apply:

\n\n
seq_model = Reshape((seq_len * features, 1))(seq_model)\nseq_model = GlobalMaxPooling1D()(seq_model)\n
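    What this computes, sketched in numpy (assumed shapes): because timesteps and features are flattened together, the pooling returns a single maximum per sample:

    ```python
    import numpy as np

    x = np.arange(24).reshape(2, 3, 4)   # (batch_size, seq_len, features)
    flat = x.reshape(2, 3 * 4, 1)        # what the Reshape layer produces
    pooled = flat.max(axis=1)            # what GlobalMaxPooling1D then computes
    print(pooled.ravel())                # one maximum per batch element
    ```
    
    
    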
\n", + "system": "" + }, + { + "instruction": "Autoencoder not learning identity function", + "input": "", + "output": "

    I believe the problem could be either the number of epochs or the way you initialize X.\nI ran your code with an X of mine for 100 epochs and printed the argmax() and max values of the weights; it gets really close to the identity function.
    

\n\n

I'm adding the code snippet that I used

\n\n
from keras.models import Sequential\nfrom keras.layers import Dense\nimport numpy as np\nimport random\nimport pandas as pd\n\nX = np.array([[random.random() for r in xrange(84)] for i in xrange(1,100000)])\nmodel = Sequential([Dense(84, input_dim=84)], name=\"layer1\")\nmodel.compile(optimizer='sgd', loss='mean_squared_error')\nmodel.fit(X, X, nb_epoch=100, batch_size=80, validation_split=0.3)\n\nl_weights = np.round(model.layers[0].get_weights()[0],3)\n\nprint l_weights.argmax(axis=0)\nprint l_weights.max(axis=0)\n
\n\n

And I'm getting:

\n\n
Train on 69999 samples, validate on 30000 samples\nEpoch 1/100\n69999/69999 [==============================] - 1s - loss: 0.2092 - val_loss: 0.1564\nEpoch 2/100\n69999/69999 [==============================] - 1s - loss: 0.1536 - val_loss: 0.1510\nEpoch 3/100\n69999/69999 [==============================] - 1s - loss: 0.1484 - val_loss: 0.1459\n.\n.\n.\nEpoch 98/100\n69999/69999 [==============================] - 1s - loss: 0.0055 - val_loss: 0.0054\nEpoch 99/100\n69999/69999 [==============================] - 1s - loss: 0.0053 - val_loss: 0.0053\nEpoch 100/100\n69999/69999 [==============================] - 1s - loss: 0.0051 - val_loss: 0.0051\n[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83]\n[ 0.85000002  0.85100001  0.79799998  0.80500001  0.82700002  0.81900001\n  0.792       0.829       0.81099999  0.80800003  0.84899998  0.829       0.852\n  0.79500002  0.84100002  0.81099999  0.792       0.80800003  0.85399997\n  0.82999998  0.85100001  0.84500003  0.847       0.79699999  0.81400001\n  0.84100002  0.81        0.85100001  0.80599999  0.84500003  0.824\n  0.81999999  0.82999998  0.79100001  0.81199998  0.829       0.85600001\n  0.84100002  0.792       0.847       0.82499999  0.84500003  0.796\n  0.82099998  0.81900001  0.84200001  0.83999997  0.815       0.79500002\n  0.85100001  0.83700001  0.85000002  0.79900002  0.84100002  0.79699999\n  0.838       0.847       0.84899998  0.83700001  0.80299997  0.85399997\n  0.84500003  0.83399999  0.83200002  0.80900002  0.85500002  0.83899999\n  0.79900002  0.83399999  0.81        0.79100001  0.81800002  0.82200003\n  0.79100001  0.83700001  0.83600003  0.824       0.829       0.82800001\n  0.83700001  0.85799998  0.81999999  0.84299999  0.83999997]\n
\n\n

When I used only 5 numbers as an input and printed the actual weights I got this:

\n\n
array([[ 1.,  0., -0.,  0.,  0.],\n       [ 0.,  1.,  0., -0., -0.],\n       [-0.,  0.,  1.,  0.,  0.],\n       [ 0., -0.,  0.,  1., -0.],\n       [ 0., -0.,  0., -0.,  1.]], dtype=float32)\n
\n", + "system": "" + }, + { + "instruction": "Keras How to use max_value in Relu activation function", + "input": "", + "output": "

You can use the ReLU function of the Keras backend. Therefore, first import the backend:

\n\n
from keras import backend as K\n
\n\n

Then, you can pass your own function as activation using backend functionality.\nThis would look like

\n\n
def relu_advanced(x):\n    return K.relu(x, max_value=250)\n
\n\n

Then you can use it like

\n\n
model.add(Dense(512, input_dim=1, activation=relu_advanced))\n
\n\n

or

\n\n
model.add(Activation(relu_advanced))\n
\n\n

Unfortunately, you must hard code additional arguments.\nTherefore, it is better to use a function, that returns your function and passes your custom values:

\n\n
def create_relu_advanced(max_value=1.):        \n    def relu_advanced(x):\n        return K.relu(x, max_value=K.cast_to_floatx(max_value))\n    return relu_advanced\n
\n\n

Then you can pass your arguments by either

\n\n
model.add(Dense(512, input_dim=1, activation=create_relu_advanced(max_value=250)))\n
\n\n

or

\n\n
model.add(Activation(create_relu_advanced(max_value=250)))\n
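For intuition, K.relu(x, max_value=...) is just a clipped rectifier. Here is a minimal NumPy sketch of the same math (the helper name is mine, not a Keras API):

```python
import numpy as np

def relu_capped(x, max_value=250.0):
    # Same math as K.relu(x, max_value=...): max(x, 0), then clipped at max_value
    return np.minimum(np.maximum(x, 0.0), max_value)

print(relu_capped(np.array([-5.0, 10.0, 300.0])))  # negative -> 0, 300 -> 250
```

This also makes clear why the closure trick above works: max_value only changes a constant in this formula, so it can be baked in when the activation function is created.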
\n", + "system": "" + }, + { + "instruction": "merging recurrent layers with dense layer in Keras", + "input": "", + "output": "

It is correct that in Keras, an RNN layer expects input of shape (nb_samples, time_steps, input_dim). However, if you want to add an RNN layer after a Dense layer, you can still do so by reshaping the input for the RNN layer. Reshape can be used both as a first layer and as an intermediate layer in a sequential model. Examples are given below:

\n\n

Reshape as first layer in a Sequential model

\n\n
model = Sequential()\nmodel.add(Reshape((3, 4), input_shape=(12,)))\n# now: model.output_shape == (None, 3, 4)\n# note: `None` is the batch dimension\n
\n\n

Reshape as an intermediate layer in a Sequential model

\n\n
model.add(Reshape((6, 2)))\n# now: model.output_shape == (None, 6, 2)\n
\n\n

For example, if you change your code in the following way, then there will be no error. I have checked it and the model compiled without any error reported. You can change the dimension as per your need.

\n\n
from keras.models import Sequential\nfrom keras.layers import Dense, SimpleRNN, Reshape\nfrom keras.optimizers import Adam\n\nmodel = Sequential()\nmodel.add(Dense(150, input_dim=23, kernel_initializer='random_normal', activation='relu'))\nmodel.add(Dense(80, activation='relu', kernel_initializer='random_normal'))\nmodel.add(Reshape((1, 80)))\nmodel.add(SimpleRNN(2, kernel_initializer='random_normal'))\nadam = Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)\nmodel.compile(loss=\"mean_squared_error\", optimizer=adam)\n
\n", + "system": "" + }, + { + "instruction": "Why does Keras' train_on_batch produce zero loss and accuracy at the second epoch?", + "input": "", + "output": "

This seems like the exploding/vanishing gradient problem. As others have said, try tuning your learning rate and/or the depth/width of your network layers.

\n", + "system": "" + }, + { + "instruction": "How to dynamically freeze weights after compiling model in Keras?", + "input": "", + "output": "

I tried this example code a couple of months ago and it worked:\nhttps://github.com/fchollet/keras/blob/master/examples/mnist_acgan.py

\n\n

It's not the simplest form of GAN, but as far as I remember, it's not too difficult to remove the classification loss and turn the model into a GAN.

\n\n

You don't need to turn on/off the discriminator's trainable property and recompile. Simply create and compile two model objects, one with trainable=True (discriminator in the code) and another one with trainable=False (combined in the code).

\n\n

When you're updating the discriminator, call discriminator.train_on_batch(). When you're updating the generator, call combined.train_on_batch().

\n", + "system": "" + }, + { + "instruction": "How to split folders to 3 datasets with ImageDataGenerator?", + "input": "", + "output": "

I like working with the flow_from_dataframe() method of ImageDataGenerator, where I interact with a simple Pandas DataFrame (perhaps containing other features), not with the directory. But you can easily change my code if you insist on flow_from_directory().

\n

So this is my go-to function, e.g. for a regression task, where we try to predict a continuous y:

\n
def get_generators(train_samp, test_samp, validation_split = 0.1):\n    train_datagen = ImageDataGenerator(validation_split=validation_split, rescale = 1. / 255)\n    test_datagen = ImageDataGenerator(rescale = 1. / 255)\n    \n    train_generator = train_datagen.flow_from_dataframe(\n        dataframe = images_df[images_df.index.isin(train_samp)],\n        directory = images_dir,\n        x_col = 'img_file',\n        y_col = 'y',\n        target_size = (IMG_HEIGHT, IMG_WIDTH),\n        class_mode = 'raw',\n        batch_size = batch_size,\n        shuffle = True,\n        subset = 'training',\n        validate_filenames = False\n    )\n    valid_generator = train_datagen.flow_from_dataframe(\n        dataframe = images_df[images_df.index.isin(train_samp)],\n        directory = images_dir,\n        x_col = 'img_file',\n        y_col = 'y',\n        target_size = (IMG_HEIGHT, IMG_WIDTH),\n        class_mode = 'raw',\n        batch_size = batch_size,\n        shuffle = False,\n        subset = 'validation',\n        validate_filenames = False\n    )\n\n    test_generator = test_datagen.flow_from_dataframe(\n        dataframe = images_df[images_df.index.isin(test_samp)],\n        directory = images_dir,\n        x_col = 'img_file',\n        y_col = 'y',\n        target_size = (IMG_HEIGHT, IMG_WIDTH),\n        class_mode = 'raw',\n        batch_size = batch_size,\n        shuffle = False,\n        validate_filenames = False\n    )\n    return train_generator, valid_generator, test_generator\n
\n

Things to notice:

\n\n

This can be further generalized for multiple outputs, classification, what have you.

\n", + "system": "" + }, + { + "instruction": "How to prevent Keras from computing metrics during training", + "input": "", + "output": "

I was able to use learning_phase, but only in symbolic tensor (graph) mode:

\n

So, first we need to disable eager execution (this must be done right after importing tensorflow):

\n
import tensorflow as tf\ntf.compat.v1.disable_eager_execution()\n
\n

Then you can create your metric using a symbolic if (backend.switch):

\n
def metric_graph(in1, in2, out):\n    actual_metric = out * (in1 + in2)\n    return K.switch(K.learning_phase(), tf.zeros((1,)), actual_metric) \n
\n

The method add_metric will ask for a name and an aggregation method, which you can set to "mean".

\n

So, here is one example:

\n
import numpy\nfrom tensorflow.keras.layers import Input, Dense, Concatenate\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras import backend as K\n\nx1 = numpy.ones((5,3))\nx2 = numpy.ones((5,3))\ny = 3*numpy.ones((5,1))\n\nvx1 = numpy.ones((5,3))\nvx2 = numpy.ones((5,3))\nvy = 3*numpy.ones((5,1))\n\ndef metric_eager(in1, in2, out):\n    if (K.learning_phase()):\n        return 0\n    else:\n        return out * (in1 + in2)\n\ndef metric_graph(in1, in2, out):\n    actual_metric = out * (in1 + in2)\n    return K.switch(K.learning_phase(), tf.zeros((1,)), actual_metric) \n\nins1 = Input((3,))\nins2 = Input((3,))\nouts = Concatenate()([ins1, ins2])\nouts = Dense(1)(outs)\nmodel = Model([ins1, ins2],outs)\nmodel.add_metric(metric_graph(ins1, ins2, outs), name='my_metric', aggregation='mean')\nmodel.compile(loss='mse', optimizer='adam')\n\nmodel.fit([x1, x2],y, validation_data=([vx1, vx2], vy), epochs=3)\n
\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'keras.applications.resnet50 on google colab", + "input": "", + "output": "
from tensorflow.keras.applications.resnet50 import ResNet50\n
\n", + "system": "" + }, + { + "instruction": "Can SigmoidFocalCrossEntropy in Tensorflow (tf-addons) be used in Multiclass Classification? ( What is the right way)?", + "input": "", + "output": "

Some basics first.

\n

Categorical Crossentropy is designed to incentivize a model to predict 100% for the correct label. It was designed for models that predict single-label multi-class classification - like CIFAR10 or Imagenet. Usually these models finish in a Dense layer with more than one output.

\n

Binary Crossentropy is designed to incentivize a model to predict 100% if the label is one, or 0% if the label is zero. Usually these models finish in a Dense layer with exactly one output.

\n

When you apply Binary Crossentropy to a single-label multi-class classification problem, you are doing something that is mathematically valid but defines a slightly different task: you are incentivizing a single-label classification model to not only get the true label correct, but also minimize the false labels.

\n

For example, if your target is dog, and your model predict 60% dog, CCE doesn't care if your model predicts 20% cat and 20% French horn, or, 40% cat and 0% French horn. So this is aligned with a top-1 accuracy concept.

\n

But if you take that same model and apply BCE, and your model predicts 60% dog, BCE DOES care whether your model predicts 20%/20% cat/frenchhorn, vs 40%/0% cat/frenchhorn. To put it in precise terminology, the former is more &quot;calibrated&quot; and so it has some additional measure of goodness. However, this has little correlation to top-1 accuracy.

\n

When you use BCE, presumably you are wasting the model's energy to focus on calibration at the expense of top-1 acc. But as you might have seen, it doesn't always work out that way. Sometimes BCE gives you superior results. I don't know that there's a clear explanation of that but I'd assume that the additional signals (in the case of Imagenet, you'll literally get 1000 times more signals) somehow creates a smoother loss value that perhaps helps smooth the gradients you receive.

\n

The gamma (focusing) parameter of focal loss additionally penalizes very wrong predictions and lessens the penalty when your model predicts something close to the right answer - like predicting 90% cat when the ground truth is cat - while alpha re-weights the classes. This is a shift from the original definition of CCE, which is based on the theory of Maximum Likelihood Estimation and focuses on calibration, vs the normal metric most ML practitioners care about: top-1 accuracy.

\n

Focal loss was originally designed for binary classification so the original formulation only has a single alpha value. The repo you pointed to extends the concept of Focal Loss to single-label classification and therefore there are multiple alpha values: one per class. However, by my read, it loses the additional possible smoothing effect of BCE.

\n

Net net, for the best results, you'll want to benchmark CCE, BCE, Binary Focal Loss (out of TFA and per the original paper), and the single-label multi-class Focal Loss that you found in that repo. In general, the discovery of those alpha and gamma values is done via guess & check, or grid search.

\n
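To make the loss behavior concrete, here is a NumPy sketch of binary focal loss as defined in the original paper (Lin et al., 2017); the helper name is mine, illustrative only:

```python
import numpy as np

def binary_focal_loss(y_true, p, alpha=0.25, gamma=2.0):
    # FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t); gamma down-weights
    # easy (well-classified) examples, alpha re-balances the two classes.
    p_t = np.where(y_true == 1, p, 1.0 - p)
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction is penalized far less than a wrong one:
easy = binary_focal_loss(np.array([1]), np.array([0.9]))
hard = binary_focal_loss(np.array([1]), np.array([0.1]))
print(easy, hard)
```

With gamma=0 and alpha=0.5 this degenerates to (half of) plain binary cross-entropy, which is a useful sanity check when benchmarking the variants discussed above.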

There's a lot of manual guessing and checking in ML unfortunately.

\n", + "system": "" + }, + { + "instruction": "How to fix error: Cannot register 2 metrics with the same name: /tensorflow/api/keras/optimizers", + "input": "", + "output": "

This appears to be a bug with tensorflow 2.6; see https://forums.developer.nvidia.com/t/unable-to-import-keras-models-on-tensorflow-2-6-0-jetpack-v46/191904

\n

I had the same issue and solved it by downgrading to tensorflow 2.5 until the issue is resolved in a future update.

\n

Edit: 2.7 is out and seems to have fixed the issue.

\n

Note: I am using tensorflow with my CPU, not a GPU.

\n", + "system": "" + }, + { + "instruction": "'str' object has no attribute 'decode' for Tensorflow in Python", + "input": "", + "output": "

The problem was solved by uninstalling h5py and installing h5py==2.10.0 as below:

\n
pip uninstall h5py\npip install h5py==2.10.0\n
\n

If h5py >= 3, the code shows this error.

\n", + "system": "" + }, + { + "instruction": "Import ResNeXt into Keras", + "input": "", + "output": "

I've never understood why some widely used model architectures, like SE-Net and ResNeXt, are not part of keras.applications. However, there is a well-known Keras model zoo repository where you can get what you need: Classification models Zoo - Keras (and TensorFlow Keras).

\n

Installing

\n
!pip install git+https://github.com/qubvel/classification_models.git\n
\n

Importing

\n
# for keras\nfrom classification_models.keras import Classifiers\n\n# for tensorflow keras\nfrom classification_models.tfkeras import Classifiers\n\nClassifiers.models_names()\n
\n
['resnet18',\n 'resnet34',\n 'resnet50',\n 'resnet101',\n 'resnet152',\n 'seresnet18',\n 'seresnet34',\n 'seresnet50',\n 'seresnet101',\n 'seresnet152',\n 'seresnext50',\n 'seresnext101',\n 'senet154',\n 'resnet50v2',\n 'resnet101v2',\n 'resnet152v2',\n 'resnext50',\n 'resnext101',\n 'vgg16',\n 'vgg19',\n 'densenet121',\n 'densenet169',\n 'densenet201',\n 'inceptionresnetv2',\n 'inceptionv3',\n 'xception',\n 'nasnetlarge',\n 'nasnetmobile',\n 'mobilenet',\n 'mobilenetv2']\n
\n

How to use

\n
SeResNeXT, preprocess_input = Classifiers.get('seresnext50')\nmodel = SeResNeXT(include_top = False, input_shape=(224, 224, 3), weights='imagenet')\n
\n
ResNeXt50, preprocess_input = Classifiers.get('resnext50')\nmodel = ResNeXt50(include_top = False, input_shape=(224, 224, 3), weights='imagenet')\n
\n", + "system": "" + }, + { + "instruction": "How to resolve the ERROR: Failed building wheel for h5py", + "input": "", + "output": "

If the error says that h5py uses PEP 517 and cannot be installed directly, try this:

\n
pip install --upgrade pip setuptools wheel\n
\n

or check your Python version; for example, h5py 2.6 only supports up to Python 3.6, look at this.

\n", + "system": "" + }, + { + "instruction": "'SparseTensor' object is not subscriptable keras", + "input": "", + "output": "

Keras can't work with csr_matrix. Convert to a numpy array.

\n
X_train = X_train.toarray()\n
\n", + "system": "" + }, + { + "instruction": "How to do Multiclass classification with Keras?", + "input": "", + "output": "

You need to convert your class labels to one-hot vectors; there is a utility for that:

\n
y_train = tf.keras.utils.to_categorical(y_train, num_classes=num_classes)\n
\n
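For intuition, to_categorical turns integer labels into one-hot rows. Here is a minimal NumPy equivalent (illustrative sketch, not the Keras implementation):

```python
import numpy as np

def one_hot(y, num_classes):
    # Each integer label becomes a row with a single 1.0 at that index
    out = np.zeros((len(y), num_classes))
    out[np.arange(len(y)), y] = 1.0
    return out

print(one_hot([0, 2, 1], 3))
```

Note that string labels must first be mapped to integers (e.g. with sklearn's LabelEncoder) before either utility can be applied.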

Also, the last layer for multi-class classification should be something like:

\n
model.add(Dense(NUM_CLASSES, activation='softmax'))\n
\n

And finally, for multi-class classification, the correct loss would be categorical cross-entropy.

\n
model.compile(loss="categorical_crossentropy", optimizer= "adam", metrics=['accuracy'])\n
\n

This is a nice example available from tensorflow: Classification Example

\n", + "system": "" + }, + { + "instruction": "Keras giving low accuracy after loading model", + "input": "", + "output": "

I had a similar problem in tf 2.3.0.

\n

This issue explains the problem with the generic term &quot;accuracy&quot; metric when using sparse_categorical_crossentropy. On model reloading, it associates the wrong accuracy metric.\nThe solution is to explicitly tell Keras to use the correct metric instead of letting it infer the correct one (there is a bug in that inference), i.e. compile with metrics='sparse_categorical_accuracy'.

\n

I was initially using metrics='accuracy' during training and discovered that only by recompiling the model after reloading did it give back the expected performance.

\n", + "system": "" + }, + { + "instruction": "what is the difference between using softmax as a sequential layer in tf.keras and softmax as an activation function for a dense layer?", + "input": "", + "output": "

They are the same; you can test it on your own:

\n
# generate data\nx = np.random.uniform(0,1, (5,20)).astype('float32')\n\n# 1st option\nX = Dense(10, activation=tf.nn.softmax)\nA = X(x)\n\n# 2nd option\nw,b = X.get_weights()\nB = Softmax()(tf.matmul(x,w) + b)\n\ntf.reduce_all(A == B)\n# <tf.Tensor: shape=(), dtype=bool, numpy=True>\n
\n
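To see why the two options are equivalent, softmax is just a deterministic function applied to the dense layer's logits. A NumPy sketch of the default axis=-1 behavior:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable: subtract the row max before exponentiating
    z = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

p = softmax(np.array([[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]]))
print(p.sum(axis=-1))  # each row sums to 1
```

Whether this function is fused into the Dense layer or applied as a separate Softmax layer, the composed computation is identical.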

Also note that when using tf.keras.layers.Softmax you don't need to specify the units; it's a simple activation.

\n

By default, the softmax is computed on the -1 axis; you can change this if you have tensor outputs of more than 2D and want to apply softmax along another dimension. You can change this easily in the second option.

\n", + "system": "" + }, + { + "instruction": "Why does keras model.fit use so much memory despite using allow_growth=True?", + "input": "", + "output": "

I used to face this problem. I found a solution from someone whom I can't find anymore; I paste his solution below. In fact, I found that if you set allow_growth=True, tensorflow seems to use all your memory. So you should just set your max limit.

\n

try this:

\n
gpus = tf.config.experimental.list_physical_devices("GPU")\nif gpus:\n    # Restrict TensorFlow to only use the first GPU\n    try:\n        for gpu in gpus:\n            tf.config.experimental.set_memory_growth(gpu, False)\n            tf.config.experimental.set_virtual_device_configuration(\n                gpu,\n                [\n                    tf.config.experimental.VirtualDeviceConfiguration(\n                        memory_limit=12288  # set your limit\n                    )\n                ],\n            )\n        tf.config.experimental.set_visible_devices(gpus[0], "GPU")\n        logical_gpus = tf.config.experimental.list_logical_devices("GPU")\n        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")\n    except RuntimeError as e:\n        # Visible devices must be set before GPUs have been initialized\n        print(e)\n
\n", + "system": "" + }, + { + "instruction": "ValueError: `validation_split` is only supported for Tensors or NumPy arrays, found: (keras.preprocessing.sequence.TimeseriesGenerator object)", + "input": "", + "output": "

Your first intuition is right that you can't use validation_split when using a dataset generator.

\n

You will have to understand how a dataset generator works. The model.fit API does not know how many records or batches your dataset has during its first epoch, as the data is generated or supplied one batch at a time to the model for training. So there is no way for the API to know how many records are there initially and then to make a validation set out of them. For this reason, you cannot use validation_split when using a dataset generator. You can read it in their documentation.

\n
\n

Float between 0 and 1. Fraction of the training data to be used as\nvalidation data. The model will set apart this fraction of the\ntraining data, will not train on it, and will evaluate the loss and\nany model metrics on this data at the end of each epoch. The\nvalidation data is selected from the last samples in the x and y data\nprovided, before shuffling. This argument is not supported when x is a\ndataset, generator or keras.utils.Sequence instance.

\n
\n

You need to read the last two lines where they have said that it is not supported for dataset generator.

\n

What you can instead do is use the following code to split the dataset. You can read in detail here. I am just writing the important part from the link below.

\n
# Splitting the dataset for training and testing.\ndef is_test(x, _):\n    return x % 4 == 0\n\n\ndef is_train(x, y):\n    return not is_test(x, y)\n\n\nrecover = lambda x, y: y\n\n# Split the dataset for training.\ntest_dataset = dataset.enumerate() \\\n    .filter(is_test) \\\n    .map(recover)\n\n# Split the dataset for testing/validation.\ntrain_dataset = dataset.enumerate() \\\n    .filter(is_train) \\\n    .map(recover)\n
\n
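The enumerate/filter split above is just a modulo partition of record indices. A plain-Python sketch of the same idea:

```python
# Every 4th record (index % 4 == 0) goes to the test split, the rest to
# training, mirroring the is_test/is_train filters above (a ~25%/75% split).
records = list(range(10))
test = [r for i, r in enumerate(records) if i % 4 == 0]
train = [r for i, r in enumerate(records) if i % 4 != 0]
print(test)   # [0, 4, 8]
print(train)  # [1, 2, 3, 5, 6, 7, 9]
```

Because the rule depends only on the index, it works even when the total number of records is unknown up front, which is exactly the constraint a streaming dataset imposes.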

I hope my answer helps you.

\n", + "system": "" + }, + { + "instruction": "Keras softmax output and accuracy", + "input": "", + "output": "
\n

e.g., when the true class is [0, 0, 1] and predicted probability is [0.1, 0.4, 0.5], even if 0.5 is the largest probability, the accuracy of this prediction should be 0, because 0.5 != 1. Is that correct?

\n
\n

No. You treat the index with the maximum value as the prediction of the model. So in your example, this sample prediction would count towards increasing the accuracy. This is normally called Top-1 accuracy. In image classification, the Top-5 accuracy is also often used (the top 5 maximum values in the softmax layer are treated as guesses of the NN and they are considered for the accuracy).

\n
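To make the argmax convention concrete, here is a small NumPy sketch of Top-k accuracy (the helper is illustrative, not a Keras API):

```python
import numpy as np

def top_k_accuracy(y_true, probs, k=1):
    # y_true: integer class ids; probs: (n, num_classes) softmax outputs.
    # A sample counts as correct if its true class is among the k largest
    # predicted probabilities.
    top_k = np.argsort(probs, axis=1)[:, -k:]
    return float(np.mean([t in row for t, row in zip(y_true, top_k)]))

probs = np.array([[0.1, 0.4, 0.5]])
print(top_k_accuracy(np.array([2]), probs, k=1))  # argmax is class 2 -> 1.0
```

With k=1 this is exactly the example from the question: the prediction [0.1, 0.4, 0.5] against true class 2 counts as correct even though 0.5 != 1.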
\n

More generally, when the output layer activation is softmax, we will normally get floating probability predictions, and in very very little chance will we get integer probability predictions like [0, 0, 1]. So we can't use accuracy as a metric when using softmax as activation. Is that correct?

\n
\n

Technically speaking, you will never get integer values for the softmax layer output since the type is float. But yeah, there's a very teeny tiny chance of getting [0.0, 0.0, 1.0]. And this assumption of yours is incorrect since the premise does not hold. Nevertheless, accuracy is a valid metric when using Softmax as the classification layer of a neural network.

\n", + "system": "" + }, + { + "instruction": "Keras 'set_session' not available for Tensorflow 2.0", + "input": "", + "output": "

Try using the Keras backend from the tensorflow path. Your code gives me an error, but this works for me:

\n
import tensorflow as tf\nfrom tensorflow.keras.models import load_model, Model\nfrom tensorflow.python.keras import backend as K\n\nsess = tf.compat.v1.Session()\nK.set_session(sess)\n
\n", + "system": "" + }, + { + "instruction": "0 accuracy with LSTM", + "input": "", + "output": "

You're using a sigmoid activation, which means your labels must be in the range 0 to 1. But in your case, the labels are 1. and -1.

\n

Just replace -1 with 0.

\n
for i, y in enumerate(y_train_lstm):\n    if y == -1.:\n        y_train_lstm[i,:] = 0. \nfor i, y in enumerate(y_val_lstm):\n    if y == -1.:\n        y_val_lstm[i,:] = 0. \n\nfor i, y in enumerate(y_test_lstm):\n    if y == -1.:\n        y_test_lstm[i,:] = 0. \n
\n
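The loops above can also be done in one vectorized step with NumPy (same remapping, just without the Python loop):

```python
import numpy as np

# Map {-1., 1.} labels to {0., 1.} so they match the sigmoid output range
y = np.array([1., -1., -1., 1.])
y01 = np.where(y == -1., 0., y)
print(y01)  # [1. 0. 0. 1.]
```

Applied to y_train_lstm, y_val_lstm, and y_test_lstm in place of the three loops, it gives the same result.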

Sidenote:

\n

\"enter

\n

The signals are very close, it would be hard to distinguish them. So, probably accuracy won't be high with simple models.

\n

After training with 0. and 1. labels,

\n
model = keras.models.Sequential([\n        keras.layers.LSTM(124, return_sequences=True, input_shape=(30, 1)),\n        keras.layers.LSTM(258),\n        keras.layers.Dense(1, activation='sigmoid')\n])\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\nhistory = model.fit(X_train_lstm, y_train_lstm, epochs=5, batch_size=128,\n                    validation_data=(X_val_lstm, y_val_lstm))\n# history = model.fit_generator(train_generator, epochs=40, validation_data=validation_generator, verbose=1)\nscore, acc = model.evaluate(X_val_lstm, y_val_lstm,\n                            batch_size=128)\n\nhistorydf = pd.DataFrame(history.history)\nhistorydf.head(10)\n
\n
Epoch 1/5\n12/12 [==============================] - 5s 378ms/step - loss: 0.7386 - accuracy: 0.4990 - val_loss: 0.6959 - val_accuracy: 0.4896\nEpoch 2/5\n12/12 [==============================] - 4s 318ms/step - loss: 0.6947 - accuracy: 0.5133 - val_loss: 0.6959 - val_accuracy: 0.5104\nEpoch 3/5\n12/12 [==============================] - 4s 318ms/step - loss: 0.6941 - accuracy: 0.4895 - val_loss: 0.6930 - val_accuracy: 0.5104\nEpoch 4/5\n12/12 [==============================] - 4s 332ms/step - loss: 0.6946 - accuracy: 0.5269 - val_loss: 0.6946 - val_accuracy: 0.5104\nEpoch 5/5\n12/12 [==============================] - 4s 334ms/step - loss: 0.6931 - accuracy: 0.4901 - val_loss: 0.6929 - val_accuracy: 0.5104\n3/3 [==============================] - 0s 73ms/step - loss: 0.6929 - accuracy: 0.5104\n\n    loss    accuracy    val_loss    val_accuracy\n0   0.738649    0.498980    0.695888    0.489583\n1   0.694708    0.513256    0.695942    0.510417\n2   0.694117    0.489463    0.692987    0.510417\n3   0.694554    0.526852    0.694613    0.510417\n4   0.693118    0.490143    0.692936    0.510417\n
\n

Source code in colab: https://colab.research.google.com/drive/10yRf4TfGDnp_4F2HYoxPyTlF18no-8Dr?usp=sharing

\n", + "system": "" + }, + { + "instruction": "Custom loss function with weights in Keras", + "input": "", + "output": "

This is a workaround to pass additional arguments to a custom loss function, in your case an array of weights. The trick consists in using fake inputs, which are useful to build and use the loss in the correct way. Keep in mind that Keras expects a fixed batch dimension.

\n

I provide a dummy example in a regression problem

\n
import numpy as np\nfrom tensorflow.keras.layers import Input, Dense\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras import backend as K\n\ndef mse(y_true, y_pred, weights):\n    error = y_true-y_pred\n    return K.mean(K.square(error) + K.sqrt(weights))\n\nX = np.random.uniform(0,1, (1000,10))\ny = np.random.uniform(0,1, 1000)\nw = np.random.uniform(0,1, 1000)\n\ninp = Input((10,))\ntrue = Input((1,))\nweights = Input((1,))\nx = Dense(32, activation='relu')(inp)\nout = Dense(1)(x)\n\nm = Model([inp,true,weights], out)\nm.add_loss( mse( true, out, weights ) )\nm.compile(loss=None, optimizer='adam')\nm.fit(x=[X, y, w], y=None, epochs=3)\n\n## final fitted model to compute predictions (remove W if not needed)\nfinal_m = Model(inp, out)\n
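To sanity-check what the custom loss computes, here is the same formula in plain NumPy (illustrative helper, not a Keras API):

```python
import numpy as np

def weighted_mse(y_true, y_pred, weights):
    # Same formula as the Keras loss above: mean(error**2 + sqrt(weights))
    error = y_true - y_pred
    return float(np.mean(np.square(error) + np.sqrt(weights)))

# errors [1, 0] -> squares [1, 0]; sqrt weights [2, 2]; mean of [3, 2] = 2.5
print(weighted_mse(np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([4.0, 4.0])))
```

Checking the math by hand like this is useful before wiring the loss into add_loss, where debugging is harder.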
\n", + "system": "" + }, + { + "instruction": "Is there any way to access layers in tensorflow_hub.KerasLayer object?", + "input": "", + "output": "

There is an undocumented way to get intermediate layers out of some TF2 SavedModels exported from TF-Slim, such as https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4: passing return_endpoints=True to the SavedModel's __call__ function changes the output to a dict.

\n

NOTE: This interface is subject to change or removal, and has known issues.

\n
model = tfhub.KerasLayer('https://tfhub.dev/google/imagenet/inception_v1/feature_vector/4', trainable=False, arguments=dict(return_endpoints=True))\ninput = tf.keras.layers.Input((224, 224, 3))\noutputs = model(input)\nfor k, v in sorted(outputs.items()):\n  print(k, v.shape)\n
\n

Output for this example:

\n
InceptionV1/Conv2d_1a_7x7 (None, 112, 112, 64)\nInceptionV1/Conv2d_2b_1x1 (None, 56, 56, 64)\nInceptionV1/Conv2d_2c_3x3 (None, 56, 56, 192)\nInceptionV1/MaxPool_2a_3x3 (None, 56, 56, 64)\nInceptionV1/MaxPool_3a_3x3 (None, 28, 28, 192)\nInceptionV1/MaxPool_4a_3x3 (None, 14, 14, 480)\nInceptionV1/MaxPool_5a_2x2 (None, 7, 7, 832)\nInceptionV1/Mixed_3b (None, 28, 28, 256)\nInceptionV1/Mixed_3c (None, 28, 28, 480)\nInceptionV1/Mixed_4b (None, 14, 14, 512)\nInceptionV1/Mixed_4c (None, 14, 14, 512)\nInceptionV1/Mixed_4d (None, 14, 14, 512)\nInceptionV1/Mixed_4e (None, 14, 14, 528)\nInceptionV1/Mixed_4f (None, 14, 14, 832)\nInceptionV1/Mixed_5b (None, 7, 7, 832)\nInceptionV1/Mixed_5c (None, 7, 7, 1024)\nInceptionV1/global_pool (None, 1, 1, 1024)\ndefault (None, 1024)\n
\n

Issues to be aware of:

\n\n

Source: https://github.com/tensorflow/hub/issues/453

\n", + "system": "" + }, + { + "instruction": "ValueError: No gradients provided for any variable - Tensorflow 2.0/Keras", + "input": "", + "output": "

There are two different sets of problems in your code, which could be categorized as syntactical and architectural problems. The error raised (i.e. No gradients provided for any variable) is related to the syntactical problems which I would mostly address below, but I would try to give you some pointers about the architectural problems after that as well.

\n\n

The main cause of syntactical problems is about using named inputs and outputs for the model. Named inputs and outputs in Keras is mostly useful when the model has multiple input and/or output layers. However, your model has only one input and one output layer. Therefore, it may not be very useful to use named inputs and outputs here, but if that's your decision I would explain how it could be done properly.

\n\n

First of all, you should keep in mind that when using Keras models, the data generated from any input pipeline (whether it's a Python generator or tf.data.Dataset) should be provided as a tuple i.e. (input_batch, output_batch) or (input_batch, output_batch, sample_weights). And, as I said, this is the expected format everywhere in Keras when dealing with input pipelines, even when we are using named inputs and outputs as dictionaries.

\n\n

For example, if I want to use inputs/outputs naming and my model has two input layers named as \"words\" and \"importance\", and also two output layers named as \"output1\" and \"output2\", they should be formatted like this:

\n\n
({'words': words_data, 'importance': importance_data},\n {'output1': output1_data, 'output2': output2_data})\n
\n\n
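As a plain-Python illustration of this format, a generator for such a model could yield tuples like the following (the layer names 'words', 'importance', 'output1', and 'output2' are the hypothetical ones from the example above):

```python
import numpy as np

def batch_generator(n_batches=2, batch_size=4):
    # Yields (inputs_dict, outputs_dict) tuples, matching the naming scheme
    # described above: dict keys correspond to layer names.
    for _ in range(n_batches):
        inputs = {'words': np.zeros((batch_size, 10)),
                  'importance': np.zeros((batch_size, 1))}
        outputs = {'output1': np.zeros((batch_size, 1)),
                   'output2': np.zeros((batch_size, 1))}
        yield inputs, outputs

first_inputs, first_outputs = next(batch_generator())
print(sorted(first_inputs))   # ['importance', 'words']
print(sorted(first_outputs))  # ['output1', 'output2']
```

The same tuple-of-dicts shape applies whether the pipeline is a Python generator or a tf.data.Dataset.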

So as you can see above, it's a tuple where each element of the tuple is a dictionary; the first element corresponds to inputs of the model and the second element corresponds to outputs of the model. Now, according to this point, let's see what modifications should be done to your code:

\n\n\n\n

All right, these would resolve the input/output problems and the error related to gradients would be gone; however, if you run the code after applying the above modifications, you would still get an error regarding incompatible shapes. As I said earlier, there are architectural issues in your model which I would briefly address below.

\n\n
\n\n

As you mentioned, this is supposed to be a seq-to-seq model. Therefore, the output is a sequence of one-hot encoded vectors, where the length of each vector is equal to the (target-sequence) vocabulary size. As a result, the softmax classifier should have as many units as the vocabulary size, like this (Note: never in any model or problem use a softmax layer with only one unit; that's all wrong! Think about why it's wrong!):

\n\n
self.out_layer = keras.layers.Dense(params['vocab_size'], activation='softmax')\n
\n\n

The next thing to consider is the fact that we are dealing with 1D sequences (i.e. a sequence of tokens/words). Therefore using 2D-convolution and 2D-pooling layers does not make sense here. You can either use their 1D counterparts or replace them with something else like RNN layers. As a result of this, the Lambda layer should be removed as well. Also, if you want to use convolution and pooling, you should adjust the number of filters in each layer as well as the pool size properly (i.e. one conv filter, Conv1D(1,...), is probably not optimal, and a pool size of 1 does not make sense).

\n\n

Further, that Dense layer before the last layer which has only one unit could severely limit the representational capacity of the model (i.e. it is essentially the bottleneck of your model). Either increase its number of units, or remove it.

\n\n

The other thing is that there is no reason for not one-hot encoding the labels of the dev set. Rather, they should be one-hot encoded like the labels of the training set. Therefore, either the training argument of make_generator should be removed entirely or, if you have some other use case for it, the dev dataset should be created with the training=True argument passed to the make_dataset function.

\n\n

Finally, after all these changes your model might work and start fitting on data; but after a few batches have passed, you might get an incompatible shapes error again. That's because you are generating input data with unknown dimensions and also use a relaxed padding approach to pad each batch as much as needed (i.e. by using (None,) for padded_shapes). To resolve this you should decide on a fixed input/output dimension (e.g. by considering a fixed length for input/output sequences), and then adjust the architecture or hyper-parameters of the model (e.g. conv kernel size, conv padding, pooling size, adding more layers, etc.) as well as the padded_shapes argument accordingly. Even if you would like your model to support input/output sequences of variable length instead, you should account for it in the model's architecture and hyper-parameters as well as the padded_shapes argument. Since the solution depends on the task and the design you have in mind, and there is no one-size-fits-all solution, I would not comment further on that and leave it to you to figure it out. But here is a working solution (which may not be, and probably isn't, optimal at all) just to give you an idea:

\n\n
self.out_layer = keras.layers.Dense(params['vocab_size'], activation='softmax')\n\nself.model_layers = [\n    keras.layers.Embedding(params['vocab_size'], params['vocab_size']),\n    keras.layers.Conv1D(32, 4, padding='same'),\n    keras.layers.TimeDistributed(self.out_layer)\n]\n\n\n# ...\npadded_shapes=(\n    {'inputs': (10,)},\n    {'targets': (10,)}\n)\n
\n", + "system": "" + }, + { + "instruction": "TF2.1: SegNet model architecture problem. Bug with metric calculation, keeps constant and converge to determined value", + "input": "", + "output": "

You can have reshapes with unknown batch size in custom layers in two ways.

\n\n

If you know the rest of the shape, reshape using -1 as the batch size:

\n\n

Suppose you know the size of your expected array:

\n\n
import tensorflow.keras.backend as K\nreshaped = K.reshape(original, (-1, x, y, channels))\n
\n\n

Suppose you don't know the size, then use K.shape to get the shape as a tensor:

\n\n
inputs_shape = K.shape(inputs)\nbatch_size = inputs_shape[:1]\nx = inputs_shape[1:2]\ny = inputs_shape[2:3]\nch = inputs_shape[3:]\n\n#you can then concatenate these and operate them (notice I kept them as 1D vector, not as scalar)\nnewShape = K.concatenate([batch_size, x, y, ch]) #of course you will make your operations\n
\n\n
\n\n

When I did my own version of a SegNet, I didn't use indices, but kept a one-hot version. It's true that it takes extra operations, but it might work well:

\n\n
def get_indices(tensors):\n    # Lambda passes its list of inputs as a single argument, so unpack it here\n    original, unpooled = tensors\n    is_equal = K.equal(original, unpooled)\n    return K.cast(is_equal, K.floatx())\n\nprevious_output = ...\npooled = MaxPooling2D()(previous_output)\nunpooled = UpSampling2D()(pooled)\n\none_hot_indices = Lambda(get_indices)([previous_output, unpooled])\n
\n\n

Then after an upsampling, I concatenate these indices and pass a new conv:

\n\n
some_output = ...\nupsampled = UpSampling2D()(some_output)\nwith_indices = Concatenate([upsampled, one_hot_indices])\nupsampled = Conv2D(...)(with_indices)\n
\n", + "system": "" + }, + { + "instruction": "Dice coef greater than 1", + "input": "", + "output": "

I believe your y_true images might not be in the range between 0 and 1... are you sure they're not between 0 and 255? Or that they have a single channel (instead of 3 channels)?

\n\n

This should not be the cause, but note that you're using a batch dice; you should use a per-image dice instead:

\n\n
def dice_coef(y_true, y_pred, smooth=1):\n    y_true_f = K.batch_flatten(y_true)\n    y_pred_f = K.batch_flatten(y_pred)\n\n    intersection = K.sum(y_true_f * y_pred_f, axis=-1)\n    sums = K.sum(y_true_f, axis=-1) + K.sum(y_pred_f, axis=-1)\n\n    return (2. * intersection + smooth) / (sums + smooth)\n
\n\n

Usually, I use K.epsilon() for \"smooth\" (something very small).

\n\n
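To see why the per-image version matters, here is a small NumPy sanity check (a sketch with toy masks and smooth=1, not the Keras code above): the batch version lets one image's large intersection mask another image's result, while the per-image version scores each mask separately.

```python
import numpy as np

smooth = 1.0
# two tiny flattened "images"; the second one is an entirely empty mask
y_true = np.array([[1., 1., 0., 0.],
                   [0., 0., 0., 0.]])
y_pred = np.array([[1., 0., 0., 0.],
                   [0., 0., 0., 0.]])

# per-image dice: reduce per row, then average over the batch
inter = (y_true * y_pred).sum(axis=-1)
sums = y_true.sum(axis=-1) + y_pred.sum(axis=-1)
per_image = ((2 * inter + smooth) / (sums + smooth)).mean()

# batch dice: flatten everything into one big mask first
inter_b = (y_true * y_pred).sum()
sums_b = y_true.sum() + y_pred.sum()
batch = (2 * inter_b + smooth) / (sums_b + smooth)

print(per_image)  # 0.875 (0.75 for image 1, 1.0 for the empty mask)
print(batch)      # 0.75  (the perfectly predicted empty image no longer counts)
```

The smooth term is what makes the empty-mask case score 1.0 instead of 0/0.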

The same goes for iou:

\n\n
def iou(y_true, y_pred, smooth=1):\n    y_true_f = K.batch_flatten(y_true)\n    y_pred_f = K.batch_flatten(y_pred)\n\n    intersection = K.sum(y_true_f * y_pred_f, axis=-1)\n    union = K.sum(y_true_f, axis=-1) + K.sum(y_pred_f, axis=-1) - intersection\n    return (intersection + smooth) / (union + smooth)\n
\n\n

Example of a channel dice:

\n\n
#considering shape (batch, classes, image_size, image_size)\ndef dice_coef(y_true, y_pred, smooth=1):\n\n    intersection = K.sum(y_true * y_pred, axis=[2,3])\n    sums = K.sum(y_true, axis=[2,3]) + K.sum(y_pred, axis=[2,3])\n\n    dice = (2. * intersection + smooth) / (sums + smooth)\n    return K.mean(dice, axis=-1)\n
\n", + "system": "" + }, + { + "instruction": "Access deprecated attribute "validation_data" in tf.keras.callbacks.Callback", + "input": "", + "output": "

You are right that the argument, validation_data is deprecated as per Tensorflow Callbacks Documentation.

\n\n

The issue which you are facing has been raised in Github. Related issues are Issue1, Issue2 and Issue3.

\n\n

None of the above Github Issues is resolved and Your workaround of passing Validation_Data as an argument to Custom Callback is a good one, as per this Github Comment, as many people found it useful.

\n\n

The code for the workaround is specified below, for the benefit of the Stack Overflow community, even though it is present in Github.

\n\n
class Metrics(Callback):\n\n    def __init__(self, val_data, batch_size = 20):\n        super().__init__()\n        self.validation_data = val_data\n        self.batch_size = batch_size\n\n    def on_train_begin(self, logs={}):\n        print(self.validation_data)\n        self.val_f1s = []\n        self.val_recalls = []\n        self.val_precisions = []\n\n    def on_epoch_end(self, epoch, logs={}):\n        batches = len(self.validation_data)\n        total = batches * self.batch_size\n\n        val_pred = np.zeros((total,1))\n        val_true = np.zeros((total))\n\n        for batch in range(batches):\n            xVal, yVal = next(self.validation_data)\n            val_pred[batch * self.batch_size : (batch+1) * self.batch_size] = np.asarray(self.model.predict(xVal)).round()\n            val_true[batch * self.batch_size : (batch+1) * self.batch_size] = yVal\n\n        val_pred = np.squeeze(val_pred)\n        _val_f1 = f1_score(val_true, val_pred)\n        _val_precision = precision_score(val_true, val_pred)\n        _val_recall = recall_score(val_true, val_pred)\n\n        self.val_f1s.append(_val_f1)\n        self.val_recalls.append(_val_recall)\n        self.val_precisions.append(_val_precision)\n\n        return\n
\n\n

I will keep following the Github Issues mentioned above and will update the Answer accordingly.

\n\n

Hope this helps. Happy Learning!

\n", + "system": "" + }, + { + "instruction": "Tensorflow one custom metric for multioutput models", + "input": "", + "output": "

With your given model definition, this is a standard multi-output Model.

\n\n
model = tf.keras.Model(inputs=[input], outputs=[output_1, output_2, output_3])\n
\n\n

In general, all (custom) metrics as well as (custom) losses will be called on every output separately (as y_pred)! Within the loss/metric function you will only see one output together with the one corresponding target tensor. By passing a list of loss functions (length == number of outputs of your model) you can specify which loss will be used for which output:

\n\n
model.compile(optimizer=Adam(), loss=[loss_for_output_1, loss_for_output_2, loss_for_output_3], loss_weights=[1, 4, 8])\n
\n\n

The total loss (which is the objective function to minimize) will be the additive combination of all losses multiplied with the given loss weights.

\n\n
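With the loss_weights=[1, 4, 8] from the compile call above, the objective reduces to a plain weighted sum; a quick sketch with made-up per-output loss values:

```python
# hypothetical per-output loss values for one batch (illustration only)
losses = [0.5, 0.2, 0.1]    # loss_for_output_1, _2, _3
loss_weights = [1, 4, 8]    # as passed to model.compile()

# the total objective Keras minimizes: sum of weight_i * loss_i
total_loss = sum(w * l for w, l in zip(loss_weights, losses))
print(total_loss)  # 2.1
```

So output 3 dominates the gradient here even though its raw loss is the smallest.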

It is almost the same for the metrics! Here you can pass (as for the loss) a list (length == number of outputs) of metrics and tell Keras which metric to use for which of your model outputs.

\n\n
model.compile(optimizer=Adam(), loss='mse', metrics=[metrics_for_output_1, metrics_for_output2, metrics_for_output3])\n
\n\n

Here metrics_for_output_X can be either a function or a list of functions, which will all be called with the one corresponding output_X as y_pred.

\n\n

This is explained in detail in the documentation of Multi-Output Models in Keras. They also show examples of using dictionaries (to map loss/metric functions to a specific output) instead of lists.\nhttps://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models

\n\n

Further information:

\n\n

If I understand you correctly you want to train your model using a loss function comparing the \nthree model outputs with three ground truth values and want to do some sort of performance evaluation by comparing \na derived value from the three model outputs and a single ground truth value. \nUsually the model gets trained on the same objective it is evaluated on, otherwise you might get poorer results when\nevaluating your model!

\n\n

Anyways... for evaluating your model on a single label I suggest you either:

\n\n

1. (The clean solution)

\n\n

Rewrite your model and incorporate the post-processing steps. Add all the necessary operations (as layers) and map those\nto an auxiliary output. For training your model you can set the loss_weight of the auxiliary output to zero. \nMerge your Datasets so you can feed your model the model input, the intermediate target outputs as well as the labels.\nAs explained above you can define now a metric comparing the auxiliary model output with the given target labels.

\n\n

2.

\n\n

Or you train your model and derive the metric e.g. in a custom Callback by calculating your post-processing steps on the three outputs of model.predict(input). \nThis will make it necessary to write custom summaries if you want to track those values in your tensorboard! That's why I would not recommend this solution.

\n", + "system": "" + }, + { + "instruction": "TensorFlow 2.0 How to get trainable variables from tf.keras.layers layers, like Conv2D or Dense", + "input": "", + "output": "

Ok, so I think I found the problem.

\n\n

The trainable variables were not available until I used the given layer object. After I ran my forward pass I could retrieve attributes of the tf.keras.layers.Layer object like trainable_variables and weights.

\n\n

However, before my forward pass I received an empty list. To make things a little bit more clear:

\n\n
with tf.GradientTape() as tape:\n    print(dense_layers[0].trainable_variables)\n    self.forward_pass(X)\n    self.compute_loss()\n    print(dense_layers[0].trainable_variables)\n
\n\n

On the code above, the attribute trainable_variables is an empty list before executing self.forward_pass. However, right after it, I could retrieve the kernel and bias numpy arrays.

\n", + "system": "" + }, + { + "instruction": "Primer on TensorFlow and Keras: The past (TF1) the present (TF2)", + "input": "", + "output": "
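This deferred-build behavior is easy to mimic without TF at all; a toy sketch of a layer that only creates its variables on the first call (class and attribute names are illustrative, not the actual Keras internals):

```python
class LazyLayer:
    """Creates its 'weights' on first use, like tf.keras layers do in build()."""
    def __init__(self):
        self.trainable_variables = []
        self.built = False

    def build(self, input_dim):
        # real Keras would allocate kernel/bias tensors of the right shapes here
        self.trainable_variables = [f"kernel({input_dim}x4)", "bias(4)"]
        self.built = True

    def __call__(self, x):
        if not self.built:
            self.build(len(x))  # shapes only known once an input arrives
        return x

layer = LazyLayer()
print(layer.trainable_variables)  # [] -- empty before the first forward pass
layer([1.0, 2.0, 3.0])
print(layer.trainable_variables)  # ['kernel(3x4)', 'bias(4)']
```

This is why the list is empty before self.forward_pass: the layer cannot know its weight shapes until it has seen an input.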

How does TF1/TF2 work? And their differences

\n\n

TF1

\n\n

TF1 follows an execution style known as define-then-run. This is opposed to define-by-run, which is, for example, how plain Python executes. But what does that mean? Define-then-run means that just because you called/defined something, it is not executed. You have to explicitly execute what you defined.

\n\n

TF has this concept of a Graph. First you define all the computations you need (e.g. all the layer computations of a neural network, loss computation and an optimizer that minimizes the loss - these are represented as ops or operations). After you define the computation/data-flow graph you execute bits and pieces of this using a Session. Let's see a simple example in action.

\n\n
# Graph generation\ntf_a = tf.placeholder(dtype=tf.float32)\ntf_b = tf.placeholder(dtype=tf.float32)\ntf_c = tf.add(tf_a, tf.math.multiply(tf_b, 2.0))\n\n# Execution\nwith tf.Session() as sess:\n    c = sess.run(tf_c, feed_dict={tf_a: 5.0, tf_b: 2.0})\n    print(c)\n
\n\n

The computational graph (also known as data flow graph) will look like below.

\n\n
     tf_a      tf_b   tf.constant(2.0)\n       \\         \\   /\n        \\      tf.math.multiply\n         \\     /\n         tf.add\n            |\n          tf_c\n
\n\n

Analogy: Think about you making a cake. You download the recipe from the internet. Then you start following the steps to actually make the cake. The recipe is the Graph and the process of making the cake is what the Session does (i.e. execution of the graph).

\n\n
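The recipe-vs-baking analogy can be sketched without TF at all; here is a toy, library-free define-then-run engine (all names made up for illustration) that first describes the computation as a tree (the "graph") and only evaluates it later (the "session"):

```python
# Toy define-then-run: building a node does NOT compute anything yet
def placeholder(name):
    return ("placeholder", name)

def multiply(a, b):
    return ("multiply", a, b)

def add(a, b):
    return ("add", a, b)

def run(node, feed_dict):
    """Walk the expression tree with concrete values -- the 'session' step."""
    if isinstance(node, (int, float)):   # constants evaluate to themselves
        return node
    op = node[0]
    if op == "placeholder":
        return feed_dict[node[1]]        # placeholders are filled at run time
    left, right = run(node[1], feed_dict), run(node[2], feed_dict)
    return left * right if op == "multiply" else left + right

# same graph as the TF1 example above: c = a + b * 2.0
tf_c = add(placeholder("a"), multiply(placeholder("b"), 2.0))
print(run(tf_c, {"a": 5.0, "b": 2.0}))  # 9.0
```

Note that building tf_c costs nothing; all computation happens inside run(), just as TF1 defers work to Session.run().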

TF2

\n\n

TF2 follows immediate execution style or define-by-run. You call/define something, it is executed. Let's see an example.

\n\n
a = tf.constant(5.0)\nb = tf.constant(3.0)\nc = a + (b * 2.0)\nprint(c.numpy())\n
\n\n

Woah! It looks so clean compared to the TF1 example. Everything looks so Pythonic.

\n\n

Analogy: Now think that you are in a hands-on cake workshop. You are making cake as the instructor explains. And the instructor explains what the result of each step is immediately. So, unlike in the previous example you don't have to wait till you bake the cake to see if you got it right (which is a reference to the fact that you cannot debug code). But you get instant feedback on how you are doing (you know what this means).

\n\n

Does that mean TF2 doesn't build a graph? Panic attack

\n\n

Well yes and no. There's two features in TF2 you should know about eager execution and AutoGraph functions.

\n\n
\n

Tip: To be exact TF1 also had eager execution (off by default) and can be enabled using tf.enable_eager_execution(). TF2 has eager_execution on by default.

\n
\n\n

Eager execution

\n\n

Eager execution can immediately execute Tensors and Operations. This is what you observed in the TF2 example. But the flipside is that it does not build a graph. So if, for example, you use eager execution to implement and run a neural network, it will be very slow (as neural networks do very repetitive tasks (forward computation - loss computation - backward pass) over and over again).

\n\n

AutoGraph

\n\n

This is where the AutoGraph feature comes to the rescue. AutoGraph is one of my favorite features in TF2. What this does is that if you are doing \"TensorFlow\" stuff in a function, it analyses the function and builds the graph for you (mind blown). So for example you do the following. TensorFlow builds the graph.

\n\n
@tf.function\ndef do_silly_computation(x, y):\n    a = tf.constant(x)\n    b = tf.constant(y)\n    c = a + (b * 2.0)\n    return c\n\nprint(do_silly_computation(5.0, 3.0).numpy())\n
\n\n

So all you need to do is define a function which takes the necessary inputs and return the correct output. Most importantly add @tf.function decorator as that's the trigger for TensorFlow AutoGraph to analyse a given function.

\n\n
\n

Warning: AutoGraph is not a silver bullet and not to be used naively. There are various limitations of AutoGraph too.

\n
\n\n

Differences between TF1 and TF2

\n\n\n\n

What are different datatypes in TF1 and TF2?

\n\n

You've already seen a lot of the main data types. But you might have questions about what they do and how they behave. Well, this section is all about that.

\n\n

TF1 Data types / Data structures

\n\n\n\n

TF2 Data types / Data structures

\n\n\n\n

In terms of behavior, nothing much has changed in data types going from TF1 to TF2. The only main difference is that the tf.placeholders are gone. You can also have a look at the full list of data types.

\n\n

What is Keras and how does that fit in all these?

\n\n

Keras used to be a separate library providing high-level implementations of components (e.g. layers and models) that are mainly used for deep learning models. But since later versions of TensorFlow, Keras got integrated into TensorFlow.

\n\n

So as I explained, Keras hides a lot of the unnecessary intricacies you have to deal with if you were to work with bare-bones TensorFlow. There are two main things Keras offers for implementing NNs: Layer objects and Model objects. Keras also has two common model APIs that let you develop models: the Sequential API and the Functional API. Let's see how different Keras and TensorFlow are in a quick example. Let's build a simple CNN.

\n\n
\n

Tip: Keras allows you to achieve what you can do with TF much more easily. But Keras also provides capabilities that are not yet strong in TF (e.g. text processing capabilities).

\n
\n\n
height=64\nwidth = 64\nn_channels = 3\nn_outputs = 10\n
\n\n

Keras (Sequential API) example

\n\n
model = Sequential()\nmodel.add(Conv2D(filters=32, kernel_size=(2,2), \nactivation='relu',input_shape=(height, width, n_channels)))\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Conv2D(filters=64, kernel_size=(2,2), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Flatten())\nmodel.add(Dense(n_outputs, activation='softmax'))\nmodel.compile(loss='binary_crossentropy', optimizer='adam')\nmodel.summary()\n
\n\n

Pros

\n\n
\n

Straight-forward to implement simple models

\n
\n\n

Cons

\n\n
\n

Cannot be used to implement complex models (e.g. models with multiple inputs)

\n
\n\n

Keras (Functional API) example

\n\n
inp = Input(shape=(height, width, n_channels))\nout = Conv2D(filters=32, kernel_size=(2,2), activation='relu',input_shape=(height, width, n_channels))(inp)\nout = MaxPooling2D(pool_size=(2,2))(out)\nout = Conv2D(filters=64, kernel_size=(2,2), activation='relu')(out)\nout = MaxPooling2D(pool_size=(2,2))(out)\nout = Flatten()(out)\nout = Dense(n_outputs, activation='softmax')(out)\nmodel = Model(inputs=inp, outputs=out)\nmodel.compile(loss='binary_crossentropy', optimizer='adam')\nmodel.summary()\n
\n\n

Pros

\n\n
\n

Can be used to implement complex models involving multiple inputs and outputs

\n
\n\n

Cons

\n\n
\n

Requires a very good understanding of the shapes of the inputs/outputs and what's expected as an input by each layer

\n
\n\n

TF1 example

\n\n
# Input\ntf_in = tf.placeholder(shape=[None, height, width, n_channels], dtype=tf.float32)\n\n# 1st conv and max pool\nconv1 = tf.Variable(tf.initializers.glorot_uniform()([2,2,3,32]))\ntf_out = tf.nn.conv2d(tf_in, filters=conv1, strides=[1,1,1,1], padding='SAME') # 64,64\ntf_out = tf.nn.max_pool2d(tf_out, ksize=[2,2], strides=[1,2,2,1], padding='SAME') # 32,32\n\n# 2nd conv and max pool\nconv2 = tf.Variable(tf.initializers.glorot_uniform()([2,2,32,64]))\ntf_out = tf.nn.conv2d(tf_out, filters=conv2, strides=[1,1,1,1], padding='SAME') # 32, 32\ntf_out = tf.nn.max_pool2d(tf_out, ksize=[2,2], strides=[1,2,2,1], padding='SAME') # 16, 16\ntf_out = tf.reshape(tf_out, [-1, 16*16*64])\n\n# Dense layer\ndense = tf.Variable(tf.initializers.glorot_uniform()([16*16*64, n_outputs]))\ntf_out = tf.matmul(tf_out, dense)\n
\n\n

Pros

\n\n
\n

Is very good for cutting edge research involving atypical operations (e.g. changing the sizes of layers dynamically)

\n
\n\n

Cons

\n\n
\n

Poor readability

\n
\n\n

Caveats and Gotchas

\n\n

Here I will be listing down few things you have to watch out for when using TF (coming from my experience).

\n\n

TF1 - Forgetting to feed all the dependent placeholders to compute the result

\n\n
tf_a = tf.placeholder(dtype=tf.float32)\ntf_b = tf.placeholder(dtype=tf.float32)\ntf_c = tf.add(tf_a, tf.math.multiply(tf_b, 2.0))\n\nwith tf.Session() as sess:\n    c = sess.run(tf_c, feed_dict={tf_a: 5.0})\n    print(c)\n
\n\n
\n

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_8' with dtype float\n [[node Placeholder_8 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]

\n
\n\n

The reason you get an error here is that you haven't fed a value to tf_b. So make sure you feed values to all the dependent placeholders to compute a result.

\n\n

TF1 - Be very very careful of data types

\n\n
tf_a = tf.placeholder(dtype=tf.int32)\ntf_b = tf.placeholder(dtype=tf.float32)\ntf_c = tf.add(tf_a, tf.math.multiply(tf_b, 2.0))\n\nwith tf.Session() as sess:\n    c = sess.run(tf_c, feed_dict={tf_a: 5, tf_b: 2.0})\n    print(c)\n
\n\n
\n

TypeError: Input 'y' of 'Add' Op has type float32 that does not match type int32 of argument 'x'.

\n
\n\n

Can you spot the error? It is because you have to match data types when passing them to operations. Otherwise, use tf.cast() operation to cast your data type to a compatible data type.

\n\n

Keras - Understand what input shape each layer expects

\n\n
model = Sequential()\nmodel.add(Conv2D(filters=32, kernel_size=(2,2), \nactivation='relu',input_shape=(height, width)))\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Conv2D(filters=64, kernel_size=(2,2), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Flatten())\nmodel.add(Dense(n_outputs, activation='softmax'))\nmodel.compile(loss='binary_crossentropy', optimizer='adam')\n
\n\n
\n

ValueError: Input 0 of layer conv2d_8 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 64, 64]

\n
\n\n

Here, you have defined an input shape [None, height, width] (when you add the batch dimension). But Conv2D expects a 4D input [None, height, width, n_channels]. Therefore you get the error above. Some commonly misunderstood/error-prone layers are,

\n\n\n\n

Keras - Feeding in the wrong input/output shape during fit()

\n\n
height=64\nwidth = 64\nn_channels = 3\nn_outputs = 10\n\nXtrain = np.random.normal(size=(500, height, width, 1))\nYtrain = np.random.choice([0,1], size=(500, n_outputs))\n\n# Build the model\n\n# fit network\nmodel.fit(Xtrain, Ytrain, epochs=10, batch_size=32, verbose=0)\n
\n\n
\n

ValueError: Error when checking input: expected conv2d_9_input to have shape (64, 64, 3) but got array with shape (64, 64, 1)

\n
\n\n

You should know this one. We are feeding an input of shape [batch size, height, width, 1] when we should be feeding a [batch size, height, width, 3] input.

\n\n
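If the data really is single-channel (e.g. grayscale) and the model expects an RGB-shaped input, one quick fix is to tile the channel axis; a sketch (array shapes match the example above, the approach itself is just one option):

```python
import numpy as np

# grayscale batch with the wrong channel count for an RGB input
Xtrain = np.random.normal(size=(500, 64, 64, 1))

# repeat the single channel three times to get (batch, 64, 64, 3)
Xtrain_rgb = np.repeat(Xtrain, 3, axis=-1)
print(Xtrain_rgb.shape)  # (500, 64, 64, 3)
```

The alternative is to change the model's input_shape to (64, 64, 1) instead; either way, the input tensor and the Input layer must agree.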

Performance differences between TF1 and TF2

\n\n

This has already been in discussion here. So I will not reiterate what's in there.

\n\n

Things I wish I could have talked about but couldn't

\n\n

I'm leaving this with some links to further reading.

\n\n\n", + "system": "" + }, + { + "instruction": "How to do parallel GPU inferencing in Tensorflow 2.0 + Keras?", + "input": "", + "output": "

Try to load the model inside a tf.distribute.MirroredStrategy scope and use a greater batch_size

\n
mirrored_strategy = tf.distribute.MirroredStrategy()\n\nwith mirrored_strategy.scope():\n    model = tf.keras.models.load_model(saved_model_path)\n\n# input_data stands for your inference inputs\nresult = model.predict(input_data, batch_size=greater_batch_size)\n
\n", + "system": "" + }, + { + "instruction": "Keras, Tensorflow : Merge two different model output into one", + "input": "", + "output": "

Trainable weights

\n

Ok. Since you are going to have custom trainable weights, the way to do this in Keras is creating a custom layer.

\n

Now, since your custom layer has no inputs, we will need a hack that will be explained later.

\n

So, this is the layer definition for the custom weights:

\n
from keras.layers import *\nfrom keras.models import Model\nfrom keras.initializers import get as get_init, serialize as serial_init\nimport keras.backend as K\nimport tensorflow as tf\n\n\nclass TrainableWeights(Layer):\n\n    #you can pass keras initializers when creating this layer\n    #kwargs will take base layer arguments, such as name and others if you want\n    def __init__(self, shape, initializer='uniform', **kwargs):\n        super(TrainableWeights, self).__init__(**kwargs)\n        self.shape = shape\n        self.initializer = get_init(initializer)\n        \n\n    #build is where you define the weights of the layer\n    def build(self, input_shape):\n        self.kernel = self.add_weight(name='kernel', \n                                      shape=self.shape, \n                                      initializer=self.initializer, \n                                      trainable=True)\n        self.built = True\n        \n\n    #call is the layer operation - due to keras limitation, we need an input\n    #warning, I'm supposing the input is a tensor with value 1 and no shape or shape (1,)\n    def call(self, x):\n        return x * self.kernel\n    \n\n    #for keras to build the summary properly\n    def compute_output_shape(self, input_shape):\n        return self.shape\n    \n\n    #only needed for saving/loading this layer in model.save()\n    def get_config(self):\n        config = {'shape': self.shape, 'initializer': serial_init(self.initializer)}\n        base_config = super(TrainableWeights, self).get_config()\n        return dict(list(base_config.items()) + list(config.items()))\n
\n

Now, this layer should be used like this:

\n
dummyInputs = Input(tensor=K.constant([1]))\ntrainableWeights = TrainableWeights(shape)(dummyInputs)\n
\n

Model A

\n

Having the layer defined, we can start modeling.
\nFirst, let's see the model_a side:

\n
#general vars\nlength = 150\ndic_size = 100\nembed_size = 12\n\n#for the model_a segment\ninput_text = Input(shape=(length,))\nembedding = Embedding(dic_size, embed_size)(input_text)\n\n#the following two lines are just a resource to reach the desired shape\nembedding = LSTM(5)(embedding) \nembedding = Dense(50)(embedding)\n\n#creating model_a here is optional, only if you want to use model_a independently later\nmodel_a = Model(input_text, embedding, name = 'model_a')\n
\n

Model B

\n

For this, we are going to use our TrainableWeights layer.
\nBut first, let's simulate a New_model() as mentioned.

\n
#simulates New_model() #notice the explicit batch_shape for the matrices\nnewIn1 = Input(batch_shape = (10,10))\nnewIn2 = Input(batch_shape = (10,30))\nnewOut1 = Dense(50)(newIn1)\nnewOut2 = Dense(50)(newIn2)\nnewOut = Add()([newOut1, newOut2])\nnew_model = Model([newIn1, newIn2], newOut, name='new_model')   \n
\n

Now the entire branch:

\n
#the matrices    \ndummyInput = Input(tensor = K.constant([1]))\nX_in = TrainableWeights((10,10), initializer='uniform')(dummyInput)\nM_in = TrainableWeights((10,30), initializer='uniform')(dummyInput)\n\n#the output of the branch   \nmd_1 = new_model([X_in, M_in])\n\n#optional, only if you want to use model_s independently later\nmodel_s = Model(dummyInput, md_1, name='model_s')\n
\n

The whole model

\n

Finally, we can join the branches in a whole model.
\nNotice how I didn't have to use model_a or model_s here. You can do it if you want, but those submodels are not needed, unless you want later to get them individually for other usages. (Even if you created them, you don't need to change the code below to use them, they're already part of the same graph)

\n
#I prefer tf.matmul because it's clear and understandable while K.dot has weird behaviors\nmult = Lambda(lambda x: tf.matmul(x[0], x[1], transpose_b=True))([embedding, md_1])\n\n#final model\nmodel = Model([input_text, dummyInput], mult, name='full_model')\n
\n

Now train it:

\n
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])\nmodel.fit(np.random.randint(0,dic_size, size=(128,length)),\n          np.ones((128, 10)))\n
\n

Since the output is 2D now, there is no problem about the 'categorical_crossentropy', my comment was because of doubts on the output shape.

\n", + "system": "" + }, + { + "instruction": "GradienTape convergence much slower than Keras.model.fit", + "input": "", + "output": "

Dataset.shuffle() only shuffles within each buffer, so each epoch has the same order. Keras .fit() uses some magic to shuffle the whole dataset before each epoch. To do this in TF, you need to use Dataset.repeat(epochs_number) and .shuffle(..., reshuffle_each_iteration=True):

\n\n
train_ds = data.Dataset.from_tensor_slices(\n    (np.hstack([index_rows.reshape(-1, 1), index_cols.reshape(-1, 1)]), index_data)\n    ).shuffle(100000, reshuffle_each_iteration=True\n    ).batch(batch_size, drop_remainder=True\n    ).repeat(epochs_number)\n\nfor ix, (examples, labels) in train_ds.enumerate():\n    train_step(examples, labels)\n    current_epoch = ix // (len(index_data) // batch_size)\n
\n\n
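The current_epoch bookkeeping in the loop above is just integer division of the global batch index; a quick pure-Python sanity check (numbers are made up for illustration):

```python
num_samples = 100
batch_size = 10
# mirrors len(index_data) // batch_size in the snippet above
steps_per_epoch = num_samples // batch_size

def epoch_of(global_step):
    # which epoch a given batch index belongs to when the dataset is .repeat()-ed
    return global_step // steps_per_epoch

print([epoch_of(s) for s in (0, 9, 10, 25)])  # [0, 0, 1, 2]
```

Batches 0-9 belong to epoch 0, batch 10 starts epoch 1, and so on.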

This workaround is neither beautiful nor natural; for the moment you can use it to shuffle each epoch. It's a known issue and will be fixed; in the future you will be able to use for epoch in range(epochs_number) instead of .repeat().

\n", + "system": "" + }, + { + "instruction": "Keras you are trying to load a weight file containing 2 layers into a model with 1 layers", + "input": "", + "output": "

Part of the problem may lie in Model(noise, img), where img is the entire Sequential model, which could be treated as a single layer when loading weights (see below) - depending on how the weights were saved.

\n\n

To better understand the problem, it'd help seeing your save code - since your code provided as-is (w/ save code added) works for me. For a workaround you could try, see below.

\n\n
\n\n

Possible problem:

\n\n
model = build_model()\nmodel.summary()\n\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\ninput_1 (InputLayer)         (None, 12)                0         \n_________________________________________________________________\nsequential_1 (Sequential)    (None, 28, 112, 16)       623824    \n=================================================================\nTotal params: 623,824\nTrainable params: 623,184\nNon-trainable params: 640\n
\n\n
\n\n

What worked for me:

\n\n
model_to_save = build_model()\nmodel_to_save.compile()\nmodel_to_save.save_weights(path)\n\nmodel_to_load = build_model()\nmodel_to_load.compile()\nmodel_to_load.load_weights(path)\n
\n\n
\n\n

Workaround + tip:

\n\n

To fix as-is, drop the noise =, image =, and Model(...) lines entirely, and simply do return model: your original Input should already do what you intend to with noise =.

\n\n

Also, if you require advanced functionality with multiple inputs/outputs, use Model - it's a lot easier to work with - and don't mix Model with Sequential unless you have very specific reasons.

\n", + "system": "" + }, + { + "instruction": "Keras predict() returns a better accuracy than evaluate()", + "input": "", + "output": "

This question was already answered here

\n\n

What happens is that when you evaluate the model, since your loss function is categorical_crossentropy, metrics=['accuracy'] calculates categorical_accuracy.

\n\n

But predict has a default set to binary_accuracy.

\n\n

So essentially you are calculating categorical accuracy with evaluate and binary accuracy with predict. This is the reason they are so widely different.

\n\n

The difference between categorical_accuracy and binary_accuracy is that categorical_accuracy checks whether all the outputs match your y_test, while binary_accuracy checks whether each of your outputs matches your y_test.

\n\n

Example(single row):

\n\n
prediction = [0,0,1,1,0]\ny_test = [0,0,0,1,0]\n\ncategorical_accuracy = 0% \n
\n\n

Since one output does not match, the categorical_accuracy is 0%.

\n\n
binary_accuracy = 80% \n
\n\n

Even though one output doesn't match, the remaining 80% do match, so the accuracy is 80%.

\n", + "system": "" + }, + { + "instruction": "What is the 'index' in TFLite interpreter.get_input_details referring to?", + "input": "", + "output": "
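The single-row example above generalizes to a batch; a small NumPy sketch of both measures (calling the all-outputs-match version "exact match" here, since that is what the answer describes):

```python
import numpy as np

y_pred = np.array([[0, 0, 1, 1, 0],   # one wrong output
                   [0, 1, 0, 0, 0]])  # fully correct row
y_true = np.array([[0, 0, 0, 1, 0],
                   [0, 1, 0, 0, 0]])

# binary-style accuracy: fraction of individual outputs that match
binary_acc = (y_pred == y_true).mean()

# exact-match accuracy: fraction of rows where ALL outputs match
exact_match = (y_pred == y_true).all(axis=1).mean()

print(binary_acc)   # 0.9 (9 of the 10 outputs match)
print(exact_match)  # 0.5 (only the second row matches fully)
```

A single wrong output zeroes out an entire row for exact match, which is why the two numbers can diverge so widely.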

In the TFLite interpreter, all tensors are put into a tensor list (see the TfLiteTensor* tensors member in TfLiteContext); the index is the index of the tensor in that tensor list.

\n", + "system": "" + }, + { + "instruction": "Understanding CTC loss for speech recognition in Keras", + "input": "", + "output": "

What are these?

\n\n\n\n

It seems this loss expects that your model's outputs (y_pred) have different lengths, as well as your ground truth data (y_true). This is probably to avoid calculating the loss for garbage characters after the end of the sentences (since you will need a fixed size tensor for working with lots of sentences at once)

\n\n

Form of the labels:

\n\n

Since the function's documentation is asking for shape (samples, length), the format is that... the char index for each char in each sentence.

\n\n

How to use this?

\n\n

There are some possibilities.

\n\n

1- If you don't care about lengths:

\n\n

If all lengths are the same, you can easily use it as a regular loss:

\n\n
def ctc_loss(y_true, y_pred):\n\n    return K.ctc_batch_cost(y_true, y_pred, input_length, label_length)\n    #where input_length and label_length are constants you created previously\n    #the easiest way here is to have a fixed batch size in training \n    #the lengths should have the same batch size (see shapes in the link for ctc_cost)    \n\nmodel.compile(loss=ctc_loss, ...)   \n\n#here is how you pass the labels for training\nmodel.fit(input_data_X_train, ground_truth_data_Y_train, ....)\n
\n\n

2 - If you care about the lengths.

\n\n

This is a little more complicated: you need your model to somehow tell you the length of each output sentence.
\nThere are again several creative ways of doing this:

\n\n\n\n

I like the first idea, and will exemplify it here.

\n\n
def ctc_find_eos(y_true, y_pred):\n\n    #convert y_pred from one-hot to label indices\n    y_pred_ind = K.argmax(y_pred, axis=-1)\n\n    #to make sure y_pred has one end_of_sentence (to avoid errors)\n    y_pred_end = K.concatenate([\n                                  y_pred_ind[:,:-1], \n                                  eos_index * K.ones_like(y_pred_ind[:,-1:])\n                               ], axis = 1)\n\n    #to make sure the first occurrence of the char is more important than subsequent ones\n    occurrence_weights = K.arange(start = max_length, stop=0, dtype=K.floatx())\n\n    #is eos?\n    is_eos_true = K.cast_to_floatx(K.equal(y_true, eos_index))\n    is_eos_pred = K.cast_to_floatx(K.equal(y_pred_end, eos_index))\n\n    #lengths\n    true_lengths = 1 + K.argmax(occurrence_weights * is_eos_true, axis=1)\n    pred_lengths = 1 + K.argmax(occurrence_weights * is_eos_pred, axis=1)\n\n    #reshape\n    true_lengths = K.reshape(true_lengths, (-1,1))\n    pred_lengths = K.reshape(pred_lengths, (-1,1))\n\n    return K.ctc_batch_cost(y_true, y_pred, pred_lengths, true_lengths)\n\nmodel.compile(loss=ctc_find_eos, ....)\n
\n\n

If you use the other option, use a model branch to calculate the lengths, concatenate these length to the first or last step of the output, and make sure you do the same with the true lengths in your ground truth data. Then, in the loss function, just take the section for lengths:

\n\n
def ctc_concatenated_length(y_true, y_pred):\n\n    #assuming you concatenated the length in the first step\n    true_lengths = y_true[:,:1] #may need to cast to int\n    y_true = y_true[:, 1:]\n\n    #since y_pred uses one-hot, you will need to concatenate to full size of the last axis, \n    #thus the 0 here\n    pred_lengths = K.cast(y_pred[:, :1, 0], \"int32\")\n    y_pred = y_pred[:, 1:]\n\n    return K.ctc_batch_cost(y_true, y_pred, pred_lengths, true_lengths)\n
\n", + "system": "" + }, + { + "instruction": "Multiple inputs of keras model with tf.data.Dataset.from_generator in Tensorflow 2", + "input": "", + "output": "

I had a similar issue, and it took me many tries to get the structure right for those inputs. Here's an example of a network with 3 inputs and 2 outputs, complete to the .fit call.

\n\n

The following works in tensorflow 2.1.0

\n\n
import tensorflow as tf\nimport numpy as np\n\ndef generator(N=10):\n    \"\"\"\n    Returns tuple of (inputs,outputs) where\n    inputs  = (inp1,inp2,inp2)\n    outputs = (out1,out2)\n    \"\"\"\n    dt=np.float32\n    for i in range(N):\n        inputs  = (np.random.rand(N,3,3,1).astype(dt), \n                   np.random.rand(N,3,3,1).astype(dt), \n                   np.random.rand(N,3,3,1).astype(dt))\n        outputs = (np.random.rand(N,3,3,1).astype(dt),\n                   np.random.rand(N,3,3,1).astype(dt))\n        yield inputs,outputs\n\n# Create dataset from generator\ntypes = ( (tf.float32,tf.float32,tf.float32),\n          (tf.float32,tf.float32) )\nshapes = (([None,3,3,1],[None,3,3,1],[None,3,3,1]),\n          ([None,3,3,1],[None,3,3,1]))\ndata = tf.data.Dataset.from_generator(generator,\n                                      output_types=types,\n                                      output_shapes=shapes\n                                     )\n# Define a model\ninp1 = tf.keras.Input(shape=(3,3,1),name='inp1')\ninp2 = tf.keras.Input(shape=(3,3,1),name='inp2')\ninp3 = tf.keras.Input(shape=(3,3,1),name='inp3')\nout1 = tf.keras.layers.Conv2D(1,kernel_size=3,padding='same')(inp1)\nout2 = tf.keras.layers.Conv2D(1,kernel_size=3,padding='same')(inp2)\nmodel = tf.keras.Model(inputs=[inp1,inp2,inp3],outputs=[out1,out2])\nmodel.compile(loss=['mse','mse'])\n\n# Train\nmodel.fit(data)\n\n\n
\n", + "system": "" + }, + { + "instruction": "How is a multiple-outputs deep learning model trained?", + "input": "", + "output": "

Keras calculations are graph based and use only one optimizer.

\n\n

The optimizer is also a part of the graph, and in its calculations it gets the gradients of the whole group of weights. (Not two groups of gradients, one for each output, but one group of gradients for the entire model).

\n\n

Mathematically, it's not really complicated, you have a final loss function made of:

\n\n
loss = (main_weight * main_loss) + (aux_weight * aux_loss) #you choose the weights in model.compile\n
\n\n

All defined by you. Plus a series of other possible weights (sample weights, class weights, regularizer terms, etc.)

\n\n

Where:

\n\n\n\n

And the gradients are just \u2202(loss)/\u2202(weight_i) for all weights.

\n\n

Once the optimizer has the gradients, it performs its optimization step once.

\n\n

Questions:

\n\n
\n

how are the auxiliary branch weights updated as it is not connected directly to the main output?

\n
\n\n\n\n
\n

Is the part of the network which is between the root of the auxiliary branch and the main output concerned by the the weighting of the loss? Or the weighting influences only the part of the network that is connected to the auxiliary output?

\n
\n\n

The weights are plain mathematics. You will define them in compile:

\n\n
model.compile(optimizer=one_optimizer, \n\n              #you choose each loss   \n              loss={'main_output':main_loss, 'aux_output':aux_loss},\n\n              #you choose each weight\n              loss_weights={'main_output': main_weight, 'aux_output': aux_weight}, \n\n              metrics = ...)\n
\n\n

And the loss function will use them in loss = (weight1 * loss1) + (weight2 * loss2).
\nThe rest is the mathematical calculation of \u2202(loss)/\u2202(weight_i) for each weight.
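As a quick numeric sketch of that combined loss (the loss values and the 0.7/0.3 weights below are made-up examples, not values from the question):

```python
# hypothetical per-output loss values for one batch
main_loss, aux_loss = 0.50, 0.20

# the loss weights you would pass to model.compile(loss_weights=...)
main_weight, aux_weight = 0.7, 0.3

# the single scalar the optimizer actually differentiates
total_loss = main_weight * main_loss + aux_weight * aux_loss
print(total_loss)  # approximately 0.41
```

The optimizer never sees two separate losses; it only sees this one weighted sum.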

\n", + "system": "" + }, + { + "instruction": "What is the difference between keras.tokenize.text_to_sequences and word embeddings", + "input": "", + "output": "

Word embeddings is a way of representing words such that words with the same/similar meaning have a similar representation. Two commonly used algorithms that learn word embedding are Word2Vec and GloVe.

\n

Note that word embeddings can also be learnt from scratch while training your neural network for text processing, on your specific NLP problem. You can also use transfer learning; in this case, it would mean transferring the representation of the words learned on huge datasets to your problem.

\n

As for the tokenizer (I assume it's Keras that we're speaking of), taking from the documentation:

\n
    \n
  1. tokenizer.fit_on_texts() --> Creates the vocabulary index based on word frequency. For example, if you had the phrase "My dog is different from your dog, my dog is prettier", then word_index["dog"] = 1 and word_index["my"] = 2 ("dog" appears 3 times, "my" appears 2 times; indexing starts at 1 because index 0 is reserved for padding)

    \n
  2. tokenizer.texts_to_sequences() --> Transforms each text into a sequence of integers. Basically, if you had a sentence, it would assign an integer to each word from your sentence. You can access tokenizer.word_index (a dictionary attribute) to verify the integer assigned to each word.

    \n
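A pure-Python sketch of the indexing scheme the two methods implement (not the real Keras implementation, which also strips punctuation via a configurable filters argument):

```python
from collections import Counter

def fit_on_texts(texts):
    # count word frequencies over all texts (lowercased, comma stripped);
    # the real Keras Tokenizer filters punctuation more generally
    counts = Counter(w for t in texts for w in t.lower().replace(",", "").split())
    # most frequent word gets index 1 -- index 0 is reserved (e.g. for padding)
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

def texts_to_sequences(texts, word_index):
    # replace each known word by its integer index
    return [[word_index[w] for w in t.lower().replace(",", "").split() if w in word_index]
            for t in texts]

word_index = fit_on_texts(["My dog is different from your dog, my dog is prettier"])
print(word_index["dog"])                                       # 1 (most frequent word)
print(texts_to_sequences(["my dog is prettier"], word_index))  # [[2, 1, 3, 7]]
```

Note that the output of texts_to_sequences is just integer indices; it carries no meaning similarity between words, which is exactly what embeddings add on top.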
\n", + "system": "" + }, + { + "instruction": "keras to_categorical adds additional value", + "input": "", + "output": "

You need to number classes starting with 0, like this:

\n\n
class_indices = {'word': 0, 'feature_name': 1, 'feature_value': 2, 'part_number': 3}\n
\n\n

You can get a description of the function with the help() command:

\n\n
help(np_utils.to_categorical)\n
\n\n

which prints:

\n\n
Help on function to_categorical in module keras.utils.np_utils:\n\nto_categorical(y, num_classes=None, dtype='float32')\nConverts a class vector (integers) to binary class matrix.\n\nE.g. for use with categorical_crossentropy.\n\n# Arguments\n    y: class vector to be converted into a matrix\n        (integers from 0 to num_classes).\n    num_classes: total number of classes.\n    dtype: The data type expected by the input, as a string\n        (`float32`, `float64`, `int32`...)\n\n# Returns\n    A binary matrix representation of the input. The classes axis\n    is placed last.\n
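The "additional value" comes from how num_classes is inferred when you don't pass it; a pure-Python sketch (not the real Keras code) makes the behaviour visible:

```python
def to_categorical(y, num_classes=None):
    # minimal pure-Python sketch of keras.utils.to_categorical, for illustration
    if num_classes is None:
        # one column per class id from 0 to max(y): a 1-based labelling
        # therefore silently adds an unused column for class 0
        num_classes = max(y) + 1
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in y]

print(to_categorical([0, 1, 2, 3]))  # 4 rows x 4 columns
print(to_categorical([1, 2, 3, 4]))  # 4 rows x 5 columns: the "extra" value
```

With labels starting at 1, the first column is always zero and you get one more class than you intended.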
\n", + "system": "" + }, + { + "instruction": "Validation Loss Much Higher Than Training Loss", + "input": "", + "output": "

Overfitting

\n\n

In general, if you're seeing much higher validation loss than training loss, then it's a sign that your model is overfitting - it learns \"superstitions\" i.e. patterns that accidentally happened to be true in your training data but don't have a basis in reality, and thus aren't true in your validation data.

\n\n

It's generally a sign that you have a \"too powerful\" model, too many parameters that are capable of memorizing the limited amount of training data. In your particular model you're trying to learn almost a million parameters (try printing model.summary()) from a thousand datapoints - that's not reasonable, learning can extract/compress information from data, not create it out of thin air.

\n\n

What's the expected result?

\n\n

The first question you should ask (and answer!) before building a model is about the expected accuracy. You should have a reasonable lower bound (what's a trivial baseline? For time series prediction, e.g. linear regression might be one) and an upper bound (what could an expert human predict given the same input data and nothing else?).

\n\n

Much depends on the nature of the problem. You really have to ask, is this information sufficient to get a good answer? For many real life time problems with time series prediction, the answer is no - the future state of such a system depends on many variables that can't be determined by simply looking at historical measurements - to reasonably predict the next value, you need to bring in lots of external data other than the historical prices. There's a classic quote by Tukey: \"The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.\"

\n", + "system": "" + }, + { + "instruction": "TensorFlow 2.0 Keras: How to write image summaries for TensorBoard", + "input": "", + "output": "

Besides providing an answer to your question, I will make the code more TF2.0-like. If you have any questions/need clarification, please post a comment down below.

\n\n

1. Loading data

\n\n

I would advise using the Tensorflow Datasets library. There is absolutely no need to load data in numpy and transform it to tf.data.Dataset if one can do it in a single line:

\n\n
import tensorflow_datasets as tfds\n\ndataset = tfds.load(\"mnist\", as_supervised=True, split=tfds.Split.TRAIN)\n
\n\n

The line above will only return the TRAIN split (read more about those here).

\n\n

2. Define Augmentations and Summaries

\n\n

In order to save images, one has to keep a tf.summary.SummaryWriter object throughout each pass.

\n\n

I have created a convenient wrapping class with __call__ method for easy usage with tf.data.Dataset's map capabilities:

\n\n
import tensorflow as tf\n\nclass ExampleAugmentation:\n    def __init__(self, logdir: str, max_images: int, name: str):\n        self.file_writer = tf.summary.create_file_writer(logdir)\n        self.max_images: int = max_images\n        self.name: str = name\n        self._counter: int = 0\n\n    def __call__(self, image, label):\n        augmented_image = tf.image.random_flip_left_right(\n            tf.image.random_flip_up_down(image)\n        )\n        with self.file_writer.as_default():\n            tf.summary.image(\n                self.name,\n                augmented_image,\n                step=self._counter,\n                max_outputs=self.max_images,\n            )\n\n        self._counter += 1\n        return augmented_image, label\n
\n\n

name is the name under which each part of the images will be saved. Which part, you may ask? The part defined by max_outputs.

\n\n

Say image in __call__ will have shape (32, 28, 28, 1), where the first dimension is batch, second height, third width and last channels (in case of MNIST only one, but this dimension is needed in tf.image augmentations). Furthermore, let's say max_outputs is specified as 4. In this case, only the first 4 images from the batch will be saved. The default value is 3, so you may set it to BATCH_SIZE to save every image.

\n\n

In Tensorboard, each image will be a separate sample over which you can iterate at the end.

\n\n

_counter is needed so the images will not be overwritten (I think, not really sure, clarification from someone else would be nice).

\n\n

Important: You may want to rename this class to something like ImageSaver when doing more serious business and move augmentation to separate functors/lambda functions. It suffices for presentation purposes, I guess.

\n\n

3. Setup global variables

\n\n

Please do not mix function declaration, global variables, data loading and others (like loading data and creating function afterwards). I know TF1.0 encouraged this type of programming but they are trying to get away from it and you might want to follow the trend.

\n\n

Below I have defined some global variables which will be used throughout next parts, pretty self-explanatory I guess:

\n\n
BATCH_SIZE = 32\nDATASET_SIZE = 60000\nEPOCHS = 5\n\nLOG_DIR = \"/logs/images\"\nAUGMENTATION = ExampleAugmentation(LOG_DIR, max_images=4, name=\"Images\")\n
\n\n

4. Dataset augmentation

\n\n

Similar to yours but with a little twist:

\n\n
dataset = (\n    dataset.map(\n        lambda image, label: (\n            tf.image.convert_image_dtype(image, dtype=tf.float32),\n            label,\n        )\n    )\n    .batch(BATCH_SIZE)\n    .map(AUGMENTATION)\n    .repeat(EPOCHS)\n)\n
\n\n\n\n

5. Define model, compile, train

\n\n

Almost as you did in your example, but I have provided additional steps_per_epoch, so fit knows how many batches constitute an epoch:

\n\n
model = tf.keras.models.Sequential(\n    [\n        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),\n        tf.keras.layers.Dense(128, activation=\"relu\"),\n        tf.keras.layers.Dropout(0.2),\n        tf.keras.layers.Dense(10, activation=\"softmax\"),\n    ]\n)\n\nmodel.compile(\n    optimizer=\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"]\n)\nmodel.fit(\n    dataset,\n    epochs=EPOCHS,\n    steps_per_epoch=DATASET_SIZE // BATCH_SIZE,\n    callbacks=[tf.keras.callbacks.TensorBoard(log_dir=LOG_DIR)],\n)\n
\n\n

Not much to explain other than that I think.

\n\n

6. Run Tensorboard

\n\n

Since TF2.0 one can do it inside colab using %tensorboard --logdir /logs/images, just wanted to add this for others who may visit this issue. Do it however you like, anyways you know how to do it for sure.

\n\n

Images should be inside IMAGES and each sample named by name provided to AUGMENTATION object.

\n\n

7. Whole code (to make everyone's life easier)

\n\n
import tensorflow as tf\nimport tensorflow_datasets as tfds\n\n\nclass ExampleAugmentation:\n    def __init__(self, logdir: str, max_images: int, name: str):\n        self.file_writer = tf.summary.create_file_writer(logdir)\n        self.max_images: int = max_images\n        self.name: str = name\n        self._counter: int = 0\n\n    def __call__(self, image, label):\n        augmented_image = tf.image.random_flip_left_right(\n            tf.image.random_flip_up_down(image)\n        )\n        with self.file_writer.as_default():\n            tf.summary.image(\n                self.name,\n                augmented_image,\n                step=self._counter,\n                max_outputs=self.max_images,\n            )\n\n        self._counter += 1\n        return augmented_image, label\n\n\nif __name__ == \"__main__\":\n\n    # Global settings\n\n    BATCH_SIZE = 32\n    DATASET_SIZE = 60000\n    EPOCHS = 5\n\n    LOG_DIR = \"/logs/images\"\n    AUGMENTATION = ExampleAugmentation(LOG_DIR, max_images=4, name=\"Images\")\n\n    # Dataset\n\n    dataset = tfds.load(\"mnist\", as_supervised=True, split=tfds.Split.TRAIN)\n\n    dataset = (\n        dataset.map(\n            lambda image, label: (\n                tf.image.convert_image_dtype(image, dtype=tf.float32),\n                label,\n            )\n        )\n        .batch(BATCH_SIZE)\n        .map(AUGMENTATION)\n        .repeat(EPOCHS)\n    )\n\n    # Model and training\n\n    model = tf.keras.models.Sequential(\n        [\n            tf.keras.layers.Flatten(input_shape=(28, 28, 1)),\n            tf.keras.layers.Dense(128, activation=\"relu\"),\n            tf.keras.layers.Dropout(0.2),\n            tf.keras.layers.Dense(10, activation=\"softmax\"),\n        ]\n    )\n\n    model.compile(\n        optimizer=\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"]\n    )\n    model.fit(\n        dataset,\n        epochs=EPOCHS,\n        steps_per_epoch=DATASET_SIZE // BATCH_SIZE,\n        
callbacks=[tf.keras.callbacks.TensorBoard(log_dir=LOG_DIR)],\n    )\n
\n", + "system": "" + }, + { + "instruction": "Keras flow_from_directory() read only from selected sub-directories", + "input": "", + "output": "

Assuming that I understood your question in the right way, this should help you:

\n\n
train_generator = train_datagen.flow_from_directory(directory='train', class_mode='categorical', target_size=(64,64), batch_size=16, shuffle=True, classes=[\"dog\", \"cat\"])\n
\n\n

This will read only the images from the directories dog and cat, leave out the elephant directory and provide distinct categorical labels for them.

\n", + "system": "" + }, + { + "instruction": "Error in loading the model with load_weights in Keras", + "input": "", + "output": "

You are saving the weights, not the whole model. A model is more than just the weights; it also includes the architecture, the loss, the metrics, etc.

\n\n

You have two solutions:

\n\n

1) Go with saving the weights: in this case, at model-loading time, you will need to recreate your model, load the weights and then compile the model. Your code should be something like this:

\n\n
model = Sequential()\nmodel.add(Dense(60, input_dim=7, kernel_initializer='normal', activation='relu'))\nmodel.add(Dense(55, kernel_initializer='normal', activation='relu'))\nmodel.add(Dense(50, kernel_initializer='normal', activation='relu'))\nmodel.add(Dense(45, kernel_initializer='normal', activation='relu'))\nmodel.add(Dense(30, kernel_initializer='normal', activation='relu'))\nmodel.add(Dense(20, kernel_initializer='normal', activation='relu'))\nmodel.add(Dense(1, kernel_initializer='normal'))\nmodel.load_weights(\"kwhFinal.h5\")\nmodel.compile(loss='mse', optimizer='adam', metrics=[rmse])\n
\n\n

2) Save the whole model by this command:

\n\n
model.save(\"kwhFinal.h5\")\n
\n\n

And during the loading use this command for having your model loaded:

\n\n
from keras.models import load_model\nmodel=load_model(\"kwhFinal.h5\")\n
\n", + "system": "" + }, + { + "instruction": "Why am I getting different values between loss functions and metrics in TensorFlow Keras?", + "input": "", + "output": "

This has been confirmed as a bug and fixed.\nFor more information, see https://github.com/tensorflow/tensorflow/issues/25970.

\n", + "system": "" + }, + { + "instruction": "Keras network producing inverse predictions", + "input": "", + "output": "

EDIT: After author's comments I do not believe this is the correct answer but I will keep it posted for posterity.

\n\n

Great question, and the answer is due to how the TimeseriesGenerator works! Apparently, instead of grabbing x,y pairs with the same index (e.g. input x[0] to output target y[0]), it grabs the target with an offset of 1 (so x[0] goes with y[1]).

\n\n

Thus plotting y with offset 1 will produce the desired fit.

\n\n

Code to simulate:

\n\n
import numpy as np\nimport keras\nimport matplotlib.pyplot as plt\n\nx=np.random.uniform(0,10,size=41).reshape(-1,1)\nx[::2]*=-1\ny=x[1:]\nx=x[:-1]\ntrain_gen = keras.preprocessing.sequence.TimeseriesGenerator(\n        x,\n        y,\n        length=1,\n        sampling_rate=1,\n        batch_size=1,\n        shuffle=False\n    )\n\nmodel = keras.models.Sequential()\nmodel.add(keras.layers.LSTM(100, input_shape=(1, 1), return_sequences=False))\nmodel.add(keras.layers.Dense(1))\n\n\nmodel.compile(\n    loss=\"mse\",\n    optimizer=\"rmsprop\",\n    metrics=[keras.metrics.mean_squared_error]\n)\nmodel.optimizer.lr/=.1\n\nhistory = model.fit_generator(\n    train_gen,\n    epochs=20,\n    steps_per_epoch=100\n)\n
\n\n

Proper plotting:

\n\n
y_pred = model.predict_generator(train_gen)\nplot_points = 39\nepochs = range(1, plot_points + 1)\npred_points = np.resize(y_pred[:plot_points], (plot_points,))\n\ntarget_points = train_gen.targets[1:plot_points+1] #NOTICE DIFFERENT INDEXING HERE\n\nplt.plot(epochs, pred_points, 'b', label='Predictions')\nplt.plot(epochs, target_points, 'r', label='Targets')\nplt.legend()\nplt.show()\n
\n\n

Output: notice how the fit is no longer inverted and is mostly very accurate:

\n\n

\"With

\n\n

This is how it looks when the offset is incorrect:

\n\n

\"Without

\n", + "system": "" + }, + { + "instruction": "Is there a way to output a metric with several values in keras?", + "input": "", + "output": "

For keras to output all channels, you will need one metric per channel. You can create a wrapper that takes the index and returns only the desired class:

\n\n
#calculates dice considering an input with a single class\ndef dice_single(true,pred):\n    true = K.batch_flatten(true)\n    pred = K.batch_flatten(pred)\n    pred = K.round(pred)\n\n    intersection = K.sum(true * pred, axis=-1)\n    true = K.sum(true, axis=-1)\n    pred = K.sum(pred, axis=-1)\n\n    return ((2*intersection) + K.epsilon()) / (true + pred + K.epsilon())\n\ndef dice_for_class(index):\n    def dice_inner(true,pred):\n\n        #get only the desired class\n        true = true[:,:,:,index]\n        pred = pred[:,:,:,index]\n\n        #return dice per class\n        return dice_single(true,pred)\n    return dice_inner\n
\n\n

Then your metrics in the model will be `metrics = [dice_for_class(i) for i in range(10)]`.

\n\n
\n\n

Hint: don't iterate unless it's absolutely necessary.

\n\n

Example of dice for the ten classes without iteration

\n\n
def dice_metric(ground_truth, prediction):\n\n    #for metrics, it's good to round predictions:\n    prediction = K.round(prediction)\n\n    #intersection and totals per class per batch (considers channels last)\n    intersection = ground_truth * prediction\n    intersection = K.sum(intersection, axis=[1,2])\n    ground_truth = K.sum(ground_truth, axis=[1,2])\n    prediction = K.sum(prediction, axis=[1,2])\n\n    dice = ((2 * intersection) + K.epsilon()) / (ground_truth + prediction + K.epsilon())\n    return dice\n
\n", + "system": "" + }, + { + "instruction": "why Tensorflow-gpu is still using cpu", + "input": "", + "output": "

It is using the GPU, as you can see in the logs.\nThe problem is that a lot of things cannot be done on the GPU, and as long as your data is small and your model's complexity is low, you will end up with low GPU usage.

\n\n

Here is some more detailed explanation.

\n", + "system": "" + }, + { + "instruction": "Multiple outputs in keras Sequential models", + "input": "", + "output": "

Not really. The Sequential model is here to make things simpler when designing smaller and straightforward neural networks. As noted here, it can be useful for most problems.

\n\n
\n

The Sequential API allows you to create models layer-by-layer for most\n problems. It is limited in that it does not allow you to create models\n that share layers or have multiple inputs or outputs.

\n
\n\n

But if you need more complex design, with multiple input/output as well as models that share layers, you can use the Functional API to achieve your goal.

\n", + "system": "" + }, + { + "instruction": "How do I get Keras to train a model on a specific GPU?", + "input": "", + "output": "

This is possibly a duplicate of my previous question.

\n

It's a bit more complicated. Keras will allocate memory on both GPUs, although it will only use one GPU by default. Check keras.utils.multi_gpu_model for using several GPUs.

\n

I found the solution by choosing the GPU using the environment variable CUDA_VISIBLE_DEVICES.

\n

You can add this manually before importing keras or tensorflow to choose your gpu

\n
import os\n\nos.environ["CUDA_VISIBLE_DEVICES"] = "0"  # first gpu\nos.environ["CUDA_VISIBLE_DEVICES"] = "1"  # second gpu\nos.environ["CUDA_VISIBLE_DEVICES"] = "-1" # run on cpu\n
\n

To make it automatically, I made a function that parses nvidia-smi and detects automatically which GPU is being already used and sets the appropriate value to the variable.
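A rough sketch of such a function (my own take, not the original author's code): query nvidia-smi for per-GPU memory usage and pick the least-loaded device, falling back to CPU when nvidia-smi is missing or its output cannot be parsed.

```python
import os
import subprocess

def pick_free_gpu(default="-1"):
    # ask nvidia-smi for "index, memory.used" per GPU and pick the
    # least-loaded one; return default ("-1" = CPU) on any failure
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=index,memory.used",
             "--format=csv,noheader,nounits"], text=True)
        usage = [tuple(map(int, line.split(","))) for line in out.strip().splitlines()]
        return str(min(usage, key=lambda t: t[1])[0])
    except (OSError, subprocess.CalledProcessError, ValueError):
        return default

# must run before keras / tensorflow are imported
os.environ["CUDA_VISIBLE_DEVICES"] = pick_free_gpu()
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Picking by memory use is only a heuristic; you could also parse the process list to find GPUs with no running jobs at all.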

\n", + "system": "" + }, + { + "instruction": "Using sample_weights with fit_generator()", + "input": "", + "output": "

You can provide sample weights as the third element of the tuple returned by the generator. From Keras documentation on fit_generator:

\n
\n

generator: A generator or an instance of Sequence (keras.utils.Sequence) object in order to avoid duplicate data when using multiprocessing. The output of the generator must be either

\n- a tuple (inputs, targets), or\n- a tuple (inputs, targets, sample_weights).\n
\n

Update: Here is a rough sketch of a generator that returns the input samples and targets as well as the sample weights obtained from model g(x):

\n\n
def gen(args):\n    while True:\n        for i in range(num_batches):\n            # get the i-th batch data\n            inputs = ...\n            targets = ...\n            \n            # get the sample weights\n            weights = g.predict(inputs)\n            \n            yield inputs, targets, weights\n            \n            \nmodel.fit_generator(gen(args), steps_per_epoch=num_batches, ...)\n    \n    \n
\n", + "system": "" + }, + { + "instruction": "BrokenProcessPool on using n_jobs parameter in cross_val_score", + "input": "", + "output": "

Try creating your build_classifier function in an external file and importing it. E.g.:\n

\n\n

in file classifier_builder.py:

\n\n
from keras.models import Sequential\nfrom keras.layers import Dense\n\ndef build_classifier():\n    classifier = Sequential()\n    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))\n    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))\n    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))\n    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])\n    return classifier\n
\n\n

and then in your notebook:

\n\n
from classifier_builder import build_classifier\n\nclassifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, nb_epoch = 100)\naccuracies = cross_val_score(estimator = classifier, X = X_train, y = Y_train, cv = 10, n_jobs = -1)\n
\n\n

This solved the issue for me. Apparently, the inline function is not picklable.
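The picklability issue can be demonstrated without Keras at all: pickle serializes functions by their importable module-level name, so a function defined inside another scope cannot be pickled (a minimal sketch, the function names are made up):

```python
import pickle

def make_builder():
    # a function defined inside another function (or an un-importable
    # notebook cell) has no module-level name that pickle can record
    def build_classifier():
        return "classifier"
    return build_classifier

try:
    pickle.dumps(make_builder())
    nested_picklable = True
except (pickle.PicklingError, AttributeError):
    nested_picklable = False

print(nested_picklable)  # False
```

Moving build_classifier to the top level of an importable module is exactly what fixes this, which is why n_jobs=-1 (which pickles the estimator for worker processes) then works.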

\n", + "system": "" + }, + { + "instruction": "InvalidArgumentError: input must be 4-dimensional[8,6171,4]", + "input": "", + "output": "

You need to use

\n\n
new_image = tf.expand_dims(image,0)\n
\n\n

because the model expects a batch of images (a 4-dimensional tensor) rather than a single image.
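The shape change is easiest to see with numpy's equivalent of tf.expand_dims (the sizes below are arbitrary examples):

```python
import numpy as np

image = np.zeros((28, 28, 3))     # one image: (height, width, channels)
batch = np.expand_dims(image, 0)  # same idea as tf.expand_dims(image, 0)
print(image.shape, batch.shape)   # (28, 28, 3) (1, 28, 28, 3)
```

The new leading axis of length 1 is the batch dimension the model was complaining about.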

\n", + "system": "" + }, + { + "instruction": "Keras: Accuracy Drops While Finetuning Inception", + "input": "", + "output": "

Note: Since your problem is a bit strange and difficult to debug without having your trained model and dataset, this answer is just a (best) guess after considering many things that could have gone wrong. Please provide your feedback and I will delete this answer if it does not work.

\n\n

Since the inception_V3 contains BatchNormalization layers, maybe the problem is due to (somehow ambiguous or unexpected) behavior of this layer when you set trainable parameter to False (1, 2, 3, 4).

\n\n

Now, let's see if this is the root of the problem: as suggested by @fchollet, set the learning phase when defining the model for fine-tuning:

\n\n
from keras import backend as K\n\nK.set_learning_phase(0)\n\nbase_model = applications.inception_v3.InceptionV3(weights='imagenet', include_top=False, input_shape=(img_width,img_height,3))\n\nfor layer in base_model.layers:\n    layer.trainable = False\n\nK.set_learning_phase(1)\n\ntop_model = Sequential()\ntop_model.add(Flatten(input_shape=base_model.output_shape[1:]))\ntop_model.add(Dense(1000, activation='relu'))\ntop_model.add(Dense(inclusive_images, activation='softmax'))\n\ntop_model.load_weights(top_model_weights_path)\n\n#combine base and top model\nfullModel = Model(input= base_model.input, output= top_model(base_model.output))\n\nfullModel.compile(loss='categorical_crossentropy',\n             optimizer=optimizers.SGD(lr=1e-4, momentum=0.9), \n             metrics=['accuracy'])\n\n\n#####################################################################\n# Here, define the generators and then fit the model same as before #\n#####################################################################\n
\n\n
\n\n

Side Note: This is not causing any problem in your case, but keep in mind that when you use top_model(base_model.output) the whole Sequential model (i.e. top_model) is stored as one layer of fullModel. You can verify this by either using fullModel.summary() or print(fullModel.layers[-1]). Hence when you used:

\n\n
for layer in model2.layers[:-2]:\n    layer.trainable = False \n
\n\n

you are actually not freezing the last layer of base_model as well. However, since it is a Concatenate layer, and therefore does not have trainable parameters, no problem occurs and it would behave as you intended.

\n", + "system": "" + }, + { + "instruction": "How to model a shared layer in keras?", + "input": "", + "output": "

You can use Keras functional API for this purpose:

\n\n
from keras.layers import Input, concatenate\n\nx = Input(shape=...)\ny = Input(shape=...)\n\nshared_layer = MySharedLayer(...)\nout_x = shared_layer(x)\nout_y = shared_layer(y)\n\nconcat = concatenate([out_x, out_y])\n\n# pass concat to other layers ...\n
\n\n

Note that x and y could be the output tensors of any layer and not necessarily input layers.

\n", + "system": "" + }, + { + "instruction": "Keras Reshape layer adding an extra dimension?", + "input": "", + "output": "

Use Reshape(target_shape=(1,))(x).

\n\n

The batch_size is implied in the entire model and ignored from the beginning to the end.

\n\n

If you do want to access the batch size, use a K.reshape(x,(5,1)).
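What Reshape does can be mimicked with plain numpy: target_shape excludes the batch axis, which Keras prepends itself (a sketch, not the actual Keras internals):

```python
import numpy as np

x = np.arange(5)                      # shape (5,): a batch of 5 scalars
target_shape = (1,)                   # what you pass to Reshape -- batch axis excluded
y = x.reshape((-1,) + target_shape)   # Keras prepends the (unknown) batch axis itself
print(y.shape)                        # (5, 1)
```

Passing (5, 1) as target_shape would instead ask for shape (batch, 5, 1), which is where the unwanted extra dimension comes from.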

\n\n

Keras is not supposed to be used without creating a model made entirely of layers.

\n", + "system": "" + }, + { + "instruction": "ValueError: malformed node or string with ast.literal_eval() when adding a Keras layer", + "input": "", + "output": "

A big mistake: literal_eval only works for literals. In this case, I have a Call.

\n\n

The function literal_eval first parses the string.

\n\n

From /usr/lib/python3.5/ast.py: lines 38-46

\n\n
def literal_eval(node_or_string):\n    \"\"\"\n    Safely evaluate an expression node or a string containing a Python\n    expression.  The string or node provided may only consist of the following\n    Python literal structures: strings, bytes, numbers, tuples, lists, dicts,\n    sets, booleans, and None.\n    \"\"\"\n    if isinstance(node_or_string, str):\n        node_or_string = parse(node_or_string, mode='eval')\n
\n\n

At this point, node_or_string is an instance of Expression. Then, literal_eval gets the body.

\n\n

From /usr/lib/python3.5/ast.py: lines 47-48

\n\n
    if isinstance(node_or_string, Expression):\n        node_or_string = node_or_string.body\n
\n\n

And finally, literal_eval checks the type of the body (node_or_string).

\n\n

From /usr/lib/python3.5/ast.py: lines 49-84

\n\n
    def _convert(node):\n        if isinstance(node, (Str, Bytes)):\n            return node.s\n        elif isinstance(node, Num):\n            return node.n\n        elif isinstance(node, Tuple):\n            return tuple(map(_convert, node.elts))\n        elif isinstance(node, List):\n            return list(map(_convert, node.elts))\n        elif isinstance(node, Set):\n            return set(map(_convert, node.elts))\n        elif isinstance(node, Dict):\n            return dict((_convert(k), _convert(v)) for k, v\n                        in zip(node.keys, node.values))\n        elif isinstance(node, NameConstant):\n            return node.value\n        elif isinstance(node, UnaryOp) and \\\n             isinstance(node.op, (UAdd, USub)) and \\\n             isinstance(node.operand, (Num, UnaryOp, BinOp)):\n            operand = _convert(node.operand)\n            if isinstance(node.op, UAdd):\n                return + operand\n            else:\n                return - operand\n        elif isinstance(node, BinOp) and \\\n             isinstance(node.op, (Add, Sub)) and \\\n             isinstance(node.right, (Num, UnaryOp, BinOp)) and \\\n             isinstance(node.left, (Num, UnaryOp, BinOp)):\n            left = _convert(node.left)\n            right = _convert(node.right)\n            if isinstance(node.op, Add):\n                return left + right\n            else:\n                return left - right\n        raise ValueError('malformed node or string: ' + repr(node))\n    return _convert(node_or_string)\n
\n\n

If the initial code was ast.literal_eval('1+1') (for example), now node_or_string would be an instance of BinOp. But in the case of:

\n\n
code = \"model.add( Dense( input_shape=(10,), units=10, activation='softmax' ) )\"\nast.literal_eval(code)\n
\n\n

The body will be an instance of Call, which does not appear among the valid types of the function.

\n\n

E.g.:

\n\n
import ast\n\ncode_nocall = \"1+1\"\nnode = ast.parse(code_nocall, mode='eval')\nbody = node.body\nprint(type(body)) # Returns <class '_ast.BinOp'>\n\ncode_call = \"print('hello')\"\nnode = ast.parse(code_call, mode='eval')\nbody = node.body\nprint(type(body)) # Returns <class '_ast.Call'>\n
\n\n

Solution

\n\n

The best solution I have found so far to avoid calling eval directly on a raw string is to perform the process manually, with this function:

\n\n
import ast\n\ndef eval_code(code):\n    parsed = ast.parse(code, mode='eval')\n    fixed = ast.fix_missing_locations(parsed)\n    compiled = compile(fixed, '<string>', 'eval')\n    return eval(compiled)\n
\n\n

Now it works:

\n\n
eval_code(\"print('hello world')\")\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nmodel = Sequential()\ncode = \"model.add( Dense( input_shape=(10,), units=10, activation='softmax' ) )\"\neval_code(code)\n
\n", + "system": "" + }, + { + "instruction": "Keras: is there an easy way to mutate (shuffle) data in/out of the training set between epochs?", + "input": "", + "output": "

https://keras.io/models/model/#fit

\n

model.fit() has an argument steps_per_epoch. If you set shuffle=True and choose steps_per_epoch small enough you will get the behaviour that you describe.

\n

In your example with 80 training examples: you could for instance set batch_size to 20 and steps_per_epoch to 4, or batch_size to 10 and steps_per_epoch to 8 etc.

\n", + "system": "" + }, + { + "instruction": "Custom Loss Function in R Keras", + "input": "", + "output": "
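As a quick sketch of the arithmetic (plain Python; the variable names are illustrative, not taken from the question's code):

```python
import math

n_samples = 80   # size of the training set from the question
batch_size = 20

# Number of batches drawn per "epoch"; with shuffle=True every epoch
# then sees a freshly shuffled ordering of the data.
steps_per_epoch = math.ceil(n_samples / batch_size)

# These values would then be passed on as, roughly,
# model.fit(x, y, batch_size=batch_size,
#           steps_per_epoch=steps_per_epoch, shuffle=True)
print(steps_per_epoch)  # 4
```

Picking batch_size 10 with steps_per_epoch 8 follows the same arithmetic.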

You can't use eval in loss functions. This will break the graph.

\n\n

You should just use the sample_weight parameter of the fit method: https://keras.rstudio.com/reference/fit.html

\n\n
##not sure if this is valid R, but \n##at some point you will call `fit` for training with `X_train` and `Y_train`, \n##so, just add the weights.\nhistory <- model$fit(X_train, Y_train, ..., sample_weight = weights)\n
\n\n

That's all (don't use a custom loss).

\n\n
\n\n

Just for knowledge - Passing loss functions to compile

\n\n

Only works for functions taking y_true and y_pred. (Not necessary if you're using sample_weights)

\n\n
model      <- model %>% compile(\n            loss = weighted_mse, \n            optimizer = 'rmsprop',\n            metrics = 'mse')\n
\n\n

But this won't work, you need something similar to the wrapper created by @spadarian.

\n\n

Also, it will be very complicated to keep a correlation between your data and the weights, both because Keras will divide your data in batches and also because the data will be shuffled.

\n", + "system": "" + }, + { + "instruction": "Parallelizing keras models in R using doParallel", + "input": "", + "output": "

Although this question is quite old, I got the same issue, so I'm posting the solution here. The problem is that the Keras model object cannot be transferred to the workers without being serialised first. A quick workaround is to serialise the models before sending them to the workers and then unserialise them on the nodes locally:

\n\n
library(foreach)\nlibrary(doParallel)\ncl<-makeCluster(2)\nregisterDoParallel(cl)\nnep <- 10\n\n# Serialize models before sending them to the workers\nmodels_par <- lapply(models_par, keras::serialize_model)\n\n# Now send the models, not just the indices\nforeach(model = models_par,.packages=c(\"keras\")) %dopar% { \n\n  # Unserialize locally\n  model_local <- keras::unserialize_model(model)\n  model_local %>% keras::fit(\n    x_bagged[[i]], y_bagged[[i]], \n    epochs = nep,\n    validation_split = 0.1,\n    batch_size =256,\n    verbose=1\n  )\n\n  # Serialize before sending back to master\n  keras::serialize_model(model_local)\n} \nstopCluster(cl)\n
\n", + "system": "" + }, + { + "instruction": "How to build a Language model using LSTM that assigns probability of occurence for a given sentence", + "input": "", + "output": "

I have just coded a very simple example showing how one might compute the probability of occurrence of a sentence with a LSTM model. The full code can be found here.

\n

Suppose we want to predict the probability of occurrence of a sentence for the following dataset (this rhyme was published in Mother Goose's Melody in London around 1765):

\n\n
# Data\ndata = ["Two little dicky birds",\n        "Sat on a wall,",\n        "One called Peter,",\n        "One called Paul.",\n        "Fly away, Peter,",\n        "Fly away, Paul!",\n        "Come back, Peter,",\n        "Come back, Paul."]\n
\n

First of all, let's use keras.preprocessing.text.Tokenizer to create a vocabulary and tokenize the sentences:

\n
# Preprocess data\ntokenizer = Tokenizer()\ntokenizer.fit_on_texts(data)\nvocab = tokenizer.word_index\nseqs = tokenizer.texts_to_sequences(data)\n
\n

Our model will take a sequence of words as input (context), and will output the conditional probability distribution of each word in the vocabulary given the context. To this end, we prepare the training data by padding the sequences and sliding windows over them:

\n
def prepare_sentence(seq, maxlen):\n    # Pads seq and slides windows\n    x = []\n    y = []\n    for i, w in enumerate(seq):\n        x_padded = pad_sequences([seq[:i]],\n                                 maxlen=maxlen - 1,\n                                 padding='pre')[0]  # Pads before each sequence\n        x.append(x_padded)\n        y.append(w)\n    return x, y\n\n# Pad sequences and slide windows\nmaxlen = max([len(seq) for seq in seqs])\nx = []\ny = []\nfor seq in seqs:\n    x_windows, y_windows = prepare_sentence(seq, maxlen)\n    x += x_windows\n    y += y_windows\nx = np.array(x)\ny = np.array(y) - 1  # The word <PAD> does not constitute a class\ny = np.eye(len(vocab))[y]  # One hot encoding\n
\n

I decided to slide windows separately for each verse, but this could be done differently.

\n

Next, we define and train a simple LSTM model with Keras. The model consists of an embedding layer, a LSTM layer, and a dense layer with a softmax activation (which uses the output at the last timestep of the LSTM to produce the probability of each word in the vocabulary given the context):

\n
# Define model\nmodel = Sequential()\nmodel.add(Embedding(input_dim=len(vocab) + 1,  # vocabulary size. Adding an\n                                               # extra element for <PAD> word\n                    output_dim=5,  # size of embeddings\n                    input_length=maxlen - 1))  # length of the padded sequences\nmodel.add(LSTM(10))\nmodel.add(Dense(len(vocab), activation='softmax'))\nmodel.compile('rmsprop', 'categorical_crossentropy')\n\n# Train network\nmodel.fit(x, y, epochs=1000)\n
\n

The joint probability P(w_1, ..., w_n) of occurrence of a sentence w_1 ... w_n can be computed using the rule of conditional probability:

\n

P(w_1, ..., w_n)=P(w_1)*P(w_2|w_1)*...*P(w_n|w_{n-1}, ..., w_1)

\n

where each of these conditional probabilities is given by the LSTM model. Note that they might be very small, so it is sensible to work in log space in order to avoid numerical instability issues. Putting it all together:

\n
# Compute probability of occurence of a sentence\nsentence = "One called Peter,"\ntok = tokenizer.texts_to_sequences([sentence])[0]\nx_test, y_test = prepare_sentence(tok, maxlen)\nx_test = np.array(x_test)\ny_test = np.array(y_test) - 1  # The word <PAD> does not constitute a class\np_pred = model.predict(x_test)  # array of conditional probabilities\nvocab_inv = {v: k for k, v in vocab.items()}\n\n#\u00a0Compute product\n# Efficient version: np.exp(np.sum(np.log(np.diag(p_pred[:, y_test]))))\nlog_p_sentence = 0\nfor i, prob in enumerate(p_pred):\n    word = vocab_inv[y_test[i]+1]  # Index 0 from vocab is reserved to <PAD>\n    history = ' '.join([vocab_inv[w] for w in x_test[i, :] if w != 0])\n    prob_word = prob[y_test[i]]\n    log_p_sentence += np.log(prob_word)\n    print('P(w={}|h={})={}'.format(word, history, prob_word))\nprint('Prob. sentence: {}'.format(np.exp(log_p_sentence)))\n
\n

NOTE: This is a very small toy dataset and we might be overfitting.

\n
\n

UPDATE 29/10/2022: For bigger datasets, it is likely that you'll run out of memory if you process the entire dataset at once. In this case, I recommend using a generator to train your model. Please see this gist for a modified version that uses a data generator.

\n", + "system": "" + }, + { + "instruction": "Gensim Word2Vec select minor set of word vectors from pretrained model", + "input": "", + "output": "

Thanks to this answer (I've changed the code a little bit to make it better), you can use the following code to solve your problem.

\n\n

We have our minor set of words in restricted_word_set (it can be either a list or a set) and w2v is our model, so here is the function:

\n\n
import numpy as np\n\ndef restrict_w2v(w2v, restricted_word_set):\n    new_vectors = []\n    new_vocab = {}\n    new_index2entity = []\n    new_vectors_norm = []\n\n    for i in range(len(w2v.vocab)):\n        word = w2v.index2entity[i]\n        vec = w2v.vectors[i]\n        vocab = w2v.vocab[word]\n        vec_norm = w2v.vectors_norm[i]\n        if word in restricted_word_set:\n            vocab.index = len(new_index2entity)\n            new_index2entity.append(word)\n            new_vocab[word] = vocab\n            new_vectors.append(vec)\n            new_vectors_norm.append(vec_norm)\n\n    w2v.vocab = new_vocab\n    w2v.vectors = np.array(new_vectors)\n    w2v.index2entity = np.array(new_index2entity)\n    w2v.index2word = np.array(new_index2entity)\n    w2v.vectors_norm = np.array(new_vectors_norm)\n
\n\n
\n

WARNING: when you first create the model the vectors_norm == None so\n you will get an error if you use this function there. vectors_norm\n will get a value of the type numpy.ndarray after the first use. so\n before using the function try something like most_similar(\"cat\") so\n that vectors_norm not be equal to None.

\n
\n\n

It rewrites all of the variables which are related to the words based on the Word2VecKeyedVectors.

\n\n

Usage:

\n\n
w2v = KeyedVectors.load_word2vec_format(\"GoogleNews-vectors-negative300.bin.gz\", binary=True)\nw2v.most_similar(\"beer\")\n
\n\n
\n

[('beers', 0.8409687876701355),
\n ('lager', 0.7733745574951172),
\n ('Beer', 0.71753990650177),
\n ('drinks', 0.668931245803833),
\n ('lagers', 0.6570086479187012),
\n ('Yuengling_Lager', 0.655455470085144),
\n ('microbrew', 0.6534324884414673),
\n ('Brooklyn_Lager', 0.6501551866531372),
\n ('suds', 0.6497018337249756),
\n ('brewed_beer', 0.6490240097045898)]

\n
\n\n
restricted_word_set = {\"beer\", \"wine\", \"computer\", \"python\", \"bash\", \"lagers\"}\nrestrict_w2v(w2v, restricted_word_set)\nw2v.most_similar(\"beer\")\n
\n\n
\n

[('lagers', 0.6570085287094116),
\n ('wine', 0.6217695474624634),
\n ('bash', 0.20583480596542358),
\n ('computer', 0.06677375733852386),
\n ('python', 0.005948573350906372)]

\n
\n\n

It can also be used to remove some words.

\n", + "system": "" + }, + { + "instruction": "TypeError: write() argument must be str, not bytes while saving .npy file", + "input": "", + "output": "

The code in the blog post is aimed at Python 2, where writing to and reading from a file works with bytestrings. In Python 3, you need to open the file in binary mode, both for writing and then reading again:

\n\n
np.save(\n    open('bottleneck_features_train.npy', 'wb'),\n    bottleneck_features_train)\n
\n\n

And when reading:

\n\n
train_data = np.load(open('bottleneck_features_train.npy', 'rb'))\n
\n\n

Note the b character in the mode arguments there.

\n\n

I'd use the file as a context manager to ensure it is cleanly closed:

\n\n
with open('bottleneck_features_train.npy', 'wb') as features_train_file:\n    np.save(features_train_file, bottleneck_features_train)\n
\n\n

and

\n\n
with open('bottleneck_features_train.npy', 'rb') as features_train_file:\n    train_data = np.load(features_train_file)\n
\n\n

The code in the blog post should use both of these changes anyway, because in Python 2, without the b flag in the mode, text files have platform-specific newline conventions translated, and on Windows certain characters in the stream have special meaning (including causing the file to appear shorter than it really is if an EOF character appears). With binary data that could be a real problem.

\n", + "system": "" + }, + { + "instruction": "Custom Data Generator for Keras LSTM with TimeSeriesGenerator", + "input": "", + "output": "

It could be because the object type changes from Sequence (which is what a TimeseriesGenerator is) to a generic generator. The fit_generator function treats these differently. A cleaner solution would be to inherit from the class and override the processing bit:

\n\n
class CustomGen(TimeseriesGenerator):\n  def __getitem__(self, idx):\n    x, y = super().__getitem__(idx)\n    # do processing here\n    return x, y\n
\n\n

And use this class like before as the rest of internal logic will remain the same.

\n", + "system": "" + }, + { + "instruction": "Change training dataset every N epochs in Keras", + "input": "", + "output": "

Use Sequence to create your dataset and pass it to fit_generator. Define the on_epoch_end method to modify the dataset on certain epochs.

\n\n
\n

Every Sequence must implement the __getitem__ and the __len__ methods. If you want to modify your dataset between epochs you may implement on_epoch_end. The method __getitem__ should return a complete batch.

\n
\n\n

Also, you can safely use Sequence with multiprocessing data processing:

\n\n
\n

The use of keras.utils.Sequence guarantees the ordering and guarantees the single use of every input per epoch when using use_multiprocessing=True.

\n
\n\n

Example

\n\n

Slightly modified from the Sequence documentation to include on_epoch_end.

\n\n
class CIFAR10Sequence(Sequence):\n\n    def __init__(self, x_set, y_set, batch_size):\n        self.x, self.y = x_set, y_set\n        self.epoch = 0\n        self.batch_size = batch_size\n\n    def __len__(self):\n        return int(np.ceil(len(self.x) / float(self.batch_size)))\n\n    def __getitem__(self, idx):\n        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]\n        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]\n\n        return np.array([\n            resize(imread(file_name), (200, 200))\n               for file_name in batch_x]), np.array(batch_y)\n\n    def on_epoch_end(self):\n        if self.epoch % N == 0:\n            pass\n            # modify data\n        self.epoch += 1\n
\n", + "system": "" + }, + { + "instruction": "How are metrics computed in Keras?", + "input": "", + "output": "

Something additional to know with respect to the metric for the VALIDATION set:

\n

Contrary to what is suggested in another answer, I just saw that the metric on the validation set is calculated in batches, and then averaged (of course the trained model at the end of the epoch is used, in contrast to how the metric score is calculated for the training set).

\n

If you want to compute it on the whole validation data at once, you have to use a callback as described in the accepted answer of guangshengzuo (see https://keras.io/guides/writing_your_own_callbacks/ for more details).

\n

Sure, for the usual metrics there will not be any difference whether you first calculate in batches and average, or do it all in one big batch. BUT for custom metrics there very well can be: I just had a case where the metric would tune a parameter based on the data.

\n
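To see why this can matter, here is a small illustrative sketch (toy numbers, not from the question): a metric such as precision averaged over two batches generally differs from the same metric computed over all validation samples at once:

```python
def precision(y_true, y_pred):
    # Fraction of predicted positives that are truly positive
    predicted_positives = sum(y_pred)
    true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return true_positives / predicted_positives

batch_a = ([1, 1, 0, 1], [1, 1, 0, 0])  # precision 2/2 = 1.0
batch_b = ([1, 0, 0, 0], [1, 1, 1, 1])  # precision 1/4 = 0.25

batch_averaged = (precision(*batch_a) + precision(*batch_b)) / 2  # 0.625
whole_set = precision(batch_a[0] + batch_b[0],
                      batch_a[1] + batch_b[1])                    # 3/6 = 0.5

print(batch_averaged, whole_set)  # 0.625 0.5 -- not the same number
```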

Edit: added link on callbacks, in response to comment

\n", + "system": "" + }, + { + "instruction": "What are the uses of tf.space_to_depth?", + "input": "", + "output": "

space_to_depth is a convolutional practice used very often for lossless spatial dimensionality reduction. Applied to a tensor of shape (example_dim, width, height, channels) with block_size = k, it produces a tensor with shape (example_dim, width / block_size, height / block_size, channels * block_size ** 2). It works in the following manner (example_dim is skipped for simplicity):

\n\n
    \n
  1. Cut image / feature map into chunks of size (block_size, block_size, channels): e.g. the following image (with block_size = 2):

    \n\n
    [[[1], [2], [3], [4]],\n [[5], [6], [7], [8]],\n [[9], [10], [11], [12]],\n [[13], [14], [15], [16]]]\n
    \n\n

    is divided into the following chunks:

    \n\n
    [[[1], [2]],       [[[3], [4]],\n [[5], [6]]]        [[7], [8]]]\n\n[[[9], [10]],      [[[11], [12]],\n [[13], [14]]]      [[15], [16]]]\n
  \n
  2. Flatten each chunk to a single array:

    \n\n
    [[1, 2, 5, 6]],      [[3, 4, 7, 8]]\n[[9, 10, 13, 14]],    [[11, 12, 15, 16]]\n
  \n
  3. Spatially rearrange chunks according to their initial position:

    \n\n
    [[[1, 2, 5, 6]], [[3, 4, 7, 8]],\n [[9, 10, 13, 14]], [[11, 12, 15, 16]]]\n
  \n
\n\n

So - as you may see - the initial image with size (4, 4, 1) was rearranged to a feature map with shape (2, 2, 4). This strategy is usually used for applications like object detection, segmentation or super-resolution, when it's important to decrease the spatial size of an image without losing information (unlike pooling). An example of an application of this technique might be found e.g. here.

\n", + "system": "" + }, + { + "instruction": "Add hand-crafted features to Keras sequential model", + "input": "", + "output": "
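The three steps above can be sketched in plain Python on the same 4x4x1 example (a minimal illustration; tf.space_to_depth performs this natively on tensors):

```python
def space_to_depth(image, block_size):
    # image: nested lists with shape (height, width, channels)
    height, width = len(image), len(image[0])
    output = []
    for i in range(0, height, block_size):        # walk over chunk rows
        row = []
        for j in range(0, width, block_size):     # walk over chunk columns
            chunk = []                            # flatten one chunk
            for di in range(block_size):
                for dj in range(block_size):
                    chunk.extend(image[i + di][j + dj])
            row.append(chunk)
        output.append(row)
    return output

image = [[[1], [2], [3], [4]],
         [[5], [6], [7], [8]],
         [[9], [10], [11], [12]],
         [[13], [14], [15], [16]]]

result = space_to_depth(image, 2)
print(result)
# [[[1, 2, 5, 6], [3, 4, 7, 8]], [[9, 10, 13, 14], [11, 12, 15, 16]]]
```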

The Sequential model is not very flexible. You should look into the functional API.

\n\n

I would try something like this:

\n\n
from keras.layers import (Conv1D, MaxPool1D, Dropout, Flatten, Dense,\n                          Input, concatenate)\nfrom keras.models import Model, Sequential\n\ntimesteps = 50\nn = 5\n\ndef network():\n    sequence = Input(shape=(timesteps, 1), name='Sequence')\n    features = Input(shape=(n,), name='Features')\n\n    conv = Sequential()\n    conv.add(Conv1D(10, 5, activation='relu', input_shape=(timesteps, 1)))\n    conv.add(Conv1D(10, 5, activation='relu'))\n    conv.add(MaxPool1D(2))\n    conv.add(Dropout(0.5, seed=789))\n\n    conv.add(Conv1D(5, 6, activation='relu'))\n    conv.add(Conv1D(5, 6, activation='relu'))\n    conv.add(MaxPool1D(2))\n    conv.add(Dropout(0.5, seed=789))\n    conv.add(Flatten())\n    part1 = conv(sequence)\n\n    merged = concatenate([part1, features])\n\n    final = Dense(512, activation='relu')(merged)\n    final = Dropout(0.5, seed=789)(final)\n    final = Dense(2, activation='softmax')(final)\n\n    model = Model(inputs=[sequence, features], outputs=[final])\n\n    model.compile(loss='logcosh', optimizer='adam', metrics=['accuracy'])\n\n    return model\n\nm = network()\n
\n", + "system": "" + }, + { + "instruction": "How does Keras back propagate custom loss function?", + "input": "", + "output": "

The magic is called automatic differentiation (AD). Keras is built on top of symbolic computational frameworks, namely Theano, TensorFlow, and/or CNTK. These frameworks allow you to define the loss as a symbolic expression, which can easily be differentiated at runtime, as the whole representation is symbolic.

\n\n
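To illustrate the idea (a deliberately tiny toy, not how these frameworks are implemented internally): reverse-mode AD records, for every operation, the local derivative with respect to each input, and the chain rule is then applied mechanically backwards through the recorded graph:

```python
class Var:
    """A scalar node that records local derivatives for backpropagation."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent_node, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __sub__(self, other):
        return Var(self.value - other.value, ((self, 1.0), (other, -1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, upstream=1.0):
        # Chain rule: accumulate upstream * local gradient into each parent
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

w, x, y = Var(3.0), Var(2.0), Var(10.0)
error = w * x - y          # e = w*x - y = -4
loss = error * error       # loss = e^2 = 16

loss.backward()
print(loss.value, w.grad)  # 16.0 and dloss/dw = 2*e*x = -16.0
```

No analytic gradient was written for the loss itself; it falls out of the recorded local derivatives, which is exactly what you get for free when the loss is a symbolic expression.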

In contrast, Caffe is built in C++ and does not use any symbolic representation framework, and as you mention, it needs to specify the loss function and its gradient analytically in code.

\n", + "system": "" + }, + { + "instruction": "How to extract False Positive, False Negative from a confusion matrix of multiclass classification", + "input": "", + "output": "

First of all, you have omissions in your code - in order to run, I needed to add the following commands:

\n\n
import keras\nfrom keras.datasets import mnist\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n
\n\n

Having done that, and given the confusion matrix cm1:

\n\n
array([[ 965,    0,    1,    0,    0,    2,    6,    1,    5,    0],\n       [   0, 1113,    4,    2,    0,    0,    3,    0,   13,    0],\n       [   8,    0,  963,   14,    5,    1,    7,    8,   21,    5],\n       [   0,    0,    3,  978,    0,    7,    0,    6,   12,    4],\n       [   1,    0,    4,    0,  922,    0,    9,    3,    3,   40],\n       [   4,    1,    1,   27,    0,  824,    6,    1,   20,    8],\n       [  11,    3,    1,    1,    5,    6,  925,    0,    6,    0],\n       [   2,    6,   17,    8,    2,    0,    1,  961,    2,   29],\n       [   5,    1,    2,   13,    4,    6,    2,    6,  929,    6],\n       [   6,    5,    0,    7,    5,    6,    1,    6,   10,  963]])\n
\n\n

here is how you can get the requested TP, FP, FN, TN per class:

\n\n

The True Positives are simply the diagonal elements:

\n\n
TruePositive = np.diag(cm1)\nTruePositive\n# array([ 965, 1113,  963,  978,  922,  824,  925,  961,  929,  963])\n
\n\n

The False Positives are the sum of the respective column, minus the diagonal element:

\n\n
FalsePositive = []\nfor i in range(num_classes):\n    FalsePositive.append(sum(cm1[:,i]) - cm1[i,i])\nFalsePositive\n# [37, 16, 33, 72, 21, 28, 35, 31, 92, 92]\n
\n\n

Similarly, the False Negatives are the sum of the respective row, minus the diagonal element:

\n\n
FalseNegative = []\nfor i in range(num_classes):\n    FalseNegative.append(sum(cm1[i,:]) - cm1[i,i])\nFalseNegative\n# [15, 22, 69, 32, 60, 68, 33, 67, 45, 46]\n
\n\n

Now, the True Negatives are a little trickier; let's first think what exactly a True Negative means, with respect to, say class 0: it means all the samples that have been correctly identified as not being 0. So, essentially what we should do is remove the corresponding row & column from the confusion matrix, and then sum up all the remaining elements:

\n\n
TrueNegative = []\nfor i in range(num_classes):\n    temp = np.delete(cm1, i, 0)   # delete ith row\n    temp = np.delete(temp, i, 1)  # delete ith column\n    TrueNegative.append(sum(sum(temp)))\nTrueNegative\n# [8998, 8871, 9004, 8950, 9057, 9148, 9040, 9008, 8979, 8945]\n
\n\n

Let's make a sanity check: for each class, the sum of TP, FP, FN, and TN must be equal to the size of our test set (here 10,000): let's confirm that this is indeed the case:

\n\n
l = len(y_test)\nfor i in range(num_classes):\n    print(TruePositive[i] + FalsePositive[i] + FalseNegative[i] + TrueNegative[i] == l)\n
\n\n

The result is

\n\n
True\nTrue\nTrue\nTrue\nTrue\nTrue\nTrue\nTrue\nTrue\nTrue\n
\n", + "system": "" + }, + { + "instruction": "Keras LSTM TimeDistributed, stateful", + "input": "", + "output": "

TimeDistributed:

\n

This does not affect how layers work.\nThe purpose of this is to have an additional “time” dimension (which may not actually represent time). The wrapped layer will be applied to each slice of the input tensor along this time dimension.

\n

For instance, if a layer is expecting an input shape with 3 dimensions, say (batch, length, features), using the TimeDistributed wrapper will make it expect 4 dimensions: (batch, timeDimension, length, features)

\n

The layer will then be "copied" and applied equally to each element in the time dimension.

\n

With an LSTM layer, it works the same. Although an LSTM layer already expects a time dimension in its input shape, (batch, timeSteps, features), you can use TimeDistributed to add yet another “time” dimension (which may mean anything, not exactly time) and make this LSTM layer be reused for each element in this new time dimension.

\n\n

In any case, the LSTM will only actually perform its recurrent calculations in the timeSteps dimension. The other time dimension is just replicating this layer many times.

\n
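A plain-Python sketch of what the wrapper does to shapes (illustrative only — the toy layer is made up, and real Keras operates on tensors, not nested lists):

```python
def time_distributed(layer, batch):
    # batch has shape (batch, timeDimension, *inner); the wrapped layer
    # is applied independently to every slice along timeDimension
    return [[layer(step) for step in sample] for sample in batch]

def toy_dense(features):
    # A stand-in "layer" mapping a 4-feature vector to 2 outputs
    return [sum(features), max(features)]

# shape (batch=2, timeDimension=3, features=4)
batch = [[[1, 2, 3, 4]] * 3,
         [[5, 6, 7, 8]] * 3]

out = time_distributed(toy_dense, batch)
# output shape: (2, 3, 2) -- the same layer was applied to each time slice
print(len(out), len(out[0]), len(out[0][0]))  # 2 3 2
```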

TimeDistributed + Dense:

\n

The Dense layer (and maybe a few others) already supports 3D inputs, although the standard is 2D: (batch, inputFeatures).

\n

Using the TimeDistributed or not with Dense layers is optional and the result is the same: if your data is 3D, the Dense layer will be repeated for the second dimension.

\n

Return sequences:

\n

This is well explained in the documentation.

\n

With recurrent layers, keras will use the timeSteps dimension to perform its recurrent steps. For each step, it will naturally have an output.

\n

You can choose to get the outputs for all steps (return_sequences=True) or to get just the last output (return_sequences=False)

\n

Consider an input shape like (batch, timeSteps, inputFeatures) and a layer with outputFeatures units:

\n\n
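Concretely, for that input the output shape is (batch, timeSteps, outputFeatures) with return_sequences=True, and (batch, outputFeatures) with return_sequences=False. A toy recurrence (plain Python, not real LSTM math) makes the difference visible:

```python
def toy_rnn(sequence, return_sequences):
    # A trivial recurrence: the state is the running sum of the inputs.
    state = 0
    outputs = []
    for step in sequence:            # one output per recurrent step
        state = state + step
        outputs.append(state)
    return outputs if return_sequences else outputs[-1]

seq = [1, 2, 3]
all_steps = toy_rnn(seq, return_sequences=True)    # [1, 3, 6] (all steps)
last_only = toy_rnn(seq, return_sequences=False)   # 6 (last step only)
print(all_steps, last_only)
```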

In any case, if you use a TimeDistributed wrapper, the superSteps dimension will be in the input and the output, unchanged.

\n

Stateful = True

\n

Usually, if you can put all your sequences with all their steps in an input array, everything is fine and you don't need stateful=True layers.

\n

Keras creates a "state" for each sequence in the batch. The batch dimension is equal to the number of sequences. When keras finishes processing a batch, it automatically resets the states, meaning: we reached the end (last time step) of the sequences, bring new sequences from the first step.

\n

When using stateful=True, these states will not be reset. This means that sending another batch to the model will not be interpreted as a new set of sequences, but as additional steps for the sequences that were processed before. You must then call model.reset_states() manually to tell the model that you've reached the last step of the sequences, or that you will start new sequences.

\n
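A toy sketch of that behaviour (an illustrative accumulator standing in for the recurrent state, not an actual LSTM):

```python
class ToyStatefulLayer:
    """Keeps its state across batches until reset_states() is called."""
    def __init__(self):
        self.state = 0

    def process_batch(self, steps):
        for x in steps:           # each step updates the carried state
            self.state += x
        return self.state

    def reset_states(self):
        self.state = 0

layer = ToyStatefulLayer()
first = layer.process_batch([1, 2, 3])   # 6
# stateful=True behaviour: the next batch CONTINUES the same sequences
second = layer.process_batch([4, 5])     # 15: steps 4-5 of the same sequence

layer.reset_states()                     # announce that new sequences begin
third = layer.process_batch([1, 2, 3])   # 6 again
print(first, second, third)
```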

The only case that needs shuffle=False is this stateful=True case. Because for each batch, many sequences are input. In every batch these sequences must be kept in the same order, so that the states for each sequence don't get mixed.

\n

Stateful layers are good for:

\n\n

Working with windows

\n

So far, the only way I could work with windows was replicating data.

\n

The input array should be organized in windows. One sequence per window step. You could optionally take advantage of the TimeDistributed wrapper if you want to keep all window steps as a single batch entry. But you can make all steps be individual sequences as well.

\n

The stateful=True layer won't work with windows because of the states. If you input in a batch the steps from 1 to 12, the next batch will be expecting the step 13 as first step to keep the connection.

\n", + "system": "" + }, + { + "instruction": "Why does binary accuracy give high accuracy while categorical accuracy give low accuracy, in a multi-class classification problem?", + "input": "", + "output": "

So you need to understand what happens when you apply a binary_crossentropy to a multiclass prediction.

\n\n
    \n
  1. Let's assume that your output from softmax is (0.1, 0.2, 0.3, 0.4) and one-hot encoded ground truth is (1, 0, 0, 0).
  \n
  2. The binary accuracy that accompanies binary_crossentropy rounds each output at the 0.5 threshold, so the output of your network is turned into the vector (0, 0, 0, 0).
  \n
  3. (0, 0, 0, 0) matches ground truth (1, 0, 0, 0) on 3 out of 4 indexes - this makes the resulting accuracy 75% for a completely wrong answer!
  \n
\n\n
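The arithmetic from the steps above, written out in plain Python (mirroring the elementwise rounding that binary accuracy performs):

```python
prediction = [0.1, 0.2, 0.3, 0.4]   # softmax output
truth = [1, 0, 0, 0]                # one-hot encoded ground truth

# binary accuracy rounds each entry at 0.5 and compares elementwise
rounded = [1 if p > 0.5 else 0 for p in prediction]   # [0, 0, 0, 0]
accuracy = sum(r == t for r, t in zip(rounded, truth)) / len(truth)

print(accuracy)  # 0.75 for a completely wrong prediction
```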

To solve this you could use a single class accuracy, e.g. like this one:

\n\n
def single_class_accuracy(interesting_class_id):\n    def fn(y_true, y_pred):\n        class_id_preds = K.argmax(y_pred, axis=-1)\n        # Replace class_id_preds with class_id_true for recall here\n        positive_mask = K.cast(K.equal(class_id_preds, interesting_class_id), 'int32')\n        true_mask = K.cast(K.equal(y_true, interesting_class_id), 'int32')\n        acc_mask = K.cast(K.equal(positive_mask, true_mask), 'float32')\n        class_acc = K.mean(acc_mask)\n        return class_acc\n\n    return fn\n
\n", + "system": "" + }, + { + "instruction": "keras validation_data with multiple input", + "input": "", + "output": "

model.fit() takes as first argument the data input and as the second one the data output. You attempt to do that by using [X['macd_train'], X['rsi_train'], X['ema_train']]

\n\n

However, you are not concatenating your data but only increasing the dimension of your array. You should use numpy.concatenate() to control the axis along which your inputs are joined.

\n", + "system": "" + }, + { + "instruction": "How to specify the axis when using the softmax activation in a Keras layer?", + "input": "", + "output": "
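To make the distinction concrete, here is a plain-Python sketch with toy values (with NumPy arrays, numpy.concatenate([macd, rsi, ema], axis=1) does the equivalent join):

```python
# three feature series, one row per sample (toy values)
macd = [[0.1], [0.2]]
rsi = [[55.0], [60.0]]
ema = [[1.1], [1.2]]

# wrapping them in a list only adds a dimension: shape (3, 2, 1)
stacked = [macd, rsi, ema]

# concatenating joins the features of each sample: shape (2, 3)
concatenated = [m + r + e for m, r, e in zip(macd, rsi, ema)]

print(concatenated)  # [[0.1, 55.0, 1.1], [0.2, 60.0, 1.2]]
```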

You must use an actual function there, not a string.

\n\n

Keras allows you to use a few strings for convenience.

\n\n

The activation functions can be found in keras.activations, and they're listed in the help file.

\n\n
from keras.activations import softmax\n\ndef softMaxAxis1(x):\n    return softmax(x,axis=1)\n\n..... \n......\nmodel.add(layers.Dense(output_dim=n, activation=softMaxAxis1))\n
\n\n
\n\n

Or even a custom axis:

\n\n
def softMaxAxis(axis):\n    def soft(x):\n        return softmax(x,axis=axis)\n    return soft\n\n...\nmodel.add(layers.Dense(output_dim=n, activation=softMaxAxis(1)))\n
\n", + "system": "" + }, + { + "instruction": "How to merge keras sequential models with same input?", + "input": "", + "output": "
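To see what the axis choice changes, here is a stdlib-only sketch of softmax on a 2-D list — the chosen axis determines which dimension is normalised to sum to 1:

```python
import math

def softmax_rows(matrix):
    # softmax over the last axis: every ROW is normalised to sum to 1
    out = []
    for row in matrix:
        exps = [math.exp(v) for v in row]
        total = sum(exps)
        out.append([e / total for e in exps])
    return out

def softmax_cols(matrix):
    # softmax over the other axis: normalise COLUMNS instead
    # (transpose, apply the row version, transpose back)
    transposed = list(map(list, zip(*matrix)))
    return list(map(list, zip(*softmax_rows(transposed))))

m = [[1.0, 2.0, 3.0],
     [1.0, 1.0, 1.0]]

row_sums = [sum(r) for r in softmax_rows(m)]        # each ~ 1.0
col_sums = [sum(c) for c in zip(*softmax_cols(m))]  # each ~ 1.0
print(row_sums, col_sums)
```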

Keras functional API seems to be a better fit for your use case, as it allows more flexibility in the computation graph. e.g.:

\n\n
from keras.layers import concatenate\nfrom keras.models import Model\nfrom keras.layers import Input, Merge\nfrom keras.layers.core import Dense\nfrom keras.layers.merge import concatenate\n\n# a single input layer\ninputs = Input(shape=(3,))\n\n# model 1\nx1 = Dense(3, activation='relu')(inputs)\nx1 = Dense(2, activation='relu')(x1)\nx1 = Dense(2, activation='tanh')(x1)\n\n# model 2 \nx2 = Dense(3, activation='linear')(inputs)\nx2 = Dense(4, activation='tanh')(x2)\nx2 = Dense(3, activation='tanh')(x2)\n\n# merging models\nx3 = concatenate([x1, x2])\n\n# output layer\npredictions = Dense(1, activation='sigmoid')(x3)\n\n# generate a model from the layers above\nmodel = Model(inputs=inputs, outputs=predictions)\nmodel.compile(optimizer='adam',\n              loss='binary_crossentropy',\n              metrics=['accuracy'])\n\n# Always a good idea to verify it looks as you expect it to \n# model.summary()\n\ndata = [[1,2,3], [1,1,3], [7,8,9], [5,8,10]]\nlabels = [0,0,1,1]\n\n# The resulting model can be fit with a single input:\nmodel.fit(data, labels, epochs=50)\n
\n\n

Notes:

\n\n\n\n

EDIT: updated notes based on comments

\n", + "system": "" + }, + { + "instruction": "Keras + Tensorflow strange results", + "input": "", + "output": "

If you shuffle the data, the problem is solved.

\n\n

[plot: training and validation accuracy curves after shuffling the data]

\n\n
import matplotlib.pyplot as plt\nimport numpy\nfrom keras import callbacks\nfrom keras import optimizers\nfrom keras.layers import Dense\nfrom keras.models import Sequential\nfrom keras.callbacks import ModelCheckpoint\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.utils import shuffle\n\n# TensorBoard callback for visualization of training history\ntb = callbacks.TensorBoard(log_dir='./logs/4', histogram_freq=10, batch_size=32,\n                           write_graph=True, write_grads=True, write_images=False,\n                           embeddings_freq=0, embeddings_layer_names=None, embeddings_metadata=None)\n\n\n# Early stopping - Stop training before overfitting\nearly_stop = callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto')\n\n# fix random seed for reproducibility\nseed = 42\nnumpy.random.seed(seed)\n# load pima indians dataset\ndataset = numpy.loadtxt(\"../Downloads/pima-indians-diabetes.csv\", delimiter=\",\")\n# split into input (X) and output (Y) variables\nX = dataset[:, 0:8]\nY = dataset[:, 8]\n\n# Standardize features by removing the mean and scaling to unit variance\nscaler = StandardScaler()\nX = scaler.fit_transform(X)\n\n# This is the important part\nX, Y = shuffle(X, Y)\n\n#ADAM Optimizer with learning rate decay\nopt = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0001)\n\n## Create our model\nmodel = Sequential()\n\nmodel.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))\nmodel.add(Dense(8, kernel_initializer='uniform', activation='relu'))\nmodel.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))\n\n# Compile the model using binary crossentropy since we are predicting 0/1\nmodel.compile(loss='binary_crossentropy',\n              optimizer=opt,\n              metrics=['accuracy'])\n\n# checkpoint\n# filepath=\"./checkpoints/weights.best.hdf5\"\n# checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, 
save_best_only=True, mode='max')\n\n# Fit the model\nhistory = model.fit(X, Y, validation_split=0.33, epochs=1000, batch_size=10, verbose=0, callbacks=[tb,early_stop])\n# list all data in history\nprint(history.history.keys())\n# summarize history for accuracy\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n# summarize history for loss\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n
\n", + "system": "" + }, + { + "instruction": "What does nb_epoch in neural network stands for?", + "input": "", + "output": "

Starting with Keras 2.0, the nb_epoch argument has been renamed to epochs everywhere.

\n\n

Neural networks are trained iteratively, making multiple passes over the entire dataset. Each pass over the entire dataset is referred to as an epoch.

\n\n

There are two possible ways to choose an optimum number of epochs:

\n\n

1) Set epochs to a large number, and stop training when validation accuracy or loss stops improving: so-called early stopping.

\n\n
from keras.callbacks import EarlyStopping\n\n# stop when validation loss has not improved for 4 consecutive epochs\nearly_stopping = EarlyStopping(monitor='val_loss', patience=4, mode='auto')\n\nmodel.fit(X_train, Y_train,\n      batch_size=128, epochs=500,\n      verbose=1,\n      validation_data=(X_test, Y_test), callbacks=[early_stopping])\n</code></pre>
\n\n

2) Consider the number of epochs a hyperparameter and select the best value based on a set of trials (runs) over a grid of epoch values.

\n", + "system": "" + }, + { + "instruction": "Keras -- Input Shape for Embedding Layer", + "input": "", + "output": "

I fixed this particular error by adding an input_shape field to the Embedding layer as follows:

\n\n
m.add(Embedding(features, embedding_dims, input_length=maxlen, input_shape=(features, ) ))\n
\n\n

features is the number of features (29430).

\n", + "system": "" + }, + { + "instruction": "Why do Keras Conv1D layers' output tensors not have the input dimension?", + "input": "", + "output": "

I used to have the same problem with 2D convolutions. The thing is that when you apply a convolutional layer the kernel you are applying is not of size (kernel_size, 1) but actually (kernel_size, input_dim).

\n\n

If you think about it, if it weren't this way, a 1D convolutional layer with <code>kernel_size = 1</code> would be doing nothing to the inputs it received.

\n\n

Instead it is computing a weighted average of the input features at each time step, using the same weights for each time step (although every filter uses a different set of weights). I think it helps to visualize <code>input_dim</code> as the number of channels in a 2D convolution of an image, where the same reasoning applies (in that case it is the channels that \"get lost\" and transformed into the number of filters).

\n\n

To convince yourself of this, you can reproduce the 1D convolution with a 2D convolution layer using <code>kernel_size=(1D_kernel_size, input_dim)</code> and the same number of filters. Here is an example:

\n\n
from keras.layers import Conv1D, Conv2D\nimport keras.backend as K\nimport numpy as np\n\n# create an input with 4 steps and 5 channels/input_dim\nchannels = 5\nsteps = 4\nfilters = 3\nval = np.array([list(range(i * channels, (i + 1) * channels)) for i in range(1, steps + 1)])\nval = np.expand_dims(val, axis=0)\nx = K.variable(value=val)\n\n# 1D convolution. Initialize the kernels to ones so that it's easier to compute the result by hand\n\nconv1d = Conv1D(filters=filters, kernel_size=1, kernel_initializer='ones')(x)\n\n# 2D convolution that replicates the 1D one\n\n# need to add a dimension to your input since conv2d expects 4D inputs. I add it at axis 4 since my keras is setup with `channel_last`\nval1 = np.expand_dims(val, axis=3)\nx1 = K.variable(value=val1)\n\nconv2d = Conv2D(filters=filters, kernel_size=(1, 5), kernel_initializer='ones')(x1)\n\n# evaluate and print the outputs\n\nprint(K.eval(conv1d))\nprint(K.eval(conv2d))\n
\n\n

As I said, it took me a while to understand this too, I think mostly because no tutorial explains it clearly.

\n", + "system": "" + }, + { + "instruction": "Convert trained Keras image classification model to coreml and integrate in iOS11", + "input": "", + "output": "
\n

Not sure what \"Dimensions of layer 'output' is not the same size as the number of class labels\" means.

\n
\n\n

This means that the last layer of your model is a different dimension than your class labels (which I assume is of dimension 2). I would recommend removing this parameter:

\n\n
\n

class_labels = output_labels

\n
\n\n

from your model conversion and see if it fixes the problem.

\n", + "system": "" + }, + { + "instruction": "AttributeError:'Tensor' object has no attribute '_keras_history'", + "input": "", + "output": "

The problem lay in the fact that every tf operation should be encapsulated by one of:

\n\n
    \n
  1. Using keras.backend functions,
  2. \n
  2. Lambda layers,
  4. \n
  3. Designated Keras functions with the same behavior.
  6. \n
\n\n

When you use a tf operation, you get a tf tensor object, which doesn't have the _keras_history field. When you use Keras functions, you get Keras tensors.

\n", + "system": "" + }, + { + "instruction": "Reloading Keras Tokenizer during Testing", + "input": "", + "output": "

Check out this question \nThe commenter recommends using a pickle to save the object & state, though the question still remains why this kind of functionality is not built into keras.

\n", + "system": "" + }, + { + "instruction": "How training LSTM model for sequences items ?", + "input": "", + "output": "

I am not an expert, but I am not sure about the batch size. As far as I know, a Keras LSTM resets its state after each batch. So when your batch size is 1, the LSTM resets its memory, and you forget what user 1 did at timestep 1 when processing timestep 2. The maximum number of purchases can be your batch size. You can use masking to avoid the effect of padding.

\n", + "system": "" + }, + { + "instruction": "Accessing gradient values of keras model outputs with respect to inputs", + "input": "", + "output": "

As you mention, Theano and TF are symbolic, so doing a derivative should be quite easy:

\n\n
import theano\nimport theano.tensor as T\nimport keras.backend as K\nJ = T.grad(model.output[0, 0], model.input)\njacobian = K.function([model.input, K.learning_phase()], [J])\n
\n\n

First you compute the symbolic gradient (T.grad) of the output given the input, then you build a function that you can call and does the computation. Note that sometimes this is not that trivial due to shape problems, as you get one derivative for each element in the input.

\n", + "system": "" + }, + { + "instruction": "How do I take the squared difference of two Keras tensors?", + "input": "", + "output": "

As dhinckley mentioned, you should use a Lambda layer. But I would suggest defining your custom function first. This way the code will be a little bit clearer:

\n\n
import keras.backend as K\nfrom keras.layers import Lambda\n\ndef squared_differences(pair_of_tensors):\n    x, y = pair_of_tensors\n    return K.square(x - y)\n\nsquare_diff = Lambda(squared_differences)([r1, r2])\n
\n", + "system": "" + }, + { + "instruction": "Frozen model from Keras doesn't predict after restoration", + "input": "", + "output": "
all_saver = tf.train.Saver()\nsess.run(tf.global_variables_initializer())\nprint save_path + '/model_predeploy.chkp'\nall_saver.save(sess, save_path + '/model_predeploy.chkp', meta_graph_suffix='meta', write_meta_graph=True)\ntf.train.write_graph(sess.graph_def, save_path, \"model.pb\", False)\n
\n\n

In line 2, you re-initialize all variables from scratch (not only the uninitialized ones). This means your trained model is gone at that point, and you save a model that is just random/constant weights (depending on your initializers).

\n\n

Demo script:

\n\n
from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nimport numpy as np\n\nvar = tf.get_variable('demo', dtype=tf.float32, shape=[], \n                      initializer=tf.zeros_initializer())\n\nsess = tf.Session()\n\nsess.run(tf.assign(var, 42));\n\nprint(var.eval(session=sess))\n
\n\n

This prints 42.

\n\n
sess.run(tf.global_variables_initializer())\n\nprint(var.eval(session=sess))\n
\n\n

This prints 0, as the variable has been re-initialized to 0.

\n\n

So, initialize your variables before you train your model, and don't re-initialize them before writing them out.

\n", + "system": "" + }, + { + "instruction": "Variationnal auto-encoder: implementing warm-up in Keras", + "input": "", + "output": "

This will not work. I tested it to figure out exactly why it was not working. The key thing to remember is that Keras creates a static graph once at the beginning of training.

\n\n

Therefore, the vae_loss function is called only once to create the loss tensor, which means that the reference to the beta variable will remain the same every time the loss is calculated. However, your warmup function reassigns beta to a new K.variable. Thus, the beta that is used for calculating loss is a different beta than the one that gets updated, and the value will always be 0.

\n\n

It is an easy fix. Just change this line in your warmup callback:

\n\n

beta = K.variable(value=value)

\n\n

to:

\n\n

K.set_value(beta, value)

\n\n

This way the actual value in beta gets updated \"in place\" rather than creating a new variable, and the loss will be properly re-calculated.

\n", + "system": "" + }, + { + "instruction": "Keras training only specific outputs", + "input": "", + "output": "

You have to create 2 different models like this

\n\n
model1 = Model(input=input, output=[out1,out2])\nmodel2 = Model(input=input, output=[out1,out2,out3])\n
\n\n

You compile both but only fit the first. They will share the layers, so model2, even if it wasn't trained, will have the weights learned from model1. But if there is a layer in out3 which is trainable but not in the flow between input and out1 and out2 of the graph, that layer won't be trained and will stay with its initial values.

\n\n

Does that help? :-)

\n", + "system": "" + }, + { + "instruction": "How do you compute accuracy in a regression model, after rounding predictions to classes, in keras?", + "input": "", + "output": "

I use rounded accuracy like this:

\n
from keras import backend as K\n\ndef soft_acc(y_true, y_pred):\n    return K.mean(K.equal(K.round(y_true), K.round(y_pred)))\n\nmodel.compile(..., metrics=[soft_acc])\n
\n", + "system": "" + }, + { + "instruction": "Keras pretrain CNN with TimeDistributed", + "input": "", + "output": "

My solution is a simple and clean one.

\n\n

Considering you are using a pre-trained network from keras, you can replace it with your own pre-trained network too.

\n\n

Here's a simple solution:

\n\n
model_vgg=keras.applications.VGG16(input_shape=(256, 256, 3),\n                                           include_top=False,\n                                           weights='imagenet')\nmodel_vgg.trainable = False\nmodel_vgg.summary()\n
\n\n

Use this if you want to work from an intermediate layer; otherwise replace 'block2_pool' with the last layer's name:

\n\n
intermediate_model= Model(inputs=model_vgg.input, outputs=model_vgg.get_layer('block2_pool').output)\nintermediate_model.summary()\n
\n\n

Finally, wrap it in a TimeDistributed layer:

\n\n
input_tensor = Input(shape=(time_steps,height, width, channels))\ntimeDistributed_layer = TimeDistributed( intermediate_model )(input_tensor)\n
\n\n

Now you can simply do:

\n\n
my_time_model = Model( inputs = input_tensor, outputs = timeDistributed_layer )\n
\n", + "system": "" + }, + { + "instruction": "Multi scale CNN Network Python Keras", + "input": "", + "output": "

The number of nodes in lower_model1 and lower_model2 after flattening is\n32 * 112 * 112 = 401 408. Followed by a fully connected layer with 64 nodes, this gives 401 408 * 2 * 64 = 51 380 224 parameters, which is quite a big number. I would suggest reconsidering the size of the images fed to your \"lower\" models. Do you really need 224 x 224 there? Take a closer look at the diagram that you attached. There you can see that the first step in the second and the third model is subsampling: 8:1 and 4:1. This is the step that you have missed in your implementation.

\n\n

Your main_model is fine because you have enough max pooling layers there that reduce the number of parameters.

\n", + "system": "" + }, + { + "instruction": "Error when checking model target: expected dense_24 to have shape...but got array with shape... in Keras", + "input": "", + "output": "

I'm not sure if the answer you expect is this, but...

\n

First: I agree - the error message seems weird, it should talk about incompatibility between dense_24 and target array.

\n

Now, to solve your problem, you should either reshape your target array or create a different Dense at the end to match your array.

\n

About your target array: for a classification into two classes, it should be shaped as (46000, 2), one one-hot column per class:

\n\n

What I think is the easiest solution: keep the target shaped (46000, 1) and use a final Dense(1) with a 'sigmoid' activation and binary crossentropy.

\n\n

Why? Because your target data is shaped like (46000,1), meaning you have only one number for two classes. 0 is one class, 1 is another.

\n", + "system": "" + }, + { + "instruction": "Keras: reshape to connect lstm and conv", + "input": "", + "output": "

According to the Convolution2D definition, your input must be 4-dimensional with dimensions (samples, channels, rows, cols). This is the direct reason why you are getting an error.

\n\n

To resolve this you must use the TimeDistributed wrapper. It allows you to apply static (not recurrent) layers across time.

\n", + "system": "" + }, + { + "instruction": "How to implement Weighted Binary CrossEntropy on theano?", + "input": "", + "output": "

Thanks to the developers in the Lasagne group, I fixed this by constructing my own loss function.

\n\n
loss_or_grads = -(customized_rate * target_var * tensor.log(prediction) + (1.0 - target_var) * tensor.log(1.0 - prediction))\n\nloss_or_grads = loss_or_grads.mean()\n
\n", + "system": "" + }, + { + "instruction": "Keras Convolution2D Input: Error when checking model input: expected convolution2d_input_1 to have shape", + "input": "", + "output": "

The problem is due to the wrong size of the test images. For me,

\n\n
train_datagen.flow_from_directory(\n        'C:\\\\Users\\\\...\\\\train',  # this is the target directory\n        target_size=(150, 150),  # all images will be resized to 150x150\n        batch_size=32,\n        class_mode='binary')\n
\n\n

was not working properly. So I used a MATLAB command to resize all the test images, and it worked fine.

\n", + "system": "" + }, + { + "instruction": "Deconvolution2D layer in keras", + "input": "", + "output": "

Short answer: you need to add subsample=(2,2) to Deconvolution2D if you wish the output to truly be twice as large as the input.

\n\n
\n\n

Longer answer: Deconvolution2D is severely undocumented and you have to go through its code to understand how to use it.

\n\n

First, you must understand how the deconvolution layer works (skip this if you already know all the details). Deconvolution, unlike what its name suggests, is simply applying the back-propagation (gradient calculation method) of a standard convolution layer on the input to the deconvolution layer. The \"kernel size\" of the deconvolution layer is actually the kernel size of the virtual convolution layer of the backprop step mentioned above. Given the size of a convolution kernel and its stride, it is straightforward to compute the output shape of the convolution layer (assuming no padding it's (input - kernel) // stride + 1), but the reverse is not true. In fact, there can be more than one possible input shape that matches a given output shape of the convolution layer (this is because integer division isn't invertible). This means that for a deconvolution layer, the output shape cannot be directly determined simply from the input shape (which is implicitly known), kernel size and stride: this is why we need to know the output shape when we initialize the layer. Of course, because of the way the deconvolution layer is defined, for some input shapes you'll get holes in its output which are undefined, and if we forbid these cases then we actually can deduce the output shape.

\n\n

Back to Keras and how the above is implemented. Confusingly, the output_shape parameter is actually not used for determining the output shape of the layer, and instead they try to deduce it from the input, the kernel size and the stride, while assuming only valid output_shapes are supplied (though it's not checked in the code to be the case). The output_shape itself is only used as input to the backprop step. Thus, you must also specify the stride parameter (subsample in Keras) in order to get the desired result (which could've been determined by Keras from the given input shape, output shape and kernel size).

\n", + "system": "" + }, + { + "instruction": "How can implement subsample like keras in tensorflow?", + "input": "", + "output": "

subsample in Keras is the same as strides in tensorflow. You can use the strides argument in the tensorflow tf.nn.conv2d() function to implement this.

\n\n

Subsample / strides tells you how far to move the filter in each dimension as you perform the convolution. For instance, with a stride of 1 in each direction, you shift the filter by one position at each step and produce an output of the same size as the input (except for border padding effects). If strides were set to 2, the dimensions of the result would be half those of the original image.

\n", + "system": "" + }, + { + "instruction": "Stream Output of Predictions in Keras", + "input": "", + "output": "

You can enable statefulness in your LSTM layers by setting stateful=True. This changes the behavior of the layer to always use the state of the previous invocation of the layer instead of resetting it for each layer.call(x).

\n\n

For example an LSTM layer with 32 units with batch size 1, sequence length 64 and feature length 10:

\n\n
LSTM(32, stateful=True, batch_input_shape=(1,64,10))\n
\n\n

With this successive calls of predict will use the previous states.

\n", + "system": "" + }, + { + "instruction": "What do model.predict() and model.fit() do?", + "input": "", + "output": "

First of all, it surprises me that you could not find the documentation, but I guess you just had bad luck while searching.

\n\n

The documentation states for model.fit:

\n\n
\n

fit(self, x, y, batch_size=32, nb_epoch=10, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None)

\n \n \n
\n\n

The batch_size parameter in case of model.predict is just the number of samples used for each prediction step. So calling model.predict one time consumes batch_size number of data samples. This helps for devices that can process large matrices quickly (such as GPUs).

\n", + "system": "" + }, + { + "instruction": "Run model in reverse in Keras", + "input": "", + "output": "

There is no such thing as \"running a neural net in reverse\", as a generic neural net architecture does not define any non-forward data processing. There is, however, a subclass of models which do - the generative models, which are not a part of keras right now. The only thing you can do is to create a network which somehow \"simulates\" the generative process you are interested in. But this is a particular, model-specific method, and has no general solution.

\n", + "system": "" + }, + { + "instruction": "How to monitor tensor values in Theano/Keras?", + "input": "", + "output": "

I use the solution described in the Keras FAQ:

\n\n

http://keras.io/getting-started/faq/#how-can-i-visualize-the-output-of-an-intermediate-layer

\n\n

In detail:

\n\n
from keras import backend as K\n\nintermediate_tensor_function = K.function([model.layers[0].input],[model.layers[layer_of_interest].output])\nintermediate_tensor = intermediate_tensor_function([thisInput])[0]\n
\n\n

yields:

\n\n
array([[ 3.,  17.]], dtype=float32)\n
\n\n

However I'd like to use the functional API but I can't seem to get the actual tensor, only the symbolic representation. For example:

\n\n
model.layers[1].output\n
\n\n

yields:

\n\n
<tf.Tensor 'add:0' shape=(?, 2) dtype=float32>\n
\n\n

I'm missing something about the interaction of Keras and Tensorflow here but I'm not sure what. Any insight much appreciated.

\n", + "system": "" + }, + { + "instruction": "why Tensorflow-gpu is still using cpu", + "input": "", + "output": "

It is using the GPU, as you can see in the logs.\nThe problem is that a lot of things cannot be done on the GPU, and as long as your data is small and your complexity is low, you will end up with low GPU usage.

\n\n

Here is some more detailed explanation.

\n", + "system": "" + }, + { + "instruction": "loading keras model issues warning: skipping variable loading for optimizer 'Adam'", + "input": "", + "output": "

Check that you didn't load a model that was saved before fit had ever been called on it.

\n", + "system": "" + }, + { + "instruction": "The following argument(s) are not supported with the native Keras format: ['options']", + "input": "", + "output": "

As I mentioned in the comments, there seems to be a weird behaviour related to keras saving and also versioning of TF/Keras. I could replicate your error when running TF/Keras with version 2.13 (newest right now) on colab. Standard install on colab is 2.12, where the error doesn't come up.
\nSo one solution would be to downgrade TF/Keras to 2.12.x, or change

\n
keras.callbacks.ModelCheckpoint(\n        filepath="convnet_from_scratch.keras",\n        ..)\n
\n

to

\n
keras.callbacks.ModelCheckpoint(\n        filepath="convnet_from_scratch.x",\n        ..)\n
\n

where x stands for whatever you fancy (NOT "keras") to not save in the .keras format.

\n", + "system": "" + }, + { + "instruction": "Forecast future values with LSTM in Python", + "input": "", + "output": "

You could train your model to predict a future sequence (e.g. the next 30 days) instead of predicting the next value (the next day) as it is currently the case.

\n

In order to do that, you need to define the outputs as y[t: t + H] (instead of y[t] as in the current code) where y is the time series and H is the length of the forecast period (i.e. the number of days ahead that you want to forecast). You also need to set the number of outputs of the last layer equal to H (instead of equal to 1 as in the current code).

\n

You can still define the inputs as y[t - T: t] where T is the length of the lookback period (or number of timesteps), and therefore the model's input shape is still (T, 1). The lookback period T is usually longer than the forecast period H (i.e. T > H) and it's often set equal to a multiple of H (i.e. T = m * H where m > 1 is an integer).

\n

\"enter

\n
import numpy as np\nimport pandas as pd\nimport yfinance as yf\nimport tensorflow as tf\nfrom tensorflow.keras.layers import Dense, LSTM\nfrom tensorflow.keras.models import Sequential\nfrom sklearn.preprocessing import MinMaxScaler\npd.options.mode.chained_assignment = None\ntf.random.set_seed(0)\n\n# download the data\ndf = yf.download(tickers=['AAPL'], period='1y')\ny = df['Close'].fillna(method='ffill')\ny = y.values.reshape(-1, 1)\n\n# scale the data\nscaler = MinMaxScaler(feature_range=(0, 1))\nscaler = scaler.fit(y)\ny = scaler.transform(y)\n\n# generate the input and output sequences\nn_lookback = 60  # length of input sequences (lookback period)\nn_forecast = 30  # length of output sequences (forecast period)\n\nX = []\nY = []\n\nfor i in range(n_lookback, len(y) - n_forecast + 1):\n    X.append(y[i - n_lookback: i])\n    Y.append(y[i: i + n_forecast])\n\nX = np.array(X)\nY = np.array(Y)\n\n# fit the model\nmodel = Sequential()\nmodel.add(LSTM(units=50, return_sequences=True, input_shape=(n_lookback, 1)))\nmodel.add(LSTM(units=50))\nmodel.add(Dense(n_forecast))\n\nmodel.compile(loss='mean_squared_error', optimizer='adam')\nmodel.fit(X, Y, epochs=100, batch_size=32, verbose=0)\n\n# generate the forecasts\nX_ = y[- n_lookback:]  # last available input sequence\nX_ = X_.reshape(1, n_lookback, 1)\n\nY_ = model.predict(X_).reshape(-1, 1)\nY_ = scaler.inverse_transform(Y_)\n\n# organize the results in a data frame\ndf_past = df[['Close']].reset_index()\ndf_past.rename(columns={'index': 'Date', 'Close': 'Actual'}, inplace=True)\ndf_past['Date'] = pd.to_datetime(df_past['Date'])\ndf_past['Forecast'] = np.nan\ndf_past['Forecast'].iloc[-1] = df_past['Actual'].iloc[-1]\n\ndf_future = pd.DataFrame(columns=['Date', 'Actual', 'Forecast'])\ndf_future['Date'] = pd.date_range(start=df_past['Date'].iloc[-1] + pd.Timedelta(days=1), periods=n_forecast)\ndf_future['Forecast'] = Y_.flatten()\ndf_future['Actual'] = np.nan\n\nresults = 
pd.concat([df_past, df_future]).set_index('Date')  # DataFrame.append was removed in pandas 2.0\n\n# plot the results\nresults.plot(title='AAPL')\n</code></pre>
\n

\"enter

\n

See this answer for a different approach.

\n", + "system": "" + }, + { + "instruction": "Selecting loss and metrics for Tensorflow model", + "input": "", + "output": "

About the data set: oxford_flowers102

\n

The dataset is divided into a training set, a validation set, and a test set. The training set and validation set each consist of 10 images per class (totaling 1020 images each). The test set consists of the remaining 6149 images (minimum 20 per class).

\n
'test'        6,149\n'train'       1,020\n'validation'  1,020\n
\n

If we check, we'll see

\n
import tensorflow_datasets as tfds\ntfds.disable_progress_bar()\n\ndata, ds_info = tfds.load('oxford_flowers102', \n                          with_info=True, as_supervised=True)\ntrain_ds, valid_ds, test_ds = data['train'], data['validation'], data['test']\n\nfor i, data in enumerate(train_ds.take(3)):\n  print(i+1, data[0].shape, data[1])\n1 (500, 667, 3) tf.Tensor(72, shape=(), dtype=int64)\n2 (500, 666, 3) tf.Tensor(84, shape=(), dtype=int64)\n3 (670, 500, 3) tf.Tensor(70, shape=(), dtype=int64)\n
\n
ds_info.features["label"].num_classes\n102\n
\n

So, it has 102 categories or classes and the target comes with an integer with different shapes input.

\n

Clarification

\n

First, if you keep this integer target or label, you should use sparse_categorical_accuracy for accuracy and sparse_categorical_crossentropy for loss function. But if you transform your integer label to a one-hot encoded vector, then you should use categorical_accuracy for accuracy, and categorical_crossentropy for loss function. As these data set have integer labels, you can choose sparse_categorical or you can transform the label to one-hot in order to use categorical.

\n

Second, if you set outputs = keras.layers.Dense(102, activation='softmax')(x) to the last layer, you will get probabilities score. But if you set outputs = keras.layers.Dense(102)(x), then you will get logits. So, if you set activations='softmax', then you should not use from_logit = True. For example in your above code you should do as follows (here's some theory for you):

\n
...\n(a)\n# Use softmax activation (no logits output)\noutputs = keras.layers.Dense(102, activation='softmax')(x)\n...\nmodel.compile(\n    optimizer=keras.optimizers.Adam(),\n    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=False),\n    metrics=[keras.metrics.SparseCategoricalAccuracy()],\n)\n\nor,\n\n(b)\n# no activation, output will be logits\noutputs = keras.layers.Dense(102)(x)\n...\nmodel.compile(\n    optimizer=keras.optimizers.Adam(),\n    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n    metrics=[keras.metrics.SparseCategoricalAccuracy()],\n)\n</code></pre>
\n

Third, keras uses string identifier such as metrics=['acc'] , optimizer='adam'. But in your case, you need to be a bit more specific as you mention loss function specific. So, instead of keras.metrics.Accuracy(), you should choose keras.metrics.SparseCategoricalAccuracy() if you target are integer or keras.metrics.CategoricalAccuracy() if your target are one-hot encoded vector.

\n

Code Examples

\n

Here is an end-to-end example. Note, I will transform integer labels to a one-hot encoded vector (right now, it's a matter of preference to me). Also, I want probabilities (not logits) from the last layer which means from_logits = False. And for all of these, I need to choose the following parameters in my training:

\n
# use softmax to get probabilities \noutputs = keras.layers.Dense(102, \n                   activation='softmax')(x)\n\n# so no logits, set it false (FYI, by default it already false)\nloss = keras.losses.CategoricalCrossentropy(from_logits=False),\n\n# specify the metrics properly \nmetrics = keras.metrics.CategoricalAccuracy(),\n
\n

Let's complete the whole code.

\n
import tensorflow_datasets as tfds\ntfds.disable_progress_bar()\n\ndata, ds_info = tfds.load('oxford_flowers102', \n                         with_info=True, as_supervised=True)\ntrain_ds, valid_ds, test_ds = data['train'], data['validation'], data['test']\n\n\nNUM_CLASSES = ds_info.features["label"].num_classes\ntrain_size =  len(data['train'])\n\nbatch_size = 64\nimg_size = 120 \n
\n

Preprocess and Augmentation

\n
import tensorflow as tf \n\n# pre-process functions \ndef normalize_resize(image, label):\n    image = tf.cast(image, tf.float32)\n    image = tf.divide(image, 255)\n    image = tf.image.resize(image, (img_size, img_size))\n    label = tf.one_hot(label , depth=NUM_CLASSES) # int to one-hot\n    return image, label\n\n# augmentation \ndef augment(image, label):\n    image = tf.image.random_flip_left_right(image)\n    return image, label \n\n\ntrain = train_ds.map(normalize_resize).cache().map(augment).shuffle(100).\\\n                          batch(batch_size).repeat()\nvalid = valid_ds.map(normalize_resize).cache().batch(batch_size)\ntest = test_ds.map(normalize_resize).cache().batch(batch_size)\n
\n

Model

\n
from tensorflow import keras \n\nbase_model = keras.applications.Xception(\n    weights='imagenet',  \n    input_shape=(img_size, img_size, 3),\n    include_top=False)  \n\nbase_model.trainable = False\ninputs = keras.Input(shape=(img_size, img_size, 3))\nx = base_model(inputs, training=False)\nx = keras.layers.GlobalAveragePooling2D()(x)\noutputs = keras.layers.Dense(NUM_CLASSES, activation='softmax')(x)\nmodel = keras.Model(inputs, outputs)\n
\n

Okay, additionally, here I like to use two metrics to compute top-1 and top-3 accuracy.

\n
model.compile(optimizer=keras.optimizers.Adam(),\n              loss=keras.losses.CategoricalCrossentropy(),\n              metrics=[\n                       keras.metrics.TopKCategoricalAccuracy(k=3, name='acc_top3'),\n                       keras.metrics.TopKCategoricalAccuracy(k=1, name='acc_top1')\n                    ])\nmodel.fit(train, steps_per_epoch=train_size // batch_size,\n          epochs=20, validation_data=valid, verbose=2)\n
\n
...\nEpoch 19/20\n15/15 - 2s - loss: 0.2808 - acc_top3: 0.9979 - acc_top1: 0.9917 - \nval_loss: 1.5025 - val_acc_top3: 0.8147 - val_acc_top1: 0.6186\n\nEpoch 20/20\n15/15 - 2s - loss: 0.2743 - acc_top3: 0.9990 - acc_top1: 0.9885 - \nval_loss: 1.4948 - val_acc_top3: 0.8147 - val_acc_top1: 0.6255\n
\n

Evaluate

\n
# evaluate on test set \nmodel.evaluate(test, verbose=2)\n97/97 - 18s - loss: 1.6482 - acc_top3: 0.7733 - acc_top1: 0.5994\n[1.648208498954773, 0.7732964754104614, 0.5994470715522766]\n
\n", + "system": "" + }, + { + "instruction": "Cannot use keras models on Mac M1 with BigSur", + "input": "", + "output": "

The first two are nothing to worry about.

\n

The third one is a problem. You have installed an improper version of TensorFlow. Use one that supports the Mac M1 chip.

\n

Run the following bash script to download and install TensorFlow.

\n
#!/bin/bash\n\nset -e\n\nVERSION=0.1alpha3\nINSTALLER_PACKAGE=tensorflow_macos-$VERSION.tar.gz\nINSTALLER_PATH=https://github.com/apple/tensorflow_macos/releases/download/v$VERSION/$INSTALLER_PACKAGE\nINSTALLER_SCRIPT=install_venv.sh\n\necho\n\n# Check to make sure we're good to go.\nif [[ $(uname) != Darwin ]] || [[ $(sw_vers -productName) != macOS ]] || [[ $(sw_vers -productVersion) != "11."* ]] ; then \n  echo "ERROR: TensorFlow with ML Compute acceleration is only available on macOS 11.0 and later." \n  exit 1\nfi\n\n# This \necho "Installation script for pre-release tensorflow_macos $VERSION.  Please visit https://github.com/apple/tensorflow_macos "\necho "for instructions and license information."   \necho\necho "This script will download tensorflow_macos $VERSION and needed binary dependencies, then install them into a new "\necho "or existing Python 3.8 virtual environment."\n\n# Make sure the user knows what's going on.  \nread -p 'Continue [y/N]? '    \n\nif [[ ! $REPLY =~ ^[Yy]$ ]]\nthen\nexit 1\nfi\necho\n\necho "Downloading installer."\ntmp_dir=$(mktemp -d)\n\npushd $tmp_dir\n\ncurl -LO $INSTALLER_PATH \n\necho "Extracting installer."\ntar xf $INSTALLER_PACKAGE\n\ncd tensorflow_macos \n\nfunction graceful_error () { \n  echo \n  echo "Error running installation script with default options.  Please fix the above errors and proceed by running "\n  echo \n  echo "  $PWD/$INSTALLER_SCRIPT --prompt"\n  echo \n  echo\n  exit 1\n}\n\nbash ./$INSTALLER_SCRIPT --prompt || graceful_error \n\npopd\nrm -rf $tmp_dir\n\n
\n

ref: https://github.com/apple/tensorflow_macos

\n", + "system": "" + }, + { + "instruction": "Tensorflow Keras error: Unknown image file format. One of JPEG, PNG, GIF, BMP required", + "input": "", + "output": "
\n

Actually, a file might have the extension name jpg but be in, say, a tiff format. To take it a step further you can add some code ...

\n
\n

If you want to check for the type of an image, not its extension name, then try this modified version of the code above:

\n
import os\nimport cv2\nimport imghdr\n\ndef check_images( s_dir, ext_list):\n    bad_images=[]\n    bad_ext=[]\n    s_list= os.listdir(s_dir)\n    for klass in s_list:\n        klass_path=os.path.join (s_dir, klass)\n        print ('processing class directory ', klass)\n        if os.path.isdir(klass_path):\n            file_list=os.listdir(klass_path)\n            for f in file_list:\n                f_path=os.path.join (klass_path,f)\n                if os.path.isfile(f_path):\n                    tip = imghdr.what(f_path) # detect the real image type from the file contents\n                    if ext_list.count(tip) == 0:\n                        bad_images.append(f_path)\n                        bad_ext.append(tip)\n                    try:\n                        img=cv2.imread(f_path)\n                        shape=img.shape\n                    except:\n                        print('file ', f_path, ' is not a valid image file')\n                        bad_images.append(f_path)\n                else:\n                    print('*** fatal error, you have a sub directory ', f, ' in class directory ', klass)\n        else:\n            print ('*** WARNING*** you have files in ', s_dir, ' it should only contain sub directories')\n    return bad_images, bad_ext\n\nsource_dir =r'c:\\temp\\people\\storage'\ngood_exts=['jpg', 'png', 'jpeg', 'gif', 'bmp' ] # list of acceptable image types\nbad_file_list, bad_ext_list=check_images(source_dir, good_exts)\nif len(bad_file_list) !=0:\n    print('improper image files are listed below')\n    for i in range (len(bad_file_list)):\n        print (bad_file_list[i])\nelse:\n    print(' no improper image files were found')\n
\n

Python has many modules in its standard library, and one that helps here is imghdr. It lets you identify the type of image contained in a file, byte stream or path-like object. imghdr can recognize the following image types: rgb, gif, pbm, pgm, ppm, tiff, rast, xbm, jpeg / jpg, bmp, png, webp and exr.
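The same idea can be sketched without any dependency: image formats are identified by magic bytes at the start of the file, which is exactly what imghdr inspects (sniff_image_type below is a hypothetical helper for illustration, not part of imghdr):

```python
def sniff_image_type(first_bytes):
    """Identify an image format from its leading magic bytes."""
    if first_bytes.startswith(b'\x89PNG\r\n\x1a\n'):
        return 'png'
    if first_bytes[:3] == b'\xff\xd8\xff':
        return 'jpeg'
    if first_bytes[:6] in (b'GIF87a', b'GIF89a'):
        return 'gif'
    if first_bytes[:2] == b'BM':
        return 'bmp'
    return None  # unknown format: treat the file as improper

print(sniff_image_type(b'\x89PNG\r\n\x1a\n' + b'\x00' * 8))  # -> png
print(sniff_image_type(b'not an image'))                     # -> None
```

This is why renaming a tiff file to .jpg does not fool the check: only the bytes matter.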

\n", + "system": "" + }, + { + "instruction": "Stop Tensorflow from printing to the console", + "input": "", + "output": "

You can disable debugging logs with os.environ.

\n
import os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' \nimport tensorflow as tf\n
\n

Possible values are as follows:

\n
0 = all messages are logged (default behavior)\n1 = INFO messages are not printed\n2 = INFO and WARNING messages are not printed\n3 = INFO, WARNING, and ERROR messages are not printed\n
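Equivalently, the variable can be set from the shell before the interpreter starts, which guarantees it is in effect by the time TensorFlow is imported:

```shell
# Set the level for the current shell session; any Python process
# started afterwards inherits it before TensorFlow is imported.
export TF_CPP_MIN_LOG_LEVEL=2
echo "$TF_CPP_MIN_LOG_LEVEL"
```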
\n", + "system": "" + }, + { + "instruction": "tensorflow evalutaion and earlystopping gives infinity overflow error", + "input": "", + "output": "

Well, it's hard to tell exactly, since I can't run the code without an implementation of some_get_data_function(), but I recently got the same error when I mistakenly passed an EMPTY array to model.evaluate. Given that @meTchaikovsky's comment solved your issue, it was certainly due to messed-up input arrays.

\n", + "system": "" + }, + { + "instruction": "Unable to load Keras model in Keras 2.4.3 (with Tensorflow 2.3.0) that was saved in Keras 2.1.0 (with Tensorflow 1.3.0)", + "input": "", + "output": "

You cannot load the model this way, because keras.models.load_model will load the configuration that Keras has defined, not something that has been self-customized.

\n

To overcome this, you should re-declare the model architecture and load the weights into it instead:

\n
model = YourModelDeclaration()\nmodel.load_weights("checkpoint/h5file")\n
\n

I had the same problem when I used a self-customized BatchNormalization, so I am pretty sure this is the only way to load it.

\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'tensorflow.python.keras.engine.base_layer_v1", + "input": "", + "output": "

I came across a similar error some time back and resolved it by importing all modules from tensorflow.

\n

Please refer to the working code below:

\n
from tensorflow.keras.layers import MaxPooling2D,Conv2D,Input,Add,MaxPool2D,Flatten,AveragePooling2D,Dense,BatchNormalization,ZeroPadding2D,Activation,Concatenate,UpSampling2D\nfrom tensorflow.keras.models import Model\n\n\ndef Dense_Layer(x,k):\n    x = BatchNormalization(axis = 3)(x)\n    x = Activation('relu')(x)\n    x = Conv2D(4*k,(1,1),strides = (1,1))(x)\n    x = BatchNormalization(axis = 3)(x)\n    x = Activation('relu')(x)\n    x = Conv2D(k,(1,1),strides = (1,1))(x)\n    return x\n\ndef Dense_Block(x,k):\n    \n    x1 = Dense_Layer(x,k)\n    x1_add = Concatenate()([x1,x])\n    x2 = Dense_Layer(x1_add,k)\n    x2_add = Concatenate()([x1,x2])\n    \n    return x2_add\ndef Dilated_Spatial_Pyramid_Pooling(x,k):\n    x = BatchNormalization(axis = 3)(x)\n    d1 = Conv2D(k, (1,1), dilation_rate = 2)(x)\n    d2 = Conv2D(k, (1,1), dilation_rate = 4)(d1)\n    d3 = Conv2D(k, (1,1), dilation_rate = 8)(d2)\n    d4 = Conv2D(k, (1,1), dilation_rate = 16)(d3)\n    c = Concatenate()([d1,d2,d3,d4])\n    return c\n\n    \n        \n    \ndef down_block(x,filters, kernel_size = (3, 3), padding = "same",strides =1 ):\n    c = Dense_Block(x,filters)\n    c = Dense_Block(c,filters)\n    p = MaxPool2D((2,2),(2,2))(c)\n    return c,p\ndef up_block(x,skip,filters, kernel_size = (3, 3), padding = "same",strides =1 ):\n    us = UpSampling2D((2,2))(x)\n    concat = Concatenate()([us,skip])\n    c = Dense_Block(concat,filters)\n    c = Dense_Block(c,filters)\n    return c\ndef bottleneck(x,filters, kernel_size = (3, 3), padding = "same",strides =1 ):\n    c = Dense_Block(x,filters)\n    c = Dense_Block(c,filters)\n    c = Dilated_Spatial_Pyramid_Pooling(c,filters)\n    return c\n\ndef UNet():\n    f = [32,64,128,256]\n    input = Input((128,128,1))\n    \n    \n    p0 = input\n    c1,p1 =  down_block(p0,f[0])\n    c2,p2 =  down_block(p1,f[1])\n    c3,p3 =  down_block(p2,f[2])\n\n    \n    bn = bottleneck(p3,f[3])\n    \n    u1 = up_block(bn,c3,f[2])\n    u2 = up_block(u1,c2,f[1])\n    
u3 = up_block(u2,c1,f[0])\n    \n    \n    outputs = Conv2D(1,(1,1),padding= "same",activation = "sigmoid")(u3)\n    model = Model(input,outputs)\n    return model\nmodel=UNet()\nmodel.summary()\n
\n", + "system": "" + }, + { + "instruction": ""Layer is not connected, no input to return" error while trying to get intermediate layer prediction using tensorflow custom callback", + "input": "", + "output": "

I also cannot get self.layers[0].input because of the same error, but maybe you can directly call a function defined on the Model, like this:

\n
class Model(tf.keras.Model):\n    def __init__(self, input_shape=None, name="cus_model", **kwargs):\n        super(Model, self).__init__(name=name, **kwargs)\n        if not input_shape:\n            input_shape = (10,)\n        self.dense1 = tf.keras.layers.Dense(input_shape=input_shape, units=32)\n        self.dev_dataset = np.ones((8,16))\n\n    def call(self, input_tensor):\n        return self.dense1(input_tensor)\n\n\nclass CustomCallback(tf.keras.callbacks.Callback):\n    def on_epoch_end(self, epoch, logs=None):\n        self.model.call(self.model.dev_dataset)\n\n\nX = np.ones((8,16))\ny = np.sum(X, axis=1)\n\nmodel = Model()\nmodel.compile(optimizer='adam',loss='mean_squared_error', metrics='accuracy')\nmodel.fit(X,y, epochs=1, callbacks=[CustomCallback()])\n
\n", + "system": "" + }, + { + "instruction": "How to reset Keras metrics?", + "input": "", + "output": "

Your reproducible example failed in several places for me, so I changed just a few things (I'm using TF 2.1). After getting it to run, I was able to get rid of the additional metric names by specifying metrics=[AUC(name='auc')]. Here's the full (fixed) reproducible example:

\n\n
import numpy as np\nimport tensorflow as tf\nimport tensorflow.keras as keras\nfrom tensorflow.keras.metrics import AUC\n\n\ndef dummy_network(input_shape):\n    model = keras.Sequential()\n    model.add(keras.layers.Dense(10,\n                                 input_shape=input_shape,\n                                 activation=tf.nn.relu,\n                                 kernel_initializer='he_normal',\n                                 kernel_regularizer=keras.regularizers.l2(l=1e-3)))\n\n    model.add(keras.layers.Flatten())\n    model.add(keras.layers.Dense(11, activation='softmax'))\n\n    model.compile(optimizer='adagrad',\n                  loss='binary_crossentropy',\n                  metrics=[AUC(name='auc')])\n    return model\n\n\ndef train():\n    CB_lr = tf.keras.callbacks.ReduceLROnPlateau(\n        monitor=\"val_auc\",\n        patience=3,\n        verbose=1,\n        mode=\"max\",\n        min_delta=0.0001,\n        min_lr=1e-6)\n\n    CB_es = tf.keras.callbacks.EarlyStopping(\n        monitor=\"val_auc\",\n        min_delta=0.00001,\n        verbose=1,\n        patience=10,\n        mode=\"max\",\n        restore_best_weights=True)\n    callbacks = [CB_lr, CB_es]\n    y = tf.keras.utils.to_categorical([np.random.randint(0, 11) for _ in range(1000)])\n    x = [np.ones((37, 12, 1)) for _ in range(1000)]\n    dummy_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size=100).repeat()\n    val_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size=100).repeat()\n    model = dummy_network(input_shape=((37, 12, 1)))\n    model.fit(dummy_dataset, validation_data=val_dataset, epochs=2,\n              steps_per_epoch=len(x) // 100,\n              validation_steps=len(x) // 100, callbacks=callbacks)\n\n\nfor i in range(3):\n    print(f'\\n\\n **** Loop {i} **** \\n\\n')\n    train()\n
\n\n
Train for 10 steps, validate for 10 steps\nEpoch 1/2\n 1/10 [==>...........................] - ETA: 6s - loss: 0.3426 - auc: 0.4530\n 7/10 [====================>.........] - ETA: 0s - loss: 0.3318 - auc: 0.4895\n10/10 [==============================] - 1s 117ms/step - loss: 0.3301 - \n                                         auc: 0.4893 - val_loss: 0.3222 - val_auc: 0.5085\n
\n\n

This happens because every loop, you created a new metric without a specified name by doing this: metrics=[AUC()]. The first iteration of the loop, TF automatically created a variable in the name space called auc, but at the second iteration of your loop, the name 'auc' was already taken, so TF named it auc_1 since you didn't specify a name. But, your callback was set to be based on auc, which is a metric that this model didn't have (it was the metric of the model from the previous loop). So, you either do name='auc' and overwrite the previous metric name, or define it outside of the loop, like this:

\n\n
import numpy as np\nimport tensorflow as tf\nimport tensorflow.keras as keras\nfrom tensorflow.keras.metrics import AUC\n\nauc = AUC()\n\ndef dummy_network(input_shape):\n    model = keras.Sequential()\n    model.add(keras.layers.Dense(10,\n                                 input_shape=input_shape,\n                                 activation=tf.nn.relu,\n                                 kernel_initializer='he_normal',\n                                 kernel_regularizer=keras.regularizers.l2(l=1e-3)))\n\n    model.add(keras.layers.Flatten())\n    model.add(keras.layers.Dense(11, activation='softmax'))\n    model.compile(optimizer='adagrad',\n                  loss='binary_crossentropy',\n                  metrics=[auc])\n    return model\n
\n\n

And don't worry about Keras resetting the metrics. It takes care of all that in the fit() method. If you want more flexibility and/or to do it yourself, I suggest using custom training loops and resetting the metric yourself:

\n\n
auc = tf.keras.metrics.AUC()\n\nauc.update_state(np.random.randint(0, 2, 10), np.random.randint(0, 2, 10)) \n\nprint(auc.result())\n\nauc.reset_states()\n\nprint(auc.result())\n
\n\n
Out[6]: <tf.Tensor: shape=(), dtype=float32, numpy=0.875>  # state updated\n
\n\n
Out[8]: <tf.Tensor: shape=(), dtype=float32, numpy=0.0>  # state reset\n
\n", + "system": "" + }, + { + "instruction": "What does 'INFO:tensorflow:Oracle triggered exit' mean with keras tuner?", + "input": "", + "output": "

You can solve this with:

\n
tuner = RandomSearch(\n    tune_rnn_model,\n    objective='val_accuracy',\n    seed=SEED, \n    overwrite=True,\n    max_trials=MAX_TRIALS,\n    directory='project')\n
\n

To begin a new search and ignore any prior results, we set overwrite=True. Alternatively, you can delete the directory folder by using this code:

\n
!rm -r <directory folder>\n
\n", + "system": "" + }, + { + "instruction": "RuntimeError: Unable to create link (name already exists) Keras", + "input": "", + "output": "

I think the problem is that both of your weight variables internally have the same name, which should not happen. You can give them distinct names with the name parameter of add_weight:

\n\n
self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros',\n                         trainable=True, name=\"alpha\")\n\nself.beta = self.add_weight(shape=(self.nout,), initializer='zeros',\n                         trainable=True, name=\"beta\")\n
\n\n

This should work around the problem.

\n", + "system": "" + }, + { + "instruction": "Does changing a token name in an image caption model affect performance?", + "input": "", + "output": "

I would go for option 2.

\n\n

When training a model from scratch, you initialize the model's weights randomly and then fit them to your problem. However, if, instead of using random weights, you start from weights that have already been trained for a similar problem, you may decrease the convergence time. This option is similar in spirit to transfer learning.

\n", + "system": "" + }, + { + "instruction": "How to efficiently assign to a slice of a tensor in TensorFlow", + "input": "", + "output": "

Here is another solution, based on a binary mask.

\n\n
\"\"\"Solution based on binary mask.\n- We just add this mask to inputs, instead of multiplying.\"\"\"\nclass AddToEven(tf.keras.Model):\n    def __init__(self):\n        super(AddToEven, self).__init__()\n\n    def build(self, inputshape):\n        self.built = True # Actually nothing to build here, because we don't have any variables or weights.\n\n    @tf.function\n    def call(self, inputs):\n        w = inputs.get_shape()[-1]\n\n        # 1-d mask generation for w-axis (activate even indices only)\n        m_w = tf.range(w)  # [0, 1, 2,... w-1]\n        m_w = ((m_w%2)==0) # [True, False, True ,...] with dtype=tf.bool\n\n        # Apply 1-d mask to 2-d input\n        m_w = tf.expand_dims(m_w, axis=0) # just extend dimension to be (1, W)\n        m_w = tf.cast(m_w, dtype=inputs.dtype) # in advance, we need to convert dtype\n\n        # Here, we just add this (1, W) mask to the (H, W) input; broadcasting does the rest.\n        outputs = inputs + m_w # This add operation is allowed in both TF and numpy!\n        return tf.reshape(outputs, inputs.get_shape())\n
\n\n

Sanity-check here.

\n\n
# sanity-check as model\nmodel = AddToEven()\nmodel.build(tf.TensorShape([None, None]))\nz = model(tf.zeros([2,4]))\nprint(z)\n
\n\n

Result (with TF 2.1) is like this.

\n\n
tf.Tensor(\n[[1. 0. 1. 0.]\n [1. 0. 1. 0.]], shape=(2, 4), dtype=float32)\n
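The core trick is ordinary broadcasting; here is the same mask addition in plain NumPy, as a conceptual sketch of what the TF layer does:

```python
import numpy as np

w = 4
m_w = (np.arange(w) % 2 == 0).astype(float)  # [1., 0., 1., 0.] - even indices active
x = np.zeros((2, w))

# the (w,) mask broadcasts over every row of the (2, w) input
print(x + m_w)
```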
\n\n

-------- Below is the previous answer --------

\n\n

You need to create the tf.Variable in the build() method.\nThis also allows a dynamic size via shape=(None,).\nIn the code below, I specified the input shape as (None, None).

\n\n
class AddToEven(tf.keras.Model):\n    def __init__(self):\n        super(AddToEven, self).__init__()\n\n    def build(self, inputshape):\n        self.v = tf.Variable(initial_value=tf.zeros((0,0)), shape=(None, None), trainable=False, dtype=tf.float32)\n\n    @tf.function\n    def call(self, inputs):\n        self.v.assign(inputs)\n        self.v[:, ::2].assign(self.v[:, ::2] + 1)\n        return self.v.value()\n
\n\n

I tested this code with TF 2.1.0 and TF1.15

\n\n
# test\nadd_to_even = AddToEven()\nz = add_to_even(tf.zeros((2,4)))\nprint(z)\n
\n\n

Result:

\n\n
tf.Tensor(\n[[1. 0. 1. 0.]\n [1. 0. 1. 0.]], shape=(2, 4), dtype=float32)\n
\n\n

P.S. There are some other ways, such as using tf.numpy_function(), or generating mask function.

\n", + "system": "" + }, + { + "instruction": "indices[201] = [0,8] is out of order. Many sparse ops require sorted indices.Use `tf.sparse.reorder` to create a correctly ordered copy", + "input": "", + "output": "

Mentioning the solution here (Answer Section) even though it is present in the Comments Section, for the benefit of the Community.

\n

The documentation for SparseTensor states

\n
By convention, indices should be sorted in row-major order (or equivalently\nlexicographic order on the tuples indices[i]). This is not enforced when\nSparseTensor objects are constructed, but most ops assume correct ordering. If\nthe ordering of sparse tensor st is wrong, a fixed version can be obtained by\ncalling tf.sparse.reorder(st).\n
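"Row-major order" here is just lexicographic order on the (row, col) index tuples; a tiny illustration of why an entry like [0,8] can be "out of order":

```python
# (0, 9) listed before (0, 8) violates lexicographic (row-major) order.
indices = [(0, 9), (0, 8), (1, 2)]

print(indices == sorted(indices))  # False: this ordering is what triggers the error
print(sorted(indices))             # [(0, 8), (0, 9), (1, 2)] - what a reorder produces
```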
\n

So, using either tf.sparse.reorder or SciPy's sort_indices() on the matrices X_train_enc, X_test_enc, Y_train_enc and Y_test_enc, before the line of code,

\n
model.fit(X_train_enc, Y_train_enc, validation_data = (X_test_enc, \nY_test_enc), epochs = 20, batch_size = 64, shuffle = True)\n
\n

will resolve the issue.

\n

For more information, please refer to the documentation of SparseTensor and tf.sparse.reorder.

\n

Hope this helps. Happy Learning!

\n", + "system": "" + }, + { + "instruction": "KerasTuner Custom Objective Function", + "input": "", + "output": "

Thanks to the GitHub page provided above by @Shiva I tried this to get the AUC for the validation data with the Keras tuner, and it worked. My model is an LSTM, and I have made the MyHyperModel class to be able to tune the batch_size as described here. You don't have to do this if you want to use a fixed batch_size. You can uncomment any of the other metrics and do the regularization based on them in the same way.

\n
# make X_train, y_train, X_valid, y_valid\nmask_value=-9999.99\nepochs=200\n\nclass MyHyperModel(kt.HyperModel):\n  def build(self, hp):\n    hp_lstm_units = hp.Int('units', min_value=16, max_value=128, step=16)\n    hp_dropout_rate = hp.Float('drop_out_rate', min_value=0, max_value=0.6)\n    hp_recurrent_dropout_rate = hp.Float('recurrent_dropout_rate', min_value=0, max_value=0.6)\n    hp_initial_learning_rate = hp.Float('initial_learning_rate', min_value=1e-3, max_value=1e-1, sampling='log')\n    hp_decay = hp.Int('decay', min_value=10, max_value=100, step=10)\n\n    # model\n    model = tf.keras.Sequential()\n\n    model.add(tf.keras.layers.Masking(mask_value=mask_value, input_shape = (X_train.shape[1], X_train.shape[2])))\n    model.add(tf.keras.layers.LSTM(hp_lstm_units,\n    dropout=hp_dropout_rate, recurrent_dropout=hp_recurrent_dropout_rate))\n    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))\n    model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=False), \n                  optimizer=keras.optimizers.SGD(learning_rate=hp_initial_learning_rate, decay=hp_decay), \n                  metrics=[\n                      # tf.keras.metrics.TruePositives(name='tp'),\n                      # tf.keras.metrics.FalsePositives(name='fp'),\n                      # tf.keras.metrics.TrueNegatives(name='tn'),\n                      # tf.keras.metrics.FalseNegatives(name='fn'),\n                      # tf.keras.metrics.BinaryAccuracy(name='accuracy'),\n                      # tf.keras.metrics.Precision(name='precision'),\n                      # tf.keras.metrics.Recall(name='recall'),\n                      tf.keras.metrics.AUC(name='auc'),\n                  ])\n    return model\n\n  # fit must be a method of the HyperModel (not nested inside build) and\n  # receives the built model plus the usual fit arguments from the tuner\n  def fit(self, hp, model, *args, **kwargs):\n    hp_batch_size = hp.Int('batch_size', min_value=8, max_value=128, step=8)\n    return model.fit(\n        *args,\n        batch_size=hp_batch_size,\n        **kwargs)\n\n\ntuner = kt.BayesianOptimization(\n    MyHyperModel(),\n    objective=kt.Objective('val_auc', direction='max'),\n    overwrite=True,\n    max_trials=100,\n    directory="MyDirectory",\n    project_name="MyProject",\n)\n\ntuner.search(X_train, y_train, epochs=200, validation_data=(X_valid, y_valid))\n
\n", + "system": "" + }, + { + "instruction": "Why does almost every Activation Function Saturate at Negative Input Values in a Neural Network", + "input": "", + "output": "
    \n
  1. True - ReLU is designed to result in zero for negative values. (It can be dangerous with big learning rates, bad initialization or with very few units - all neurons can get stuck at zero and the model freezes.)

  2. False - Sigmoid results in (near) zero only for "very negative" inputs, not for all negative inputs. If your inputs are between -3 and +3, you will see a well-behaved result between roughly 0.05 and 0.95.

  3. False - The same comment as for Sigmoid. If your inputs are between -2 and 2, you will see nice results between -1 and 1.
\n\n
\n\n

So, the saturation problem only exists for inputs whose absolute values are too big.
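To put numbers on "very negative", here is a quick computation of where sigmoid and tanh actually sit, using their standard definitions in plain Python:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

print(round(sigmoid(-3), 3))    # 0.047 - small, but the gradient is still alive
print(sigmoid(-10) < 1e-4)      # True  - effectively saturated
print(round(math.tanh(-2), 3))  # -0.964 - close to the -1 bound, not yet flat
```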

\n\n

By definition, the outputs are:

\n- relu: outputs go from 0 to +infinity\n- sigmoid: outputs stay between 0 and 1\n- tanh: outputs stay between -1 and 1\n\n

You might want to use a BatchNormalization layer before these activations to avoid having big values and avoid saturation.

\n\n
\n\n

For predicting negative outputs, tanh is the only one of the three that is capable of doing that.

\n\n

You could invent a negative sigmoid, though, it's pretty easy:

\n\n
def neg_sigmoid(x):\n    return -keras.backend.sigmoid(x)\n\n#use the layer:\nActivation(neg_sigmoid)\n
\n", + "system": "" + }, + { + "instruction": "ValueError: name for name_scope must be a string when Trying to Build up a Model Class in TF2.0", + "input": "", + "output": "

I had the same error due to missing square brackets. I guess you have a typo somewhere in your code. For me this code produced the same error

\n\n
model = tf.keras.Sequential(\n    feature_extractor,\n    layers.Dense(num_classes))\n
\n\n

Adding square brackets solved the issue

\n\n
model = tf.keras.Sequential( [\n    feature_extractor,\n    layers.Dense(num_classes) ]) \n
\n", + "system": "" + }, + { + "instruction": "Batch normalization layer for CNN-LSTM", + "input": "", + "output": "

Update: the LayerNormalization implementation I was using was inter-layer, not recurrent as in the original paper; results with the latter may prove superior.

\n\n
\n\n

BatchNormalization can work with LSTMs - the linked SO gives false advice; in fact, in my application of EEG classification, it dominated LayerNormalization. Now to your case:

\n\n\n\n

Below is an example template you can use as a starting point; I also recommend the following SO's for further reading: Regularizing RNNs, and Visualizing RNN gradients

\n\n
from keras.layers import Input, Dense, LSTM, Conv1D, Activation\nfrom keras.layers import AlphaDropout, BatchNormalization\nfrom keras.layers import GlobalAveragePooling1D, Reshape, multiply\nfrom keras.models import Model\nimport keras.backend as K\nimport numpy as np\n\n\ndef make_model(batch_shape):\n    ipt = Input(batch_shape=batch_shape)\n    x   = ConvBlock(ipt)\n    x   = LSTM(16, return_sequences=False, recurrent_dropout=0.2)(x)\n    # x   = BatchNormalization()(x)  # may or may not work well\n    out = Dense(1, activation='relu')\n\n    model = Model(ipt, out)\n    model.compile('nadam', 'mse')\n    return model\n\ndef make_data(batch_shape):  # toy data\n    return (np.random.randn(*batch_shape),\n            np.random.uniform(0, 2, (batch_shape[0], 1)))\n\nbatch_shape = (32, 21, 20)\nmodel = make_model(batch_shape)\nx, y  = make_data(batch_shape)\n\nmodel.train_on_batch(x, y)\n
\n\n

Functions used:

\n\n
def ConvBlock(_input):  # cleaner code\n    x   = Conv1D(filters=10, kernel_size=3, padding='causal', use_bias=False,\n                 kernel_initializer='lecun_normal')(_input)\n    x   = BatchNormalization(scale=False)(x)\n    x   = Activation('selu')(x)\n    x   = AlphaDropout(0.1)(x)\n    out = SqueezeExcite(x)    \n    return out\n\ndef SqueezeExcite(_input, r=4):  # r == \"reduction factor\"; see paper\n    filters = K.int_shape(_input)[-1]\n\n    se = GlobalAveragePooling1D()(_input)\n    se = Reshape((1, filters))(se)\n    se = Dense(filters//r, activation='relu',    use_bias=False,\n               kernel_initializer='he_normal')(se)\n    se = Dense(filters,    activation='sigmoid', use_bias=False, \n               kernel_initializer='he_normal')(se)\n    return multiply([_input, se])\n
\n\n
\n\n

Spatial Dropout: passing noise_shape = (batch_size, 1, channels) to Dropout has the effect below; see the Git gist for code:

\n\n

\n", + "system": "" + }, + { + "instruction": "In TensorFlow 2.0, how to feed TFRecord data to keras model?", + "input": "", + "output": "

I'm doing something similar in TF 2.0 with a couple of differences that may address your issues. Separate parsed_record into features and label:

\n
    feature, label = parsed_record['feature'], parsed_record['label']\n
\n

To continue getting batches from a dataset use ds.repeat:

\n
    ds = ds.shuffle(buffer_size=number_of_sample).batch(batch_size).repeat()\n
\n

My full input pipeline looks like:

\n
def _parse_function_same_side(example_proto):\n    """Extracts features and labels.\n  \n    Args:\n        example_proto: tf.Example protocol    \n      Returns:\n    A `tuple` `(features, labels)`:\n      features: A 2D tensor representing the features\n      labels: A tensor with the corresponding labels.\n    """\n    feature_description = {\n        "features": tf.io.FixedLenFeature(4, tf.int64), \n        "label": tf.io.FixedLenFeature(1, tf.int64)\n                }\n    \n    parsed_features = tf.io.parse_single_example(example_proto, feature_description)\n    \n    features = parsed_features['features']\n    \n    labels = tf.one_hot(parsed_features['label'],depth=len(hero_vocab))\n    return features, labels\n
\n
def _input_fn(input_filenames, num_epochs=None, \n              shuffle=True, batch_size=50,compression_type=""):\n   \n    ds=tf.data.TFRecordDataset(input_filenames,compression_type=compression_type)\n    ds=ds.map(_parse_function_same_side)\n    \n    #only shuffle if shuffle flag\n    if shuffle:\n        ds = ds.shuffle(10000)\n    \n    #group the samples into batches of batch_size\n    ds = ds.batch(batch_size)\n    \n    #make sure you can repeatedly take batches from the TFRecord\n    ds = ds.repeat()\n    \n    # Return the dataset.\n    return ds\n
\n

After this I just directly feed the dataset to my model.

\n", + "system": "" + }, + { + "instruction": "Unable to import Keras(from TensorFlow 2.0) in PyCharm 2019.2", + "input": "", + "output": "

For PyCharm Users

\n\n

For those who use PyCharm: install the future (EAP) release 2019.3, EAP build 193.3793.14, from here. With that, you will be able to use autocomplete for the current stable release of TensorFlow (i.e. 2.0). I have tried it and it works :).

\n\n

For other IDEs

\n\n

For users of other IDEs, this will be resolved only after the stable version is released, which is now the case anyway. But a fix might still take some more time; see the comment here. I assume it is wise to wait and keep using version 2.0.0b1. On the other hand, avoid imports from tensorflow_core if you do not want to refactor your code in the future.

\n\n

Note: for autocomplete to work, use the import statement as below

\n\n
import tensorflow.keras as tk\n\n# this does not work for autocomplete \n# from tensorflow import keras as tk  \n
\n\n

The autocomplete works for the TensorFlow 2.0.0 CPU version, but it does not work for the GPU version.

\n", + "system": "" + }, + { + "instruction": "How to save one hot encoder?", + "input": "", + "output": "

Mentioning the Answer in this Section (although it is present in Comments Section), for the benefit of the Community.

\n\n

To Save the Encoder, you can use the below code:

\n\n
import pickle\nwith open(\"encoder\", \"wb\") as f: \n    pickle.dump(one_hot, f)\n
\n\n

Then to Load the Saved Encoder, use the below code:

\n\n
with open(\"encoder\", \"rb\") as f:\n    encoder = pickle.load(f)\nencoded_docs = [encoder(d, vocab_size) for d in df.text]\n
\n\n

Since the function one_hot from keras.preprocessing.text uses hash() to generate quasi-unique encodings, we need to set a hash seed to reproduce our results (i.e. to get the same result even after multiple executions).
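A conceptual sketch of what hashing-based encoding does (hash_encode is a hypothetical stand-in for illustration, not Keras' exact implementation):

```python
vocab_size = 50

def hash_encode(text, n=vocab_size):
    # Each word is mapped into [1, n-1] via the built-in hash();
    # without a fixed hash seed this mapping changes between interpreter runs.
    return [hash(w) % (n - 1) + 1 for w in text.lower().split()]

ids = hash_encode("the quick brown fox")
print(len(ids))  # one id per word -> 4
```

Because the mapping depends on hash(), two runs with different hash seeds will encode the same text differently, which is exactly why the seed must be pinned.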

\n\n

Run the command below in the terminal to set the hash seed:

\n\n

export PYTHONHASHSEED=0

\n", + "system": "" + }, + { + "instruction": "How to name custom metrics in Keras fit output", + "input": "", + "output": "

Yes, this is possible. In the metric factory, just set an appropriate __name__ of metric function. For example:

\n\n
def my_metric_factory(the_param=1.0):\n    def fn(y_true, y_pred):\n        return my_dummy_metric(y_true, y_pred, the_param=the_param)\n\n    fn.__name__ = 'metricname_{}'.format(the_param)\n    return fn\n
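A quick check, outside Keras, that the renamed closure really carries the name that will show up in the fit output (my_dummy_metric is stubbed out here):

```python
def my_metric_factory(the_param=1.0):
    def fn(y_true, y_pred):
        # placeholder body for the sketch; the real metric computation goes here
        return the_param
    fn.__name__ = 'metricname_{}'.format(the_param)
    return fn

m = my_metric_factory(0.5)
print(m.__name__)  # -> metricname_0.5
```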
\n", + "system": "" + }, + { + "instruction": "Unable to save model with tensorflow 2.0.0 beta1", + "input": "", + "output": "

I have tried the same minimal reproduction example in tensorflow-gpu 2.0.0-rc0 and the error was more revealing than what the beta version gave me. The error in RC says:

\n\n
\n

NotImplementedError: When subclassing the Model class, you should\n implement a call method.

\n
\n\n

This got me to read through https://www.tensorflow.org/beta/guide/keras/custom_layers_and_models, where I found examples of how to do subclassing in TF2 in a way that allows saving. I was able to resolve the error and save the model by replacing my 'decode' method with 'call' in the above example (although this will be more complicated with my actual code, where I had various methods defined for the class). This solved the error both in beta and in rc. Strangely, the training (or the saving) also got much faster in rc.

\n", + "system": "" + }, + { + "instruction": "Higher loss penalty for true non-zero predictions", + "input": "", + "output": "

Not sure there is anything better than a custom loss just like you did, but there is a cleaner way:

\n\n
def weightedLoss(w):\n\n    def loss(true, pred):\n\n        error = K.square(true - pred)\n        error = K.switch(K.equal(true, 0), w * error , error)\n\n        return error \n\n    return loss\n
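The K.switch line behaves like NumPy's where; here is a small NumPy sketch of the same selective weighting, for intuition:

```python
import numpy as np

def weighted_error(true, pred, w=0.1):
    error = (true - pred) ** 2
    # down-weight errors where the target is zero, keep the rest unchanged
    return np.where(true == 0, w * error, error)

print(weighted_error(np.array([0., 1.]), np.array([1., 0.])))  # [0.1 1. ]
```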
\n\n

You may also return K.mean(error), but without mean you can still profit from other Keras options like adding sample weights and other things.

\n\n

Select the weight when compiling:

\n\n
model.compile(loss = weightedLoss(0.1), ...)\n
\n\n

If you have the entire data in an array, you can do:

\n\n
w = K.mean(y_train)\nw = w / (1 - w) #this line compensates for the lack of the 90% weights for class 1\n
\n\n
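In plain numpy, with a hypothetical label vector that is 90% zeros, the same weight arithmetic looks like this:

```python
import numpy as np

y_train = np.array([1.0] * 10 + [0.0] * 90)  # hypothetical: 10% ones, 90% zeros
w = y_train.mean()   # fraction of ones: 0.1
w = w / (1 - w)      # compensates for the 90% of samples that are zeros
print(round(w, 4))   # 0.1111
```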
\n\n

Another solution that can avoid using a custom loss, but requires changes in the data and the model is:

\n\n\n\n

For the zero values, make the first of the two classes = 1
\nFor the one values, make the second of the two classes = 1

\n\n
newY = np.stack([1-oldY, oldY], axis=-1)    \n
\n\n
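A quick numpy check of what the stacking produces, with toy labels in place of your real targets:

```python
import numpy as np

oldY = np.array([0., 1., 1., 0.])            # original binary targets
newY = np.stack([1 - oldY, oldY], axis=-1)   # two-class targets for a softmax

print(newY.shape)   # (4, 2)
print(newY[1])      # [0. 1.]
```

Each row is now a valid two-class distribution (the entries sum to 1), which is what the softmax + categorical_crossentropy setup below expects.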

Adjust the model to output this new shape.

\n\n
...\nmodel.add(Dense(2*classes))\nmodel.add(Reshape((classes,2)))\nmodel.add(Activation('softmax'))\n
\n\n

Make sure you use a softmax activation and categorical_crossentropy as the loss.

\n\n

Then use the argument class_weight={0: w, 1: 1} in fit.

\n", + "system": "" + }, + { + "instruction": "Keras: display model shape in Jupyter Notebook", + "input": "", + "output": "

Since the resulting image is no longer an SVG file, you should replace SVG with Image.\nUse:

\n\n
from IPython.display import Image \n... \n\nplot_model(model, show_shapes=True, show_layer_names=True, to_file='model.png')\nImage('model.png')\n
\n", + "system": "" + }, + { + "instruction": "'RefVariable' object has no attribute '_id'", + "input": "", + "output": "

You forgot to enable eager execution mode.

\n\n

Add the line below after the import statements:

\n\n
tf.enable_eager_execution()\n
\n\n

Updated Code:

\n\n
import tensorflow as tf\nimport numpy as np\nimport pandas as pd\n\ntf.enable_eager_execution()\n\n#define trainable variables \n#for linear regression this is the intercept and the slope\nintercept = tf.Variable(0.1, tf.float32)\nslope = tf.Variable(0.1, tf.float32)\n\n#define a linear regression function\ndef linear_regression(intercept,slope, features):\n  return intercept + slope*features\n\n#compute predicted values and return loss function\ndef loss_function (intercept,slope,targets,features):\n  predictions = linear_regression(intercept,slope,features)\n  return tf.keras.losses.mse(targets,predictions)\n\n#OPTIMIZER\nopt = tf.train.AdamOptimizer()\n\nfor batch in pd.read_csv('kc_house_data.csv', chunksize = 100):\n  #extract the target and feature columns\n  price_batch = np.array(batch['price'], np.float32)\n  size_batch = np.array(batch['sqft_lot'], np.float32)\n\n  loss_function(intercept,slope,price_batch,size_batch)\n\n  #minimize the loss function\n  opt.minimize(lambda: loss_function(intercept,slope,price_batch,size_batch), var_list=[intercept,slope])\n\nprint(intercept.numpy(), slope.numpy())\n
\n", + "system": "" + }, + { + "instruction": "Default Adam optimizer doesn't work in tf.keras but string `adam` does", + "input": "", + "output": "

After a bit of digging, it seems that when you pass the string 'adam', Keras resolves it to a different Adam implementation, which it refers to as adam_v2.

\n

This can be found here.

\n
from tensorflow.python.keras.optimizer_v2.adam import Adam\n\nadam = Adam()\n\nmodel.compile(optimizer=adam, loss='categorical_crossentropy')\nmodel.fit(x, y)\n
\n", + "system": "" + }, + { + "instruction": "How to fix 'ValueError: Empty Training Data' error in tensorflow", + "input": "", + "output": "

There might be other reasons for this error, but in my case the batch size was greater than the sample size.

\n", + "system": "" + }, + { + "instruction": "Keras: how to reset optimizer state?", + "input": "", + "output": "

There isn't an \"easy\" way to reset the \"states\", but you can always simply recompile your model with a new optimizer (model's weights are preserved):

\n\n
newOptimizer = Adadelta()\nmodel.compile(optimizer=newOptimizer)     \n
\n\n

You can also use the method set_weights(weightsListInNumpy) (not recommended) in the base class Optimizer, but this would be rather cumbersome, as you would need to know all the initial values and shapes, which sometimes may not be trivial zeros.

\n\n

Now, the property self.weights doesn't do much, but the functions that save and load optimizers will save and load this property. It's a list of tensors and should not be changed directly. At most use K.set_value(...) in each entry of the list. You can see the weights in saving the optimizer in the _serialize_model method.

\n\n

The self.updates are something a little more complex to understand. It stores the variables that will be updated with every batch that is processed by the model in training. But it's a symbolic graph variable.

\n\n

The self.updates, as you can see in the code, is always appended with a K.update(var, value) or K.update_add(var, value). This is the correct way to tell the graph that these values should be updated every iteration.

\n\n

Usually, the updated vars are iterations, params (the model's weights), moments, accumulators, etc.

\n", + "system": "" + }, + { + "instruction": "Difference between DepthwiseConv2D and SeparableConv2D", + "input": "", + "output": "

Correct, checking the source code (I did this for tf.keras but I suppose it is the same for standalone keras) shows that in SeparableConv2D, the separable convolution works using only filters, no biases, and a single bias vector is added at the end. The second version, on the other hand, has biases for both DepthwiseConv2D and Conv2D.

\n\n

Given that convolution is a linear operation and you are using no non-linearity in between the depthwise and the 1x1 convolution, I would suppose that having two biases is unnecessary in this case, similar to how you don't use biases in a layer that is followed by batch normalization, for example. As such, the extra 10 parameters wouldn't actually improve the model (nor should they really hurt it).
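A back-of-the-envelope count makes the bias difference concrete. The shapes below (10 input channels, 20 filters, 3x3 kernel, depth multiplier 1) are hypothetical, chosen so the gap matches a 10-parameter difference:

```python
# Hypothetical shapes: 10 input channels, 20 output filters, 3x3 depthwise kernel
in_ch, out_ch, k = 10, 20, 3

# SeparableConv2D: depthwise weights + pointwise weights + one bias vector at the end
separable = k * k * in_ch + in_ch * out_ch + out_ch

# DepthwiseConv2D (with bias) followed by a 1x1 Conv2D (with bias)
two_layers = (k * k * in_ch + in_ch) + (in_ch * out_ch + out_ch)

print(separable, two_layers, two_layers - separable)  # 310 320 10
```

The difference is exactly one bias per input channel: the depthwise bias vector that SeparableConv2D omits.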

\n", + "system": "" + }, + { + "instruction": "Keras How To Resume Training With Adam Optimizer", + "input": "", + "output": "
\n

I'm wondering what's the right approach to resume training using Adam optimizer?

\n
\n\n

As mentioned here: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model, model.save() followed by load_model() will take care of compiling the model using the saved training configuration.

\n\n
if not os.path.exists('tf_keras_cifar10.h5'):\n    model = get_model() #this method constructs the model and compiles it \nelse:\n    model = load_model('tf_keras_cifar10.h5') #load the model from file\n    print('lr is ', K.get_session().run(model.optimizer.lr))\n    initial_epoch=10\n    epochs=13\n\nhistory = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,validation_data=(x_test, y_test), initial_epoch=initial_epoch)\nmodel.save('tf_keras_cifar10.h5')\n
\n\n\n\n

Epoch 10/10\n50000/50000 [==============================] - 13s 255us/sample - loss: 0.6257 - acc: 0.7853 - val_loss: 0.8886 - val_acc: 0.6985

\n\n\n\n

Epoch 11/13\n50000/50000 [==============================] - 15s 293us/sample - loss: 0.6438 - acc: 0.7777 - val_loss: 0.8732 - val_acc: 0.7083

\n\n

Please check this issue as well related to resuming training using Adam Optimizer(tf.keras): https://github.com/tensorflow/tensorflow/issues/27049

\n\n

The recommendation is to upgrade the TF version.

\n", + "system": "" + }, + { + "instruction": "what is the difference between conv2d and Conv2D in Keras?", + "input": "", + "output": "

TensorFlow and Keras now use the channels_last convention. So first you should \npermute the channel dimension to the last axis using K.permute_dimensions. You can try this code in colab.research.google.com to see for yourself.

\n\n

First question:

\n\n\n\n
# The second \nimport keras\nconv_layer = keras.layers.Conv2D(filters=64, kernel_size=8, strides=(4, 4), padding='same')\n
\n\n

Basically, they differ in how they are defined and how they are used. K.conv2d is used inside keras.layers.Conv2D when a layer instance conv_layer applies a convolution to some input x, i.e. when you call conv_layer(x).

\n\n
\n

The example below may help you understand the difference between say_hello and SayHello more easily.

\n
\n\n
def say_hello(word, name):\n    print(word, name)\n\n\nclass SayHello():\n\n    def __init__(self, word='Hello'):\n        self.word = word\n        pass\n\n    def __call__(self, name):\n        say_hello(self.word, name)\n\n\nsay_hello('Hello', 'Nadia') #Hello Nadia\n\nsayhello = SayHello(word='Hello') # you will get an instance `sayhello` from class SayHello\n\nsayhello('Nadia') # Hello Nadia\n\n
\n\n

Second question:

\n\n\n\n
import tensorflow as tf\nimport tensorflow.keras.backend as K\n\nimage = tf.random_normal((10,3, 32, 32))\nprint(image.shape) # shape=(10, 3, 32, 32)\n\nchannel = 1\nimage_yuv_ch = K.expand_dims(image[:, channel,:,:], axis=1) # shape=(10, 1, 32, 32)\nimage_yuv_ch = K.permute_dimensions(image_yuv_ch, (0, 2, 3, 1)) # shape=(10, 32, 32, 1)\n\n# The first K.conv2d\nin_channels = 1\nout_channels = 64 # same as filters\nkernel = tf.random_normal((8, 8, in_channels, out_channels)) # shape=(8, 8, 1, 64)\n\nimage_conv = tf.keras.backend.conv2d(image_yuv_ch, kernel=kernel, strides=(4, 4), padding='same')\nprint(image_conv.shape) #shape=(10, 8, 8, 64)\n\n\n# The second \nimport keras\nconv_layer = keras.layers.Conv2D(filters=64, kernel_size=8, strides=(4, 4), padding='same')\nimage_conv = conv_layer(image_yuv_ch)\nprint(image_conv.shape) #shape=(10, 8, 8, 64)\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Unknown activation function: my_custom_activation_function", + "input": "", + "output": "

I want to share how I solved this:

\n\n
model= load_model(\"model_baseline_lsm.h5\",\n                  custom_objects = {\"weibull_loglik_discrete\": weibull_loglik_discrete,\"activate\":activate})\n
\n\n

The pattern is as follow:

\n\n
model = load_model(f\"{SAVED_MODELS_DIR}/model_{model_idx}_epoch_{global_epoch}\", \n                   custom_objects = {\"custom_loss\": custom_loss})\n
\n\n

I hope this helps :)

\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'tensorflow' has no attribute 'get_default_graph'", + "input": "", + "output": "

Change

\n\n
import keras.<something>.<something>\n
\n\n

to

\n\n
import tensorflow.keras.<something>.<something>\n
\n\n

where \"something\" refers to the module you want to import. It worked for me.

\n", + "system": "" + }, + { + "instruction": "How can I implement dilated convolution in keras?", + "input": "", + "output": "

The standard keras Conv2D layer supports dilation, you just need to set the dilation_rate to a value bigger than one. For example:

\n\n
out = Conv2D(10, (3, 3), dilation_rate=2)(input_tensor)\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow predict the class of output", + "input": "", + "output": "

You are trying to map the predicted class probabilities back to class labels. Each row in the list of output predictions contains the three predicted class probabilities. Use np.argmax to obtain the one with the highest predicted probability in order to map to the predicted class label:

\n
import numpy as np\n\npredictions = [[0.3112209,  0.3690182,  0.31357136],\n [0.31085992, 0.36959863, 0.31448898],\n [0.31073445, 0.3703295, 0.31469804],\n [0.31177694, 0.37011752, 0.3145326 ],\n [0.31220382, 0.3692756, 0.31515726],\n [0.31232828, 0.36947766, 0.3149037 ],\n [0.31190437, 0.36756667, 0.31323162],\n [0.31339088, 0.36542615, 0.310322  ],\n [0.31598282, 0.36328828, 0.30711085]] \n\nnp.argmax(predictions, axis=1) \n
\n

Gives:

\n
array([1, 1, 1, 1, 1, 1, 1, 1, 1])\n
\n

In this case, class 1 is predicted 9 times.

\n

As noted in the comments: this is exactly what Keras does under the hood, as you'll see in the source code.

\n", + "system": "" + }, + { + "instruction": "Should I use the standalone Keras library or tf.keras?", + "input": "", + "output": "

You are mixing things up:

\n\n\n\n

So to answer your question: no, you don't need to convert Keras code to tf.keras code. Keras code uses the Keras library, potentially even runs on top of a different backend than TensorFlow, and will continue to work just fine in the future. Even more, it's important not to mix Keras and tf.keras objects within the same script, since this might produce incompatibilities, as you can see for example in this question.

\n\n

Update: Keras will be abandoned in favor of tf.keras: https://twitter.com/fchollet/status/1174019423541157888

\n", + "system": "" + }, + { + "instruction": "Element-wise multiplication with Keras", + "input": "", + "output": "

You need a Reshape so both tensors have the same number of dimensions, and a Multiply layer

\n\n
mask = Reshape((256,256,1))(mask) \nout = Multiply()([image,mask])\n
\n\n

If you have variable shapes, you can use a single Lambda layer like this:

\n\n
import keras.backend as K \n\ndef multiply(x):\n    image,mask = x\n    mask = K.expand_dims(mask, axis=-1) #could be K.stack([mask]*3, axis=-1) too \n    return mask*image\n\nout = Lambda(multiply)([image,mask])\n
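The expand_dims/broadcasting trick inside the Lambda can be checked with plain numpy, using toy shapes in place of the 256x256 images:

```python
import numpy as np

image = np.ones((2, 4, 4, 3))   # batch of small RGB images
mask = np.zeros((2, 4, 4))      # per-pixel mask without a channel axis
mask[:, :2, :] = 1.0            # keep only the top half

out = np.expand_dims(mask, axis=-1) * image  # broadcasts over the 3 channels
print(out.shape)                # (2, 4, 4, 3)
print(out[0, 0].sum(), out[0, 3].sum())  # kept row vs. masked-out row
```

Adding the trailing axis turns the mask into shape (batch, h, w, 1), which numpy (and the Keras backend) then broadcasts across all image channels.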
\n", + "system": "" + }, + { + "instruction": "How to use TensorFlow tf.print with non capital p?", + "input": "", + "output": "

Both the documentation of tf.print and tf.Print mention that tf.print returns an operation with no output, so it cannot be evaluated to any value. The syntax of tf.print is meant to be more similar to Python's builtin print. In your case, you could use it as follows:

\n\n
def custom_loss(y_true, y_pred):\n    loss = K.mean(...)\n    print_op = tf.print(\"Debug output:\", loss, y_true, y_true.shape)\n    with tf.control_dependencies([print_op]):\n        return K.identity(loss)\n
\n\n

Here K.identity creates a new tensor identical to loss but with a control dependency to print_op, so evaluating it will force executing the printing operation. Note that Keras also offers K.print_tensor, although it is less flexible than tf.print.

\n", + "system": "" + }, + { + "instruction": "Keras: update model with a bigger training set", + "input": "", + "output": "

You can save/load model/weights. Check out this tutorial by Jason Brownlee.

\n\n

After you loaded the weights, you can start training with the new dataset (the 55000 samples). As the 'training' is basically just updating weights, and you loaded your trained weights, you are now 'updating' the already trained model.

\n", + "system": "" + }, + { + "instruction": "What does DeepMind's Sonnet afford that Keras doesn't?", + "input": "", + "output": "

There isn't much difference between them. They are both:

\n\n\n\n

So why did they make Sonnet? It appears that Keras didn't suit DeepMind's needs, so DeepMind came up with Sonnet, a high-level object-oriented library built on top of TensorFlow to address their research needs.

\n\n

Keras and Sonnet are both trying to simplify deep reinforcement learning, with the major difference being Sonnet is specifically adapted to the problems that DeepMind explores.

\n\n

The main advantage of Sonnet, from my perspective, is that you can use it to reproduce the research demonstrated in DeepMind's papers with greater ease than Keras, since DeepMind will be using Sonnet themselves. Aside from that advantage, it's just yet another framework with which to explore deep RL problems.

\n", + "system": "" + }, + { + "instruction": "TypeError: __init__() got an unexpected keyword argument 'trainable'", + "input": "", + "output": "

I think you missed a small detail in your layer definition. Your layer's __init__ method should take keyword arguments (**kwargs), and you should pass these keyword arguments on to the parent class __init__, like this:

\n\n
class AttLayer(Layer):\n    def __init__(self, attention_dim, **kwargs):\n        self.init = initializers.get('normal')\n        self.supports_masking = True\n        self.attention_dim = attention_dim\n        super(AttLayer, self).__init__(**kwargs)\n
\n\n

This way any generic layer parameter will be correctly passed to the parent class, in your case, the trainable flag.
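The mechanism can be seen with a plain-Python stand-in for the base class (the real keras.layers.Layer accepts trainable, name, etc. through exactly these kwargs):

```python
class Layer:
    # Minimal stand-in for keras.layers.Layer
    def __init__(self, trainable=True, name=None):
        self.trainable = trainable
        self.name = name

class AttLayer(Layer):
    def __init__(self, attention_dim, **kwargs):
        self.attention_dim = attention_dim
        super(AttLayer, self).__init__(**kwargs)  # forwards trainable, name, ...

layer = AttLayer(64, trainable=False, name='att')
print(layer.attention_dim, layer.trainable, layer.name)  # 64 False att
```

Without the **kwargs forwarding, AttLayer(64, trainable=False) raises a TypeError, which is exactly the error in the question.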

\n", + "system": "" + }, + { + "instruction": "Validation accuracy constant in Keras CNN for multiclass image classification", + "input": "", + "output": "

There are a variety of possible underlying factors that can potentially cause this phenomenon - below is a list, by no means exhaustive, of some preliminary fixes you could try:

\n\n\n\n

Have a look at this GitHub issue for further suggestions that may help resolve your problem:

\n\n

https://github.com/keras-team/keras/issues/1597

\n", + "system": "" + }, + { + "instruction": "Passing multiple inputs to keras model from tf.dataset API?", + "input": "", + "output": "

In newer versions of TensorFlow (1.14 and above), tf.keras allows me to pass multiple inputs to model.fit.

\n", + "system": "" + }, + { + "instruction": "Why does sigmoid & crossentropy of Keras/tensorflow have low precision?", + "input": "", + "output": "

TL;DR version: the probability values (i.e. the outputs of sigmoid function) are clipped due to numerical stability when computing the loss function.

\n\n
\n\n

If you inspect the source code, you would find that using binary_crossentropy as the loss would result in a call to binary_crossentropy function in losses.py file:

\n\n
def binary_crossentropy(y_true, y_pred):\n    return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)\n
\n\n

which in turn, as you can see, calls the equivalent backend function. In case of using Tensorflow as the backend, that would result in a call to binary_crossentropy function in tensorflow_backend.py file:

\n\n
def binary_crossentropy(target, output, from_logits=False):\n    \"\"\" Docstring ...\"\"\"\n\n    # Note: tf.nn.sigmoid_cross_entropy_with_logits\n    # expects logits, Keras expects probabilities.\n    if not from_logits:\n        # transform back to logits\n        _epsilon = _to_tensor(epsilon(), output.dtype.base_dtype)\n        output = tf.clip_by_value(output, _epsilon, 1 - _epsilon)\n        output = tf.log(output / (1 - output))\n\n    return tf.nn.sigmoid_cross_entropy_with_logits(labels=target,\n                                                   logits=output)\n
\n\n

As you can see, the from_logits argument is set to False by default. Therefore, the if condition evaluates to true, and as a result the values in the output are clipped to the range [epsilon, 1-epsilon]. That's why, no matter how small or large a probability is, it can never be smaller than epsilon or greater than 1-epsilon. And that explains why the output of the binary_crossentropy loss is also bounded.

\n\n
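A small numpy sketch of the resulting bound on the loss, reimplementing just the clipping step rather than the Keras internals:

```python
import numpy as np

eps = 1e-7  # Keras' default fuzz factor

# A wildly over-confident wrong prediction gets clipped up to eps ...
p = np.clip(1e-12, eps, 1 - eps)
print(p)  # 1e-07

# ... so the cross-entropy for a true label of 1 is capped at -log(eps)
print(round(-np.log(p), 4))  # 16.1181
```

So roughly 16.12 is the largest per-sample binary cross-entropy value you can ever observe with the default epsilon.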

Now, what is this epsilon here? It is a very small constant used for numerical stability (e.g. to prevent division by zero or other undefined behavior). To find out its value you can further inspect the source code, where you will find it in the common.py file:

\n\n
_EPSILON = 1e-7\n\ndef epsilon():\n    \"\"\"Returns the value of the fuzz factor used in numeric expressions.\n    # Returns\n        A float.\n    # Example\n    ```python\n        >>> keras.backend.epsilon()\n        1e-07\n    ```\n    \"\"\"\n    return _EPSILON\n
\n\n

If for any reason, you would like more precision you can alternatively set the epsilon value to a smaller constant using set_epsilon function from the backend:

\n\n
def set_epsilon(e):\n    \"\"\"Sets the value of the fuzz factor used in numeric expressions.\n    # Arguments\n        e: float. New value of epsilon.\n    # Example\n    ```python\n        >>> from keras import backend as K\n        >>> K.epsilon()\n        1e-07\n        >>> K.set_epsilon(1e-05)\n        >>> K.epsilon()\n        1e-05\n    ```\n    \"\"\"\n    global _EPSILON\n    _EPSILON = e\n
\n\n

However, be aware that setting epsilon to an extremely low positive value or zero may disrupt the stability of computations all over Keras.

\n", + "system": "" + }, + { + "instruction": "Different loss function for validation set in Keras", + "input": "", + "output": "

You can try the backend function K.in_train_phase(), which is used by the Dropout and BatchNormalization layers to implement different behaviors in training and validation.

\n\n\n\n
def custom_loss(y_true, y_pred):\n    weighted_loss = ... # your implementation of weighted crossentropy loss\n    unweighted_loss = K.sparse_categorical_crossentropy(y_true, y_pred)\n    return K.in_train_phase(weighted_loss, unweighted_loss)\n
\n\n

The first argument of K.in_train_phase() is the tensor used in training phase, and the second is the one used in test phase.

\n\n

For example, if we set weighted_loss to 0 (just to verify the effect of K.in_train_phase() function):

\n\n
def custom_loss(y_true, y_pred):\n    weighted_loss = 0 * K.sparse_categorical_crossentropy(y_true, y_pred)\n    unweighted_loss = K.sparse_categorical_crossentropy(y_true, y_pred)\n    return K.in_train_phase(weighted_loss, unweighted_loss)\n\nmodel = Sequential([Dense(100, activation='relu', input_shape=(100,)), Dense(1000, activation='softmax')])\nmodel.compile(optimizer='adam', loss=custom_loss)\nmodel.outputs[0]._uses_learning_phase = True  # required if no dropout or batch norm in the model\n\nX = np.random.rand(1000, 100)\ny = np.random.randint(1000, size=1000)\nmodel.fit(X, y, validation_split=0.1)\n\nEpoch 1/10\n900/900 [==============================] - 1s 868us/step - loss: 0.0000e+00 - val_loss: 6.9438\n
\n\n

As you can see, the loss in training phase is indeed the one multiplied by 0.

\n\n

Note that if there's no dropout or batch norm in your model, you'll need to manually \"turn on\" the _uses_learning_phase boolean switch, otherwise K.in_train_phase() will have no effect by default.

\n", + "system": "" + }, + { + "instruction": "Proceed (y/n)? in Python", + "input": "", + "output": "

You can't interact with shell commands after you launch them, so you have to either pass the --yes flag:

\n\n
!pip install keras==2.1.2 --yes\n
\n\n

or you can execute your command outside of ipython (jupyter).

\n", + "system": "" } ] \ No newline at end of file