diff --git "a/data/dataset.json" "b/data/dataset.json" --- "a/data/dataset.json" +++ "b/data/dataset.json" @@ -32860,5 +32860,4637 @@ "input": "", "output": "

This error occurs when the index of the blank is larger than the total number of classes, which equals the number of characters plus the blank. What's more, indices start from 0, not 1, so if you have 62 characters in total, their indices should be 0-61 and the index of the blank should be 62 instead of 63. (Alternatively, you can set the blank to 0 and the other characters to 1-62.)

\n

You should also check the shape of the output tensor: it should have shape [T, B, C], where T is the time-step length, B is the batch size, and C is the number of classes. Remember to include the blank in the class count, or you will run into this problem.
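As a quick sanity check, here is a minimal sketch of a consistent setup, assuming 62 characters plus one blank at index 0 (the shapes and lengths below are illustrative, not from the question):

```python
import torch
import torch.nn as nn

num_chars = 62
C = num_chars + 1   # total class count must include the blank
T, B = 50, 4        # time steps, batch size

ctc = nn.CTCLoss(blank=0)                          # blank index must be < C
log_probs = torch.randn(T, B, C).log_softmax(2)    # output shape [T, B, C]
targets = torch.randint(1, C, (B, 10))             # character labels 1..62
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 10, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
```

With the blank reserved at index 0 and C = 63, every label index stays in range and the loss evaluates without the error.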

\n", "system": "" + }, + { + "instruction": "RuntimeError: CUDNN_STATUS_INTERNAL_ERROR", + "input": "", + "output": "

I just encountered this problem on ubuntu16.04 and solved it. My solution was to run

\n\n
sudo rm -rf ~/.nv\n
\n\n

and then reboot.

\n", + "system": "" + }, + { + "instruction": "Using autograd to compute Jacobian matrix of outputs with respect to inputs", + "input": "", + "output": "

The code you posted should give you the partial derivative of your first output w.r.t. the input. However, you also have to set requires_grad_(True) on the inputs, as otherwise PyTorch does not build up the computation graph starting at the input and thus it cannot compute the gradient for them.

\n\n

This version of your code example computes du and dv:

\n\n
net = GeneratorNet()\ncoords = torch.randn(10, 2)\ncoords.requires_grad = True\ntensor = net(coords)\nu, v = torch.split(tensor, 1, dim=1)\ndu = torch.autograd.grad(u, coords, grad_outputs=torch.ones_like(u))[0]\ndv = torch.autograd.grad(v, coords, grad_outputs=torch.ones_like(v))[0]\n
\n\n

You can also compute the partial derivative for a single output:

\n\n
net = GeneratorNet()\ncoords = torch.randn(10, 2)\ncoords.requires_grad = True\ntensor = net(coords)\nu, v = torch.split(tensor, 1, dim=1)\ndu_0 = torch.autograd.grad(u[0], coords)[0]\n
\n\n

where du_0 == du[0].
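As an aside, newer PyTorch versions (1.5+, an assumption about your setup) also expose a helper that computes the full Jacobian in one call, instead of one grad call per output:

```python
import torch

# toy function with two outputs and two inputs; its Jacobian is known analytically
f = lambda x: torch.stack([x[0] * x[1], x[0] + x[1]])
x = torch.tensor([2.0, 3.0])

J = torch.autograd.functional.jacobian(f, x)
# row 0: d(x0*x1)/d(x0, x1) = [x1, x0]; row 1: d(x0+x1)/d(x0, x1) = [1, 1]
```

This is convenient for small problems, though for large batched networks the grad-based approach above is usually cheaper.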

\n", + "system": "" + }, + { + "instruction": "How is spectral norm of a parameter calculated?", + "input": "", + "output": "

I just finished reading the paper for this method which can be found on arxiv. If you have the appropriate mathematical background I would recommend reading it. See appendix A for the power algorithm which describes what u and v are.

\n\n

That said, I'll try to summarize it here.

\n\n

First, you should know that the spectral norm of a matrix is the maximum singular value. The authors propose finding the spectral norm of weight matrix W, then dividing W by its spectral norm to make it close to 1 (justification for this decision is in the paper).

\n\n

While we could just use torch.svd to find a precise estimate of the singular values, they instead use a fast (but imprecise) method called \"power iteration\". Long story short, the weight_u and weight_v are rough approximations of the left and right singular vectors corresponding to the largest singular value of W. They are useful because the associated singular value, i.e. the spectral norm, of W is equal to u.transpose(1,0) @ W @ v if u and v are the actual left/right singular vectors of W.

\n\n\n\n

We can verify these claims by showing that the actual left and right singular vectors are nearly parallel to y.weight_u and y.weight_v:

\n\n
import torch\nimport torch.nn as nn\n\n# pytorch default is 1\nn_power_iterations = 1\n\ny = nn.Linear(3,3)\ny = nn.utils.spectral_norm(y, n_power_iterations=n_power_iterations)\n\n# spectral normalization is performed during forward pre hook for technical reasons, we\n# need to send something through the layer to ensure normalization is applied\n# NOTE: After this is performed, y.weight is changed in place!\n_ = y(torch.randn(1,3))\n\n# test svd vs. spectral_norm u/v estimates\nu,s,v = torch.svd(y.weight_orig)\ncos_err_u = 1.0 - torch.abs(torch.dot(y.weight_u, u[:, 0])).item()\ncos_err_v = 1.0 - torch.abs(torch.dot(y.weight_v, v[:, 0])).item()\nprint('u-estimate cosine error:', cos_err_u)\nprint('v-estimate cosine error:', cos_err_v)\n\n# singular values\nactual_orig_sn = s[0].item()\napprox_orig_sn = (y.weight_u @ y.weight_orig @ y.weight_v).item()\nprint('Actual original spectral norm:', actual_orig_sn)\nprint('Approximate original spectral norm:', approx_orig_sn)\n\n# updated weights singular values\nu,s_new,v = torch.svd(y.weight.data, compute_uv=False)\nactual_sn = s_new[0].item()\nprint('Actual updated spectral norm:', actual_sn)\nprint('Desired updated spectral norm: 1.0')\n
\n\n

which results in

\n\n
u-estimate cosine error: 0.00764310359954834\nv-estimate cosine error: 0.034041762351989746\nActual original spectral norm: 0.8086231350898743\nApproximate original spectral norm: 0.7871124148368835\nActual updated spectral norm: 1.0273288488388062\nDesired updated spectral norm: 1.0\n
\n\n

Increasing the n_power_iterations parameter will increase the accuracy of the estimate at the cost of computation time.

\n", + "system": "" + }, + { + "instruction": "Using torch.nn.Embedding for GloVe: should we fine-tune the embeddings or just use them as they are?", + "input": "", + "output": "

You should absolutely fine-tune your word embedding matrix. Here is the thing: when you initialize the word embedding matrix with the GloVe word embeddings, your word embeddings will already capture most of the semantic properties of the data. However, you want your word embeddings to be tailored to the task you're solving, i.e., task-specific (check Yang). Now, assuming that you don't have enough data in your dataset, you can't learn the word embedding matrix on your own (if you initialized it with random vectors). Because of that, you want to initialize it with vectors that have been trained on huge datasets and are general.

\n

One really important thing to keep in mind \u2192 because the rest of your model is going to be initialized randomly, when you start training, your word embedding matrix may suffer from catastrophic forgetting (check the work of Howard and Ruder, and Kirkpatrick et al.): the gradients will be huge because your model will drastically underfit the data for the first few batches, and you will lose the initial vectors completely. You can overcome this by:

\n
  1. For the first several epochs, don't fine-tune the word embedding matrix; just keep it as it is: embeddings = nn.Embedding.from_pretrained(glove_vectors, freeze=True).
  2. After the rest of the model has learned to fit your training data, decrease the learning rate, unfreeze your embedding module (embeddings.weight.requires_grad = True), and continue training.
\n

By following the above mentioned steps, you will get the best of both worlds. In other words, your word embeddings will still capture semantic properties while being tailored for your own downstream task. Finally, there are works (Check Ye Zhang for example) showing that it is fine to fine-tune immediately, but I would opt for the safer option.
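The two steps can be sketched as follows (glove_vectors here is a random stand-in for the real pretrained matrix):

```python
import torch
import torch.nn as nn

glove_vectors = torch.randn(1000, 100)  # stand-in for the real GloVe matrix

# step 1: initialize from the pretrained vectors and keep them frozen
embeddings = nn.Embedding.from_pretrained(glove_vectors, freeze=True)

# ... train the rest of the model for several epochs ...

# step 2: lower the learning rate, then unfreeze and continue training
embeddings.weight.requires_grad = True
```

The freeze flag simply controls requires_grad on the weight, so flipping it later is all that unfreezing takes.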

\n", + "system": "" + }, + { + "instruction": "Multilabel classification with class imbalance in Pytorch", + "input": "", + "output": "

I would suggest one of these strategies:

\n

Focal Loss

\n

A very interesting approach for dealing with unbalanced training data through tweaking of the loss function was introduced in
\nTsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollar Focal Loss for Dense Object Detection (ICCV 2017).
\nThey propose to modify the binary cross-entropy loss in a way that decreases the loss and gradient of easily classified examples while "focusing the effort" on examples where the model makes gross errors.
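A minimal sketch of the idea for the binary case (not the paper's exact implementation; gamma is the focusing parameter, and the class-weighting factor alpha is omitted):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # per-element binary cross entropy, no reduction yet
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = torch.exp(-bce)                        # model's probability for the true class
    return ((1.0 - p_t) ** gamma * bce).mean()   # down-weight easy examples
```

With gamma = 0 this reduces to plain BCE; larger gamma suppresses the contribution of well-classified examples.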

\n

Hard Negative Mining

\n

Another popular approach is to do "hard negative mining"; that is, propagate gradients only for part of the training examples - the "hard" ones.
\nsee, e.g.:
\nAbhinav Shrivastava, Abhinav Gupta and Ross Girshick Training Region-based Object Detectors with Online Hard Example Mining (CVPR 2016)

\n", + "system": "" + }, + { + "instruction": "How do you generate positive definite matrix in pytorch?", + "input": "", + "output": "

The answer is that one should take the product of matrix A and its transpose (A.t()) in order to obtain a positive semi-definite matrix. The last step is to make it strictly positive definite, e.g. by adding the identity so that every eigenvalue is strictly greater than zero.

\n\n

With Pytorch:

\n\n
Sigma_k = torch.rand(512, 512)\nSigma_k = torch.mm(Sigma_k, Sigma_k.t())\nSigma_k.add_(torch.eye(512))\n
\n\n

Formal algorithm is described here.
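As a sanity check (torch.linalg.eigvalsh assumes a reasonably recent PyTorch), all eigenvalues of the resulting matrix are indeed strictly positive:

```python
import torch

A = torch.rand(512, 512)
S = A @ A.t()        # symmetric, positive semi-definite
S += torch.eye(512)  # shifts every eigenvalue up by 1 -> strictly positive definite

eigvals = torch.linalg.eigvalsh(S)  # eigenvalues of a symmetric matrix, ascending
```

Adding the identity works because eigenvalues of A @ A.t() are already non-negative, so the shifted matrix has all eigenvalues at least 1.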

\n", + "system": "" + }, + { + "instruction": "Using imbalanced-learn with Pandas DataFrame", + "input": "", + "output": "

If the string column is an input to your RNN and you plan to encode it somehow (one-hot encoding, for example), just encode that column before oversampling and then run the oversampling with the new encoded columns instead of the string column.

\n", + "system": "" + }, + { + "instruction": "Unable to reproduce result obtained from hyperparameter tuning using hyperopt", + "input": "", + "output": "

It seems the seeds you set at the start were not linked to hyperopt's fmin function. You can link them by passing an np.random.default_rng instance via the rstate parameter, as shown below; also, do not forget to set the random_state parameter of the model.

\n
fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100,\n                    trials=trials,rstate=np.random.default_rng(42))\n
\n

It should work; however, if it does not, the problem most likely lies with parallelisation, so try setting n_jobs = -1 and then debug the parallelisation parameters accordingly.

\n", + "system": "" + }, + { + "instruction": "Loading custom dataset of images using PyTorch", + "input": "", + "output": "

Assuming you only plan on running resnet on the images once and saving the output for later use, I suggest you write your own dataset class derived from ImageFolder.
\nSave each resnet output at the same location as the image file, with a .pth extension.

\n\n
import os\nimport torch\nimport torchvision\n\nclass MyDataset(torchvision.datasets.ImageFolder):\n  def __init__(self, root, transform):\n    super(MyDataset, self).__init__(root, transform)\n\n  def __getitem__(self, index):\n    # override ImageFolder's method\n    \"\"\"\n    Args:\n      index (int): Index\n    Returns:\n      tuple: (sample, resnet, target) where target is class_index of the target class.\n    \"\"\"\n    path, target = self.samples[index]\n    sample = self.loader(path)\n    if self.transform is not None:\n      sample = self.transform(sample)\n    if self.target_transform is not None:\n      target = self.target_transform(target)\n    # this is where you load your resnet data\n    resnet_path = os.path.splitext(path)[0] + '.pth'  # replace image extension with .pth\n    resnet = torch.load(resnet_path)  # load the stored features\n    return sample, resnet, target\n
\n", + "system": "" + }, + { + "instruction": "pytorch "log_softmax_lastdim_kernel_impl" not implemented for 'torch.LongTensor'", + "input": "", + "output": "

criterion is defined as torch.nn.CrossEntropyLoss() in your notebook. As mentioned in the documentation of CrossEntropyLoss, it expects the (unnormalized) scores returned by the model for each of the 'K' classes, together with the corresponding ground-truth label, as input. The scores are float tensors, while the ground-truth label should be a long tensor representing a class index (a class cannot be a float, e.g. 2.3 cannot represent a class). Hence:

\n\n
loss = criterion(predictions, batch.label.long())\n
\n\n

should work.

\n", + "system": "" + }, + { + "instruction": "How to use Distributed Data Parallel properly in pytorch", + "input": "", + "output": "

According to pytorch DDP tutorial,

\n\n
\n

Across processes, DDP inserts necessary parameter synchronizations in forward passes and gradient synchronizations in backward passes. It is up to users to map processes to available resources, as long as processes do not share GPU devices.

\n
\n\n

I think this means you cannot initialize the model with DDP on a single GPU when multiple processes would have to share that GPU device.

\n", + "system": "" + }, + { + "instruction": "How to enable Dict/OrderedDict/NamedTuple support in pytorch 1.1.0 JIT compiler?", + "input": "", + "output": "

The release notes are accurate albeit a little vague. The dictionary/list/user defined classes support described in that link (and the official release notes) only apply to the TorchScript compiler (there are some code examples in the release notes), but SummaryWriter by default will run the TorchScript tracer on whatever module you pass to it, and the tracer only supports Tensors and lists/tuples of Tensors.

\n\n

So the fix would be to use the TorchScript compiler rather than the tracer, but that requires:

\n\n
  1. Access to the original code
  2. Support for the compiled output (ScriptModule) in Tensorboard
\n\n

You should file an issue for (2), and there is ongoing work to fix (1), but this won't work in the short term for that model afaik.

\n", + "system": "" + }, + { + "instruction": "How can I optimize gradient flow in LSTM with Pytorch?", + "input": "", + "output": "

I have dealt with these problems several times. Here is my advice:

\n\n
\n

Use smaller number of timesteps

\n
\n\n

The hidden output of the previous timestep is passed to the current step and multiplied by the weights. When you multiply several times, the gradient explodes or vanishes exponentially with the number of timesteps.\nLet's say:

\n\n
# it's exploding\n1.1^121 = 101979  # imagine how large it is when the weight is even bigger than 1.1\n\n# or it's vanishing\n0.9^121 = 2.9063214161987074e-06 # ~ 0.0 when we init the weight smaller than 1.0\n
\n\n

To reduce clutter, I take the example of a simple RNNCell - with weights W_ih and W_hh and no bias. In your case, W_hh is just a single number, but the argument generalizes to any matrix W_hh. We use the identity activation as well.

\n\n

If we unroll the RNN along all the time steps K=3, we get:

\n\n
h_1 = W_ih * x_0 + W_hh * h_0 (1)\nh_2 = W_ih * x_1 + W_hh * h_1 (2)\nh_3 = W_ih * x_2 + W_hh * h_2 (3)\n
\n\n

Therefore, when we need to update the weights W_hh, we have to accumulate all the gradients from steps (1), (2), and (3).

\n\n
grad(W_hh) = grad(W_hh at step 1) + grad(W_hh at step 2) + grad(W_hh at step 3)\n\n# step 3\ngrad(W_hh at step3) = d_loss/d(h_3) * d(h_3)/d(W_hh)\ngrad(W_hh at step3) = d_loss/d(h_3) * h_2\n\n\n# step 2\ngrad(W_hh at step2) = d_loss/d(h_2) * d(h_2)/d(W_hh)\ngrad(W_hh at step2) = d_loss/d(h_3) * d_(h_3)/d(h_2) * d(h_2)/d(W_hh)\ngrad(W_hh at step2) = d_loss/d(h_3) * d_(h_3)/d(h_2) * h_1\n\n# step 1\ngrad(W_hh at step1) = d_loss/d(h_1) * d(h_1)/d(W_hh)\ngrad(W_hh at step1) = d_loss/d(h_3) * d(h_3)/d(h_2) * d(h_2)/d(h_1) * d(h_1)/d(W_hh)\ngrad(W_hh at step1) = d_loss/d(h_3) * d(h_3)/d(h_2) * d(h_2)/d(h_1) * h_0\n\n# As we also:\nd(h_i)/d(h_i-1) = W_hh\n\n# Then:\ngrad(W_hh at step3) = d_loss/d(h_3) * h_2\ngrad(W_hh at step2) = d_loss/d(h_3) * W_hh * h_1\ngrad(W_hh at step1) = d_loss/d(h_3) * W_hh * W_hh * h_0\nLet d_loss/d(h_3) = v\n\n# We accumulate all gradients for W_hh\ngrad(W_hh) = v * h_2 + v * W_hh * h_1 + v * W_hh * W_hh * h_0\n\n# If W_hh is initialized too big >> 1.0, grad(W_hh) explode quickly (-> infinity).\n# If W_hh is initialized too small << 1.0, grad(W_hh) vanishes quickly (-> 0), since h_2, h_1 are vanishing after each forward step (exponentially)\n
\n\n

Although the LSTM cell has different gates to mitigate these problems (for example, the forget gate cuts off irrelevantly long dependencies in time), it is still affected by a large number of timesteps. It remains a big open question for sequential data how to design network architectures that learn long dependencies.

\n\n

To avoid these problems, just split the long sequence (seq_len) into shorter subsequences.

\n\n
bs = 121\nseq_len = 121\nnew_seq_len = seq_len // k # k = 2, 2.5 or anything to experiment\n\nX (of [bs,seq_len, 1]) -> [ X1[bs, new_seq_len, 1], X2[bs, new_seq_len, 1],...]\n
\n\n

Then, you pass each small batch Xi into the model, such that the initial hidden state is h_(i-1), i.e. the hidden output of the previous batch X(i-1).

\n\n
h_i = model(Xi, h_(i-1))\n
\n\n

This still helps the model learn some of the long dependencies, almost as if it had the full 121 timesteps.
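A runnable sketch of this chunking scheme (truncated backpropagation through time); the sizes here are illustrative, and the hidden state is detached so that gradients do not flow across chunk boundaries:

```python
import torch
import torch.nn as nn

bs, seq_len, k = 8, 120, 4
new_seq_len = seq_len // k          # 30 timesteps per chunk

lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
X = torch.randn(bs, seq_len, 1)

hidden = None
for chunk in X.split(new_seq_len, dim=1):
    out, hidden = lstm(chunk, hidden)            # carry hidden state across chunks
    hidden = tuple(h.detach() for h in hidden)   # truncate the gradient here
```

The forward pass still carries information across all chunks through the hidden state; only the backward pass is cut at the boundaries.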

\n", + "system": "" + }, + { + "instruction": "How to handle large JSON file in Pytorch?", + "input": "", + "output": "

I suppose IterableDataset (docs) is what you need, because:

\n
  1. you probably want to traverse the files without random access;
  2. the number of samples in the jsons is not pre-computed.
\n

I've made a minimal usage example with the assumption that every line of the dataset file is a JSON object itself, but you can change the logic.

\n
import json\nfrom torch.utils.data import DataLoader, IterableDataset\n\n\nclass JsonDataset(IterableDataset):\n    def __init__(self, files):\n        self.files = files\n\n    def __iter__(self):\n        for json_file in self.files:\n            with open(json_file) as f:\n                for sample_line in f:\n                    sample = json.loads(sample_line)\n                    yield sample['x'], sample['time'], ...\n\n...\n\ndataset = JsonDataset(['data/1.json', 'data/2.json', ...])\ndataloader = DataLoader(dataset, batch_size=32)\n\nfor batch in dataloader:\n    y = model(batch)\n
\n", + "system": "" + }, + { + "instruction": "Choosing a margin for contrastive loss in a siamese network", + "input": "", + "output": "

You don't need to project it to a lower-dimensional space. \nWhether the margin depends on the dimensionality of the space depends on how the loss is formulated: if you don't normalize the embedding values and compute a global difference between vectors, the right margin will depend on the dimensionality. But if you compute a normalized difference, such as cosine distance, the margin values won't depend on the dimensionality of the embedding space.\nRanking (or contrastive) losses are explained here, it might be useful: https://gombru.github.io/2019/04/03/ranking_loss/
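For instance, a contrastive loss built on cosine distance - a sketch, not a reference implementation - keeps the useful margin range fixed at [0, 2] regardless of the embedding dimension:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(x1, x2, y, margin=0.5):
    # cosine distance lies in [0, 2] whatever the embedding dimension
    d = 1.0 - F.cosine_similarity(x1, x2)
    # y = 1 for similar pairs (pull together), y = 0 for dissimilar (push apart)
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()
```

With an unnormalized L2 distance instead, typical distances grow with the dimension, and the margin would have to be re-tuned per embedding size.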

\n", + "system": "" + }, + { + "instruction": "Expected input to torch Embedding layer with pre_trained vectors from gensim", + "input": "", + "output": "

The documentation says the following

\n\n
\n

This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.

\n
\n\n

So if you want to feed in a sentence, you give a LongTensor of indices, each corresponding to a word in the vocabulary, which the nn.Embedding layer will map into word vectors going forward.

\n\n

Here's an illustration

\n\n
test_voc = [\"ok\", \"great\", \"test\"]\n# The word vectors for \"ok\", \"great\" and \"test\"\n# are at indices, 0, 1 and 2, respectively.\n\nmy_embedding = torch.rand(3, 50)\ne = nn.Embedding.from_pretrained(my_embedding)\n\n# LongTensor of indicies corresponds to a sentence,\n# reshaped to (1, 3) because batch size is 1\nmy_sentence = torch.tensor([0, 2, 1]).view(1, -1)\n\nres = e(my_sentence)\nprint(res.shape)\n# => torch.Size([1, 3, 50])\n# 1 is the batch dimension, and there's three vectors of length 50 each\n
\n\n

In terms of RNNs, next you can feed that tensor into your RNN module, e.g

\n\n
lstm = nn.LSTM(input_size=50, hidden_size=5, batch_first=True)\noutput, h = lstm(res)\nprint(output.shape)\n# => torch.Size([1, 3, 5])\n
\n\n

I also recommend you look into torchtext. It can automate some of the stuff you would otherwise have to do manually.

\n", + "system": "" + }, + { + "instruction": "How to Save pytorch tensor in append mode", + "input": "", + "output": "

It's probably not possible to append directly to the file; at least, I could not find documentation for it. In your example, however, a better approach is to append to a list and save once at the end.

\n\n
import torch\n\nlosses = []\nfor i in range(20):\n    #   ......\n    loss = criterion(scores, labels) \n    losses.append(loss.item())\n\n\ntorch.save(losses, 'loss.pt')\n
\n", + "system": "" + }, + { + "instruction": "How to use GPUs with Ray in Pytorch? Should I specify the num_gpus for the remote class?", + "input": "", + "output": "

If you also want to deploy the model on a gpu, you need to make sure that your actor or task indeed has access to a gpu (with @ray.remote(num_gpus=1), this will make sure that torch.cuda.is_available() will be true in that remote function). If you want to deploy your model on a CPU, you need to specify that when loading the model, see for example https://github.com/pytorch/pytorch/issues/9139.

\n", + "system": "" + }, + { + "instruction": "Why does gpytorch seem to be less accurate than scikit-learn?", + "input": "", + "output": "

(I also answer your question on the GitHub issue you created for it here)

\n\n

Primarily this happened because you used different models in sklearn and gpytorch. In particular, sklearn learns independent GPs in the multi-output setting by default (see e.g., the discussion here). In GPyTorch, you used the multitask GP method introduced in Bonilla et al, 2008. Correcting for this difference yields:

\n\n
\n

test_results_gpytorch = [5.207913e-04 -8.469360e-05]

\n \n

test_results_scitlearn = [3.65288816e-04 4.79017145e-05]

\n
\n", + "system": "" + }, + { + "instruction": "NumPy is faster than PyTorch for larger cross or outer products", + "input": "", + "output": "

Unfortunately there's really no way to specifically speed up torch's method of computing the outer product torch.ger() without a vast amount of effort.

\n\n

Explanation and Options

\n\n

The reason the numpy function np.outer() is so fast is that it's backed by C, which you can see here: https://github.com/numpy/numpy/blob/7e3d558aeee5a8a5eae5ebb6aef03de892a92ebd/numpy/core/numeric.py#L1123\nwhere the function uses operations from the umath C source code.

\n\n

Pytorch's torch.ger() function is written in C++ here: https://github.com/pytorch/pytorch/blob/7ce634ebc2943ff11d2ec727b7db83ab9758a6e0/aten/src/ATen/native/LinearAlgebra.cpp#L142 which makes it ever so slightly slower as you can see in your example.

\n\n

Your options to \"speed up computing outer product in PyTorch\" would be to add a C implementation for outer product in pytorch's native code, or make your own outer product function while interfacing with C using something like Cython if you really don't want to use numpy (which wouldn't make much sense).

\n\n

P.S.

\n\n

Also, just as an aside, using GPUs would only improve your parallel computation speed on the GPU, which may not outweigh the cost of the time required to transfer data between RAM and GPU memory.

\n", + "system": "" + }, + { + "instruction": "Use Pytorch SSIM loss function in my model", + "input": "", + "output": "

The usual way to transform a similarity (higher is better) into a loss is to compute 1 - similarity(x, y).

\n

To create this loss you can create a new "function".

\n
def ssim_loss(x, y):\n    return 1. - ssim(x, y)\n
\n

Alternatively, if the similarity is a class (nn.Module), you can overload it to create a new one.

\n
class SSIMLoss(SSIM):\n    def forward(self, x, y):\n        return 1. - super().forward(x, y)\n
\n

Also, there are better implementations of SSIM than the one in that repo. For example, the one in the piqa Python package is faster.\nThe package can be installed with

\n
pip install piqa\n
\n

For your problem

\n
from piqa import SSIM\n\nclass SSIMLoss(SSIM):\n    def forward(self, x, y):\n        return 1. - super().forward(x, y)\n\ncriterion = SSIMLoss() # .cuda() if you need GPU support\n\n...\nloss = criterion(x, y)\n...\n
\n

should work well.

\n", + "system": "" + }, + { + "instruction": "Normalising images before learning in pytorch", + "input": "", + "output": "

Run this code to get the values of mean, standard deviation, standard deviation with delta degrees of freedom = 1.

\n\n
import numpy as np\nimport torch\nimport torchvision.transforms as transforms\n\ntransform = transforms.Compose([\n    transforms.ToPILImage(),\n    transforms.ToTensor()\n])\n\ndataloader = torch.utils.data.DataLoader(*torch_dataset*, batch_size=4096, shuffle=False, num_workers=4)\n\ndata_mean = [] # Mean of the dataset\ndata_std0 = [] # std of dataset\ndata_std1 = [] # std with ddof = 1\nfor i, data in enumerate(dataloader, 0):\n    # shape (batch_size, 3, height, width)\n    numpy_image = data['image'].numpy()\n\n    # shape (3,)\n    batch_mean = np.mean(numpy_image, axis=(0,2,3))\n    batch_std0 = np.std(numpy_image, axis=(0,2,3))\n    batch_std1 = np.std(numpy_image, axis=(0,2,3), ddof=1)\n\n    data_mean.append(batch_mean)\n    data_std0.append(batch_std0)\n    data_std1.append(batch_std1)\n\n# shape (num_iterations, 3) -> (mean across 0th axis) -> shape (3,)\ndata_mean = np.array(data_mean).mean(axis=0)\ndata_std0 = np.array(data_std0).mean(axis=0)\ndata_std1 = np.array(data_std1).mean(axis=0)\n
\n", + "system": "" + }, + { + "instruction": "How to create Datasets Like MNIST in Pytorch?", + "input": "", + "output": "

There are two ways to go. First, the manual way. Torchvision.datasets states the following:

\n\n
\n

datasets are subclasses of torch.utils.data.Dataset i.e, they have __getitem__ and __len__ methods implemented. Hence, they can all be passed to a torch.utils.data.DataLoader which can load multiple samples parallelly using torch.multiprocessing workers.

\n
\n\n

So you can just implement your own class, which scans for all the images and labels, keeps a list of their paths (so that you don't have to keep them in RAM), and has a __getitem__ method which, given index i, reads the i-th file and its label and returns them. This minimal interface is enough to work with the parallel dataloader in torch.utils.data.
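A minimal sketch of such a class (the image loading is stubbed out with a placeholder tensor; a real implementation would read the file at self.paths[i]):

```python
import torch
from torch.utils.data import Dataset

class MyImageDataset(Dataset):
    def __init__(self, paths, labels):
        # keep only the file paths and labels in RAM, not the images
        self.paths = paths
        self.labels = labels

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        # placeholder: a real implementation would load the image at self.paths[i]
        img = torch.zeros(3, 28, 28)
        return img, self.labels[i]
```

An instance of this class can be passed directly to torch.utils.data.DataLoader, which handles batching and multiprocessing.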

\n\n

Secondly, if your data directory can be rearranged into either structure, you can use the DatasetFolder and ImageFolder pre-built loaders. This will save you some coding and automatically provide support for data augmentation routines from torchvision.transforms.

\n", + "system": "" + }, + { + "instruction": "Simple but disturbing problem about function "tensor.backward()" in PyTorch", + "input": "", + "output": "

It's the order in which you are running the commands.

\n\n

First you run backward on y and z. Then you run it on Y and Z, but Y and Z are based on y and z - which have already had backward run on them.

\n\n

You need to run backward on y and z first, then redo the forward calculation (re-creating the graph), and only then run backward on Y and Z.

\n\n
x = torch.ones(2,5,requires_grad=True)\ny = x + 2\nz = x*x\nY = torch.mean(y)\nZ = torch.mean(z)\n
\n\n
y.backward(torch.ones(2,5))\nz.backward(torch.ones(2,5))\n
\n\n

Then reset and redo the calculation:

\n\n
x = torch.ones(2,5,requires_grad=True)\ny = x + 2\nz = x*x\nY = torch.mean(y)\nZ = torch.mean(z)\n
\n\n
Y.backward()\nZ.backward()\n
\n", + "system": "" + }, + { + "instruction": "Matrix multiplication in pyTorch", + "input": "", + "output": "

Consider it step by step:

\n\n
x = torch.tensor([[10, 2], [3,5]])\ny = torch.tensor([[1,3], [5,6]])\n\nx * y\n# tensor([[10,  6],\n#         [15, 30]])\n\ntorch.sum(x*y)\n\n# tensor(61)\n\ntorch.matmul(x, y)\n# tensor([[20, 42],\n#         [28, 39]])\n
\n\n

So there is a difference between matmul and the * operator. Furthermore, torch.sum sums over the entire tensor, not row- or column-wise.

\n", + "system": "" + }, + { + "instruction": "PyTorch - Do overwritten variables stay in the graph?", + "input": "", + "output": "

I think it is ok to reset before backward. The graph preserves the required information.

\n\n
class A (torch.nn.Module):\n     def __init__(self):\n         super().__init__()\n         self.f1 = torch.nn.Linear(10,1)\n     def forward(self, x):\n         self.x = x \n         return torch.nn.functional.sigmoid (self.f1(self.x))\n     def reset_x (self):\n        self.x = torch.zeros(self.x.shape) \nnet = A()\nnet.zero_grad()\nX = torch.rand(10,10) \nloss = torch.nn.functional.binary_cross_entropy(net(X), torch.ones(10,1))\nloss.backward()\nparams = list(net.parameters())\nfor i in params: \n    print(i.grad)\nnet.zero_grad() \n\nloss = torch.nn.functional.binary_cross_entropy(net(X), torch.ones(10,1))\nnet.reset_x()\nprint (net.x is X)\ndel X\nloss.backward()     \nparams = list(net.parameters())\nfor i in params:\n    print(i.grad)\n
\n\n

In the code above, I print the grads with and without resetting the input x. The gradient certainly depends on x, yet resetting it doesn't change the result. Therefore, I think the graph preserves the information needed to do the backward op.

\n", + "system": "" + }, + { + "instruction": "Getting diagonal "stripe" from matrix in NumPy or PyTorch", + "input": "", + "output": "

stride_tricks do the trick:

\n\n
>>> import numpy as np\n>>> \n>>> def stripe(a):\n...    a = np.asanyarray(a)\n...    *sh, i, j = a.shape\n...    assert i >= j\n...    *st, k, m = a.strides\n...    return np.lib.stride_tricks.as_strided(a, (*sh, i-j+1, j), (*st, k, k+m))\n... \n>>> a = np.arange(24).reshape(6, 4)\n>>> a\narray([[ 0,  1,  2,  3],\n       [ 4,  5,  6,  7],\n       [ 8,  9, 10, 11],\n       [12, 13, 14, 15],\n       [16, 17, 18, 19],\n       [20, 21, 22, 23]])\n>>> stripe(a)\narray([[ 0,  5, 10, 15],\n       [ 4,  9, 14, 19],\n       [ 8, 13, 18, 23]])\n
\n\n

If a is an array this creates a writable view, meaning that if you feel so inclined you can do things like

\n\n
>>> stripe(a)[...] *= 10\n>>> a\narray([[  0,   1,   2,   3],\n       [ 40,  50,   6,   7],\n       [ 80,  90, 100,  11],\n       [ 12, 130, 140, 150],\n       [ 16,  17, 180, 190],\n       [ 20,  21,  22, 230]])\n
\n\n

UPDATE: bottom-left to top-right stripes can be obtained in the same spirit. Only minor complication: It is not based at the same address as the original array.

\n\n
>>> def reverse_stripe(a):\n...     a = np.asanyarray(a)\n...     *sh, i, j = a.shape\n...     assert i >= j\n...     *st, k, m = a.strides\n...     return np.lib.stride_tricks.as_strided(a[..., j-1:, :], (*sh, i-j+1, j), (*st, k, m-k))\n... \n>>> a = np.arange(24).reshape(6, 4)\n>>> reverse_stripe(a)\narray([[12,  9,  6,  3],\n       [16, 13, 10,  7],\n       [20, 17, 14, 11]])\n
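The same trick works in PyTorch via torch.as_strided; a sketch of the forward (top-left to bottom-right) stripe for 2-D tensors:

```python
import torch

def stripe_torch(a):
    # mirror of the NumPy stripe() above, for a 2-D tensor with i >= j
    i, j = a.shape
    k, m = a.stride()
    return torch.as_strided(a, (i - j + 1, j), (k, k + m))

a = torch.arange(24).reshape(6, 4)
print(stripe_torch(a))
```

As with the NumPy version, the result is a view, so in-place modifications write through to the original tensor.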
\n", + "system": "" + }, + { + "instruction": "CTC loss goes down and stops", + "input": "", + "output": "

I've been training with CTC loss and encountered the same problem. I know this is a rather late answer, but hopefully it'll help someone else who's researching this. After trial and error and a lot of research, there are a few things worth knowing when it comes to training with CTC (if your model is set up correctly):

\n
  1. The quickest way for the model to lower the cost is to predict only blanks. This is noted in a few papers and blogs: see http://www.tbluche.com/ctc_and_blank.html
  2. The model learns to predict only blanks first, then it starts picking up on the error signal with regard to the correct underlying labels. This is also explained in the above link. In practice, I noticed that my model starts to learn the real underlying labels/targets after a couple hundred epochs and the loss starts decreasing dramatically again. Similar to what is shown for the toy example here: https://thomasmesnard.github.io/files/CTC_Poster_Mesnard_Auvolat.pdf
  3. These parameters have a great impact on whether your model converges or not: learning rate, batch size and epoch number.
\n", + "system": "" + }, + { + "instruction": "Differentiate gradients", + "input": "", + "output": "

You just need to use the function torch.autograd.grad; it does exactly the same thing as tf.gradients.

\n\n

So in pytorch this would be:

\n\n
import torch\nfrom torch.autograd import grad\n\n\ndef gradient_descent(loss_fnc, w, max_its, lr):\n    '''a gradient descent \"RNN\"'''\n    for k in range(max_its):\n        # create_graph=True so we can differentiate through the update itself\n        w = w - lr * grad(loss_fnc(w), w, create_graph=True)[0]\n    return w\n\nlr = torch.zeros(1, requires_grad=True)\nw = torch.zeros(10, requires_grad=True)\nreg = torch.ones(1, requires_grad=True)\n\ndef loss_fnc(w):\n    return torch.sum((torch.ones(10) - w)**2) + reg * torch.sum(w**2)\n\nw_n = gradient_descent(loss_fnc, w, 10, lr)\n\n# differentiate through the gradient_descent RNN with respect to the initial weights\nprint(grad(w_n, w, grad_outputs=torch.ones_like(w_n), retain_graph=True))\n\n# differentiate through the gradient_descent RNN with respect to the learning rate\nprint(grad(w_n, lr, grad_outputs=torch.ones_like(w_n)))\n
\n", + "system": "" + }, + { + "instruction": "PyTorch nn.Linear layer output nan on well formed input and weights", + "input": "", + "output": "

For me, I was copying over code from the RNN name categorization example. I added the optimizer and criterion pattern whereas the example was doing things manually and updating weights by hand. I accidentally added a momentum value to the optimizer and that was what caused my problem. Setting momentum to the default value of 0 fixed it.

\n", + "system": "" + }, + { + "instruction": "how to deal with keras: erro:slice index 0 of dimension 0 out of bounds", + "input": "", + "output": "

Let's take a single observation and see what happens. A single observation has the following shape: (14, 49). \nAfter the first Conv1D layer, it will change to (11, 32) (kernel size of 4 and stride of 1). After the first MaxPooling1D layer, it will go to (3, 32) (since you didn't set strides, it will default to your pool size, which is 3). If we look at your second Conv1D layer, it has a kernel size equal to 4, which is greater than the size of the first dimension of your data.

\n\n

I recommend setting the first line to:

\n\n
x_input = Input(shape=(x_train.shape[-2],x_train.shape[-1]),name='input')\n
\n\n

This will allow you to see more easily how the shape of your input changes in each layer.
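The shape arithmetic above can be sanity-checked with two one-line helpers (a sketch assuming 'valid' padding, which is the Keras default):

```python
def conv1d_out(length, kernel, stride=1):
    # 'valid' padding: the kernel must fit entirely inside the input
    return (length - kernel) // stride + 1

def maxpool1d_out(length, pool):
    # Keras MaxPooling1D defaults strides to pool_size
    return conv1d_out(length, pool, stride=pool)

steps = 14
steps = conv1d_out(steps, 4)     # after Conv1D(kernel_size=4) -> 11
steps = maxpool1d_out(steps, 3)  # after MaxPooling1D(pool_size=3) -> 3
print(steps)  # 3, too small for another kernel_size=4 convolution
```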

\n", + "system": "" + }, + { + "instruction": "Feature selection on a keras model", + "input": "", + "output": "

I assume your Keras model is some kind of neural network, and with NNs in general it is hard to see which input features are relevant and which are not. The reason is that each input feature has multiple coefficients linked to it - each corresponding to one node of the first hidden layer. Adding additional hidden layers makes it even more complicated to determine how big an impact the input feature has on the final prediction.

\n\n

On the other hand, for linear models it is very straightforward since each feature x_i has a corresponding weight/coefficient w_i and its magnitude directly determines how big of an impact it has in prediction (assuming that features are scaled of course).

\n\n

The RFE estimator (Recursive feature elimination) assumes that your prediction model has an attribute coef_ (linear models) or feature_importances_(tree models) that has the length of input features and that it represents their relevance (in absolute terms).

\n\n

My suggestion:

\n\n
    \n
  1. Feature selection: (Option a) Run the RFE on any linear / tree model to reduce the number of features to some desired number n_features_to_select. (Option b) Use regularized linear models like lasso / elastic net that enforce sparsity. The problem here is that you cannot directly set the actual number of selected features. (Option c) Use any other feature selection technique from here.
  2. \n
  3. Neural Network: Use only features from (1) for your neural network.
  4. \n
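A minimal sketch of option 1(a), assuming scikit-learn is available; the dataset and the target of 10 selected features are made up for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# hypothetical example data; use your own feature matrix in practice
X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=5, random_state=0)

# recursively drop the weakest features until 10 remain,
# ranking them by the linear model's coefficients (coef_)
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
selector.fit(X, y)

X_selected = X[:, selector.support_]  # feed only these columns to the NN
print(X_selected.shape)  # (200, 10)
```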
\n", + "system": "" + }, + { + "instruction": "How does Keras ImageDataGenerator rescale parameter works?", + "input": "", + "output": "

I altered your example a little to plot the image and to print a pixel value. It seems that the image is automagically rescaled back when plotted, because I did not notice any difference between my input image and the plotted one. I assume the same happens when saving.

\n\n
from keras.preprocessing.image import load_img, img_to_array, ImageDataGenerator\nimport numpy as np\nfrom matplotlib import pyplot\n\nimg = load_img('capture102.jpg')\nimg_arr = np.expand_dims(img_to_array(img), axis=0)\ndatagen = ImageDataGenerator(rescale=1./255)\n\nfor batch in datagen.flow(img_arr, batch_size=1, save_to_dir='path/to/save', save_prefix='1_param', save_format='jpeg'):\n    print(batch[0][0][0])\n    pyplot.imshow(batch[0])\n    pyplot.show()\n    break\n
\n\n

The printed values are: [0.21960786 0.23529413 0.27058825]
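Those printed values are simply the original 8-bit pixel values divided by 255; e.g. 0.21960786 corresponds to a red value of 56 (the pixel below is inferred from the printout):

```python
import numpy as np

pixel = np.array([56, 60, 69], dtype=np.uint8)  # inferred RGB values
scaled = pixel * (1.0 / 255)                    # what rescale=1./255 computes
print(scaled)  # approximately [0.2196 0.2353 0.2706]
```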

\n", + "system": "" + }, + { + "instruction": "0% accuracy with evaluate_generator but 75% accuracy during training with same data - what is going on?", + "input": "", + "output": "

Two things I suspect.

\n\n

1 - No, your data is not the same.

\n\n

You're using three types of augmentation in ImageDataGenerator, and it seems there isn't a random seed being set. So, test data is not equal to training data.

\n\n

And as it seems, you're also training for only one epoch, which is very little (unless you really have tons of data, but since you're using augmentation, maybe that's not the case). (PS: I don't see the steps_per_epoch argument in your fit_generator call...)

\n\n

So, if you want to see good results, here are some solutions:

\n\n\n\n

2 - (This may happen if you're very new to Keras/programming, so please ignore if it's not the case) You might be running the code that defines the model again when testing.

\n\n

If you run the code that defines the model again, it will replace all your previous training with random weights.

\n\n

3 - Since we're out of suggestions:

\n\n

Maybe save the weights instead of saving the model. I usually do this instead of saving the models. (For some reason I don't understand, I've never been able to load a model like that)

\n\n
def createModel():\n    ....\n\nmodel = createModel()\n...\nmodel.fit_generator(....)\n\nnp.save('model_weights.npy',model.get_weights())\n\nmodel = createModel()\nmodel.set_weights(np.load('model_weights.npy'))\n...\nmodel.evaluate_generator(...)\n
\n\n
\n\n

Hint:

\n\n

It's not related to the bug, but make sure that the base model layer is really layer 0. If I remember well, sequential models have an input layer and you should actually be making layer 1 untrainable instead.

\n\n

Use the model.summary() to confirm the number of untrainable parameters.

\n", + "system": "" + }, + { + "instruction": "Keras training with batches: Is the training loss computed before or after each optimization step?", + "input": "", + "output": "

The loss is computed before the optimization step. The reason for this is efficiency and has to do with how back-propagation works.

\n\n

In particular, suppose we want to minimize ||A(x, z) - y||^2 w.r.t. z. Then when we perform back-propagation we need to evaluate this computational graph:

\n\n
A(x, z) -> grad ||. - y||^2 -> backpropagate\n
\n\n

Now, if we add an "evaluate loss" step to this and evaluate the loss before updating the parameters, the computational graph would look like this

\n\n
           >  grad ||. - y||^2 -> backpropagate\n         /\nA(x, z) \n         \\\n           >  ||. - y||^2\n
\n\n

On the other hand, if we evaluate the loss after updating them, the graph would look like this

\n\n
A(x, z) -> grad ||. - y||^2 -> backpropagate -> A(x, z) -> ||. - y||^2\n
\n\n

Hence, if we evaluate the loss after updating, we need to compute A(x, z) twice, whereas if we compute it before updating we only need to compute it once. Thus, computing the loss before updating is roughly twice as fast.
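The bookkeeping above can be sketched with a toy one-parameter model (plain Python; the model A and all the values are made up):

```python
A = lambda x, z: x * z           # toy "network" with parameter z
x, y, z, lr = 2.0, 10.0, 1.0, 0.05

pred = A(x, z)                   # one forward pass...
loss = (pred - y) ** 2           # ...yields the reported (pre-update) loss
grad_z = 2 * (pred - y) * x      # ...and the gradient, via the same graph
z -= lr * grad_z                 # parameter update

post_loss = (A(x, z) - y) ** 2   # would require a second forward pass
print(loss, post_loss)           # 64.0, then a smaller value
```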

\n", + "system": "" + }, + { + "instruction": "Train multiple neural nets in parallel on CPU in keras", + "input": "", + "output": "

There are some key points to making this work:

\n\n\n\n

Implementation:

\n\n

extend the python Process class:

\n\n\n\n
from keras.layers import Dense\nfrom keras.models import Sequential\nfrom multiprocessing import Process, Queue\nimport tensorflow as tf\n\nfrom train_val_set import TrainValSet\n\n\nclass NNProcess(Process):\n    def __init__(self, process_id: int, nr_nets: int, ret_queue: Queue):\n        super(NNProcess, self).__init__()\n        self.process_id = process_id\n        self.neural_nets = []\n        self.train_val_set = None\n        self.nr_nets = nr_nets\n        self.ret_queue = ret_queue\n\n    def set_train_val(self, train_val_set: TrainValSet):\n        self.train_val_set = train_val_set\n\n    def get_session_config(self):\n        num_cores = 1\n        num_CPU = 1\n        num_GPU = 0\n\n        config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,\n                                inter_op_parallelism_threads=num_cores, allow_soft_placement=False,\n                                device_count={'CPU': num_CPU, 'GPU': num_GPU})\n\n        return config\n\n    def run(self):\n        print(\"process \" + str(self.process_id) + \" starting...\")\n\n        with tf.Session(graph=tf.Graph(), config=self.get_session_config()) as session:\n            self.init_nets()\n            self.compile()\n            self.fit_nets(self.train_val_set)\n            for i in range(0, self.nr_nets):\n                file_name = self.neural_nets[i].name + \"_\" + str(i) + \".pickle\"\n                self.neural_nets[i].save(file_name)\n                self.ret_queue.put(file_name)\n        print(\"process \" + str(self.process_id) + \" finished.\")\n\n    def compile(self):\n        for neural_net in self.neural_nets:\n            neural_net.compile(loss='categorical_crossentropy',\n                          optimizer='sgd',\n                          metrics=['accuracy'])\n\n    def init_nets(self):\n        for i in range(0, self.nr_nets):\n            model = Sequential()\n            model.add(Dense(units=64, activation='relu', input_dim=100))\n            
model.add(Dense(units=10, activation='softmax'))\n            self.neural_nets.append(model)\n\n    def fit_nets(self, train_val_set: TrainValSet):\n        for i in range(0, self.nr_nets):\n            # pass the actual data; epochs/batch_size here are illustrative\n            self.neural_nets[i].fit(train_val_set.x_train, train_val_set.y_train,\n                                    validation_data=(train_val_set.x_val, train_val_set.y_val),\n                                    epochs=10, batch_size=128)\n
\n\n

Helper class:

\n\n\n\n
from pandas import DataFrame\n\n\nclass TrainValSet:\n    def __init__(self, df_train: DataFrame, df_val: DataFrame):\n        self.x_train, self.y_train = self.get_x_y(df_train)\n        self.x_val, self.y_val = self.get_x_y(df_val)\n\n    def get_x_y(self, df: DataFrame):\n        X = df.iloc[:, 0:-1].values\n        y = df.iloc[:, -1].values\n\n        return X, y\n
\n\n

main file:

\n\n\n\n
import pandas as pd\nfrom multiprocessing import Manager\nimport tensorflow as tf\nfrom keras import backend as K\n\nfrom train_val_set import TrainValSet\nfrom nn_process import NNProcess\n\n\ndef load_train_val_test_datasets(dataset_dir: str, dataset_name: str):\n    df_train = pd.read_csv(dataset_dir + dataset_name + \"/\" + dataset_name + \"_train.csv\", header=None)\n    df_val = pd.read_csv(dataset_dir + dataset_name + \"/\" + dataset_name + \"_val.csv\", header=None)\n    df_test = pd.read_csv(dataset_dir + dataset_name + \"/\" + dataset_name + \"_test.csv\", header=None)\n\n    return df_train, df_val, df_test\n\n\n# config for prediction and evaluation only\ndef get_session_config(num_cores):\n    num_CPU = 1\n    num_GPU = 0\n\n    config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,\n                            inter_op_parallelism_threads=num_cores, allow_soft_placement=True,\n                            device_count={'CPU': num_CPU, 'GPU': num_GPU})\n\n    return config\n\n\ndef train_test(nr_nets: int, nr_processes: int):\n    df_train, df_val, df_test = load_train_val_test_datasets('MNIST')\n    train_val_set = TrainValSet(df_train, df_val)\n    nets_per_proc = int(nr_nets/nr_processes)\n\n    nn_queue = Manager().Queue()\n\n    processes = []\n\n    for i in range(0, nr_processes):\n        nn_process = NNProcess(i, nets_per_proc, nn_queue)\n        nn_process.set_train_val(train_val_set)\n        processes.append(nn_process)\n\n    for nn_process in processes:\n        nn_process.start()\n\n    for nn_process in processes:\n        nn_process.join()\n\n    tf_session = tf.Session(config=get_session_config(4))\n    K.set_session(tf_session)\n\n    # ...\n    # load neural nets from files\n    # do predictions\n
\n", + "system": "" + }, + { + "instruction": "How to multiply Keras tensor by scalar?", + "input": "", + "output": "

You can use a Lambda layer for scalar multiplication and any other scalar manipulation

\n\n

Scalar Multiplication:

\n\n
res5 = Lambda(lambda x: x * 3)(res4)\n
\n\n

Scalar Addition:

\n\n
res5 = Lambda(lambda x: x + 4)(res4)\n
\n", + "system": "" + }, + { + "instruction": "Why doesn't Keras need the gradient of a custom loss function?", + "input": "", + "output": "

To my understanding, as long as each operator you use in your error function already has a predefined gradient, the underlying framework will manage to calculate the gradient of your loss function.

\n", + "system": "" + }, + { + "instruction": "How to wrap a custom TensorFlow loss function in Keras?", + "input": "", + "output": "

I answered my own question. I'm posting the solution for anyone who may come across this same problem.

\n\n

I tried using my TF loss function directly in Keras, as was independently suggested by Matias Valdenegro. I did not provoke any errors from Keras by doing so, however, the loss value went immediately to NaN.

\n\n

Eventually I identified the problem. The calling convention for a Keras loss function is first y_true (which I called tgt), then y_pred (my pred). But the calling convention for a TensorFlow loss function is pred first, then tgt. So if you want to keep a Tensorflow-native version of the loss function around, this fix works:

\n\n
def keras_l2_angle_distance(tgt, pred):\n    return l2_angle_distance(pred, tgt)\n\n<snip>\n\nmodel.compile(loss = keras_l2_angle_distance, optimizer = \"something\")\n
\n\n

Maybe Theano or CNTK uses the same parameter order as Keras, I don't know. But I'm back in business.
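The same swap can be illustrated framework-free; l2_distance below is a stand-in for the real l2_angle_distance:

```python
def l2_distance(pred, tgt):
    # TensorFlow-style order: predictions first, targets second
    return sum((p - t) ** 2 for p, t in zip(pred, tgt))

def keras_l2_distance(tgt, pred):
    # Keras-style order (y_true, y_pred): just swap before delegating
    return l2_distance(pred, tgt)

print(keras_l2_distance([1.0, 2.0], [1.5, 2.0]))  # 0.25
```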

\n", + "system": "" + }, + { + "instruction": "How does Keras calculate the accuracy?", + "input": "", + "output": "

You are a little confused here; you speak about accuracy, while showing the formula for the loss.

\n\n

The equation you show is indeed the cross-entropy loss formula for binary classification (or simply logistic loss).

\n\n

y[i] are the labels, which are indeed either 0 or 1.

\n\n

p[i] are the predictions, usually interpreted as probabilities, which are real numbers in [0,1] (without any rounding).

\n\n

Now for each i, only one term in the sum will survive - the first term vanishes when y[i] = 0, and similarly the second term vanishes when y[i] = 1.

\n\n

Let's see some examples:

\n\n

Suppose that y[0] = 1, while we have predicted p[0] = 0.99 (i.e. a rather good prediction). The second term of the sum vanishes (since 1 - y[0] = 0), while the first one becomes log(0.99) = -0.01; so, the contribution of this sample prediction (i=0) to the overall loss is 0.01 (due to the - sign in front of the sum).

\n\n

Suppose now that the true label of the next sample is again 1, i.e. y[1] = 1, but here we have made a rather poor prediction of p[1] = 0.1; again, the second term vanishes, and the contribution of this prediction to the overall loss is now -log(0.1) = 2.3, which is indeed greater than our first, good prediction, as we should expect intuitively.

\n\n

As a final example, let's suppose that y[2] = 0, and we have made a perfectly good prediction here of p[2] = 0; hence, the first term vanishes, and the second term becomes

\n\n
(1 - y[2]) * log(1 - p[2]) = 1 * log(1) = log(1) = 0\n
\n\n

i.e. we have no loss contributed, again as we intuitively expected, since we have made a perfectly good prediction here for i=2.

\n\n

The logistic loss formula simply computes all these errors of the individual predictions, sums them, and divides by their number n.

\n\n

Nevertheless, this is the loss (i.e. scores[0] in your snippet), and not the accuracy.

\n\n

Loss and accuracy are different things; roughly speaking, the accuracy is what we are actually interested in from a business perspective, while the loss is the objective function that the learning algorithms (optimizers) are trying to minimize from a mathematical perspective. Even more roughly speaking, you can think of the loss as the \"translation\" of the business objective (accuracy) to the mathematical domain, a translation which is necessary in classification problems (in regression ones, usually the loss and the business objective are the same, or at least can be the same in principle, e.g. the RMSE)...

\n\n
\n

Will Keras automatically round our predictions to 0 or 1?

\n
\n\n

Actually yes: to compute the accuracy, we implicitly set a threshold in the predicted probabilities (usually 0.5 in binary classification, but this may differ in the case of highly imbalanced data); so, in model.evaluate, Keras actually converts our predictions to 1 if p[i] > 0.5 and to 0 otherwise. Then, the accuracy is computed by simply counting the cases where y_true==y_pred (correct predictions) and dividing by the total number of samples, to give a number in [0,1].
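The worked examples above condense into a short numeric check (plain numpy, values copied from the examples):

```python
import numpy as np

y_true = np.array([1, 1, 0])
p_pred = np.array([0.99, 0.1, 0.0])

eps = 1e-12  # guard against log(0)
loss = -np.mean(y_true * np.log(p_pred + eps)
                + (1 - y_true) * np.log(1 - p_pred + eps))

# accuracy: threshold the probabilities at 0.5, then count matches
acc = np.mean((p_pred > 0.5).astype(int) == y_true)
print(round(loss, 3), round(acc, 3))  # 0.771 0.667
```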

\n\n

So, to summarize:

\n\n\n", + "system": "" + }, + { + "instruction": "extracting Bottleneck features using pretrained Inceptionv3 - differences between Keras' implementation and Native Tensorflow implementation", + "input": "", + "output": "

Different results between 1 and 2

\n

Since you haven't shown your code, I (maybe wrongly) suspect that the problem is that you might not have used preprocess_input when declaring the ImageDataGenerator:

\n
from keras.applications.inception_v3 import preprocess_input\n\ntrain_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) \n
\n

Make sure, though, that your saved image files range from 0 to 255. (Bit depth 24).

\n

Different shapes between 1 and 3

\n

There are three possible types of model in this case:

\n\n

So, your declared model without an explicit pooling=something doesn't have the final pooling layer in keras. Then the outputs will still have the spatial dimensions.

\n

Solve that simply by adding a pooling at the end. One of these:

\n
pretrained_model = InceptionV3(include_top=False, pooling = 'avg', weights='imagenet', input_shape=(img_width, img_height, 3))\npretrained_model = InceptionV3(include_top=False, pooling = 'max', weights='imagenet', input_shape=(img_width, img_height, 3))\n
\n

Not sure which one the model in the tgz file is using.

\n

As an alternative, you can also get another layer from the Tensorflow model, the one coming immediately before 'pool_3'.

\n", + "system": "" + }, + { + "instruction": "How to do multi-class image classification in keras?", + "input": "", + "output": "

For multi-class classification, the last dense layer must have a number of nodes equal to the number of classes, followed by softmax activation, i.e. the last two layers of your model should be:

\n\n
model.add(Dense(num_classes))\nmodel.add(Activation('softmax'))\n
\n

Additionally, your labels (both train and test) must be one-hot encoded; so, assuming that your initial cats and dogs were labeled as integers (0/1), and your new category (airplane) is initially similarly labeled as '2', you should convert them as follows:

\n
train_labels = keras.utils.to_categorical(train_labels, num_classes)\ntest_labels = keras.utils.to_categorical(test_labels, num_classes)\n
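For reference, this is what to_categorical produces, sketched in plain numpy (the helper below is a simplified stand-in, not the Keras implementation):

```python
import numpy as np

def to_categorical(labels, num_classes):
    # one row per label, with a single 1.0 in the label's column
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(to_categorical([0, 1, 2, 1], 3))
```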
\n

Finally, on a terminology level, what you are doing is multi-class, and not multi-label classification (I have edited the title of your post) - the last term is used for problems where a sample might belong to more than one categories at the same time.

\n", + "system": "" + }, + { + "instruction": "Understanding Seq2Seq model", + "input": "", + "output": "
\n
    \n
  1. Are we passing the last hidden state only to the blue LSTMs as the initial hidden state. Or is it last hidden state and cell memory.
  2. \n
\n
\n

Both hidden state h and cell memory c are passed to the decoder.

\n

TensorFlow

\n

In seq2seq source code, you can find the following code in basic_rnn_seq2seq():

\n
_, enc_state = rnn.static_rnn(enc_cell, encoder_inputs, dtype=dtype)\nreturn rnn_decoder(decoder_inputs, enc_state, cell)\n
\n

If you use an LSTMCell, the returned enc_state from the encoder will be a tuple (c, h). As you can see, the tuple is passed directly to the decoder.

\n

Keras

\n

In Keras, the "state" defined for an LSTMCell is also a tuple (h, c) (note that the order is different from TF). In LSTMCell.call(), you can find:

\n
    h_tm1 = states[0]\n    c_tm1 = states[1]\n
\n

To get the states returned from an LSTM layer, you can specify return_state=True. The returned value is a tuple (o, h, c). The tensor o is the output of this layer, which will be equal to h unless you specify return_sequences=True.

\n
\n
    \n
  1. Is there a way to set the initial hiddden state and cell memory in Keras or Tensorflow? If so reference?
  2. \n
\n
\n

TensorFlow\n\nJust provide the initial state to an LSTMCell when calling it. For example, in the official RNN tutorial:

\n
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n...\n    output, state = lstm(current_batch_of_words, state)\n
\n

There's also an initial_state argument for functions such as tf.nn.static_rnn. If you use the seq2seq module, provide the states to rnn_decoder as have been shown in the code for question 1.

\n

Keras

\n

Use the keyword argument initial_state in the LSTM function call.

\n
out = LSTM(32)(input_tensor, initial_state=(h, c))\n
\n

You can actually find this usage on the official documentation:

\n
\n

Note on specifying the initial state of RNNs

\n

You can specify the initial state of RNN layers symbolically by\ncalling them with the keyword argument initial_state. The value of\ninitial_state should be a tensor or list of tensors representing the\ninitial state of the RNN layer.

\n
\n
\n

EDIT:

\n

There's now an example script in Keras (lstm_seq2seq.py) showing how to implement basic seq2seq in Keras. How to make prediction after training a seq2seq model is also covered in this script.

\n", + "system": "" + }, + { + "instruction": "Proper way to save Transfer Learning model in Keras", + "input": "", + "output": "

As per the official Keras docs,\nif you only need to save the architecture of a model you can use

\n\n
model_json = model.to_json()\nwith open(\"model_arch.json\", \"w\") as json_file:\n    json_file.write(model_json)\n
\n\n

To save weights

\n\n
model.save_weights(\"my_model_weights.h5\")\n
\n\n

You can later load the json file and use

\n\n
from keras.models import model_from_json\nmodel = model_from_json(json_string)\n
\n\n

And similarly, for weights you can use

\n\n
model.load_weights('my_model_weights.h5')\n
\n\n

I am using the same approach and this works perfectly well.

\n", + "system": "" + }, + { + "instruction": "Keras misinterprets training data shape", + "input": "", + "output": "

(Edited, according to OP's comment on this question, where they posted this link: https://github.com/fchollet/keras/issues/1920)

\n
\n

Your X is not a single numpy array; it's an array of arrays. (Otherwise its shape would be X.shape = (35730, 513, 15).)

\n

It must be a single numpy array for the fit method. Since you have a variable length, you cannot have a single numpy array containing all your data, you will have to divide it in smaller arrays, each array containing data with the same length.

\n

For that, you should maybe create a dictionary by shape, and loop the dictionary manually (there may be other better ways to do this...):

\n
#code in python 3.5\nxByShapes = {}\nyByShapes = {}\nfor itemX,itemY in zip(X,Y):\n    if itemX.shape in xByShapes:\n        xByShapes[itemX.shape].append(itemX)\n        yByShapes[itemX.shape].append(itemY)\n    else:\n        xByShapes[itemX.shape] = [itemX] #initially a list, because we're going to append items\n        yByShapes[itemX.shape] = [itemY]\n
\n

At the end, you loop this dictionary for training:

\n
for shape in xByShapes:\n    model.fit(\n              np.asarray(xByShapes[shape]), \n              np.asarray(yByShapes[shape]),...\n              )\n
\n
\n

Masking

\n

Alternatively, you can pad your data so all samples have the same length, using zeros or some dummy value.

\n

Then before anything in your model you can add a Masking layer that will ignore these padded segments. (Warning: some types of layer don't support masking)
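A numpy-only sketch of the padding step (the Masking layer itself is omitted; the 15-feature, max-length-12 setup is an assumed example):

```python
import numpy as np

def pad_to(samples, maxlen, n_features, value=0.0):
    """Right-pad each (T_i, n_features) sample with `value` up to maxlen steps."""
    out = np.full((len(samples), maxlen, n_features), value, dtype='float32')
    for i, s in enumerate(samples):
        out[i, :min(len(s), maxlen), :] = s[:maxlen]
    return out

samples = [np.random.rand(t, 15) for t in (5, 9, 12)]  # variable lengths
X = pad_to(samples, maxlen=12, n_features=15)
print(X.shape)  # (3, 12, 15) -- a single array, ready for fit()
```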

\n", + "system": "" + }, + { + "instruction": "keras load_model raise error when executed a second time", + "input": "", + "output": "

You can create a new session and load the model into it.

\n\n
from keras.models import load_model\nimport keras\n\ndef network(campaign):\n    with keras.backend.get_session().graph.as_default():\n        model = load_model(\"sunshade/neural_network/model.h5\")\n        label = model.predict(images, batch_size = 128, verbose = 1)\n
\n", + "system": "" + }, + { + "instruction": "Why does a binary Keras CNN always predict 1?", + "input": "", + "output": "

It always predicts one because of the output in your network. You have a Dense layer with one neuron, with a Softmax activation. Softmax normalizes by the sum of exponential of each output. Since there is one output, the only possible output is 1.0.

\n\n

For a binary classifier you can either use a sigmoid activation with the \"binary_crossentropy\" loss, or put two output units at the last layer, keep using softmax and change the loss to categorical_crossentropy.
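The first point is easy to verify numerically: softmax normalizes by the sum of exponentials, and with a single output that sum is the output's own exponential, so the result is always 1.0.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

# with one logit, the output is 1.0 no matter what the logit is
for logit in (-3.7, 0.0, 42.0):
    print(softmax(np.array([logit])))  # [1.]
```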

\n", + "system": "" + }, + { + "instruction": "Keras Tensorboard callback not writing images", + "input": "", + "output": "

You need to add histogram_freq=x, where x should be different than zero, so that the writing of images is enabled.

\n\n

But if you do this, it might still fail, depending on the version of Keras (see https://github.com/fchollet/keras/issues/6096)

\n", + "system": "" + }, + { + "instruction": "Is it possible to save a trained layer to use layer on Keras?", + "input": "", + "output": "

Yes, it is.

\n\n

You will probably have to save the layer's weights and biases instead of saving the layer itself, but it's possible.

\n\n

Keras also allows you to save entire models.

\n\n

Suppose you have a model in the var model:

\n\n
weightsAndBiases = model.layers[i].get_weights()\n
\n\n

This is a list of numpy arrays, most probably with two arrays: weights and biases. You can simply use numpy.save() to save these two arrays, and later you can create a similar layer and give it the weights:

\n\n
from keras.layers import *\nfrom keras.models import Model\n\ninp = Input(....)    \nout1 = SomeKerasLayer(...)(inp)  \nout2 = AnotherKerasLayer(....)(out1)\n.... \nmodel = Model(inp,out2)\n#above is the usual process of creating a model    \n\n#supposing layer 2 is the layer you want (you can also use names)    \n\nweights = numpy.load(...path to your saved weights)    \nbiases = numpy.load(... path to your saved biases)\nmodel.layers[2].set_weights([weights,biases])\n
\n\n

You can make layers untrainable (must be done before the model compilation):

\n\n
model.layers[2].trainable = False    \n
\n\n

Then you compile the model:

\n\n
model.compile(.....)    \n
\n\n

And there you go, a model, whose one layer is untrainable and has weights and biases defined by you, taken from somewhere else.

\n", + "system": "" + }, + { + "instruction": "Wrapping Tensorflow For Use in Keras", + "input": "", + "output": "

The newer version of Keras uses tf.keras.layers.AdditiveAttention(). This should work off the shelf.

\n

Alternatively a custom Bahdanau layer can be written as shown in half a dozen lines of code: Custom Attention Layer using in Keras

\n", + "system": "" + }, + { + "instruction": "Keras VGG16 fine tuning", + "input": "", + "output": "

I think that the weights described by the VGG net do not fit your model, and the error stems from this. Anyway, there is a better way to do this using the network itself, as described in (https://keras.io/applications/#vgg16).

\n\n

You can just use:

\n\n
base_model = keras.applications.vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None, input_shape=None)\n
\n\n

to instantiate a vgg net that is pre-trained. Then you can freeze the layers and use the model class to instantiate your own model like this:

\n\n
x = base_model.output\nx = Flatten()(x)\nx = Dense(your_classes, activation='softmax')(x) #minor edit\nnew_model = Model(input=base_model.input, output=x)\n
\n\n

To combine the bottom and the top network you can use the following code snippet. The following functions are used (Input Layer (https://keras.io/getting-started/functional-api-guide/) / load_model (https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model) and the functional API of keras):

\n\n
final_input = Input(shape=(3, 224, 224))\nbase_model = vgg...\ntop_model = load_model(weights_file)\n\nx = base_model(final_input)\nresult = top_model(x)\nfinal_model = Model(input=final_input, output=result)\n
\n", + "system": "" + }, + { + "instruction": "Removing dimension using reshape in keras?", + "input": "", + "output": "

I wanted to remove all dimensions that are equal to 1, but not specify a specific size with Reshape so that my code does not break if I change the input size or number of kernels in a convolution. This works with the functional keras API on a tensorflow backend.

\n\n
from keras.layers.core import Reshape\n\nold_layer = Conv2D(#actualArguments) (older_layer)\n#old_layer yields, e.g., a (None, 15,1,36) size tensor, where None is the batch size\n\nnewdim = tuple([x for x in old_layer.shape.as_list() if x != 1 and x is not None])\n#newdim is now (15, 36). Reshape does not take batch size as an input dimension.\nreshape_layer = Reshape(newdim) (old_layer)\n
\n", + "system": "" + }, + { + "instruction": "Keras multi-class prediction output is limited to one class", + "input": "", + "output": "

The easiest way to rebalance your loss is to:

\n\n
    \n
  1. Define a class_weights dict:

    \n\n
    class_weights_dict = {0: 1.0 / class_0_freq, 1: 1.0 / class_1_freq, ... }\n
    \n\n

    Where class_i_freq is the frequency of the i-th class.

  2. \n
  3. Change your fit function to:

    \n\n
    model.fit(X, Y, epochs=500,  verbose=1, class_weight=class_weights_dict)\n
  4. \n
\n\n

The model defined above should be equivalent to your model with Bayesian reweighted classes.
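A sketch of building that dict from label counts, using inverse relative frequency as the weight (toy labels for illustration):

```python
import numpy as np
from collections import Counter

y = np.array([0, 0, 0, 0, 1, 1, 2])  # imbalanced toy labels
counts = Counter(y.tolist())
n = len(y)

# weight each class by n / count, i.e. 1 / relative_frequency
class_weights_dict = {c: n / cnt for c, cnt in counts.items()}
print(class_weights_dict)  # {0: 1.75, 1: 3.5, 2: 7.0}
```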

\n", + "system": "" + }, + { + "instruction": "Keras : training with an array as an input", + "input": "", + "output": "

If your final output has 3 dimensions your last Dense layer has to have 3 columns.

\n\n

You do so with

\n\n
model.add(Dense(3))\n
\n\n

If your input has 5 features, your first Dense weight matrix will have 5 rows.

\n\n

That's done with input_dim parameter in your first Dense layer.

\n\n
model.add(Dense(10, input_dim=2, activation='relu'))\n
\n\n

You can see now that this first dense layer's matrix is a 2-row-by-10-column matrix. Since the next matrix will necessarily have 10 rows (so that dimensions match), in the next layer you only have to define the number of columns. Hence you pass a single dimension argument to every following layer.

\n\n

Since it's a regression problem, using 'mse' as loss should work (as you correctly selected).

\n\n

Your dataset in this situation should be two matrices:

\n\n\n\n

In both situations you should have the number of lines to be the same: the total number of samples N.

\n", + "system": "" + }, + { + "instruction": "Defining a stride of (2, 2) in Keras -- what is the second value?", + "input": "", + "output": "

Indeed the two values are related to the two dimensions x and y along which we move the filters. \nYou will move the filters along both dimensions, so you need to know the strides that you want to use. The thing you are misinterpreting is that the filters won't move by increasing both x and y values at the same time. They will be incremented one by one, so first you will go along the x axis 2 by 2, without changing y, then you will go down y by one step of 2 and start over along x, and so on...

\n\n

So it just gives you flexibility on the step sizes you want to make vertically and horizontally. In cases where you are working with simple pictures it makes sense to use the same steps, (2,2) or (3,3) for example. But you never know what this tool can be used for, so they made it general in case someone wants to subsample more on one axis than the other.
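A toy enumeration of where a filter's top-left corner lands with strides (2, 2), assuming an 8x8 input and a 2x2 filter (pure Python):

```python
H = W = 8            # input size
K = 2                # filter size
sy, sx = 2, 2        # strides along y and x

# first sweep along x with y fixed, then step y down, and so on
positions = [(r, c) for r in range(0, H - K + 1, sy)
                    for c in range(0, W - K + 1, sx)]
print(positions[:4])   # [(0, 0), (0, 2), (0, 4), (0, 6)]
print(len(positions))  # 16 -> a 4x4 output map
```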

\n\n

Does that answer your question?

\n", + "system": "" + }, + { + "instruction": "How to retrieve the filename of an image with keras flow_from_directory shuffled method?", + "input": "", + "output": "

Internally, the DirectoryIterator will iterate over an index_array which is shuffled when you ask it to.

\n\n

You just need to index the filename array using the current indexes of the batch:

\n\n
it = ImageDataGenerator().flow_from_directory(shuffle=True,...)\nfor img in it:\n    idx = (it.batch_index - 1) * it.batch_size\n    fnames = [it.filenames[it.index_array[i]] for i in range(idx, idx + it.batch_size)]\n
\n", + "system": "" + }, + { + "instruction": "Keras: ValueError: decode_predictions expects a batch of predictions", + "input": "", + "output": "

You should change the first line to:

\n\n
model = VGG16(weights='imagenet', include_top=True)\n
\n\n
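The shape difference is easy to see in NumPy terms (batch size 1 is illustrative): decode_predictions requires a 2-D array of ImageNet class probabilities, which only the full model with its classifier head produces.

```python
import numpy as np

with_top = np.zeros((1, 1000))           # include_top=True: 1000 class scores
without_top = np.zeros((1, 7, 7, 512))   # include_top=False: raw feature maps

# decode_predictions needs a 2-D (samples, 1000) array
assert with_top.ndim == 2 and with_top.shape[1] == 1000
assert without_top.ndim != 2  # this mismatch is what triggers the ValueError
```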

Without this line your model produces 512 feature maps of size 7 x 7 pixels instead of the (samples, 1000) class predictions that decode_predictions expects. This is the reason behind your error.

\n", + "system": "" + }, + { + "instruction": "How can I use categorical one-hot labels for training with Keras?", + "input": "", + "output": "

There are a lot of things that do not add up.

\n\n

I assume that you are trying to solve a sequential classification task, i.e. your data is shaped as (<batch size>, <sequence length>, <feature length>).

\n\n

In your batch generator you create a batch consisting of one sequence of length 8038 and 463 features per sequence element. You create a matching Y batch to compare against, consisting of one sequence with 8038 elements, each of size 2.

\n\n

Your problem is that Y does not match up with the output of the last layer. Your Y is 3-dimensional while the output of your model is only 2-dimensional: Y.shape = (1, 8038, 2) does not match dense_1.shape = (1,1). This explains the error message you get.

\n\n

The solution to this: you need to enable return_sequences=True in the LSTM layer to return the full sequence instead of only the last element (returning only the last element would effectively remove the time dimension). This gives an output shape of (1, 8038, 100) at the LSTM layer. Since the Dense layer is not able to handle sequential data, you need to apply it to each sequence element individually, which is done by wrapping it in a TimeDistributed wrapper. This then gives your model the output shape (1, 8038, 1).

\n\n

Your model should look like this:

\n\n
from keras.layers.wrappers import TimeDistributed\n\nmodel = Sequential()\nmodel.add(LSTM(100, input_dim=num_features, return_sequences=True))\nmodel.add(TimeDistributed(Dense(1, activation='sigmoid')))\n
\n\n

This can be easily spotted when examining the summary of the model:

\n\n
print(model.summary()) \n
\n", + "system": "" + }, + { + "instruction": "scikit-learn - Convert pipeline prediction to original value/scale", + "input": "", + "output": "

Exactly. The StandardScaler() in a pipeline only maps the inputs (trainX) of pipeline.fit(trainX, trainY).

\n\n

So, if you fit your model to approximate trainY and you need it to be standardized as well, you should map your trainY as

\n\n
scalerY = StandardScaler().fit(trainY)  # fit y scaler\npipeline.fit(trainX, scalerY.transform(trainY))  # fit your pipeline to scaled Y\ntestY = scalerY.inverse_transform(pipeline.predict(testX))  # predict and rescale\n
\n\n

The inverse_transform() function maps its values considering the standard deviation and mean calculated in StandardScaler().fit().

\n\n
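What inverse_transform does is simply undo the z-scoring using the mean and standard deviation stored at fit time; in plain NumPy terms (toy numbers for illustration):

```python
import numpy as np

train_y = np.array([10.0, 12.0, 14.0, 18.0])
mean, std = train_y.mean(), train_y.std()

scaled = (train_y - mean) / std       # what StandardScaler.transform does
restored = scaled * std + mean        # what inverse_transform does

assert np.allclose(restored, train_y) # round-trip recovers the original scale
```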

You can always fit your model without scaling Y, as you mentioned, but this can be dangerous depending on your data since it can lead your model to overfit. You have to test it ;)

\n", + "system": "" + }, + { + "instruction": "How do I implement the Triplet Loss in Keras?", + "input": "", + "output": "

This issue explains how to create a custom objective (loss) in Keras:

\n\n
def dummy_objective(y_true, y_pred):\n    return 0.5  # your implem of tripletLoss here\n\nmodel.compile(loss=dummy_objective, optimizer='adadelta')\n
\n\n
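For reference, the triplet loss from the FaceNet paper can be sketched in plain NumPy before wiring it into a Keras objective (the margin value below is an arbitrary example, not from the paper's hyperparameters):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # squared Euclidean distances anchor->positive and anchor->negative
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # hinge: push the negative at least `margin` farther away than the positive
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([1.0, 1.0])   # far from the anchor
assert triplet_loss(a, p, n) == 0.0   # already separated by more than margin
assert triplet_loss(a, n, p) > 0.0    # a violating triplet is penalized
```

A Keras version would compute the same quantity from y_pred (with the three embeddings packed together) inside the custom objective above.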

Regarding the y parameter of .fit(), since you are the one handling it in the end (the y_true parameter of the objective function is taken from it), I would say you can pass whatever you need that can fit through Keras plumbing. And maybe a dummy vector to pass dimension checks if you really don't need any supervision.

\n\n

Finally, as to how to implement this particular paper: searching for triplet or facenet in the Keras docs didn't return anything, so you'll probably have to implement it yourself or find someone who has.

\n", + "system": "" + }, + { + "instruction": "PyCharm cannot find installed packages: keras", + "input": "", + "output": "

This is strange, but you can install Keras directly through PyCharm.

\n\n

You can follow these steps:

\n\n\n\n

Please let me know if this procedure solves your issue.

\n", + "system": "" + }, + { + "instruction": "Mean or max pooling with masking support in Keras", + "input": "", + "output": "

Jacoxu's answer is right. But if you are using a TensorFlow backend for Keras, the Tensor type doesn't support the dimshuffle function; try this instead.

\n\n
def call(self, x, mask=None):\n    if mask is not None:\n        # mask (batch, time)\n        mask = K.cast(mask, K.floatx())\n        # mask (batch, x_dim, time)\n        mask = K.repeat(mask, x.shape[-1])\n        # mask (batch, time, x_dim)\n        mask = tf.transpose(mask, [0,2,1])\n        x = x * mask\n    return K.sum(x, axis=1) / K.sum(mask, axis=1)\n
\n", + "system": "" + }, + { + "instruction": "Keras BFGS training using Scipy minimize", + "input": "", + "output": "
\n

Is it because I didn't input the gradient to minimize, and it cannot calculate the numerical approximation in this case?

\n
\n\n

It's because you don't output the gradients, so scipy approximates them by numerical differentiation. That is, it evaluates the function at X, then at X + epsilon, to approximate the local gradient.

\n\n
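This numerical differentiation is exactly where float32 precision bites: scipy's default finite-difference step (about 1.5e-8) simply vanishes when added to a float32 value, whereas a step of 1e-6 survives (float32 has a machine epsilon of about 1.2e-7):

```python
import numpy as np

x = np.float32(1.0)

# scipy's default BFGS step (~1.49e-8) is lost entirely in float32:
# x + step rounds back to x, so the objective never appears to change
assert x + np.float32(1.49e-8) == x

# a step of 1e-6 is representable, so the objective actually changes
assert x + np.float32(1e-6) != x
```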

But the epsilon is so small that, after the conversion to 32-bit for Theano, the change is completely lost. The starting guess is not in fact a minimum; scipy just thinks so, since it sees no change in the value of the objective function. You simply need to increase the epsilon, like so:

\n\n
V = [1.0, 2.0, 3.0, 4.0, 1.0]\nprint('Starting loss = {}'.format(loss(V)))\n# set the eps option to increase the epsilon used in numerical diff\nres = minimize(loss, x0=V, method = 'BFGS', options={'eps':1e-6,'disp':True})\nprint('Ending loss = {}'.format(loss(res.x)))\n
\n\n

Which gives:

\n\n
Using Theano backend.\nStarting loss = 2.49976992001\nOptimization terminated successfully.\n         Current function value: 1.002703\n         Iterations: 19\n         Function evaluations: 511\n         Gradient evaluations: 73\nEnding loss = 1.00270344184\n
\n", + "system": "" + }, + { + "instruction": "Simple Recurrent Neural Network input shape", + "input": "", + "output": "

The input format should be three-dimensional: the three components represent sample size, number of time steps and output dimension.

\n\n

Once appropriately reformatted the RNN does indeed manage to predict the target sequence well.

\n\n
np.random.seed(1337)\n\nsample_size = 256\nx_seed = [1, 0, 0, 0, 0, 0]\ny_seed = [1, 0.8, 0.6, 0, 0, 0]\n\nx_train = np.array([[x_seed] * sample_size]).reshape(sample_size,len(x_seed),1)\ny_train = np.array([[y_seed]*sample_size]).reshape(sample_size,len(y_seed),1)\n\nmodel=Sequential()\nmodel.add(SimpleRNN(input_dim  =  1, output_dim = 50, return_sequences = True))\nmodel.add(TimeDistributed(Dense(output_dim = 1, activation  =  \"sigmoid\")))\nmodel.compile(loss = \"mse\", optimizer = \"rmsprop\")\nmodel.fit(x_train, y_train, nb_epoch = 10, batch_size = 32)\n\nprint(model.predict(np.array([[[1],[0],[0],[0],[0],[0]]])))\n#[[[ 0.87810659]\n#[ 0.80646527]\n#[ 0.61600274]\n#[ 0.01652312]\n#[ 0.00930419]\n#[ 0.01328572]]]\n
\n", + "system": "" + }, + { + "instruction": "How to store result of an operation (like TOPK) per epoch in keras", + "input": "", + "output": "

I think you could save the variable using a list-tracking Checkpoint.

\n

You need to add code to the training procedure, so you have to write your own training loop and save the variable at the end of each epoch.

\n
def fit_and_save_log(self, train_X, val_X, nb_epoch=50, batch_size=100, contractive=None):\n    import tensorflow as tf\n    optimizer = Adam(lr=0.0005)\n\n    self.autoencoder.compile(optimizer=optimizer, loss='binary_crossentropy') # kld, binary_crossentropy, mse   \n    \n    save = tf.train.Checkpoint()\n    save.listed = []\n    \n    # Prepare dataset\n    X, y = train_X\n    train_ds = tf.data.Dataset.from_tensor_slices((X, y))\n    train_ds = train_ds.shuffle(10000)\n    train_ds = train_ds.batch(batch_size)\n    iterator = train_ds.make_initializable_iterator()\n    next_batch = iterator.get_next()\n    sess = tf.keras.backend.get_session()\n\n    for epoch in range(nb_epoch):\n        sess.run(iterator.initializer)           \n        \n        while True:\n            try:\n                self.autoencoder.train_on_batch(next_batch[0], next_batch[1])\n            except tf.errors.OutOfRangeError:\n                break\n        \n        save.listed.append(self.encoded_instant.topk_mat)\n\n        # you can compute validation results here \n\n    save_path = save.save('./topk_mat_log', session=tf.keras.backend.get_session())\n    return self\n
\n

Or you can use the model.fit function if you prefer it. Doing it this way can be easier, as we do not need to care about creating the batches. However, repeatedly calling model.fit may result in a memory leak. You can give it a try and check how it behaves. [1]

\n
def fit_and_save_log(self, train_X, val_X, nb_epoch=50, batch_size=100, contractive=None):\n    import tensorflow as tf\n    optimizer = Adam(lr=0.0005)\n\n    self.autoencoder.compile(optimizer=optimizer, loss='binary_crossentropy') # kld, binary_crossentropy, mse   \n    \n    save = tf.train.Checkpoint()\n    save.listed = []\n    \n    for epoch in range(nb_epoch):\n        self.autoencoder.fit(train_X[0], train_X[1],\n                epochs=1,\n                batch_size=batch_size,\n                shuffle=True,\n                validation_data=(val_X[0], val_X[1]))\n        \n        save.listed.append(self.encoded_instant.topk_mat)\n\n        # you can compute validation results here \n\n    save_path = save.save('./topk_mat_log', session=tf.keras.backend.get_session())\n    return self\n
\n

Then you can restore the saved variable like this

\n
restore = tf.train.Checkpoint()\nrestore.restore(save_path)\nrestore.listed = []\nv1 = tf.Variable(0.)\nrestore.listed.append(v1) # Now v1 corresponds with topk_mat in the first epoch\n
\n", + "system": "" + }, + { + "instruction": "What exactly does tf.keras.layers.Dense do?", + "input": "", + "output": "

Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).

\n\n
\n

Note: If the input to the layer has a rank greater than 2, then it is\n flattened prior to the initial dot product with kernel.

\n
\n\n

Example:

\n\n
# as first layer in a sequential model:\nmodel = Sequential()\nmodel.add(Dense(32, input_shape=(16,)))\n# now the model will take as input arrays of shape (*, 16)\n# and output arrays of shape (*, 32)\n\n# after the first layer, you don't need to specify\n# the size of the input anymore:\nmodel.add(Dense(32))\n
\n\n
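The formula output = activation(dot(input, kernel) + bias) is all a Dense layer computes. Here is an equivalent NumPy sketch for a Dense(32, input_shape=(16,)) layer with a relu activation (random values stand in for the layer's learned kernel and bias; the batch size of 8 is arbitrary):

```python
import numpy as np

batch = np.random.randn(8, 16)     # input arrays of shape (*, 16)
kernel = np.random.randn(16, 32)   # the layer's weight matrix
bias = np.zeros(32)                # the layer's bias vector

def relu(z):
    return np.maximum(z, 0)

output = relu(batch @ kernel + bias)  # activation(dot(input, kernel) + bias)
assert output.shape == (8, 32)        # output arrays of shape (*, 32)
```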

Arguments :

\n\n
> units: Positive integer, dimensionality of the output space.\n\n> activation: Activation function to use. If you don't specify anything,\n\n> no activation is applied (ie. \"linear\" activation: a(x) = x).\n\n> use_bias: Boolean, whether the layer uses a bias vector.\n\n> kernel_initializer: Initializer for the kernel weights matrix.\n\n> bias_initializer: Initializer for the bias vector. \n\n>kernel_regularizer:Regularizer function applied to the kernel weights matrix.\n> bias_regularizer: Regularizer function applied to the bias vector.\n\n> activity_regularizer: Regularizer function applied to the output of the layer (its \"activation\").. \n\n>kernel_constraint: Constraint function applied to the kernel weights matrix. \n\n>bias_constraint: Constraint function applied to the bias vector.\n
\n\n

Input shape:

\n\n

N-D tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).

\n\n

Output shape:

\n\n

N-D tensor with shape: (batch_size, ..., units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).

\n", + "system": "" + }, + { + "instruction": "Why val_loss and val_acc are not displaying?", + "input": "", + "output": "

Validation loss and accuracy are computed on epoch end, not on batch end. If you want to compute those values after each batch, you would have to implement your own callback with an on_batch_end() method and call self.model.evaluate() on the validation set. See https://keras.io/callbacks/.

\n\n

But computing the validation loss and accuracy after each batch is going to slow down your training a lot and doesn't bring much in terms of evaluating the network's performance.

\n", + "system": "" + }, + { + "instruction": "How to expand dims of my Keras layer and avoid using Lambda layer", + "input": "", + "output": "

I solved it myself.\nJust use Reshape.

\n\n
from keras.backend import int_shape\n        ......\n        embedded_sequences = Reshape((int_shape(embedding_layer)[1], int_shape(embedding_layer)[2], 1))(embedding_layer)\n        ......\n
\n\n

This way you can expand dims without using expand_dims or a Lambda layer.

\n", + "system": "" + }, + { + "instruction": "How to visualize attention weights?", + "input": "", + "output": "

Visualizing attention is not complicated but you need some tricks. While constructing the model you need to give a name to your attention layer.

\n\n
(...)\nattention = keras.layers.Activation('softmax', name='attention_vec')(attention)\n(...)\n
\n\n

After loading the saved model, you need to expose the attention layer's output so that it is returned on predict.

\n\n
model = load_model(\"./saved_model.h5\")\nmodel.summary()\nmodel = Model(inputs=model.input,\n              outputs=[model.output, model.get_layer('attention_vec').output])\n
\n\n

Now you can get the output of the model and also the attention vector.

\n\n
outputs = model.predict(encoded_input_text)\nmodel_outputs = outputs[0]\nattention_outputs = outputs[1]\n
\n\n

There are many approaches to visualising the attention vector. Basically, the attention output is a softmax output, so its values are between 0 and 1. You can map these values to RGB codes. If you are working in a Jupyter notebook, the following snippet helps you understand the concept and visualise it:

\n\n
class CharVal(object):\n    def __init__(self, char, val):\n        self.char = char\n        self.val = val\n\n    def __str__(self):\n        return self.char\n\nimport pandas as pd\n\ndef rgb_to_hex(rgb):\n    return '#%02x%02x%02x' % rgb\ndef color_charvals(s):\n    r = 255-int(s.val*255)\n    color = rgb_to_hex((255, r, r))\n    return 'background-color: %s' % color\n\n# if you are using batches the outputs will be in batches\n# get the exact attentions of the chars\nan_attention_output = attention_outputs[0][-len(encoded_input_text):]\n\n# before the prediction it is assumed you tokenized the text;\n# you need to match each char with its attention value\nchar_vals = [CharVal(c, v) for c, v in zip(tokenized_text, an_attention_output)]\nchar_df = pd.DataFrame(char_vals).transpose()\n# apply coloring values\nchar_df = char_df.style.applymap(color_charvals)\nchar_df\n
\n\n

To summarize: you need to get the attention outputs from the model, match the outputs with the inputs, convert them to RGB or hex values, and visualise. I hope that was clear.

\n", + "system": "" + }, + { + "instruction": "Is there R command(s) making Keras Tensorflow-GPU to run on CPU?", + "input": "", + "output": "

Let me give you 2 answers.

\n

Answer #1 (normal answer)

\n

No, unfortunately not. For keras CPU and GPU are 2 different versions, from which you select at install time.

\n

It seems you remember that you selected GPU at install time. I guess you're hoping that you were only setting a minor option, not selecting a version of the program. Unfortunately, you were selecting the version of keras to install.

\n

Answer #2 (ok, maybe you can "trick" keras)

\n

It seems you can use environment variables to hide the GPU from Keras, forcing it to fall back to the CPU.

\n

This seems like it may have unexpected results, but it seemed to work for these Python users.

\n
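For reference, the trick used by the linked Python users is setting the standard CUDA device-visibility variable before the backend initializes; the same variable can be set from R with Sys.setenv or at the OS level:

```python
import os

# Must run before TensorFlow/Keras is imported: a value of "-1" makes
# CUDA report no usable devices, so the backend falls back to the CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

assert os.environ["CUDA_VISIBLE_DEVICES"] == "-1"
```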

I wouldn't worry about the fact that they are using Python. They are just using their language to set environment variables. So you can do the same in R\nor directly within your OS.

\n", + "system": "" + }, + { + "instruction": "cannot import name 'ops' python", + "input": "", + "output": "

You can try this:

\n\n
pip install tensorflow --upgrade\npip install keras --upgrade\n
\n\n

The likely cause is that Keras detected that your TensorFlow backend version is too old.

\n", + "system": "" + }, + { + "instruction": "Stateful LSTM - Hidden State transfer between and within batches (Keras)", + "input": "", + "output": "

Yes, it works as described by you above.

\n

And yes, you're right: from the perspective of a data stream t[] it may not be very intuitive to order the data like that:

\n\n
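The interleaved ordering can be produced with a plain NumPy reshape and transpose — here a hypothetical stream of 9 steps split into 3 stateful sub-sequences of length 3:

```python
import numpy as np

t = np.arange(9)          # the data stream t[0] ... t[8]
seqs = t.reshape(3, 3)    # 3 sub-sequences: [0,1,2], [3,4,5], [6,7,8]
batches = seqs.T          # one batch per time step, across all sub-sequences

# the first batch holds the first element of every sub-sequence
assert batches[0].tolist() == [0, 3, 6]
assert batches[1].tolist() == [1, 4, 7]
```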

But I think it makes complete sense from a data-processing perspective. This data ordering allows you to completely parallelize/vectorize the processing of all batches. In other words, [t[0], t[3], t[6]] can be processed in parallel, whereas the (sequential) sequence [t[0], t[1], t[2]] can only be processed one element after another.

\n", + "system": "" + }, + { + "instruction": "Keras Tensorflow Binary Cross entropy loss greater than 1", + "input": "", + "output": "

Keras binary_crossentropy first converts your predicted probabilities to logits. Then it uses tf.nn.sigmoid_cross_entropy_with_logits to calculate the cross entropy and returns the mean of that. Mathematically speaking, if your label is 1 and your predicted probability is low (like 0.1), the cross entropy can be greater than 1: for example, losses.binary_crossentropy(tf.constant([1.]), tf.constant([0.1])) gives -log(0.1) ≈ 2.303.

\n", + "system": "" + }, + { + "instruction": "Classification of Images with Recurrent Neural Networks", + "input": "", + "output": "

Aymericdamien has some of the best examples out there, and they have an example of using an RNN with images.

\n\n

https://github.com/aymericdamien/TensorFlow-Examples

\n\n

https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/recurrent_network.ipynb

\n\n

The example is using MNIST, but it can be applied to any image.

\n\n

However, I'll point out that you're unlikely to find many examples of using an RNN to classify an image because RNNs are inferior to CNNs for most image processing tasks. The example linked to above is for educational purposes more than practical purposes.

\n\n

Now, if you are attempting to use an RNN because you have a sequence of images you wish to process, such as with a video, in this case a more natural approach would be to combine both a CNN (for the image processing part) with an RNN (for the sequence processing part). To do this you would typically pretrain the CNN on some classification task such as Imagenet, then feed the image through the CNN, then the last layer of the CNN would be the input to each timestep of an RNN. You would then let the entire network train with the loss function defined on the RNN.

\n", + "system": "" + }, + { + "instruction": "Keras input shape ordering is (width, height, ch)?", + "input": "", + "output": "

You can change the dimension ordering as you prefer. \nYou can print and change the dimension ordering like this:

\n\n\n\n
from keras import backend as K\nprint(K.image_data_format()) # print current format\nK.set_image_data_format('channels_last') # set format\n
\n\n

If you want to permanently change the dimension ordering, you should edit it in the keras.json file, usually located at ~/.keras/keras.json:

\n\n
\"image_data_format\": \"channels_last\"\n
\n", + "system": "" + }, + { + "instruction": "How do I show both Training loss and validation loss on the same graph in tensorboard through keras?", + "input": "", + "output": "

You can add a regex in the text box in the upper left corner of the Tensorboard window.

\n\n

Add acc for the accuracy of both the train and validation data. Add loss for the loss values. This works for me for Keras as well as TensorFlow.

\n\n

Got this from this nice tutorial on TB: https://www.youtube.com/watch?v=eBbEDRsCmv4

\n\n

As a code snippet I use this:

\n\n
logdir = \"_tf_logs/\" + now.strftime(\"%Y%m%d-%H%M%S\") + \"/\"\ntb = TensorBoard(log_dir=logdir)\ncallbacks=[tb]\n...\nmodel.fit(X_train, Y_train, validation_data=val_data, epochs=10, verbose=2, callbacks=callbacks)\n
\n", + "system": "" + }, + { + "instruction": "3D Convolutional Neural Network input shape", + "input": "", + "output": "

I think that the problem is that you are setting the input shape in Theano ordering, but you are using Keras with a TensorFlow backend and TensorFlow image ordering. In addition, the y_train array has to be converted to categorical labels.

\n\n

Updated code:

\n\n
from keras.utils import np_utils\nfrom keras import backend as K\n\nif K.image_dim_ordering() == 'th':\n    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols, img_depth)\n    input_shape = (1, img_rows, img_cols, img_depth)\nelse:\n    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, img_depth, 1)\n    input_shape = (img_rows, img_cols, img_depth, 1)\n\nY_train = np_utils.to_categorical(Y_train, nb_classes)\n
\n\n

Adding these lines should fix it.

\n", + "system": "" + }, + { + "instruction": "How does Keras evaluate loss on test set?", + "input": "", + "output": "

From the documentation:

\n
\n

evaluate

\n

Computes the loss on some input data, batch by batch.

\n

Returns

\n

Scalar test loss (if the model has no metrics) or list of scalars (if the model computes other metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

\n
\n

So it returns you either a single value that represents a loss, or a list of values that correspond to different metrics that were added to your model. These values are calculated based on the whole test set, i. e. all values in x_test and y_test.

\n", + "system": "" + }, + { + "instruction": "Tensorflow 2 getting "WARNING:tensorflow:x out of the last x calls to <function> triggered tf.function retracing."", + "input": "", + "output": "

Providing the solution here for the benefit of the community.

\n
\n

After some tries to suppress the warning from the predict method, I\nhave checked the documentation of Tensorflow and in one of the first\ntutorials on how to use Tensorflow it is explained that, by default,\nTensorflow is executed in eager mode, which is useful for testing and\ndebugging the network models. Since I have already tested my models\nmany times, it was only required to disable the eager mode by writing\nthis single python line of code:

\n
tf.compat.v1.disable_eager_execution()\n
\n

Now the warning doesn't show up anymore. (paraphrased from Simone)

\n
\n

tf.compat.v1.disable_eager_execution() can only be called before any Graphs, Ops, or Tensors have been created. It can be used at the beginning of the program for migration projects from TensorFlow 1.x to 2.x.

\n

For more details you can refer Eager execution

\n", + "system": "" + }, + { + "instruction": "Is validation_split=0.2 in Keras a cross-validation?", + "input": "", + "output": "

With validation_split, Keras takes the validation samples from the end of the provided data and splits only once, before training starts.
\nFor every subsequent epoch, the train and validation sets defined before the first epoch are reused; the data is not shuffled and split again.

\n

Therefore, it is a single fixed holdout split, not k-fold cross-validation.

\n", + "system": "" + }, + { + "instruction": "Why GAN is unable to generate samples from some distributions?", + "input": "", + "output": "

As you can see from the accuracy plots you have in Tensorboard, your principal problem here is with the discriminator. Because its accuracy oscillates around 50-60% and doesn't improve. And this is very bad because the generator is downstream and can't train until the discriminator achieves decent accuracy. So what is wrong with the discriminator?

\n

First, it is the way you train it. You feed it positive and negative samples in two separate batches. This can generate gradients pushing your model coefficients randomly in opposite directions with very poor convergence. If you combine both types of samples in a single batch, convergence will improve significantly.

\n

Second, the batch size. A batch of 32 random points from the circle is too small for the model to tell apart from 32 generated points. You need a batch size of at least 256.

\n

Third, the number of neurons in the hidden layers. Actually, you have too many neurons for such simple data in both the generator and the discriminator. Having too many of them in the discriminator doesn't seem to do much harm, but having too many neurons in the generator makes it too unstable: the discriminator receives different training data each time, and this is one more reason why it fails to train properly. If you put 16 and 32 hidden neurons instead of 64 and 128 into the generator, it will be much better.

\n

And the last point: it is not only the circular form that makes your circle difficult to learn but also its size. It has a radius of 1, and 1 is the saturation value of your generator, so it is very easy for it to produce values around 1. And this makes additional trouble for the discriminator: it starts receiving fake data too close to the true data before it achieves decent accuracy.

\n

To summarize:

\n
    \n
  1. Combine true and fake data into a single batch.\n
  2. Use a larger batch size (at least 256).\n
  3. Reduce the number of neurons at least in the generator (e.g. to 16 and 32).\n
\n

Enjoy the result:\n\"Training

\n
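Point 1 — feeding real and fake samples to the discriminator in a single mixed batch — can be sketched like this (the shapes and the random stand-ins for circle points and generator output are purely illustrative):

```python
import numpy as np

batch_size = 256                        # point 2: use a large batch
real = np.random.randn(batch_size, 2)   # stand-in for points on the circle
fake = np.random.randn(batch_size, 2)   # stand-in for generator output

# one combined batch with matching labels, shuffled together so every
# gradient step sees both classes at once
X = np.concatenate([real, fake])
y = np.concatenate([np.ones(batch_size), np.zeros(batch_size)])
perm = np.random.permutation(len(X))
X, y = X[perm], y[perm]

assert X.shape == (2 * batch_size, 2)
assert int(y.sum()) == batch_size       # half real, half fake
```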

And one more thing: it is better to ask such questions in this community https://stats.stackexchange.com/.

\n", + "system": "" + }, + { + "instruction": "How to deal with KerasTensor and Tensor?", + "input": "", + "output": "

You need to pass mu and sigma to your loss function. vae_loss now accepts 4 inputs:

\n
def vae_loss(x, y, mu, sigma):\n    loss = tf.reduce_mean(K.square(x-y))\n    kl_loss = -0.5 * tf.reduce_mean(1 + sigma - tf.square(mu) - tf.exp(sigma))\n    return loss + kl_loss\n
\n

You can use it inside your model simply by calling autoencoder.add_loss.

\n

It's also important that the encoder returns not only z but also mu and sigma.

\n
z, mu, sigma = encoder(img)\nout = decoder(z)\nautoencoder = Model(img, out, name='autoencoder')\nautoencoder.add_loss(vae_loss(img, out, mu, sigma))  # <======= add_loss\nautoencoder.compile(loss=None, optimizer='adam')\n
\n

Here is a running notebook: https://colab.research.google.com/drive/1r5lMZ2Dc_Lq4KJDqiirXla1qfDVmdwxW?usp=sharing

\n", + "system": "" + }, + { + "instruction": "Keras loss and metrics values do not match with same function in each", + "input": "", + "output": "

This blog post suggests that keras adds any regularisation used in the training when calculating the validation loss. And obviously, when calculating the metric of choice no regularisation is applied. This is why it occurs with any loss function of choice as stated in the question.

\n
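The mismatch can be reproduced numerically: with an L2 penalty, the reported loss is the data term plus the penalty, while the metric is the data term alone (all numbers below are made up for illustration):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
weights = np.array([0.5, -0.3, 0.8])   # pretend layer weights
l2_lambda = 0.01

mse = np.mean((y_true - y_pred) ** 2)        # the custom_fn metric value
penalty = l2_lambda * np.sum(weights ** 2)   # added by a kernel_regularizer
reported_loss = mse + penalty                # what val_loss would show

assert reported_loss > mse  # loss and metric differ whenever the penalty > 0
```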

This is something I could not find any documentation on from Keras. However, it seems to hold up since when I remove all regularisation hyperparameters, the val_loss and val_custom_fn match exactly in each epoch.

\n

An easy workaround is to either use custom_fn as a metric and save the best model based on that metric (val_custom_fn) rather than on val_loss, or else loop through the epochs manually and calculate the correct val_loss after training each epoch. The latter seems to make more sense, since there is no reason to include custom_fn both as a metric and as a loss function.

\n

If anyone can find any evidence of this in the Keras documentation that would be helpful.

\n", + "system": "" + }, + { + "instruction": "Why does keras model.fit with sample_weight have long initialization time?", + "input": "", + "output": "

The issue is caused by how TensorFlow validates some types of input objects. When the data are certainly correct, such validations are purely wasted time (I hope this will be handled better in the future).

\n

In order to force TensorFlow to skip such validation procedures, you can trivially wrap the weights in a Pandas Series, such as follows:

\n
model.fit(x_train.iloc, y_train, sample_weight=pd.Series(w_train))\n
\n

Do note that in your code you are using the metrics keyword. If you want the accuracy to actually be weighted by the provided weights, use the weighted_metrics argument instead.

\n", + "system": "" + }, + { + "instruction": "In Tensorflow 2.2.0, my model.history.history is empty after fitting the data along with validation_data", + "input": "", + "output": "
history = model.fit(...)\nprint(f\"model hist is : \\n {history.history}\")\n
\n", + "system": "" + }, + { + "instruction": "tensorflow TypeError: cannot unpack non-iterable float object", + "input": "", + "output": "

You need to define a metric when you compile the model: model.compile('adam', 'binary_crossentropy', metrics='accuracy')

\n\n

This way, both the loss and the accuracy are returned during evaluation, so the two values can be unpacked.

\n", + "system": "" + }, + { + "instruction": "How to make Google Cloud AI Platform detect `tf.summary.scalar` calls during training?", + "input": "", + "output": "

I am having the same issue: I can't get AI Platform to pick up tf.summary.scalar. I tried to debug it with the GCP support and AI Platform Engineering team for the last 2 months. They didn't manage to reproduce the issue even though we were using almost the same code. We even did one coding session but still got different results.

\n

Recommendation from the GCP AI Platform Engineering team: "don't use tf.summary.scalar" the main reason is that by using the other method:

\n\n

They will update the documentation to reflect this new recommendation.

\n

Setup:

\n\n

With the following setup the "issue" is observed:

\n\n

It seems to work with other setups. Anyway, I will follow the recommendation from GCP and use the custom solution to avoid the issue.

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "With ResNet50 the validation accuracy and loss is not changing", + "input": "", + "output": "

There is no mistake in your Model but this might be the issue with ResNet as such, because there are many issues raised, 1,2,3, in Github and Stack Overflow, already regarding this Pre-Trained Model.

\n\n

Having said that, I found a workaround which worked for me, and hopefully it works for you as well.

\n\n

The workaround was to replace the Data Augmentation step,

\n\n
Train_Datagen = ImageDataGenerator(rescale=1./255,  rotation_range=40, width_shift_range=0.2,\n    height_shift_range=0.2, brightness_range=(0.2, 0.7), shear_range=45.0, zoom_range=60.0,\n    horizontal_flip=True, vertical_flip=True)\n\nVal_Datagen = ImageDataGenerator(rescale=1./255,  rotation_range=40, width_shift_range=0.2,\n    height_shift_range=0.2, brightness_range=(0.2, 0.7), shear_range=45.0, zoom_range=60.0,\n    horizontal_flip=True, vertical_flip=True)\n
\n\n

with tf.keras.applications.resnet.preprocess_input, as shown below:

\n\n
Train_Datagen = ImageDataGenerator(dtype = 'float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)\nVal_Datagen = ImageDataGenerator(dtype = 'float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)\n
\n\n

By modifying the Data Augmentation as shown above, my Validation Accuracy, which was stuck at 50%, gradually increased up to 97%. The reason might be that ResNet expects specific Pre-Processing Operations (not quite sure).

\n\n
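For reference, here is a NumPy sketch of what that preprocessing function does, assuming the usual 'caffe' convention (RGB-to-BGR channel flip plus ImageNet mean subtraction; note there is no 1/255 rescaling, which is why combining it with rescale=1./255 would be wrong):

```python
import numpy as np

# Assumed ImageNet channel means in BGR order (caffe-style preprocessing)
IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68])

def caffe_style_preprocess(rgb_images):
    bgr = rgb_images[..., ::-1].astype('float64')  # flip RGB -> BGR
    return bgr - IMAGENET_MEANS_BGR                # zero-center each channel

x = np.zeros((1, 150, 150, 3))  # a batch with one black image
out = caffe_style_preprocess(x)
print(out[0, 0, 0])  # -> [-103.939, -116.779, -123.68]
```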

Complete working code which resulted in more than 95% of both Train and Validation Accuracy (for Cat and Dog Dataset) using ResNet50 is shown below:

\n\n
import tensorflow as tf\nfrom tensorflow.keras.applications import ResNet50\nimport os\nimport numpy as np\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom tensorflow.keras.layers import Dense, Dropout, Flatten\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.models import Sequential\n\n# The Convolutional Base of the Pre-Trained Model will be added as a Layer in this Model\nConv_Base = ResNet50(include_top = False, weights = 'imagenet', input_shape = (150,150, 3))\n\nfor layer in Conv_Base.layers[:-8]:\n    layer.trainable = False\n\nmodel = Sequential()\nmodel.add(Conv_Base)\nmodel.add(Flatten())\nmodel.add(Dense(units = 256, activation = 'relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(units = 1, activation = 'sigmoid'))\n\nmodel.summary()\n\nbase_dir = 'Deep_Learning_With_Python_Book/Dogs_Vs_Cats_Small'\nif os.path.exists(base_dir):    \n    train_dir = os.path.join(base_dir, 'train')\n    validation_dir = os.path.join(base_dir, 'validation')\n    test_dir = os.path.join(base_dir, 'test')\nelse:\n    print(\"The Folder, {}, doesn't exist'\".format(base_dir))\n\nbatch_size = 20\n\nTrain_Datagen = ImageDataGenerator(dtype = 'float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)\nVal_Datagen = ImageDataGenerator(dtype = 'float32', preprocessing_function=tf.keras.applications.resnet.preprocess_input)\n\ntrain_gen = Train_Datagen.flow_from_directory(directory = train_dir, target_size = (150,150), \n                                       batch_size = batch_size, class_mode = 'binary')\n\nval_gen = Val_Datagen.flow_from_directory(directory = validation_dir, target_size = (150,150), \n                                       batch_size = batch_size, class_mode = 'binary')\n\nepochs = 15\nNumber_Of_Training_Images = train_gen.classes.shape[0]\nsteps_per_epoch = Number_Of_Training_Images/batch_size\n\nmodel.compile(optimizer = 'Adam', loss = 'binary_crossentropy', metrics = ['accuracy'])\n\nhistory = 
model.fit(train_gen, epochs = epochs, \n                    #batch_size = batch_size,\n                    validation_data = val_gen, steps_per_epoch = steps_per_epoch)\n\nimport matplotlib.pyplot as plt\n\ntrain_acc = history.history['accuracy']\nval_acc = history.history['val_accuracy']\ntrain_loss = history.history['loss']\nval_loss = history.history['val_loss']\nNo_Of_Epochs = range(epochs)\n\nplt.plot(No_Of_Epochs, train_acc, marker = 'o', color = 'blue', markersize = 12, \n                 linewidth = 2, label = 'Training Accuracy')\nplt.plot(No_Of_Epochs, val_acc, marker = '.', color = 'red', markersize = 12, \n                 linewidth = 2, label = 'Validation Accuracy')\n\nplt.title('Training Accuracy and Testing Accuracy w.r.t Number of Epochs')\n\nplt.legend()\n\nplt.figure()\n\nplt.plot(No_Of_Epochs, train_loss, marker = 'o', color = 'blue', markersize = 12, \n                 linewidth = 2, label = 'Training Loss')\nplt.plot(No_Of_Epochs, val_loss, marker = '.', color = 'red', markersize = 12, \n                 linewidth = 2, label = 'Validation Loss')\n\nplt.title('Training Loss and Testing Loss w.r.t Number of Epochs')\n\nplt.legend()\n\nplt.show()\n
\n\n

Metrics are shown in the below graph,

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "How to implement maclaurin series in keras?", + "input": "", + "output": "

Interesting question. I have implemented a Keras model that computes the Taylor expansion as you described:

\n\n
import tensorflow as tf\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Input, Lambda, LocallyConnected2D\n\n\ndef taylor_expansion_network(input_dim, max_pow):\n    x = Input((input_dim,))\n\n    # 1. Raise input x_i to power p_i for each i in [0, max_pow].\n    def raise_power(x, max_pow):\n        x_ = x[..., None]  # Shape=(batch_size, input_dim, 1)\n        x_ = tf.tile(x_, multiples=[1, 1, max_pow + 1])  # Shape=(batch_size, input_dim, max_pow+1)\n        pows = tf.range(0, max_pow + 1, dtype=tf.float32)  # Shape=(max_pow+1,)\n        x_p = tf.pow(x_, pows)  # Shape=(batch_size, input_dim, max_pow+1)\n        x_p_ = x_p[..., None]  # Shape=(batch_size, input_dim, max_pow+1, 1)\n        return x_p_\n\n    x_p_ = Lambda(lambda x: raise_power(x, max_pow))(x)\n\n    # 2. Multiply by alpha coefficients\n    h = LocallyConnected2D(filters=1,\n                           kernel_size=1,  # This layer is computing a_i * x^{p_i} for each i in [0, max_pow]\n                           use_bias=False)(x_p_)  # Shape=(batch_size, input_dim, max_pow+1, 1)\n\n    # 3. Compute s_i for each i in [0, max_pow]\n    def cumulative_sum(h):\n        h = tf.squeeze(h, axis=-1)  # Shape=(batch_size, input_dim, max_pow+1)\n        s = tf.cumsum(h, axis=-1)  # s_i = sum_{j=0}^i h_j. Shape=(batch_size, input_dim, max_pow+1)\n        s_ = s[..., None]  # Shape=(batch_size, input_dim, max_pow+1, 1)\n        return s_\n\n    s_ = Lambda(cumulative_sum)(h)\n\n    # 4. Compute sum w_i * s_i for each i in [0, max_pow]\n    s_ = LocallyConnected2D(filters=1,  # This layer is computing w_i * s_i for each i in [0, max_pow]\n                            kernel_size=1,\n                            use_bias=False)(s_)  # Shape=(batch_size, input_dim, max_pow+1)\n    y = Lambda(lambda s_: tf.reduce_sum(tf.squeeze(s_, axis=-1), axis=-1))(s_)  # Shape=(batch_size, input_dim)\n\n    # Return Taylor expansion model\n    model = Model(inputs=x, outputs=y)\n    model.summary()\n    return model\n
\n\n

The implementation applies the same Taylor expansion to each element of the flattened tensor with shape (batch_size, input_dim=512) coming from the convolutional network.

\n\n
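To convince yourself that steps 1-4 above compute what they claim, here is the same arithmetic checked in NumPy with shared coefficients (the LocallyConnected2D layers learn a separate a_i and w_i per input element, but the algebra is identical):

```python
import numpy as np

rng = np.random.default_rng(0)
max_pow = 3
x = np.array([0.5, -1.2])            # two scalar inputs
a = rng.normal(size=max_pow + 1)     # alpha coefficients a_i
w = rng.normal(size=max_pow + 1)     # weights w_i

pows = x[:, None] ** np.arange(max_pow + 1)  # x^0 .. x^max_pow
h = a * pows                                 # step 2: a_i * x^i
s = np.cumsum(h, axis=-1)                    # step 3: s_i = sum_{j<=i} a_j x^j
y = (w * s).sum(axis=-1)                     # step 4: sum_i w_i * s_i

# Same result as a single polynomial with combined coefficients
# c_j = a_j * sum_{i>=j} w_i  (reverse cumulative sum of w)
c = a * w[::-1].cumsum()[::-1]
y_poly = (c * pows).sum(axis=-1)
assert np.allclose(y, y_poly)
```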
\n\n

UPDATE: As we discussed in the comments section, here is some code to show how your function expandable_cnn could be modified to integrate the model defined above:

\n\n
def expandable_cnn(input_shape, nclass, approx_order):\n    inputs = Input(shape=(input_shape))\n    h = inputs\n    h = Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=input_shape)(h)\n    h = Conv2D(filters=32, kernel_size=(3, 3), activation='relu')(h)\n    h = MaxPooling2D(pool_size=(2, 2))(h)\n    h = Dropout(0.25)(h)\n    h = Flatten()(h)\n    h = Dense(512, activation='relu')(h)\n    h = Dropout(0.5)(h)\n    taylor_model = taylor_expansion_network(input_dim=512, max_pow=approx_order)\n    h = taylor_model(h)\n    h = Activation('relu')(h)\n    print(h.shape)\n    h = Dense(nclass, activation='softmax')(h)\n    model = Model(inputs=inputs, outputs=h)\n    return model\n
\n\n

Please note that I do not guarantee that your model will work (e.g. that you will get good performance). I just provided a solution based on my interpretation of what you want.

\n", + "system": "" + }, + { + "instruction": "Modify trained model architecture and continue training Keras", + "input": "", + "output": "

Without knowing the details of your model, the following snippet might help:

\n\n
from tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Dense, Input\n\n# Train your initial model\ndef get_initial_model():\n    ...\n    return model\n\nmodel = get_initial_model()\nmodel.fit(...)\nmodel.save_weights('initial_model_weights.h5')\n\n# Use Model API to create another model, built on your initial model\ninitial_model = get_initial_model()\ninitial_model.load_weights('initial_model_weights.h5')\n\nnn_input = Input(...)\nx = initial_model(nn_input)\nx = Dense(...)(x)  # This is the additional layer, connected to your initial model\nnn_output = Dense(...)(x)\n\n# Combine your model\nfull_model = Model(inputs=nn_input, outputs=nn_output)\n\n# Compile and train as usual\nfull_model.compile(...)\nfull_model.fit(...)\n
\n\n

Basically, you train your initial model and save it. Then you reload it and wrap it together with your additional layers using the Model API. If you are not familiar with the Model API, you can check out the Keras documentation here (afaik the API remains the same for Tensorflow.Keras 2.0).

\n\n

Note that you need to check if your initial model's final layer's output shape is compatible with the additional layers (e.g. you might want to remove the final Dense layer from your initial model if you are just doing feature extraction).

\n", + "system": "" + }, + { + "instruction": "What is the difference between input_shape and input_dim in keras?", + "input": "", + "output": "

You need input_shape = (8,), which is exactly the same as input_dim = 8.

\n\n

Your final Dense, naturally, needs to have 1 unit only. For regression you might want activation = 'linear' in the last layer and for classification you will want 'sigmoid'.

\n\n

For higher dimension data, you will need input_shape = (dim_1, dim_2, ...).

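In other words, input_shape is the shape of one sample with the batch dimension left out. A NumPy sketch of how to read it off your data (the array names are illustrative):

```python
import numpy as np

x = np.random.rand(100, 8)   # 100 samples, 8 features each
# The shape Keras needs is the per-sample shape, i.e. x.shape[1:]
input_shape = x.shape[1:]    # (8,)  -- equivalent to input_dim=8
assert input_shape == (8,)

x_img = np.random.rand(100, 32, 32, 3)  # higher-dimensional data
assert x_img.shape[1:] == (32, 32, 3)   # input_shape=(32, 32, 3)
```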
\n", + "system": "" + }, + { + "instruction": "Don't understand keras double parentheses syntax", + "input": "", + "output": "

This might be an easier to understand version of these chained calls:

\n\n
model = VGG19(weights='imagenet',include_top=False)\nmodel.trainable=False\nlayer1 = Flatten(name='flat')(model.output)\nlayer2 = Dense(512, activation='relu', name='fc1')(layer1)\nlayer3 = Dense(512, activation='relu', name='fc2')(layer2)\nlayer4 = Dense(10, activation='softmax', name='predictions')(layer3)\n
\n\n

which could be rewritten as:

\n\n
base_model = VGG19(weights='imagenet',include_top=False)\nbase_model.trainable=False\nmodel = Sequential()\nmodel.add( base_model)\nmodel.add( Flatten(name='flat'))\nmodel.add( Dense(512, activation='relu', name='fc1'))\nmodel.add( Dense(512, activation='relu', name='fc2'))\nmodel.add( Dense(10, activation='softmax', name='predictions'))\n
\n", + "system": "" + }, + { + "instruction": "What's difference between using metrics 'acc' and tf.keras.metrics.Accuracy()", + "input": "", + "output": "

Took some digging, but I believe the difference is:

\n\n

acc uses def binary_accuracy(y_true, y_pred, threshold=0.5) from metrics.py under the hood,

\n\n

while

\n\n

tf.keras.metrics.Accuracy uses class Accuracy(MeanMetricWrapper) in metrics.py.

\n\n

I came to this conclusion by testing and inspecting the source code of TensorFlow's Keras metrics.py file.

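To make the difference concrete, here is the behaviour of the two metrics re-implemented in NumPy (a sketch of the semantics, not the actual Keras code):

```python
import numpy as np

y_true = np.array([1., 1., 0., 0.])
y_pred = np.array([0.9, 0.6, 0.4, 0.6])  # predicted probabilities

# 'acc' -> binary_accuracy: threshold the predictions first, then compare
binary_acc = np.mean(y_true == (y_pred > 0.5).astype(float))   # 3/4 correct

# tf.keras.metrics.Accuracy: exact element-wise equality, no thresholding,
# so raw probabilities essentially never match the 0/1 labels
exact_acc = np.mean(y_true == y_pred)

print(binary_acc, exact_acc)  # 0.75 0.0
```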
\n", + "system": "" + }, + { + "instruction": "LSTM RNN to predict multiple time-steps and multiple features simultaneously", + "input": "", + "output": "

I managed to achieve this in the end by using a Dense layer with the number of time steps I desired multiplied by the number of features I was predicting. Then after that, I reshaped this into the output shape I desired.

\n\n

I'm not sure if this is the optimal method for doing this, but it works okay.

\n\n
#Experimental code for predicting multiple sensors\nimport tensorflow.keras.layers as tfl\n\ntf.keras.backend.clear_session()\n\n\nprint(\"Input shape\", x_train_multi.shape[-2:]) \n# Input shape (96, 4)\n\nmulti_step_model = tf.keras.Sequential()\nmulti_step_model.add(tf.keras.layers.LSTM(32, return_sequences=True, input_shape=x_train_multi.shape[-2:]))\nmulti_step_model.add(tf.keras.layers.Dropout(rate=0.5))\nmulti_step_model.add(tf.keras.layers.LSTM(16, return_sequences=False, activation='relu'))\nmulti_step_model.add(tf.keras.layers.Dropout(rate=0.5)) \nprint(\"After LSTMs\", multi_step_model.output_shape)  \n# After LSTMs (None, 16)\nmulti_step_model.add(tf.keras.layers.Dense((future_target / STEP) * 4))\nprint(\"After Dense Layer\", multi_step_model.output_shape) \n#  After Dense Layer (None, 192)\nmulti_step_model.add(tf.keras.layers.Reshape((int(future_target / STEP), 4)))\nprint(\"After Reshape\", multi_step_model.output_shape)\n# After Reshape (None, 48, 4)\n\n\nmulti_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(clipvalue=1.0), loss='mae')\n\n# Shape of predictions\nfor x, y in val_data_multi.take(1):\n    print (\"Prediction shape\", multi_step_model.predict(x).shape)\n    # Prediction shape (512, 48, 4)\n
\n", + "system": "" + }, + { + "instruction": "Zero predictions despite masking support for zero-padded mini batch LSTM training in keras", + "input": "", + "output": "

First problem: your X data after reshaping is not what you expected. If you look at the first sample after reshaping, it is:

\n\n
array([[[-1.00612917,  1.47313952,  2.68021318,  1.54875809,\n          0.98385996],\n        [ 1.49465265,  0.60429106,  1.12396908, -0.24041602,\n          1.77266187],\n        [ 0.1961381 ,  1.28019637,  1.78803092,  2.05151245,\n          0.93606708],\n        [ 0.51554755,  0.        ,  0.        ,  0.        ,\n          0.        ],\n        [-0.97596563,  2.04536053,  0.88367922,  1.013342  ,\n         -0.16605355],\n        [ 3.02994344,  2.04080806, -0.25153046, -0.5964068 ,\n          2.9607247 ],\n        [-0.49722121,  0.02734492,  2.16949987,  2.77367066,\n          0.15628842],\n        [ 2.19823207,  0.        ,  0.        ,  0.        ,\n          0.        ],\n        [ 0.31546283,  3.27420503,  3.23550769, -0.63724013,\n          0.89150128],\n        [ 0.69774266,  2.76627308, -0.58408384, -0.45681779,\n          1.98843041],\n        [-0.31850477,  0.83729882,  0.45471165,  3.61974147,\n         -1.45610756],\n        [ 1.35217453,  0.        ,  0.        ,  0.        ,\n          0.        ],\n        [ 1.03329532,  1.97471646,  1.33949611,  1.22857243,\n         -1.46890642],\n        [ 1.74105506,  1.40969261,  0.52465603, -0.18895266,\n          2.81025597],\n        [ 2.64901037, -0.83415186,  0.76956826,  1.48730868,\n         -0.16190164],\n        [ 2.24389007,  0.        ,  0.        ,  0.        ,\n          0.        ],\n        [-1.0676654 ,  3.08429323,  1.7601179 ,  0.85448051,\n          1.15537064],\n        [ 2.82487842,  0.27891413,  0.57842569, -0.62392063,\n          1.00343057],\n        [ 1.15348843, -0.37650332,  3.37355345,  2.22285473,\n          0.43444434],\n        [ 0.15743873,  0.        ,  0.        ,  0.        ,\n          0.        ]]])\n
\n\n

So actually no timestep is masked, because the Masking layer only masks timesteps where all features are 0; none of the above 20 timesteps are entirely 0, so none of them are masked.

\n\n

For the Masking layer, to verify that the mask is propagated to the output layer successfully, you can do:

\n\n
for i, l in enumerate(model.layers):\n    print(f'layer {i}: {l}')\n    print(f'has input mask: {l.input_mask}')\n    print(f'has output mask: {l.output_mask}')\n\nlayer 0: <tensorflow.python.keras.layers.core.Masking object at 0x6417b7f60>\nhas input mask: None\nhas output mask: Tensor(\"masking/Identity_1:0\", shape=(None, 20), dtype=bool)\nlayer 1: <tensorflow.python.keras.layers.wrappers.Bidirectional object at 0x641e25cf8>\nhas input mask: Tensor(\"masking/Identity_1:0\", shape=(None, 20), dtype=bool)\nhas output mask: Tensor(\"masking/Identity_1:0\", shape=(None, 20), dtype=bool)\nlayer 2: <tensorflow.python.keras.layers.core.Dropout object at 0x641814128>\nhas input mask: Tensor(\"masking/Identity_1:0\", shape=(None, 20), dtype=bool)\nhas output mask: Tensor(\"masking/Identity_1:0\", shape=(None, 20), dtype=bool)\nlayer 3: <tensorflow.python.keras.layers.wrappers.TimeDistributed object at 0x6433b6ba8>\nhas input mask: Tensor(\"masking/Identity_1:0\", shape=(None, 20), dtype=bool)\nhas output mask: Tensor(\"time_distributed/Reshape_3:0\", shape=(None, 20), dtype=bool)\n
\n\n

So you can see that the final layer also has the output_mask, which means the masks are successfully propagated. You seem to have a misunderstanding of how Masking works in Keras: what it actually does is generate a mask, which is a boolean array of shape (None, Timesteps). Since in your model definition the Timestep dimension is always kept the same, the mask is propagated to the end without any changes. Then, when Keras calculates the loss (and of course when it calculates gradients), the timesteps that have a mask value of False are ignored. The Masking layer doesn't change the output value, and of course your model will still predict class 0; all it does is produce a boolean array indicating which timesteps should be skipped and pass it to the end (if all the layers accept the mask).

\n\n
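The effect on the loss can be sketched in NumPy (illustrative numbers; this mimics the "ignore masked timesteps when averaging" behaviour, not Keras internals verbatim):

```python
import numpy as np

# Per-timestep losses for one sequence of 5 steps, with a boolean mask
step_losses = np.array([0.2, 0.9, 0.4, 0.7, 0.1])
mask = np.array([True, True, True, False, False])  # last 2 steps are padding

unmasked_loss = step_losses.mean()          # every timestep contributes
masked_loss = step_losses[mask].mean()      # padded timesteps are ignored
# Equivalently, as a weighted average with 0/1 weights:
masked_loss_w = (step_losses * mask).sum() / mask.sum()
assert np.isclose(masked_loss, masked_loss_w)
```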

So what you can do is change one line of your model definition as follows and shift your y_labels by 1, which means your current classes map as:

\n\n

0 -> 0 (since the loss of these timesteps will be ignored, not contributing to the training of the model, so whether it is 0 or 1 doesn't matter)

\n\n

1 -> 0

\n\n

2 -> 1

\n\n
# I would prefer softmax if doing classification\n# here we only need to specify 2 classes\n# and actually TimeDistributed can be thrown away (at least in recent Keras versions)\nmodel.add(TimeDistributed(Dense(2, activation='softmax')))\n
\n\n

You can also see my answer here https://stackoverflow.com/a/59313862/11819266 for understanding how the loss are calculated with / without masking.

\n", + "system": "" + }, + { + "instruction": "Google Colab: Why is CPU faster than TPU?", + "input": "", + "output": "

Thank you for your question.

\n\n

I think what's happening here is a matter of overhead -- since the TPU runs on a separate VM (accessible at grpc://$COLAB_TPU_ADDR), each call to run a model on the TPU incurs some amount of overhead as the client (the Colab notebook in this case) sends a graph to the TPU, which is then compiled and run. This overhead is small compared to the time it takes to run e.g. ResNet50 for one epoch, but large compared to run a simple model like the one in your example.

\n\n

For best results on TPU we recommend using tf.data.Dataset. I updated your example for TensorFlow 2.2:

\n\n
%tensorflow_version 2.x\nimport timeit\nimport os\nimport tensorflow as tf\nfrom sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.optimizers import Adam\n\n# Load Iris dataset\nx = load_iris().data\ny = load_iris().target\n\n# Split data to train and validation set\nx_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.30, shuffle=False)\n\n# Convert train data type to use TPU \nx_train = x_train.astype('float32')\nx_val = x_val.astype('float32')\n\nresolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])\ntf.config.experimental_connect_to_cluster(resolver)\ntf.tpu.experimental.initialize_tpu_system(resolver)\nstrategy = tf.distribute.experimental.TPUStrategy(resolver)\n\ntrain_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(20)\nval_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(20)\n\n# Use the strategy to create and compile a Keras model\nwith strategy.scope():\n  model = Sequential()\n  model.add(Dense(32, input_shape=(4,), activation=tf.nn.relu, name=\"relu\"))\n  model.add(Dense(3, activation=tf.nn.softmax, name=\"softmax\"))\n  model.compile(optimizer=Adam(learning_rate=0.1), loss='logcosh')\n\nstart = timeit.default_timer()\n\n# Fit the Keras model on the dataset\nmodel.fit(train_dataset, epochs=20, validation_data=val_dataset)\n\nprint('\\nTime: ', timeit.default_timer() - start)\n
\n\n

This takes about 30 seconds to run, compared to ~1.3 seconds to run on CPU. We can substantially reduce the overhead here by repeating the dataset and running one long epoch rather than several small ones. I replaced the dataset setup with this:

\n\n
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).repeat(20).batch(20)\nval_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(20)\n
\n\n

And replaced the fit call with this:

\n\n
model.fit(train_dataset, validation_data=val_dataset)\n
\n\n

This brings the runtime down to about 6 seconds for me. This is still slower than CPU, but that's not surprising for such a small model that can easily be run locally. In general, you'll see more benefit from using TPUs with larger models. I recommend looking through TensorFlow's official TPU guide, which presents a larger image classification model for the MNIST dataset.

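The speed-up from .repeat() is just overhead amortization: every epoch-level dispatch to the TPU pays a roughly fixed cost. With assumed, purely illustrative timings:

```python
# Back-of-the-envelope arithmetic for why repeating the dataset helps:
# each dispatch to the TPU pays a fixed overhead, regardless of work done.
overhead_s = 1.4   # assumed per-dispatch overhead (illustrative, not measured)
compute_s = 0.1    # assumed pure compute time per pass over the data

twenty_epochs = 20 * (overhead_s + compute_s)   # 20 separate small epochs
one_long_epoch = overhead_s + 20 * compute_s    # .repeat(20), one long epoch

print(twenty_epochs, one_long_epoch)  # ~30.0 vs ~3.4 seconds
```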
\n", + "system": "" + }, + { + "instruction": "TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [bool, float32] that don't all match", + "input": "", + "output": "

I also came across this problem today. What worked for me was to remove mask_zero=True from the embedding layer. Unfortunately, I don't know why this helps.

\n", + "system": "" + }, + { + "instruction": "TypeError: Tensor is unhashable if Tensor equality is enabled. Instead, use tensor.experimental_ref() as the key", + "input": "", + "output": "

I had a similar error. In my case it was due to using an old version of Keras with Tensorflow 2 from conda. There are currently some issues preventing the use of Tensorflow 2 with the current Keras via conda.

\n\n

I created a new environment and installed according to the instructions on the Keras/Tensorflow websites (CPU-only version in my case):

\n\n
pip install tensorflow\npip install keras\n
\n", + "system": "" + }, + { + "instruction": "Unable to open file (truncated file: eof = 7340032, sblock->base_addr = 0, stored_eof = 126651688)", + "input": "", + "output": "

In my case the .h5 file was somehow corrupted or erroneous, so I downloaded it again and, voila, it worked

\n", + "system": "" + }, + { + "instruction": "The clear_session() method of keras.backend does not clean up the fitting data", + "input": "", + "output": "

K.clear_session() isn't enough to reset states and ensure reproducibility. You'll also need to:

\n\n\n\n

Code accomplishing each below.

\n\n
reset_seeds()\nmodel = make_model() # example function to instantiate model\nmodel.fit(x_good, y_good)\n\ndel model\nK.clear_session()\ntf.compat.v1.reset_default_graph()\n\nreset_seeds()\nmodel = make_model()\nmodel.fit(x_bad, y_bad)\n
\n\n

Note that if other variables reference the model, you should del them also - e.g. model = make_model(); model2 = model --> del model, model2 - else they may persist. Lastly, tf random seeds aren't as easily reset as random's or numpy's, and require the graph to be cleared beforehand.

\n\n


\nFunction/modules used:

\n\n
import tensorflow as tf\nimport numpy as np\nimport random\nimport keras.backend as K\n\ndef reset_seeds():\n    np.random.seed(1)\n    random.seed(2)\n    if tf.__version__[0] == '2':\n        tf.random.set_seed(3)\n    else:\n        tf.set_random_seed(3)\n    print(\"RANDOM SEEDS RESET\")\n
\n", + "system": "" + }, + { + "instruction": "Stuck understanding ResNet's Identity block and Convolutional blocks", + "input": "", + "output": "
\n

I don't understand the reason behind keeping stride > 1 with window\n size 1. Isn't it just data loss?

\n
\n\n

Please refer the section on Deeper Bottleneck Architectures in the resnet paper. Also, Figure 5.\nhttps://arxiv.org/pdf/1512.03385.pdf

\n\n

1 x 1 convolutions are typically used to increase or decrease the dimensionality along the filter dimension. So, in the bottleneck architecture the first 1 x 1 layer reduces the dimensions so that the 3 x 3 layer needs to handle smaller input/output dimensions. Then the final 1 x 1 layer increases the filter dimensions again.

\n\n

It's done to save on computation/training time.

\n\n

From the paper,

\n\n

\"Because of concerns on the training time that we can afford, we modify the building block as a bottleneck design\".

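A quick parameter count for the paper's example (a 256-d block with a 64-d bottleneck, biases and batch-norm ignored) shows the savings:

```python
# Weights of a conv layer ~= kH * kW * in_channels * out_channels.
plain = 2 * (3 * 3 * 256 * 256)        # two plain 3x3 convs at 256 channels

bottleneck = (1 * 1 * 256 * 64         # 1x1 conv reduces 256 -> 64
              + 3 * 3 * 64 * 64        # 3x3 conv works at the reduced width
              + 1 * 1 * 64 * 256)      # 1x1 conv expands 64 -> 256 again

print(plain, bottleneck, plain / bottleneck)  # 1179648 69632 ~16.9x fewer
```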
\n", + "system": "" + }, + { + "instruction": "Keras model training memory leak", + "input": "", + "output": "

One source of the problem is that a new model = Sequential() does not remove the previous model; it remains built within its TensorFlow graph scope, and every new model = Sequential() adds another lingering construction which eventually overflows memory. To ensure a model is properly destroyed in full, run the below once you're done with a model:

\n\n
import gc\ndel model\ngc.collect()\nK.clear_session()\ntf.compat.v1.reset_default_graph() # TF graph isn't same as Keras graph\n
\n\n

gc is Python's garbage collection module, which clears remnant traces of model after del. K.clear_session() is the main call, and clears the TensorFlow graph.

\n\n

Also, while your idea for model checkpointing, logging, and hyperparameter search is quite sound, its execution is flawed: you will actually be testing only one hyperparameter combination for the entire nested loop you've set up there. But this should be asked in a separate question.

\n\n
\n\n

UPDATE: I just encountered the same problem on a fully, properly set-up environment; the likeliest conclusion is that it's a bug - and a definite culprit is Eager execution. To work around it, use

\n\n
tf.compat.v1.disable_eager_execution() # right after `import tensorflow as tf`\n
\n\n

to switch to Graph mode, which can also run significantly faster. Also see updated clear code above.

\n", + "system": "" + }, + { + "instruction": "How to access sample weights in a Keras custom loss function supplied by a generator?", + "input": "", + "output": "

Manual training loop alternative

\n

The only thing I can think of is a manual training loop where you get the weights yourself.

\n

Have a weights tensor and a non variable batch size:

\n
weights = K.variable(np.zeros((batch_size,)))\n
\n

Use them in your custom loss:

\n
def custom_loss(true, pred):\n    return someCalculation(true, pred, weights)\n
\n

For a "generator":

\n
for e in range(epochs):\n    for s in range(steps_per_epoch):\n        x, y, w = next(generator) #or generator.next(), not sure\n        K.set_value(weights, w)\n\n        model.train_on_batch(x, y)\n
\n

For a keras.utils.Sequence:

\n
for e in range(epochs):\n    for s in range(len(generator)):\n        x,y,w = generator[s]\n\n        K.set_value(weights, w)\n        model.train_on_batch(x,y)\n
\n

I know this answer is not optimal because it does not parallelize getting data from the generator, as happens with fit_generator. But it's the best easy solution I can think of. Keras doesn't expose the weights; they are applied automatically in some hidden source code.

\n
\n

Let the model calculate the weights alternative

\n

If calculating the weights can be done from x and y, you can delegate this task to the loss function itself.

\n

This is sort of hacky, but may work:

\n
input1 = Input(shape1)\ninput2 = Input(shape2)\n\n# .... model creation .... #\n\nmodel = Model([input1, input2], outputs)\n
\n

Let the loss have access to input1 and input2:

\n
def custom_loss(y_true, y_pred):\n    w = calculate_weights(input1, input2, y_pred)\n    # .... rest of the loss .... #\n
\n

The issue here is whether or not you can calculate the weights as a tensor from the inputs.

\n", + "system": "" + }, + { + "instruction": "AttributeError: 'str' object has no attribute 'shape'", + "input": "", + "output": "

There is an error in this line:\nmodel.fit(x_train_pad, y_train, epochs=5, batch_size=256)\nBasically x_train_pad is a str (string) and it should be a numpy array.

\n", + "system": "" + }, + { + "instruction": "Keras custom loss function (elastic net)", + "input": "", + "output": "

You can simply use built-in weight regularization in Keras for each layer. To do that you can use kernel_regularizer parameter of the layer and specify a regularizer for that. For example:

\n\n
from keras import regularizers\n\nmodel.add(Dense(..., kernel_regularizer=regularizers.l2(0.1)))\n
\n\n

Those regularizations would create a loss tensor which would be added to the loss function, as implemented in Keras source code:

\n\n
# Add regularization penalties\n# and other layer-specific losses.\nfor loss_tensor in self.losses:\n    total_loss += loss_tensor\n
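For elastic net specifically, Keras also provides regularizers.l1_l2(l1=..., l2=...), which combines both penalties. The arithmetic performed when such a penalty is added to the loss can be sketched in NumPy (illustrative numbers):

```python
import numpy as np

# Sketch of what regularizers.l2(0.1) contributes: the penalty
# 0.1 * sum(w**2) is added on top of the data loss, exactly like the
# `total_loss += loss_tensor` loop quoted above.
w = np.array([0.5, -1.0, 2.0])  # hypothetical layer weights
data_loss = 0.3                 # hypothetical data-term loss

l2_penalty = 0.1 * np.sum(w ** 2)   # 0.1 * (0.25 + 1 + 4)
total_loss = data_loss + l2_penalty
print(total_loss)  # ~0.825
```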
\n", + "system": "" + }, + { + "instruction": "How to make Keras compute a certain metric on validation data only?", + "input": "", + "output": "

To do this you can create a tf.Variable in the metric calculation that determines if the calculation goes ahead and then updates it when a test is run using a callback. e.g.

\n
class MyCustomMetric(tf.keras.metrics.Metric):\n\n    def __init__(self, **kwargs):\n        # Initialise as normal and add flag variable for when to run computation\n        super(MyCustomMetric, self).__init__(**kwargs)\n        self.metric_variable = self.add_weight(name='metric_variable', initializer='zeros')\n        self.on = tf.Variable(False)\n\n    def update_state(self, y_true, y_pred, sample_weight=None):\n        # Use conditional to determine if computation is done\n        if self.on:\n            # run computation\n            self.metric_variable.assign_add(computation_result)\n\n    def result(self):\n        return self.metric_variable\n\n    def reset_states(self):\n        self.metric_variable.assign(0.)\n\nclass ToggleMetrics(tf.keras.callbacks.Callback):\n    '''On test begin (i.e. when evaluate() is called or \n     validation data is run during fit()) toggle metric flag '''\n    def on_test_begin(self, logs):\n        for metric in self.model.metrics:\n            if 'MyCustomMetric' in metric.name:\n                metric.on.assign(True)\n    def on_test_end(self,  logs):\n        for metric in self.model.metrics:\n            if 'MyCustomMetric' in metric.name:\n                metric.on.assign(False)\n
\n", + "system": "" + }, + { + "instruction": "Grid Search for Keras with multiple inputs", + "input": "", + "output": "

This is a workaround to use GridSearch with a Keras model that has multiple inputs. The trick consists of merging all the inputs into a single array. I create a dummy model that receives a SINGLE input and then split it into the desired parts using Lambda layers. The procedure can easily be modified according to your own data structure.

\n
def createMod(optimizer='Adam'):\n    \n    combi_input = Input((3,)) # (None, 3)\n    a_input = Lambda(lambda x: tf.expand_dims(x[:,0],-1))(combi_input) # (None, 1) \n    b_input = Lambda(lambda x: tf.expand_dims(x[:,1],-1))(combi_input) # (None, 1)\n    c_input = Lambda(lambda x: tf.expand_dims(x[:,2],-1))(combi_input) # (None, 1)\n    \n    ## do something\n    c = Concatenate()([a_input, b_input, c_input])\n    x = Dense(32)(c)\n\n    out = Dense(1,activation='sigmoid')(x)\n    model = Model(combi_input, out)\n\n    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics='accuracy')\n\n    return model\n\n\n## recreate multiple inputs\nn_sample = 1000\na_input, b_input, c_input = [np.random.uniform(0,1, n_sample) for _ in range(3)]\ny = np.random.randint(0,2, n_sample)\n\n## merge inputs\ncombi_input = np.stack([a_input, b_input, c_input], axis=-1)\n\n\nmodel = tf.keras.wrappers.scikit_learn.KerasClassifier(build_fn=createMod, verbose=0)\nbatch_size = [10, 20]\nepochs = [10, 5]\noptimizer = ['adam','SGD']\nparam_grid = dict(batch_size=batch_size, epochs=epochs)\ngrid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)\ngrid_result = grid.fit(combi_input, y)\n
\n

Another simple and valuable solution

\n", + "system": "" + }, + { + "instruction": "What that mean this message when I update tensorflow and keras in Anaconda Prompt ? Is wrong or Okay?", + "input": "", + "output": "

ECHO \"WARNING: Did not find VS in registry or in VS140COMNTOOLS env var - your compiler may not work\"

\n\n

I got this warning message after I installed Keras using Anaconda Prompt. Just make sure Keras is the last package you install, otherwise you won't be able to install others, e.g. PyInstaller or Nuitka.

\n", + "system": "" + }, + { + "instruction": "Keras apply different weight to different misclassification", + "input": "", + "output": "

Building on issue #2115, I've coded the following solution and posted it there too.
\nI only tested it in Tensorflow 1.14, so I guess it should work with Keras v2.

\n\n
\n\n

Adding to the class solution here in #2115 (comment)\nhere's a more robust and vectorized implementation:

\n\n
import tensorflow.keras.backend as K\nfrom tensorflow.keras.losses import CategoricalCrossentropy\n\n\nclass WeightedCategoricalCrossentropy(CategoricalCrossentropy):\n\n    def __init__(self, cost_mat, name='weighted_categorical_crossentropy', **kwargs):\n        assert(cost_mat.ndim == 2)\n        assert(cost_mat.shape[0] == cost_mat.shape[1])\n\n        super().__init__(name=name, **kwargs)\n        self.cost_mat = K.cast_to_floatx(cost_mat)\n\n    def __call__(self, y_true, y_pred):\n\n        return super().__call__(\n            y_true=y_true,\n            y_pred=y_pred,\n            sample_weight=get_sample_weights(y_true, y_pred, self.cost_mat),\n        )\n\n\ndef get_sample_weights(y_true, y_pred, cost_m):\n    num_classes = len(cost_m)\n\n    y_pred.shape.assert_has_rank(2)\n    y_pred.shape[1].assert_is_compatible_with(num_classes)\n    y_pred.shape.assert_is_compatible_with(y_true.shape)\n\n    y_pred = K.one_hot(K.argmax(y_pred), num_classes)\n\n    y_true_nk1 = K.expand_dims(y_true, 2)\n    y_pred_n1k = K.expand_dims(y_pred, 1)\n    cost_m_1kk = K.expand_dims(cost_m, 0)\n\n    sample_weights_nkk = cost_m_1kk * y_true_nk1 * y_pred_n1k\n    sample_weights_n = K.sum(sample_weights_nkk, axis=[1, 2])\n\n    return sample_weights_n\n
\n\n

Usage:

\n\n
model.compile(loss=WeightedCategoricalCrossentropy(cost_matrix), ...)\n
\n\n
\n\n

Similarly, this can be applied for the CategoricalAccuracy metric too:

\n\n
from tensorflow.keras.metrics import CategoricalAccuracy\n\n\nclass WeightedCategoricalAccuracy(CategoricalAccuracy):\n\n    def __init__(self, cost_mat, name='weighted_categorical_accuracy', **kwargs):\n        assert(cost_mat.ndim == 2)\n        assert(cost_mat.shape[0] == cost_mat.shape[1])\n\n        super().__init__(name=name, **kwargs)\n        self.cost_mat = K.cast_to_floatx(cost_mat)\n\n    def update_state(self, y_true, y_pred, sample_weight=None):\n\n        return super().update_state(\n            y_true=y_true,\n            y_pred=y_pred,\n            sample_weight=get_sample_weights(y_true, y_pred, self.cost_mat),\n        )\n
\n\n

Usage:

\n\n
model.compile(metrics=[WeightedCategoricalAccuracy(cost_matrix), ...], ...)\n
\n", + "system": "" + }, + { + "instruction": "How to determine input shape in keras?", + "input": "", + "output": "

The 0th dimension (sample-axis) is determined by the batch_size of the training. You omit it when defining the input shape. That makes sense since otherwise your model would be dependent on the number of samples in the dataset.

\n\n

Same goes for the output. It seems you're only predicting a single value per example (\"SalePrice\"). So the output layer has shape 1.

\n\n
model = Sequential([\n    Dense(32, activation='relu', input_shape=(75, )),\n    Dense(32, activation='relu'),\n    Dense(1),\n])\n
\n", + "system": "" + }, + { + "instruction": "Why ImageDataGenerator is iterating forever?", + "input": "", + "output": "

Actually, train_data_gen will generate data batch by batch infinitely.

\n\n

When we call model.fit_generator(), we specify the train_data_gen as generator, and set steps_per_epoch (should be len(train_data)/batch_size). Then the model would know when a single epoch is finished.

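As a sketch (the sample count and batch size here are made-up numbers, and train_data_gen / model stand for the question's objects), the value to pass is just the number of batches per epoch:

```python
import math

# Assumed numbers for illustration only
num_train_samples = 2000
batch_size = 32

# fit_generator needs this to know when one epoch ends, because the
# generator itself yields batches forever.
steps_per_epoch = math.ceil(num_train_samples / batch_size)
print(steps_per_epoch)  # 63

# model.fit_generator(train_data_gen,
#                     steps_per_epoch=steps_per_epoch,
#                     epochs=10)
```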
\n", + "system": "" + }, + { + "instruction": "Multi-feature causal CNN - Keras implementation", + "input": "", + "output": "

I believe that you can have causal padding with dilation for any number of input features. Here is the solution I would propose.

\n

The TimeDistributed layer is key to this.

\n

From Keras Documentation: "This wrapper applies a layer to every temporal slice of an input. The input should be at least 3D, and the dimension of index one will be considered to be the temporal dimension."

\n

For our purposes, we want this layer to apply "something" to each feature, so we move the features to the temporal index, which is 1.

\n

Also relevant is the Conv1D documentation.

\n

Specifically about channels: "The ordering of the dimensions in the inputs. "channels_last" corresponds to inputs with shape (batch, steps, channels) (default format for temporal data in Keras)"

\n
from tensorflow.python.keras import Sequential, backend\nfrom tensorflow.python.keras.layers import Dense, GlobalMaxPool1D, Activation, MaxPool1D, Flatten, Conv1D, Permute, Reshape, TimeDistributed, InputLayer\n\nbackend.clear_session()\nlookback = 20\nn_features = 5\n\nfilters = 128\n\nmodel = Sequential()\nmodel.add(InputLayer(input_shape=(lookback, n_features)))\n# Causal layers are first applied to the features independently\nmodel.add(Permute(dims=(2, 1)))  # UPDATE must permute prior to adding new dim and reshape\nmodel.add(Reshape(target_shape=(n_features, lookback, 1)))\n# After reshape 5 input features are now treated as the temporal layer \n# for the TimeDistributed layer\n\n# When Conv1D is applied to each input feature, it thinks the shape of the layer is (20, 1)\n# with the default "channels_last", therefore...\n\n# 20 time steps is the temporal dimension\n# 1 is the "channel", the new location for the feature maps\n\nmodel.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**0)))\n# You could add pooling here if you want. 
\n# If you want interaction between features AND causal/dilation, then apply later\nmodel.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**1)))\nmodel.add(TimeDistributed(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**2)))\n\n\n# Stack feature maps on top of each other so each time step can look at \n# all features produced earlier\nmodel.add(Permute(dims=(2, 1, 3)))  # UPDATED to fix issue with reshape\nmodel.add(Reshape(target_shape=(lookback, n_features * filters)))  # (20 time steps, 5 features * 128 filters)\n# Causal layers are applied to the 5 input features dependently\nmodel.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**0))\nmodel.add(MaxPool1D())\nmodel.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**1))\nmodel.add(MaxPool1D())\nmodel.add(Conv1D(filters, 3, activation="elu", padding="causal", dilation_rate=2**2))\nmodel.add(GlobalMaxPool1D())\nmodel.add(Dense(units=1, activation='linear'))\n\nmodel.compile(optimizer='adam', loss='mean_squared_error')\n\nmodel.summary()\n
\n

Final Model Summary

\n
_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nreshape (Reshape)            (None, 5, 20, 1)          0         \n_________________________________________________________________\ntime_distributed (TimeDistri (None, 5, 20, 128)        512       \n_________________________________________________________________\ntime_distributed_1 (TimeDist (None, 5, 20, 128)        49280     \n_________________________________________________________________\ntime_distributed_2 (TimeDist (None, 5, 20, 128)        49280     \n_________________________________________________________________\nreshape_1 (Reshape)          (None, 20, 640)           0         \n_________________________________________________________________\nconv1d_3 (Conv1D)            (None, 20, 128)           245888    \n_________________________________________________________________\nmax_pooling1d (MaxPooling1D) (None, 10, 128)           0         \n_________________________________________________________________\nconv1d_4 (Conv1D)            (None, 10, 128)           49280     \n_________________________________________________________________\nmax_pooling1d_1 (MaxPooling1 (None, 5, 128)            0         \n_________________________________________________________________\nconv1d_5 (Conv1D)            (None, 5, 128)            49280     \n_________________________________________________________________\nglobal_max_pooling1d (Global (None, 128)               0         \n_________________________________________________________________\ndense (Dense)                (None, 1)                 129       \n=================================================================\nTotal params: 443,649\nTrainable params: 443,649\nNon-trainable params: 0\n_________________________________________________________________\n
\n

Edit:

\n

"why you need to reshape and use n_features as the temporal layer"

\n

The reason why n_features needs to be at the temporal layer initially is because Conv1D with dilation and causal padding only works with one feature at a time, and because of how the TimeDistributed layer is implemented.

\n

From their documentation "Consider a batch of 32 samples, where each sample is a sequence of 10 vectors of 16 dimensions. The batch input shape of the layer is then (32, 10, 16), and the input_shape, not including the samples dimension, is (10, 16).

\n

You can then use TimeDistributed to apply a Dense layer to each of the 10 timesteps, independently:"

\n

By applying the TimeDistributed layer independently to each feature, it reduces the dimension of the problem as if there was only one feature (which would easily allow for dilation and causal padding). With 5 features, they need to each be handled separately at first.

\n\n", + "system": "" + }, + { + "instruction": "How to classify sequence of images with keras deep learning", + "input": "", + "output": "

Your problem is in the context of Sequence Classification. You need to classify sequences of images. In this case, a model is needed to learn two aspects:

\n\n
    \n
  1. Features of the images
  2. \n
  3. Features of the sequence ( temporal or time-related features )
  4. \n
\n\n
\n

This might sound similar to video classification in which a video is a sequence of several frames. See here.

\n
\n\n

For extracting features from images:

\n\n

Most real-world cases use Convolutional Neural Networks. They use layers like Max Pooling and Convolution. They are excellent at extracting features from a 3D input like an image. You can learn more from here.

\n\n

For handling temporal data:

\n\n

Here's where you will require an RNN (Recurrent Neural Network). LSTM (Long Short-Term Memory) cells are popular RNN cells, as they can hold a stronger memory than traditional RNNs.

\n\n
\n

RNNs preserve the hidden layer activations and use them in processing each and every term in a sequence. Hence, while processing the 2nd image in a sequence, the RNN has knowledge or activations of the 1st image in that same sequence.

\n
\n\n

You can know more from here.

\n\n

Finally, we require a fusion of both the above networks:

\n\n

A CNN-LSTM network uses both convolutional as well as LSTM cells to classify the image sequences.

\n\n
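As a minimal sketch (all the shapes here are assumptions: sequences of 8 frames of 64x64 RGB, 4 sequence classes), such a CNN-LSTM can be written with TimeDistributed so the CNN runs on every frame and the LSTM consumes the per-frame features:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (Conv2D, Dense, Flatten, LSTM,
                                     MaxPooling2D, TimeDistributed)

model = Sequential([
    # The CNN is applied to each of the 8 frames independently
    TimeDistributed(Conv2D(16, (3, 3), activation='relu'),
                    input_shape=(8, 64, 64, 3)),
    TimeDistributed(MaxPooling2D((2, 2))),
    TimeDistributed(Flatten()),
    # The LSTM reads the sequence of per-frame feature vectors
    LSTM(32),
    Dense(4, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
print(model.output_shape)  # (None, 4)
```

Adjust the frame count, image size, and class count to your data.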

\"This

\n\n

You can refer here and here

\n\n

Hope that helps you. :-)

\n", + "system": "" + }, + { + "instruction": "Limiting GPU memory usage by Keras / TF 2019?", + "input": "", + "output": "

One way to keep TensorFlow from reserving all of the GPU RAM is to let the reservation grow on demand. This method allows you to train multiple NNs on the same GPU, but you cannot set a threshold on the amount of memory you want to reserve.

\n\n

Use the following snippet before importing keras, or just use tf.keras instead.

\n\n
import tensorflow as tf\ngpus = tf.config.experimental.list_physical_devices('GPU')\nif gpus:\n    try:\n        for gpu in gpus:\n            tf.config.experimental.set_memory_growth(gpu, True)\n\n    except RuntimeError as e:\n        print(e)\n
\n", + "system": "" + }, + { + "instruction": "How to build an unsupervised CNN model with keras/tensorflow?", + "input": "", + "output": "

You can build an unsupervised CNN with keras using Auto Encoders. The code for it, for Fashion MNIST Data, is shown below:

\n\n
# Python \u22653.5 is required\nimport sys\nassert sys.version_info >= (3, 5)\n\n# Scikit-Learn \u22650.20 is required\nimport sklearn\nassert sklearn.__version__ >= \"0.20\"\n\n# TensorFlow \u22652.0-preview is required\nimport tensorflow as tf\nfrom tensorflow import keras\nassert tf.__version__ >= \"2.0\"\n\n# Common imports\nimport numpy as np\nimport os\n\n(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()\nX_train_full = X_train_full.astype(np.float32) / 255\nX_test = X_test.astype(np.float32) / 255\nX_train, X_valid = X_train_full[:-5000], X_train_full[-5000:]\ny_train, y_valid = y_train_full[:-5000], y_train_full[-5000:]\n\ndef rounded_accuracy(y_true, y_pred):\n    return keras.metrics.binary_accuracy(tf.round(y_true), tf.round(y_pred))\n\ntf.random.set_seed(42)\nnp.random.seed(42)\n\nconv_encoder = keras.models.Sequential([\n    keras.layers.Reshape([28, 28, 1], input_shape=[28, 28]),\n    keras.layers.Conv2D(16, kernel_size=3, padding=\"SAME\", activation=\"selu\"),\n    keras.layers.MaxPool2D(pool_size=2),\n    keras.layers.Conv2D(32, kernel_size=3, padding=\"SAME\", activation=\"selu\"),\n    keras.layers.MaxPool2D(pool_size=2),\n    keras.layers.Conv2D(64, kernel_size=3, padding=\"SAME\", activation=\"selu\"),\n    keras.layers.MaxPool2D(pool_size=2)\n])\nconv_decoder = keras.models.Sequential([\n    keras.layers.Conv2DTranspose(32, kernel_size=3, strides=2, padding=\"VALID\", activation=\"selu\",\n                                 input_shape=[3, 3, 64]),\n    keras.layers.Conv2DTranspose(16, kernel_size=3, strides=2, padding=\"SAME\", activation=\"selu\"),\n    keras.layers.Conv2DTranspose(1, kernel_size=3, strides=2, padding=\"SAME\", activation=\"sigmoid\"),\n    keras.layers.Reshape([28, 28])\n])\nconv_ae = keras.models.Sequential([conv_encoder, conv_decoder])\n\nconv_ae.compile(loss=\"binary_crossentropy\", optimizer=keras.optimizers.SGD(lr=1.0),\n                metrics=[rounded_accuracy])\nhistory = 
conv_ae.fit(X_train, X_train, epochs=5,\n                      validation_data=[X_valid, X_valid])\n\nconv_encoder.summary()\nconv_decoder.summary()\n
\n\n

You can refer to this link for more information.

\n", + "system": "" + }, + { + "instruction": "How to select strategy to reduce overfitting?", + "input": "", + "output": "

These results seem almost too bad to be a case of overfitting. Instead I suspect that there is a difference in the data used for training and for validation.

\n\n

I note that for the training data you are using ImageDataGenerator(rescale=1./255), however for valX I do not see any such processing. I would recommend using a separate ImageDataGenerator with the same rescaling configuration also for the validation data. That way differences are as small as possible.

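For intuition, this is all rescale=1./255 does to each pixel; the point is that the training and validation images must both get it:

```python
# Raw uint8 pixels lie in [0, 255]; rescale=1./255 maps them to [0.0, 1.0].
# If only the training generator rescales, validation inputs arrive roughly
# 255x larger than anything the network saw during training.
raw_pixels = [0, 128, 255]
rescaled = [p / 255 for p in raw_pixels]
print(rescaled[0], rescaled[-1])  # 0.0 1.0
```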
\n", + "system": "" + }, + { + "instruction": "How to feed LSTM when training data is in multiple csv files of time series of different length?", + "input": "", + "output": "

Since your data points have variable sequence lengths, you can't easily train your network all at once. Instead, you must train in mini batches of size 1 or fix your sequence length, although the latter probably doesn't make sense based on the data you're dealing with.

\n\n

Take a look at the Keras function train_on_batch. Using this, you can train your model using each individual patient, although using a batch size of 1 has its own issues.

\n\n

As for the model, I would suggest using the Keras functional API. If you want to try something simple, just use an input sequence of variable length and a feature size of 3. This should give you a baseline, which is what I assume you want from your function name. Something like this:

\n\n
input_ = Input(shape=(None, 3))\nx = LSTM(128)(input_)\noutput = Dense(1, activation='sigmoid')(x)\nmodel = Model(input_, output)\n
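A sketch of how such a model could then be trained one patient at a time with train_on_batch (the random arrays are hypothetical stand-ins for your per-file time series of 3 features, with lengths that vary per patient):

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Model

# Baseline model from above, smaller for the sketch
input_ = Input(shape=(None, 3))
x = LSTM(16)(input_)
output = Dense(1, activation='sigmoid')(x)
model = Model(input_, output)
model.compile(optimizer='adam', loss='binary_crossentropy')

# Hypothetical stand-ins for the csv files: varying lengths, 3 features each
patients = [np.random.rand(np.random.randint(10, 30), 3) for _ in range(3)]
labels = [0.0, 1.0, 0.0]

for seq, y in zip(patients, labels):
    # batch of size 1: x has shape (1, timesteps, 3), y has shape (1, 1)
    model.train_on_batch(seq[np.newaxis, ...], np.array([[y]]))
```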
\n", + "system": "" + }, + { + "instruction": "How do Convolutional Layers (CNNs) work in keras?", + "input": "", + "output": "

Convolutions - Language Agnostic Basics

\n\n

To understand how convolutions work in keras we need a basic understanding of how convolutions work in a language-agnostic setting.

\n\n

\"enter

\n\n

Convolutional layers slide across an input to build an activation map (also called feature map). The above is an example of a 2D convolution. Notice how at each step, the 3 X 3 dark square slides across the input (blue), and for each new 3 x 3 part of the input it analyzes, it outputs a value in our output activation map (blue-green boxes at the top).

\n\n

Kernels and filters

\n\n

The dark square is our kernel. The kernel is a matrix of weights that is multiplied with each part of our input. All the results from these multiplications put together form our activation map.

\n\n

Intuitively, our kernel lets us reuse parameters - a weights matrix that detects an eye in this part of the image will work to detect it elsewhere; there's no point in training different parameters for each part of our input when one kernel can sweep across and work everywhere. We can consider each kernel as a feature-detector for one feature, and its output activation map as a map of how likely that feature is present in each part of your input.

\n\n

A synonym for kernel is filter. The parameter filters is asking for the number of kernels (feature-detectors) in that Conv layer. This number will also be the size of the last dimension in your output, i.e. filters=10 will result in an output shape of (???, 10). This is because the output of each Conv layer is a set of activation maps, and there will be filters number of activation maps.

\n\n

Kernel Size

\n\n

The kernel_size is, well, the size of each kernel. We discussed earlier that each kernel consists of a weights matrix that is tuned to detect certain features better and better. kernel_size dictates the size of the filter mask; in English, how much \"input\" is processed during each convolution. For example, our above diagram processes a 3 x 3 chunk of the input each time. Thus, it has a kernel_size of (3, 3). We can also call the above operation a \"3x3 convolution\".

\n\n

Larger kernel sizes are almost unconstrained in the features they represent, while smaller ones are restricted to specific low-level features. Note though that multiple layers of small kernel sizes can emulate the effect of a larger kernel size.

\n\n

Strides

\n\n

Notice how our above kernel shifts two units each time. The amount that the kernel \"shifts\" for each computation is called strides, so in keras speak our strides=2. Generally speaking, as we increase the number of strides, our model loses more information from one layer to the next, due to the activation map having \"gaps\".

\n\n

Padding

\n\n

Going back to the above diagram, notice the ring of white squares surrounding our input. This is our padding. Without padding, each time we pass our input through a Conv layer, the shape of our result gets smaller and smaller. As a result we pad our input with a ring of zeros, which serves a few purposes:

\n\n
    \n
  1. Preserve information around the edges. From our diagram notice how each corner white square only goes through the convolution once, while center squares go through four times. Adding padding alleviates this problem - squares originally on the edge get convolved more times.

  2. \n
  3. padding is a way to control the shape of our output. We can make shapes easier to work with by keeping the output of each Conv layer have the same shape as our input, and we can make deeper models when our shape doesn't get smaller each time we use a Conv layer.

  4. \n
\n\n

keras provides three different types of padding. The explanations in the docs are very straightforward so they are copied / paraphrased here. These are passed in with padding=..., i.e. padding=\"valid\".

\n\n

valid: no padding

\n\n

same: padding the input such that the output has the same length as the original input

\n\n

causal: results in causal (dilated) convolutions. Normally in the above diagram the \"center\" of the kernel maps to the value in the output activation map. With causal convolutions the right edge is used instead. This is useful for temporal data, where you don't want to use future data to model present data.

\n\n
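To make the effect of padding (and strides) concrete, here is a small sketch of the standard output-length arithmetic that \"valid\" and \"same\" follow along each spatial axis:

```python
import math

def conv_output_length(n, kernel_size, stride, padding):
    """Output length along one spatial axis for a convolution."""
    if padding == 'valid':   # no padding: the kernel must fit entirely inside
        return (n - kernel_size) // stride + 1
    if padding == 'same':    # padded so that output length = ceil(n / stride)
        return math.ceil(n / stride)
    raise ValueError(f'unknown padding: {padding}')

print(conv_output_length(5, 3, 1, 'valid'))  # 3: the input shrinks
print(conv_output_length(5, 3, 1, 'same'))   # 5: the input size is preserved
print(conv_output_length(5, 3, 2, 'same'))   # 3: stride 2 halves it, rounded up
```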

Conv1D, Conv2D, and Conv3D

\n\n

Intuitively the operations that occur on these layers remain the same. Each kernel still slides across your input, each filter outputs an activation map for its own feature, and the padding is still applied.

\n\n

The difference is the number of dimensions that are convolved. For example, in Conv1D a 1D kernel slides across one axis. In Conv2D a 2D kernel slides across two axes.

\n\n

It is very important to note that the D in an X-D Conv layer doesn't denote the number of dimensions of the input, but rather the number of axes that the kernel slides across.

\n\n

\"enter

\n\n

For example, in the above diagram, even though the input is 3D (image with RGB channels), this is an example of a Conv2D layer. This is because there are two spatial dimensions - (rows, cols), and the filter only slides along those two dimensions. You can consider this as being convolutional in the spatial dimensions and fully connected in the channels dimension.

\n\n

The output for each filter is also two dimensional. This is because each filter slides in two dimensions, creating a two dimensional output. As a result you can also think of an N-D Conv as each filter outputting an N-D vector.

\n\n

\"enter

\n\n

You can see the same thing with Conv1D (pictured above). While the input is two dimensional, the filter only slides along one axis, making this a 1D convolution.

\n\n
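A tiny dependency-free sketch of that 1D case: a 1D kernel sliding along a single axis (strictly this is cross-correlation, which is what deep-learning libraries compute under the name convolution):

```python
def conv1d_valid(signal, kernel):
    """Slide a 1D kernel along a 1D input with 'valid' padding."""
    k = len(kernel)
    return [sum(s * w for s, w in zip(signal[i:i + k], kernel))
            for i in range(len(signal) - k + 1)]

# A length-3 kernel over a length-5 input gives 5 - 3 + 1 = 3 outputs
print(conv1d_valid([1, 2, 3, 4, 5], [1, 0, -1]))  # [-2, -2, -2]
```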

In keras, this means that ConvND requires each sample to have N+1 dimensions - N dimensions for the filter to slide across and one additional channels dimension.

\n\n

TLDR - Keras wrap up

\n\n

filters: The amount of different kernels in the layer. Each kernel detects and outputs an activation map for a specific feature, making this the last value in the output shape. I.e. Conv1D outputs (batch, steps, filters).

\n\n

kernel_size: Determines the dimensions of each kernel / filter / feature-detector. Also determines how much of the input is used to calculate each value in the output. Larger size = detecting more complex features, less constraints; however it's prone to overfitting.

\n\n

strides: How many units you move to take the next convolution. Bigger strides = more information loss.

\n\n

padding: Either \"valid\", \"causal\", or \"same\". Determines if and how to pad the input with zeros.

\n\n

1D vs 2D vs 3D: Denotes the number of axes that the kernel slides across. An N-D Conv layer will output an N-D output for each filter, but will require an N+1 dimensional input for each sample. This is composed of N dimensions to slide across plus one additional channels dimension.

\n\n

References:

\n\n

Intuitive understanding of 1D, 2D, and 3D Convolutions in Convolutional Neural Networks

\n\n

https://keras.io/layers/convolutional/

\n\n

http://cs231n.github.io/convolutional-networks/

\n", + "system": "" + }, + { + "instruction": "anaconda prompt always start with these set of commands upon launch after installing keras", + "input": "", + "output": "

Assuming that you are using windows:

\n\n

Keras uses TensorFlow, which in turn depends on the VC++ build tools. Follow the steps below:

\n\n
    \n
  1. Open the Microsoft Visual Studio link: https://visualstudio.microsoft.com/downloads/
  2. \n
  3. Select \u2018Community version 2019\u2019
  4. \n
  5. Download the installer and select VC++ Build tools on the list
  6. \n
\n\n

It is also suggested to create a virtual environment for your project (ideally inside your project folder), which you can do using the command

\n\n
py -m venv .venv\n
\n", + "system": "" + }, + { + "instruction": "Saving keras models with shared layers", + "input": "", + "output": "

You can take the shared layers and put them in a separate model. For example, if the shared layers are: layer1 and layer2, then you will have to create a model where the input layer is layer1 and the output layer is layer2. The output of layer2 will be the input to both models.

\n

If the shared layers are the first layers in both models, then the task becomes easier and you will have three models after separating the shared layers. If the layers are between the input and the output layer of each model, then you will have to separate the layers before the shared layers in each model as well which will result in two additional models.

\n

To achieve this easily, you can use the Keras Functional API to combine multiple models by taking the output of one model as an input to another. For example, if you have the models shared_model, model1 and model2, then you can create the two models for training or inference by taking the output of shared_model as an input for model1 and model2:

\n
input_layer = Input(input_shape)\nshared_output = shared_model(input_layer)\n\ncombined_output1 = model1(shared_output)\ncombined_model1 = Model(inputs=input_layer, outputs=combined_output1)\n\ncombined_output2 = model2(shared_output)\ncombined_model2 = Model(inputs=input_layer, outputs=combined_output2)\n
\n

This way you can train combined_model1 or combined_model2 or use them for inference.

\n

To be able to save checkpoints for the models shared_model and model1 while training combined_model1, you can use the alt-model-checkpoint library to create a callback.

\n", + "system": "" + }, + { + "instruction": "Optical Character Recognition Multiple Line Detection", + "input": "", + "output": "

You want to recognize text of a document containing multiple lines. There are two ways to achieve this:

\n\n
    \n
  1. Segment the document into lines as a pre-processing step, then feed each segmented line separately into your neural network.\nIf you want to go this way, e.g. read the paper [1] from Bunke and Marti. They essentially count the black-white transitions for each scanline and create a histogram out of it. They use the minimums of the histogram to split the document into individual lines. There are some other methods too to segment a document into lines.

  2. \n
  3. Train the neural network to implicitly segment the document into lines.\nYou need to add attention to the neural network, such that it can focus on individual lines.\nBluche has done some great work towards text recognition on document-level. \nSee the paper [2] and the website [3].

  4. \n
\n\n
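The histogram idea in option 1 can be sketched in a few lines of pure Python (the binary \"page\" below is invented, and a real implementation would smooth the histogram and split at its minima rather than testing for exactly zero transitions):

```python
# Count black/white transitions per scanline, then split where a row has none.
def transitions_per_row(image):
    return [sum(1 for a, b in zip(row, row[1:]) if a != b) for row in image]

def split_lines(image):
    lines, current = [], []
    for row, count in zip(image, transitions_per_row(image)):
        if count > 0:
            current.append(row)      # row contains ink: part of a text line
        elif current:
            lines.append(current)    # blank row: close the current text line
            current = []
    if current:
        lines.append(current)
    return lines

page = [
    [0, 1, 1, 0],   # text line 1
    [0, 0, 0, 0],   # blank gap
    [1, 0, 1, 0],   # text line 2
]
print(len(split_lines(page)))  # 2
```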

[1] Bunke, Marti: The IAM-database: an English sentence database for offline handwriting recognition. Download via Springer

\n\n

[2] Bluche: Joint Line Segmentation and Transcription for End-to-End Handwritten Paragraph Recognition. Download via https://arxiv.org/abs/1604.08352

\n\n

[3] Bluche: Scan, Attend and Read. See http://www.tbluche.com/scan_attend_read.html and look for \"Handwriting Recognition with MDLSTM and CTC\" and \"The Collapse Layer and its Proposed Replacements\"

\n", + "system": "" + }, + { + "instruction": "Multi-label classification Keras metrics", + "input": "", + "output": "

I would not use Accuracy for classification tasks with unbalanced classes.\nEspecially for multi-label tasks, most of your labels are probably False. That is, each data point can only have a small set of labels compared to the cardinality of all of the possible labels.\nFor that reason accuracy is not a good metric: if your model predicts all False (sigmoid activation output < 0.5), then you would measure a very high accuracy.

\n\n

I would analyze either the AUC or recall/precision at each epoch.\nAlternatively a multi-label task can be seen as a ranking task (like Recommender Systems) and you could evaluate precision@k or recall@k where k are the top predicted labels.

\n\n
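As a sketch of the ranking view (pure Python, with made-up scores), precision@k asks whether the k highest-scored labels are the relevant ones:

```python
def precision_at_k(relevant, scores, k):
    """Fraction of the top-k scored labels that are truly relevant.

    relevant: set of relevant label indices
    scores:   per-label scores, index = label id
    """
    top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return sum(1 for i in top_k if i in relevant) / k

# 5 possible labels, labels 0 and 3 are relevant; the model ranks 3 and 0 highest
print(precision_at_k({0, 3}, [0.7, 0.1, 0.2, 0.9, 0.3], k=2))  # 1.0
```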

If your Keras back-end is TensorFlow, check out the full list of supported metrics here: https://www.tensorflow.org/api_docs/python/tf/keras/metrics.

\n", + "system": "" + }, + { + "instruction": "When to use GlobalAveragePooling1D and when to use GlobalMaxPooling1D while using Keras for an LSTM model?", + "input": "", + "output": "

It depends a lot on your data and what you want from it. \nThis is one of those cases where testing would be the only sure answer.

\n\n

For instance, if you want to detect the presence of something in your sequences, max pooling seems a good option.

\n\n

But if the contribution of the entire sequence seems important to your result, then average pooling sounds reasonable.

\n\n
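A toy pure-Python illustration of the difference, for one feature channel across a sequence of 4 time steps:

```python
seq = [0.1, 0.9, 0.2, 0.1]  # activation of one feature at each time step

global_max = max(seq)             # "was the feature present anywhere?"
global_avg = sum(seq) / len(seq)  # "how strong was it over the whole sequence?"

print(global_max)  # 0.9
```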

Now, since you're using LSTM layers, perhaps you should use return_sequences=False in the last LSTM layer. This is also a possibility instead of pooling. This will keep the last step of the sequence only. There might be the advantage of having the entire sequence processed, perhaps with cumulative effects in your result.

\n\n

In all cases, testing is the only definitive answer.

\n", + "system": "" + }, + { + "instruction": "Tensorboard projector visualisation - PCA keeps loading", + "input": "", + "output": "

I am answering my own question.

\n\n

As suggested by @Tay2510 in the comment.

\n\n

The same code works after upgrading tensorboard from 1.11.0 to 1.12.0.

\n\n

However, my tensorflow version remained the same at 1.11.0.

\n", + "system": "" + }, + { + "instruction": "How can I save an object containing keras models?", + "input": "", + "output": "

As it took me a long time to find a working solution to a similar problem (pickling an object containing objects containing Keras models), I post here the solution. It is adriangb's, found in LifeSaver:

\n
import pickle\n\nfrom tensorflow.keras.models import Sequential, Model\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.python.keras.layers import deserialize, serialize\nfrom tensorflow.python.keras.saving import saving_utils\n\n\ndef unpack(model, training_config, weights):\n    restored_model = deserialize(model)\n    if training_config is not None:\n        restored_model.compile(\n            **saving_utils.compile_args_from_training_config(\n                training_config\n            )\n        )\n    restored_model.set_weights(weights)\n    return restored_model\n\n# Hotfix function\ndef make_keras_picklable():\n\n    def __reduce__(self):\n        model_metadata = saving_utils.model_metadata(self)\n        training_config = model_metadata.get("training_config", None)\n        model = serialize(self)\n        weights = self.get_weights()\n        return (unpack, (model, training_config, weights))\n\n    cls = Model\n    cls.__reduce__ = __reduce__\n\n# Run the function\nmake_keras_picklable()\n\n# Create the model\nmodel = Sequential()\nmodel.add(Dense(1, input_dim=42, activation='sigmoid'))\nmodel.compile(optimizer='Nadam', loss='binary_crossentropy', metrics=['accuracy'])\n\n# Save\nwith open('model.pkl', 'wb') as f:\n    pickle.dump(model, f)\n
\n

Thanks Adrian !

\n", + "system": "" + }, + { + "instruction": "Don't Understand how to Implement Embeddings for Categorical Features", + "input": "", + "output": "

To combine the categorical data with numerical data, your model should use multiple inputs via the functional API: one for each categorical variable and one for the numerical inputs. It's up to you how you want to combine all that data together, but I assume it makes sense to just concatenate everything and then continue with the rest of your model.

\n\n
numerical_in = Input(shape=(3,))\ncat_in       = Input(shape=(3,))\nembed_layer  = Embedding(input_dim=5, output_dim=3, input_length=3)(cat_in)\nembed_layer  = Flatten()(embed_layer)\nmerged_layer = concatenate([numerical_in, embed_layer])\noutput       = rest_of_your_model(merged_layer)\nmodel        = Model(inputs=[numerical_in, cat_in], outputs=[output])\n\n...\n\nmodel.fit(x=[numerical_df, cat_df], y=[your_expected_out])\n
\n", + "system": "" + }, + { + "instruction": "Keras - How to run loaded model on CPU", + "input": "", + "output": "

Load the model with the device you want to use:

\n\n
with tf.device('/cpu:0'):\n    new_model = load_model('test_model.h5')\n
\n", + "system": "" + }, + { + "instruction": "Error when checking input: expected dense_input to have shape (21,) but got array with shape (1,)", + "input": "", + "output": "

Your test array, arrTest1, is a 1-d vector of length 21:

\n\n
>>> arrTest1.ndim\n1\n
\n\n

What you are trying to feed your model is a row of 21 features. You simply need one more set of brackets:

\n\n
arrTest1 = np.array([[0.1, 0.1, 0.1, 0.1, 0.1, 0.5, 0.1, 0., 0.1, 0.6, 0.1, 0.1, 0., 0., 0., 0.1, 0., 0., 0.1, 0., 0.]])\n
\n\n

And now you have a row with 21 values:

\n\n
>>> arrTest1.shape\n(1, 21)\n
\n", + "system": "" + }, + { + "instruction": "Parallelize tf.from_generator using tf.contrib.data.parallel_interleave", + "input": "", + "output": "

It seems to me you're complicating your life unnecessarily with the generator.\nThis is how I'd implement your input pipeline:

\n\n
def parse_file_tf(filename):\n    return tf.py_func(parse_file, [filename], [tf.float32, tf.float32])\n\n# version with map\nfiles = tf.data.Dataset.from_tensor_slices(files_to_process)\ndataset = files.map(parse_file_tf, num_parallel_calls=N)\ndataset = dataset.flat_map(lambda *x: tf.data.Dataset.from_tensor_slices(x))\ndataset = dataset.batch(batch_size).shuffle(shuffle_size).prefetch(2)\nit = dataset.make_one_shot_iterator()\n
\n\n

To test it, I define a dummy parse_file as:

\n\n
i=0\ndef parse_file(f):\n    global i\n    i += 1\n    return np.asarray([i]*i, dtype=np.float32), np.asarray([i]*i, dtype=np.float32) # mimicks variable-length examples_x, examples_y\n
\n\n

which I feed into a basic loop that shows what the iterator returns:

\n\n
sess = tf.Session()\ntry:\n    while True:\n        x, y = it.get_next()\n        vx, vy = sess.run([x,y])\n        print(vx)\n        print(vy)\nexcept tf.errors.OutOfRangeError:\n    pass\nsess.close()\n
\n\n

Running the code above prints:

\n\n
[2. 3. 2. 1. 3. 3.]\n[2. 3. 2. 1. 3. 3.]\n
\n\n

Explanation of the pipeline

\n\n

Essentially, I leave the parallelization issue to map, where I can pass the number of threads it should run. No need for generators iterating over ranges and those extra complications.

\n\n

I chose map over parallel_interleave because the latter requires you to generate a Dataset instance for each item it returns, which in your case doesn't really make sense because you already have loaded all values in memory when you run parse_file.\nparallel_interleave makes sense if you slowly generate the values (e.g., by applying tf.data.TFRecordDataset to a list of filenames), but if your dataset fits in memory go for map.

\n\n

About the tf.py_func limitations, they do not affect your trained network, only the input pipeline. Ideally, you'll have a different pipeline for your training and for your final use of the network. You only need to take care of the limitations during the latter, while for training (unless you do something very specific with distributed training and/or moving the training across machines) you're reasonably safe.

\n\n
\n\n

Version with Generator

\n\n

If your JSON files are very big and their content won't fit in memory, you can use a generator, but slightly different from the approach you began with.\nThe idea is, the generator goes through the JSON file and yields one record at a time. Then, the generator has to be your parse_file function. As an example, let's assume you have the following parse_file generator:

\n\n
i = 3\ndef parse_file(filename):\n    global i\n    i += 1\n    ctr = 0\n    while ctr < i:\n        yield ctr, ctr\n        ctr += 1\n
\n\n

In this case, the pipeline would look as follows:

\n\n
def wrap_generator(filename):\n    return tf.data.Dataset.from_generator(parse_file, [tf.int32, tf.int32], args=(filename,))\n\nfiles = tf.data.Dataset.from_tensor_slices(files_to_process)\ndataset = files.apply(tf.contrib.data.parallel_interleave(wrap_generator, cycle_length=N))\ndataset = dataset.flat_map(lambda *x: tf.data.Dataset.from_tensor_slices(x))\ndataset = dataset.shuffle(shuffle_size).batch(batch_size).prefetch(2)\nit = dataset.make_one_shot_iterator()\n
\n\n

Note that here we need to use parallel_interleave because we turn the generators into Dataset instances from which we extract values.\nThe rest stays the same.

\n\n

Feeding this to the same sample loop as above prints:

\n\n
[6. 5. 4. 4. 6. 5. 6. 6. 5. 4. 6. 4. 5. 5. 6.]\n[6. 5. 4. 4. 6. 5. 6. 6. 5. 4. 6. 4. 5. 5. 6.]\n
\n", + "system": "" + }, + { + "instruction": "Convolutional neural network architectures with an arbitrary number of input channels (more than RGB)", + "input": "", + "output": "

If you retrain the models, that's not a problem. Only if you want to reuse an already-trained model do you have to keep the input the same.
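If you do want to reuse pretrained RGB weights with more input channels, one common trick is to rebuild the first convolution's kernel yourself, copying the RGB slices and initializing each extra channel from their mean. A minimal numpy sketch; the kernel shape and the averaging strategy are assumptions for illustration, not something the frameworks do for you:

```python
import numpy as np

# Hypothetical pretrained first-conv kernel for RGB input.
# Keras stores conv kernels as (kh, kw, in_channels, filters).
rgb_kernel = np.random.randn(3, 3, 3, 64)

def expand_input_channels(kernel, n_channels):
    """Build a kernel for n_channels inputs from a 3-channel kernel:
    keep the RGB slices and fill each extra channel with their mean."""
    extra = n_channels - kernel.shape[2]
    mean_k = kernel.mean(axis=2, keepdims=True)  # (kh, kw, 1, filters)
    return np.concatenate([kernel] + [mean_k] * extra, axis=2)

new_kernel = expand_input_channels(rgb_kernel, 5)
print(new_kernel.shape)  # (3, 3, 5, 64)
```

You would then assign this array as the first layer's kernel and train from there; all later layers can keep their pretrained weights unchanged.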

\n", + "system": "" + }, + { + "instruction": "Add class information to Generator model in keras", + "input": "", + "output": "

First, following the suggestion given in Conditional Generative Adversarial Nets, you have to define a second input. Then just concatenate the two input vectors and process the concatenated vector.

\n\n\n\n
def generator_model_v2():\n    input_image = Input((IN_CH, img_cols, img_rows)) \n    input_conditional = Input((n_classes))  \n    e0 = Flatten()(input_image) \n    e1 = Concatenate()([e0, input_conditional])   \n    e2 = BatchNormalization(mode=0)(e1)\n    e3 = BatchNormalization(mode=0)(e2)\n    e4 = Dense(1024, activation=\"relu\")(e3)\n    e5 = BatchNormalization(mode=0)(e4)\n    e6 = Dense(512, activation=\"relu\")(e5)\n    e7 = BatchNormalization(mode=0)(e6)\n    e8 = Dense(512, activation=\"relu\")(e7)\n    e9 = BatchNormalization(mode=0)(e8)\n    e10 = Dense(IN_CH * img_cols *img_rows, activation=\"relu\")(e9)\n    e11  = Reshape((3, 28, 28))(e10)\n    e12 = BatchNormalization(mode=0)(e11)\n    e13 = Activation('tanh')(e12)\n\n    model = Model(input=[input_image, input_conditional] , output=e13)\n    return model\n
\n\n

Then, you need to pass the class labels during the training as well to the network:

\n\n
classifier.train_on_batch([image_batch, class_batch], label_batch)\n
\n", + "system": "" + }, + { + "instruction": "How do masked values affect the metrics in Keras?", + "input": "", + "output": "

Maybe the best answer would be this, from the Keras metrics documentation:

\n\n

A metric function is similar to a loss function, except that the results from evaluating a metric are not used when training the model.

\n\n

The training is only influenced by the loss function where masking is implemented. \nNevertheless, your displayed results are not on par with the actual results and can lead to misleading conclusions.

\n\n

As the metric is not used in the training process, a callback function can solve this.

\n\n

Something like this (based on Andrew Ng): I search for 0 here because, for my masked targets, all one-hot-encoded values are 0 (no class activated).

\n\n
import numpy as np\nfrom keras.callbacks import Callback\nfrom sklearn.metrics import accuracy_score\n\nclass categorical_accuracy_no_mask(Callback):\n\n    def on_train_begin(self, logs={}):\n        self.val_acc = []\n\n    def on_epoch_end(self, epoch, logs={}):\n        val_predict = np.asarray(self.model.predict(self.validation_data[0])).round()\n        val_targ = self.validation_data[1]\n        indx = np.where(~val_targ.any(axis=2))[0] # find where all targets are zero: the masked ones, as we masked the target with 0 and the data with 666\n        y_true_nomask = np.delete(val_targ, indx, axis=0)\n        y_pred_nomask = np.delete(val_predict, indx, axis=0)\n\n        _val_accuracy = accuracy_score(y_true_nomask, y_pred_nomask)\n        self.val_acc.append(_val_accuracy)\n\n        print(' - val_accuracy: %f' % _val_accuracy)\n        return\n
\n\n

Of course, now you could add also precision-recall etc.

\n", + "system": "" + }, + { + "instruction": "How to correctly use mask_zero=True for Keras Embedding with pre-trained weights?", + "input": "", + "output": "

Your second approach is correct. You will want to construct your embedding layer in the following way:

\n\n
embedding = Embedding(\n   output_dim=embedding_size,\n   input_dim=vocabulary_size + 1,\n   input_length=input_length,\n   mask_zero=True,\n   weights=[np.vstack((np.zeros((1, embedding_size)),\n                       embedding_matrix))],\n   name='embedding'\n)(input_layer)\n
\n\n

where embedding_matrix is the second matrix you provided.

\n\n

You can see this by looking at the implementation of keras' embedding layer. Notably, how mask_zero is only used to literally mask the inputs

\n\n
def compute_mask(self, inputs, mask=None):\n    if not self.mask_zero:\n        return None\n    output_mask = K.not_equal(inputs, 0)\n    return output_mask\n
\n\n

thus the entire kernel is still multiplied by the input, meaning all indexes are shifted up by one.

\n", + "system": "" + }, + { + "instruction": "Keras: Method on_batch_end() is slow but I have no callbacks?", + "input": "", + "output": "

This could be related to your setting for verbose.

\n\n

I'm seeing the same thing. I believe it's because I have verbose=1 on the call to fit(), i.e. printing the progress bar and current statistics is a batch-end task which is taking too long. This hypothesis is supported by the evidence that the warnings stop if I set verbose=2, which only prints at epoch end, or verbose=0, which doesn't print at all. The warnings also stop if I increase the batch size.

\n", + "system": "" + }, + { + "instruction": "Same (?) neural network architecture in Tensorflow and Keras produces different accuracy on the same data", + "input": "", + "output": "

There is an important difference between Keras and TensorFlow regarding dropout:

\n\n\n\n

In your implementation you're setting the same value to both arguments. You should ensure that keep_prob = 1 - rate.
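Concretely: Keras's Dropout layer takes rate, the fraction of units to drop, while TensorFlow's tf.nn.dropout took keep_prob, the fraction to keep. A small numpy sketch of inverted dropout under both parameterizations (illustrative only, not the frameworks' actual code):

```python
import numpy as np

def dropout_keras_style(x, rate, rng):
    """Zero each unit with probability `rate` (the Keras convention),
    scaling the survivors so the expected sum is unchanged."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def dropout_tf1_style(x, keep_prob, rng):
    """Keep each unit with probability `keep_prob` (the old tf.nn.dropout
    convention); equivalent to Keras-style with rate = 1 - keep_prob."""
    return dropout_keras_style(x, 1.0 - keep_prob, rng)

rng = np.random.default_rng(0)
x = np.ones(100_000)
kept = np.count_nonzero(dropout_keras_style(x, 0.3, rng))
print(kept / x.size)  # about 0.7: rate=0.3 corresponds to keep_prob=0.7
```

Passing the same number to both frameworks therefore drops 0.5 of the units in one and keeps 0.5 in the other only by coincidence at 0.5; at any other value the networks behave differently.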

\n", + "system": "" + }, + { + "instruction": "Extremely poor prediction: LSTM time-series", + "input": "", + "output": "

If you want to use the model in my code (the link you passed), you need to have the data correctly shaped: (1 sequence, total_time_steps, 5 features)

\n\n

Important: I don't know if this is the best way or the best model to do this, but this model is predicting 7 time steps ahead of the input (time_shift=7)

\n\n

Data and initial vars

\n\n
    fi = 'pollution.csv'\nraw = pd.read_csv(fi, delimiter=',')\nraw = raw.drop('Dates', axis=1)\nprint(\"raw shape:\")\nprint (raw.shape)\n#(1789,5) - 1789 time steps / 5 features\n\nscaler = MinMaxScaler(feature_range=(-1, 1))\nraw = scaler.fit_transform(raw)\n\ntime_shift = 7 #shift is the number of steps we are predicting ahead\nn_rows = raw.shape[0] #n_rows is the number of time steps of our sequence\nn_feats = raw.shape[1]\ntrain_size = int(n_rows * 0.8)\n\n\n#I couldn't understand how \"ds\" worked, so I simply removed it because in the code below it's not necessary\n\n#getting the train part of the sequence\ntrain_data = raw[:train_size, :] #first train_size steps, all 5 features\ntest_data = raw[train_size:, :] #I'll use the beginning of the data as state adjuster\n\n\n#train_data = shuffle(train_data) !!!!!! we cannot shuffle time steps!!! we lose the sequence doing this\n\nx_train = train_data[:-time_shift, :] #the entire train data, except the last shift steps \nx_test = test_data[:-time_shift,:] #the entire test data, except the last shift steps\nx_predict = raw[:-time_shift,:] #the entire raw data, except the last shift steps\n\ny_train = train_data[time_shift:, :] \ny_test = test_data[time_shift:,:]\ny_predict_true = raw[time_shift:,:]\n\nx_train = x_train.reshape(1, x_train.shape[0], x_train.shape[1]) #ok shape (1,steps,5) - 1 sequence, many steps, 5 features\ny_train = y_train.reshape(1, y_train.shape[0], y_train.shape[1])\nx_test = x_test.reshape(1, x_test.shape[0], x_test.shape[1])\ny_test = y_test.reshape(1, y_test.shape[0], y_test.shape[1])\nx_predict = x_predict.reshape(1, x_predict.shape[0], x_predict.shape[1])\ny_predict_true = y_predict_true.reshape(1, y_predict_true.shape[0], y_predict_true.shape[1])\n\nprint(\"\\nx_train:\")\nprint (x_train.shape)\nprint(\"y_train\")\nprint (y_train.shape)\nprint(\"x_test\")\nprint (x_test.shape)\nprint(\"y_test\")\nprint (y_test.shape)\n
\n\n

Model

\n\n

Your model wasn't very powerful for this task, so I tried a bigger one (this on the other hand is too powerful)

\n\n
model = Sequential()\nmodel.add(LSTM(64, return_sequences=True, input_shape=(None, x_train.shape[2])))\nmodel.add(LSTM(128, return_sequences=True))\nmodel.add(LSTM(256, return_sequences=True))\nmodel.add(LSTM(128, return_sequences=True))\nmodel.add(LSTM(64, return_sequences=True))\nmodel.add(LSTM(n_feats, return_sequences=True)) \n\nmodel.compile(loss='mse', optimizer='adam')\n
\n\n

Fitting

\n\n

Notice that I had to train 2000+ epochs for the model to have good results.
\nI added the validation data so we can compare the loss for train and test.

\n\n
#notice that I'm predicting from the ENTIRE sequence, including x_train      \n#is important for the model to adjust its states before predicting the end\nmodel.fit(x_train, y_train, epochs=1000, batch_size=1, verbose=2, validation_data=(x_test,y_test))  \n
\n\n

Predicting

\n\n

Important: as for predicting the end of a sequence based on the beginning, it's important that the model sees the beginning to adjust the internal states, so I'm predicting the entire data (x_predict), not only the test data.

\n\n
y_predict_model = model.predict(x_predict)\n\nprint(\"\\ny_predict_true:\")\nprint (y_predict_true.shape)\nprint(\"y_predict_model: \")\nprint (y_predict_model.shape)\n\n\ndef plot(true, predicted, divider):\n\n    predict_plot = scaler.inverse_transform(predicted[0])\n    true_plot = scaler.inverse_transform(true[0])\n\n    predict_plot = predict_plot[:,0]\n    true_plot = true_plot[:,0]\n\n    plt.figure(figsize=(16,6))\n    plt.plot(true_plot, label='True',linewidth=5)\n    plt.plot(predict_plot,  label='Predict',color='y')\n\n    if divider > 0:\n        maxVal = max(true_plot.max(),predict_plot.max())\n        minVal = min(true_plot.min(),predict_plot.min())\n\n        plt.plot([divider,divider],[minVal,maxVal],label='train/test limit',color='k')\n\n    plt.legend()\n    plt.show()\n\ntest_size = n_rows - train_size\nprint(\"test length: \" + str(test_size))\n\nplot(y_predict_true,y_predict_model,train_size)\nplot(y_predict_true[:,-2*test_size:],y_predict_model[:,-2*test_size:],test_size)\n
\n\n

Showing entire data

\n\n

\"enter

\n\n

Showing the end portion of it for more detail

\n\n

Please notice that this model is overfitting: it can learn the training data yet still get bad results on the test data.

\n\n

To solve this you must experimentally try smaller models, use dropout layers and other techniques to prevent overfitting.

\n\n

Notice also that this data very probably contains A LOT of random factors, meaning the models will not be able to learn anything useful from it. As you make smaller models to avoid overfitting, you may also find that the model will present worse predictions for training data.

\n\n

\"enter

\n\n

Finding the perfect model is not an easy task, it's an open question and you must experiment. Maybe LSTM models simply aren't the solution. Maybe your data is simply not predictable, etc. There isn't a definitive answer for this.

\n\n

How to know the model is good

\n\n

With the validation data in training, you can compare loss for train and test data.

\n\n
Train on 1 samples, validate on 1 samples\nEpoch 1/1000\n9s - loss: 0.4040 - val_loss: 0.3348\nEpoch 2/1000\n4s - loss: 0.3332 - val_loss: 0.2651\nEpoch 3/1000\n4s - loss: 0.2656 - val_loss: 0.2035\nEpoch 4/1000\n4s - loss: 0.2061 - val_loss: 0.1696\nEpoch 5/1000\n4s - loss: 0.1761 - val_loss: 0.1601\nEpoch 6/1000\n4s - loss: 0.1697 - val_loss: 0.1476\nEpoch 7/1000\n4s - loss: 0.1536 - val_loss: 0.1287\nEpoch 8/1000\n.....\n
\n\n

Both should go down together. When the test data stops going down, but the train data continues to improve, your model is starting to overfit.
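Keras automates exactly this rule with its EarlyStopping callback (monitor='val_loss', patience=...). The core logic it implements can be sketched in plain Python:

```python
def early_stop_epoch(val_losses, patience=3, min_delta=0.0):
    """Return the 0-based epoch at which training should stop: the first
    epoch after which val loss failed to improve for `patience` epochs,
    or the last epoch if that never happens."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1

# val loss improves, then rises while train loss keeps falling (overfitting)
print(early_stop_epoch([0.40, 0.33, 0.27, 0.20, 0.21, 0.22, 0.23]))  # 6
```

The loss numbers here are made up for illustration; in practice you would pass EarlyStopping in the callbacks list of fit and let it restore the best weights.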

\n\n
\n\n
\n\n
\n\n
\n\n

Trying another model

\n\n

The best I could do (but I didn't really try much) was using this model:

\n\n
model = Sequential()\nmodel.add(LSTM(64, return_sequences=True, input_shape=(None, x_train.shape[2])))\nmodel.add(LSTM(128, return_sequences=True))\nmodel.add(LSTM(128, return_sequences=True))\nmodel.add(LSTM(64, return_sequences=True))\nmodel.add(LSTM(n_feats, return_sequences=True)) \n\nmodel.compile(loss='mse', optimizer='adam')\n
\n\n

When the losses were about:

\n\n
loss: 0.0389 - val_loss: 0.0437\n
\n\n

After this point, the validation loss started going up (so training beyond this point is totally useless)

\n\n

Result:

\n\n

\"enter

\n\n

This shows that all this model could learn was very overall behaviour, such as zones with higher values.

\n\n

But the high frequency was either too random or the model wasn't good enough for this...

\n", + "system": "" + }, + { + "instruction": "What is the architecture behind the Keras LSTM Layer implementation?", + "input": "", + "output": "

Revisited and updated in 2020: I was partially correct! The architecture is 32 neurons. The 10 represents the timestep value. Each neuron is fed a 64-length vector (maybe representing a word vector), representing 64 features (perhaps 64 words that help identify a word) over 10 timesteps.

\n

The 32 represents the number of neurons. It represents how many hidden states there are for this layer and also represents the output dimension (since we output a hidden state at the end of each LSTM neuron).

\n

Lastly, the 32-dimensional output vector generated from the 32 neurons at the last timestep is then fed to a Dense layer of 2 neurons, which basically means plug the 32 length vector to both neurons, with weights on the input and activation.
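One way to sanity-check this reading of the shapes is the parameter count: an LSTM with 32 units on 64-feature inputs has, per gate, an input kernel, a recurrent kernel and a bias, independent of the 10 timesteps. A quick arithmetic sketch, which should match what model.summary() reports for such a layer:

```python
def lstm_param_count(n_features, n_units):
    """4 gates, each with an input kernel (n_features x n_units),
    a recurrent kernel (n_units x n_units) and a bias (n_units)."""
    return 4 * ((n_features + n_units) * n_units + n_units)

print(lstm_param_count(64, 32))  # 12416

def dense_param_count(n_in, n_out):
    """A Dense head consumes only the last 32-length hidden state."""
    return n_in * n_out + n_out

print(dense_param_count(32, 2))  # 66
```

Note the timestep count (10) appears nowhere: the same 12416 weights are reused at every step, which is the whole point of recurrence.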

\n

More reading with somewhat helpful answers:

\n\n", + "system": "" + }, + { + "instruction": "Different accuracy between python keras and keras in R", + "input": "", + "output": "

That is a dramatic difference, so perhaps there's a bug in the code or something unexpected in the data. But reproducing Keras results from R in Python is more difficult than it may seem, since setting the seed on the R side is insufficient. Instead of set.seed you should use use_session_with_seed, which comes with the R libraries for tensorflow and keras. Note that for full reproducibility you need use_session_with_seed(..., disable_gpu=TRUE, disable_parallel_cpu=TRUE). See also the stack and tf docs; here is an example using the github version of kerasformula and a public dataset. Also, watch out for functions like layer_dropout that accept seed as a parameter.

\n", + "system": "" + }, + { + "instruction": "Simplifying a Keras LSTM model in functional API", + "input": "", + "output": "

You can write your model using the functional API as follows:

\n\n
x=Input(shape=(timestep,n_feature))\nmodel=LSTM(8)(x)\nmodel=Dense(2)(model)\n\nmyModel=Model(x,model)\n
\n", + "system": "" + }, + { + "instruction": "Converting Tensor to np.array using K.eval() in Keras returns InvalidArgumentError", + "input": "", + "output": "

The loss function is compiled with the model. At compile time, y_true and y_pred are only placeholder tensors, so they do not have a value yet and can therefore not be evaluated. This is why you get the error message.

\n\n

Your loss function should use Keras tensors, not the numpy arrays they evaluate to. If you need to use additional numpy arrays, convert them to tensors via the variable method of keras.backend (Keras Backend Documentation).

\n\n

Edit:

\n\n

You will still need to stay inside the Keras function space to make your loss work. If this is the concrete loss function that you want to implement, and assuming that your values are in {0,1}, you can try something like this:

\n\n
import keras.backend as K\n\ndef custom_loss_function(y_true, y_pred):\n\n    y_true = y_true*2 - K.ones_like(y_true) # re-codes values of y_true from {0,1} to {-1,+1}\n    y_true = y_true*y_pred # makes the values that you are not interested in equal to zero\n    classification_score = K.abs(K.sum(y_true))\n    return classification_score\n
\n", + "system": "" + }, + { + "instruction": "Non linear Regression: Why isn't the model learning?", + "input": "", + "output": "

Your learning rate is way too high.

\n\n

Also, irrelevant to your issue, but you should not ask for metrics=['accuracy'], as this is a regression setting and accuracy is meaningless.

\n\n

So, with these changes:

\n\n\n\n
sgd = SGD(lr=0.001)\nmodel.compile(loss='mse', optimizer=sgd)\n\nplt.legend(['Predicted Y', 'Actual Y']) # typo in legend :)\n
\n\n

here are some outputs (results will be different among runs, due to the random element of your y):

\n\n

\"enter

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "IOPub Error on Google Colaboratory in Jupyter Notebook", + "input": "", + "output": "

The IOPub error occurs in Colab because you are trying to display very large output in the console itself (e.g. with print() statements).

\n\n

So the workaround would be: write the output to a file (instead of using print statements) and download the file from Colab.
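A minimal sketch of that workaround; the file name and the generated contents are placeholders:

```python
# Rather than printing a huge amount of output to the notebook console,
# stream it to a file and download that file from Colab afterwards.
results = range(100_000)  # stand-in for whatever large output you produce

with open("output.txt", "w") as f:
    for r in results:
        f.write(f"{r}\n")

# In Colab you could then fetch it with:
#   from google.colab import files
#   files.download("output.txt")
```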

\n\n

Note: please avoid simple print statements (just printing the contents of your datasets) when you are training with a large dataset.

\n", + "system": "" + }, + { + "instruction": "Word-level Seq2Seq with Keras", + "input": "", + "output": "

Recently I was also facing this problem. There is no solution other than creating small batches, say batch_size=64, in a generator and then using model.fit_generator instead of model.fit. I have attached my generate_batch code below:

\n\n
def generate_batch(X, y, batch_size=64):\n    ''' Generate a batch of data '''\n    while True:\n        for j in range(0, len(X), batch_size):\n            encoder_input_data = np.zeros((batch_size, max_encoder_seq_length),dtype='float32')\n            decoder_input_data = np.zeros((batch_size, max_decoder_seq_length+2),dtype='float32')\n            decoder_target_data = np.zeros((batch_size, max_decoder_seq_length+2, num_decoder_tokens),dtype='float32')\n\n            for i, (input_text_seq, target_text_seq) in enumerate(zip(X[j:j+batch_size], y[j:j+batch_size])):\n                for t, word_index in enumerate(input_text_seq):\n                    encoder_input_data[i, t] = word_index # encoder input seq\n\n                for t, word_index in enumerate(target_text_seq):\n                    decoder_input_data[i, t] = word_index\n                    if (t>0)&(word_index<=num_decoder_tokens):\n                        decoder_target_data[i, t-1, word_index-1] = 1.\n\n            yield([encoder_input_data, decoder_input_data], decoder_target_data)\n
\n\n

And then training like this:

\n\n
batch_size = 64\nepochs = 2\n\n# Run training\nmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.fit_generator(\n    generator=generate_batch(X=X_train_sequences, y=y_train_sequences, batch_size=batch_size),\n    steps_per_epoch=math.ceil(len(X_train_sequences)/batch_size),\n    epochs=epochs,\n    verbose=1,\n    validation_data=generate_batch(X=X_val_sequences, y=y_val_sequences, batch_size=batch_size),\n    validation_steps=math.ceil(len(X_val_sequences)/batch_size),\n    workers=1,\n    )\n
\n\n

X_train_sequences is list of lists like [[23,34,56], [2, 33544, 6, 10]].
\nSimilarly others.

\n\n

Also took help from this blog - word-level-english-to-marathi-nmt

\n", + "system": "" + }, + { + "instruction": "Can I add images to Tensorboard through Keras?", + "input": "", + "output": "

I fixed my problem by creating my own Keras callback based on the TensorBoard callback, where I could use the tf.summary.image feature

\n", + "system": "" + }, + { + "instruction": "Keras Batchnormalization, differing results in trainin and evaluation on training dataset", + "input": "", + "output": "

This is really annoying. When you set the learning_phase to True, a BatchNormalization layer gets its normalization statistics straight from the data, which can be a problem when you have a small batch_size. I came across a similar issue some time ago, and here is my solution:

\n\n
    \n
  1. When building the model, add an option for whether it will run in the learning phase or not, and in the learning-phase version use the following class instead of BatchNormalization:

    \n\n
    class NonTrainableBatchNormalization(BatchNormalization):\n    \"\"\"\n    This class makes possible to freeze batch normalization while Keras \n    is in training phase.\n    \"\"\"\n    def call(self, inputs, training=None):\n        return super(\n            NonTrainableBatchNormalization, self).call(inputs, training=False)\n
  2. \n
  3. Once you have trained your model, set the learning-phase model's weights from the trained one:

    \n\n
    learning_phase_model.set_weights(learned_model.get_weights())\n
  4. \n
\n\n

Now you can fully enjoy using BatchNormalization in a learning_phase.

\n", + "system": "" + }, + { + "instruction": "Is there a momentum option for Adam optimizer in Keras?", + "input": "", + "output": "

Short answer: no, neither in Keras nor in Tensorflow [EDIT: see UPDATE at the end]

\n\n

Long answer: as already mentioned in the comments, Adam already incorporates something like momentum. Here is some relevant corroboration:

\n\n

From the highly recommended An overview of gradient descent optimization algorithms (available also as a paper):

\n\n
\n

In addition to storing an exponentially decaying average of past squared gradients u[t] like Adadelta and RMSprop, Adam also keeps an exponentially decaying average of past gradients m[t], similar to momentum

\n
\n\n

From Stanford CS231n: CNNs for Visual Recognition:

\n\n
\n

Adam is a recently proposed update that looks a bit like RMSProp with momentum

\n
\n\n

Notice that some frameworks include a momentum parameter for Adam, but this is actually the beta1 parameter; here is CNTK:

\n\n
\n

momentum (float, list, output of momentum_schedule()) \u2013 momentum schedule. Note that this is the beta1 parameter in the Adam paper. For additional information, please refer to the this CNTK Wiki article.

\n
\n\n

That said, there is an ICLR 2016 paper titled Incorporating Nesterov momentum into Adam, along with an implementation skeleton in Tensorflow by the author - cannot offer any opinion on this, though.

\n\n

UPDATE: Keras indeed includes now an optimizer called Nadam, based on the ICLR 2016 paper mentioned above; from the docs:

\n\n
\n

Much like Adam is essentially RMSprop with momentum, Nadam is Adam RMSprop with Nesterov momentum.

\n
\n\n

It is also included in Tensorflow as a contributed module NadamOptimizer.

\n", + "system": "" + }, + { + "instruction": "Is it possible to have dynamic batchsize in keras?", + "input": "", + "output": "

It is possible if you train in a loop instead of with fit. An example:

\n\n
from random import shuffle    \n\ndataSlices = [(0,104),(104,186),(186,218)]\n\nfor epochs in range(0,10):\n    shuffle(dataSlices)\n    for i in dataSlices:\n        x,y = X[i[0]:i[1],:],Y[i[0]:i[1],:]\n        model.fit(x,y,epochs=1,batchsize=x.shape[0])\n        #OR as suggest by Daniel Moller\n        #model.train_on_batch(x,y)\n
\n\n

This assumes your data consists of 2-d numpy arrays. The idea can be further expanded to use fit_generator() in place of the for loop if you so choose (see the docs).

\n", + "system": "" + }, + { + "instruction": "How to approximate the determinant with keras", + "input": "", + "output": "

I tested your code and got the same result. But let's go over the basics of the matrix determinant (DET). The DET expands into n! signed product terms, so you cannot really approximate it with n*n weights in a few layers of a neural network. That would require a number of weights that does not scale to n=15, since 15! is 1307674368000 multiplication terms in the DET.
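The factorial growth is easy to see in plain Python, comparing the number of determinant terms with the n*n inputs the network actually receives:

```python
import math

# The Leibniz expansion of an n x n determinant has n! signed product
# terms, while the network only receives n*n input values:
for n in (3, 5, 10, 15):
    print(n, math.factorial(n), n * n)
```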

\n", + "system": "" + }, + { + "instruction": "Intuition behind Stacking Multiple Conv2D Layers before Dropout in CNN", + "input": "", + "output": "

The main difference between these two approaches is that the latter (2 convs) has more flexibility to express non-linear transformations without losing information. Maxpool removes information from the signal, and dropout forces a distributed representation; both effectively make it harder to propagate information. If, for a given problem, a highly non-linear transformation has to be applied to the raw data, stacking multiple convs (with relu) makes it easier to learn. Also note that you are comparing a model with 3 max poolings to a model with only 2, so the second one will potentially lose less information. Another difference is that the second has a much bigger fully connected part at the end, while the first one's is tiny (64 neurons + 0.5 dropout means that you effectively have at most about 32 neurons active, a tiny layer!). To sum up:

\n\n
    \n
  1. These architectures differ in many aspects, not just in the stacking of conv layers.
  2. \n
  3. Stacking convnets usually leads to less information being lost in processing; see for example \"all convolutional\" architectures.
  4. \n
\n", + "system": "" + }, + { + "instruction": "Constant Output and Prediction Syntax with LSTM Keras Network", + "input": "", + "output": "

This is because you have not normalised input data.

\n

Any neural network model will initially have weights normalised around zero. Since your training dataset has all positive values, the model will try to adjust its weights to predict only positive values. However, the activation function (in your case softsign) will map it to 1. So the model can do nothing except adding the bias. That is why you are getting an almost constant line around the average value of the dataset.

\n

For this, you can use a general tool like sklearn to preprocess your data. If you are using a pandas dataframe, something like this will help:

\n
data_df = (data_df - data_df.mean()) / data_df.std()\n
\n

Or to have the parameters in the model, you can consider adding batch normalization layer to your model

\n", + "system": "" + }, + { + "instruction": "Keras NN loss not decreasing", + "input": "", + "output": "

The problem is the output layer's activation function. You would only use softmax for a two-class structure if your output array looked like this:

\n\n
[[1,0],\n [1,0],\n [0,1]]\n
\n\n

softmax assumes the outputs will sum to 1, so if you predict on your input values you will notice that they are all 1, hence the loss never decreases. You can set up your output like the above, or you can keep it the same, as a single binary column like this:

\n\n
[[1],\n [1],\n [0]]\n
\n\n

and use the sigmoid activation function, which maps the output to the range 0-1. So your last layer will be:

\n\n
t = Dense((num_classes),activation='sigmoid',name='s_den1') (t)\n
\n", + "system": "" + }, + { + "instruction": "How to extract relevant information from receipt", + "input": "", + "output": "

My answer isn't as fancy as what's in fashion right now, but I think it works in your case, especially if this is for a product (not for research & publication purposes).

\n\n

\"Example

\n\n

I would implement the paper Text/Graphics Separation Revisited. I have already implemented it in both Matlab & C++ and I guarantee from your description it won't take you long. In summary:

\n\n
    \n
  1. Get all connected components with stats. You're specially interested in the bounding box for each character.

  2. \n
  3. The paper obtains thresholds from histograms on the properties of your connected components, which makes it a bit robust. Using these thresholds (that work surprisingly well) on the geometrical properties of your connected components, discard anything that's not a character.

  4. \n
  5. For your characters, get the centroid for all of their bounding boxes and group the close centroids by your own criteria (height, vertical position, euclidean distance, etc.). Use the obtained centroid clusters to create rectangular text regions.

  6. \n
  7. Associate text regions of same height and vertical position.

  8. \n
  9. Run OCR on your text regions and look for keywords like \"Cash\". I honestly think you can get away with having dictionaries with text files, and from having done computer vision for mobile I know your resources are limited (by privacy too).

  10. \n
\n\n
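A rough sketch of steps 1-2 using scipy's connected-component labelling in place of the paper's exact method (the area thresholds here are invented; the paper derives its thresholds from histograms instead):

```python
import numpy as np
from scipy import ndimage

def candidate_character_boxes(binary_img, min_area=2, max_area=100):
    # label connected components in a binarised image
    labels, n = ndimage.label(binary_img)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        # discard blobs whose geometry cannot be a character
        if min_area <= h * w <= max_area:
            boxes.append((sl[0].start, sl[1].start, h, w))
    return boxes

# toy image: two character-sized blobs and one huge blob (a graphic)
img = np.zeros((20, 40), dtype=int)
img[2:5, 2:4] = 1      # character-sized
img[2:5, 6:8] = 1      # character-sized
img[8:20, 0:40] = 1    # too big -> treated as graphics, discarded
print(candidate_character_boxes(img))  # only the two small boxes survive
```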

I honestly don't think a neural net will be much better than some kind of keyword matching (e.g. using Levenshtein distance or something similar to add a bit of robustness) because you will need to manually create and label these words anyway to create your training dataset, so... Why not just write them down instead?

\n\n

That's basically it. You end up with something very fast (specially if you want to use a phone and you can't send images to a server) and it just works. No machine learning needed, so no dataset needed either.

\n\n

But if this is for school... Sorry I was so rude. Please use TensorFlow with 10,000 manually labeled receipt images and natural language processing methods, your professor will be happy.

\n", + "system": "" + }, + { + "instruction": "Use hidden states instead of outputs in LSTMs of keras", + "input": "", + "output": "

I think your confusion possibly stems from the Keras documentation being a little unclear.

\n\n
return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.\nreturn_state: Boolean. Whether to return the last state in addition to the output.\n
\n\n

The docs on return_state are especially confusing because they imply that the hidden states are different from the outputs, but they are one and the same. For LSTMs this gets a little murky because in addition to the hidden (output) states, there is the cell state. We can confirm this by looking at the LSTM step function in the Keras code:

\n\n
class LSTM(Recurrent):\n    def step(...):\n        ...\n        return h, [h, c]\n
\n\n

The return type of this step function is output, states. So we can see that the hidden state h is actually the output, and for the states we get both the hidden state h and the cell state c. This is why you see the Wiki article you linked using the terms \"hidden\" and \"output\" interchangeably.

\n\n

Looking at the paper you linked a little closer, it seems to me your original implementation is what you want.

\n\n
my_lstm = LSTM(128, input_shape=(a, b), return_sequences=True)\nmy_lstm = AttentionWithContext()(my_lstm)\nout = Dense(2, activation='softmax')(my_lstm)\n
\n\n

This will pass the hidden state at each timestep to your attention layer. The only scenario where you are out of luck is the one where you actually want to pass the cell state from each timestep to your attention layer (which is what I thought initially), but I do not think this is what you want. The paper you linked actually uses a GRU layer, which has no concept of a cell state, and whose step function also returns the hidden state as the output.

\n\n
class GRU(Recurrent):\n    def step(...):\n        ...\n        return h, [h]\n
\n\n

So the paper is almost certainly referring to the hidden states (aka outputs) and not the cell states.

\n", + "system": "" + }, + { + "instruction": "Multidimensional Input to Keras", + "input": "", + "output": "

I would suggest that you one-hot encode your output classes, i.e. using:

\n\n
# Convert labels to categorical one-hot encoding\nlabels = np.array([1, 2]) # 0 - num_classes - 1\ny_train = keras.utils.to_categorical(labels, num_classes=3)\n
\n\n

And then used 'categorical_crossentropy' instead of 'sparse_categorical_crossentropy':

\n\n
model.compile(loss='categorical_crossentropy',\n              optimizer=sgd,\n              metrics=['accuracy'])\n
\n\n

Finally, you should flatten your 3D input at some point if you want 2D output (samples x classes). The number of units in the output layer (you currently have only one) should match the number of classes, and you should use an appropriate activation function (e.g. 'softmax')

\n\n
model.add(Flatten(input_shape=X_train.shape[1:]))\nmodel.add(Dense(3, activation='softmax'))\n
\n\n

Try to have a look at the Multilayer Perceptron (MLP) for multi-class softmax classification at:\nhttps://keras.io/getting-started/sequential-model-guide/

\n", + "system": "" + }, + { + "instruction": "hybrid of max pooling and average pooling", + "input": "", + "output": "

I now use a different solution for combining both pooling variations.

\n\n\n\n

This approach, of course, has a higher computational cost but is also more flexible.\nThe conv layer after the concatenation can learn to simply blend the two pooling results with an alpha, but it can also end up using different alphas for different features and of course - as conv layers do - combine the pooled features in a completely new way.

\n\n

The code (Keras functional API) looks as follows:

\n\n
import numpy as np\nfrom tensorflow.keras.layers import Input, MaxPooling2D, Conv2D\nfrom tensorflow.keras.layers import Concatenate, AveragePooling2D\nfrom tensorflow.keras.models import Model\n\n# implementation of the described custom pooling layer\ndef hybrid_pool_layer(pool_size=(2,2)):\n    def apply(x):\n        return Conv2D(int(x.shape[-1]), (1, 1))(\n            Concatenate()([\n                MaxPooling2D(pool_size)(x),\n                AveragePooling2D(pool_size)(x)]))\n    return apply\n\n# usage example\ninputs = Input(shape=(256, 256, 3))\nx = inputs\nx = Conv2D(8, (3, 3))(x)\nx = hybrid_pool_layer((2,2))(x)\nmodel = Model(inputs=inputs, outputs=x)\nmodel.compile(loss='categorical_crossentropy', optimizer='nadam')\n
\n\n

Surely one could also leave out the Conv2D and just return the concatenation of the two poolings and let the next layer do the merging work. But the implementation above makes sure that the tensor resulting from this hybrid pooling has the shape one would expect from a normal single pooling operation.

\n", + "system": "" + }, + { + "instruction": "keras 1D convolution input shape", + "input": "", + "output": "

Input to keras.layers.Conv1D should be 3-d with dimensions (nb_of_examples, timesteps, features). I assume that you have a sequence of length 9000 with 1 feature. In this case:

\n\n
X = X.reshape((-1, 9000, 1))\n
\n\n

Should do the job.

\n", + "system": "" + }, + { + "instruction": "Keras 2, TypeError: can't pickle _thread.lock objects", + "input": "", + "output": "

OK the problem seems to be passing method as an argument, in this line:

\n\n
model = KerasClassifier(build_fn=create_model(input_dim), verbose=0)\n
\n\n

create_model is the function to be passed as build_fn, but I wanted to pass another argument to create_model, so I called it in place. That is not the right way of doing it, and hence it causes the error.

\n\n
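For reference, the working pattern is to hand over the function itself and let the wrapper call it later with any extra keyword arguments (KerasClassifier forwards its sk_params to build_fn). The deferred-call idea can be sketched without Keras at all — DeferredBuilder and this create_model are hypothetical stand-ins:

```python
# stand-in for the KerasClassifier pattern: store the factory function and
# its kwargs now, call the factory only when a model is actually needed
class DeferredBuilder:
    def __init__(self, build_fn, **sk_params):
        self.build_fn = build_fn
        self.sk_params = sk_params

    def build(self):
        return self.build_fn(**self.sk_params)

def create_model(input_dim=10):
    # placeholder for "build and compile a Keras model"
    return {"input_dim": input_dim}

clf = DeferredBuilder(create_model, input_dim=20)  # note: create_model is NOT called here
model = clf.build()
print(model["input_dim"])  # 20
```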

The error message is not informative in this case, unfortunately.

\n", + "system": "" + }, + { + "instruction": "How to get list of values in ImageDataGenerator.flow_from_directory Keras?", + "input": "", + "output": "

The ImageDataGenerator produces a python generator: each time it yields a batch of data whose shape matches your model inputs (like (batch_size, width, height, channels)). The benefit of a generator is that when your data set is too big to fit into your limited memory, you can generate one batch of data at a time. The ImageDataGenerator works with model.fit_generator() and model.predict_generator().

\n\n

If you want to get the numeric data, you can use the next() function of the generator:

\n\n
import numpy as np    \n\ndata_gen = ImageDataGenerator(rescale = 1. / 255)\n\ndata_generator = data_gen.flow_from_directory(\n    data_dir,\n    target_size=(img_height, img_width),\n    batch_size=batch_size,\n    class_mode='categorical')\ndata_list = []\nbatch_index = 0\n\nwhile batch_index <= data_generator.batch_index:\n    data = data_generator.next()\n    data_list.append(data[0])\n    batch_index = batch_index + 1\n\n# now, data_array is the numeric data of whole images\ndata_array = np.asarray(data_list)\n
\n\n

Alternatively, you can use PIL and numpy process the image by yourself:

\n\n
from PIL import Image\nimport numpy as np\n\ndef image_to_array(file_path):\n    img = Image.open(file_path)\n    img = img.resize((img_width,img_height))\n    data = np.asarray(img,dtype='float32')\n    return data\n    # now data is a tensor with shape(width,height,channels) of a single image.\n
\n\n

Then, you can loop all your images with this function to get the numeric data.

\n\n

Notice, I recommend you use the generator instead of getting all the data directly, or you might run out of memory.

\n", + "system": "" + }, + { + "instruction": "Weird accuracy in multilabel classification keras", + "input": "", + "output": "

I think the problem with the accuracy is that your outputs are sparse.

\n\n

Keras computes accuracy using this formula:

\n\n
K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)\n
\n\n

So, in your case, having only 1~10 non-zero labels, a prediction of all 0s will yield an accuracy of 99.9% ~ 99%.

\n\n
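The effect is easy to reproduce with the same formula in numpy (the label counts below are made up):

```python
import numpy as np

def binary_accuracy(y_true, y_pred):
    # numpy version of K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)
    return np.mean(np.equal(y_true, np.round(y_pred)), axis=-1)

# 1000 possible labels, only 5 of them are actually set
y_true = np.zeros((1, 1000))
y_true[0, :5] = 1
y_pred = np.zeros((1, 1000))  # the net lazily predicts all zeros

print(binary_accuracy(y_true, y_pred))  # [0.995]
```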

As for the network not learning: I think the problem is that you are using a sigmoid as last activation and using 0 or 1 as output value. This is bad practice since, in order for the sigmoid to return 0 or 1, the values it gets as input must be very large or very small, which reflects on the net having very large (in absolute value) weights. Furthermore, since in each training output there are far fewer 1s than 0s, the network will soon get to a stationary point in which it simply outputs all zeros (the loss in this case is not very large either, it should be around 0.016~0.16).

\n\n

What you can do is scale your output labels so that they are between (0.2, 0.8) for example so that the weights of the net won't become too big or too small. Alternatively you can use a relu as activation function.

\n", + "system": "" + }, + { + "instruction": "What should be the output shape of the loss function in keras?", + "input": "", + "output": "

Edit: My first answer was written thinking your input was (299,299,3), not your output. My apologies! \"Image prediction\" is rather vague, but if your intention really is to output a 3D tensor as your y_pred, you still will probably want to create a loss function that produces a scalar. This is because the loss of a sample needs to be pooled with the error of all the other samples. The joint loss allows the model to generalize its behavior. See this wiki snippet. Essentially, a multidimensional loss function would be equivalent to setting the batch size to 1 in stochastic gradient descent.

\n\n

First Answer:
\nUsually you want your loss function to output a single number. If you're doing image classification, you almost certainly want your loss function to output a single number.

\n\n

Assuming \"image prediction\" means \"image classification\" your inputs are likely going to be your images and your y_pred will most often be a batch of one dimensional arrays with a length equal to the number of possible classes.

\n\n

Your y_true will be batch of one_hot encoded arrays of the same size as y_pred. This means that they are arrays with a length equal to the number of classes your images can be. The difference is that the y_true vectors contain all zeros except for a single 1 at the index of the corresponding image's category.

\n\n

For example, if your images are only of dogs, cats, or sheep, then there are 3 possible classes. Let's arbitrarily say that 0 means dog, 1 means cat, and 2 means sheep. Then if an image is of a sheep, the corresponding y_true would be [0,0,1]. If the image is of a cat, y_true would be [0,1,0]. If your image is of a bear, your classifier will be confused...

\n\n

As for the loss function, usually you would somehow figure out how far off each y_pred is from the corresponding y_true and sum up all of the differences in the batch. This results in a single number representing the total loss for the batch.

\n\n
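For instance, categorical crossentropy does exactly this — a per-sample score that is then averaged into one scalar for the whole batch (the predictions below are invented):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # per-sample loss, then the batch mean -> a single number
    per_sample = -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=-1)
    return per_sample.mean()

y_true = np.array([[0, 0, 1],       # sheep
                   [0, 1, 0]])      # cat
y_pred = np.array([[0.2, 0.2, 0.6],
                   [0.1, 0.8, 0.1]])

loss = categorical_crossentropy(y_true, y_pred)
print(loss)  # ~0.367, one scalar for the whole batch
```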

Keras is set up nicely to handle the loss function for you. When you have a model, you call model.compile(loss='some_loss_fxn_here', optimizer='some_optimizer_here') and specify the loss function you want to use as a string. You will probably want to use 'categorical_crossentropy'.

\n\n

Given the way you worded your question, you may need to create a model before worrying about all of this.

\n\n

You might try something like this:

\n\n
from keras.models import Sequential, Model\nfrom keras.layers import Input, Conv2D, MaxPooling2D, Dense, Flatten, Dropout\nfrom keras.layers.normalization import BatchNormalization\n\ndef conv_block(x, depth, filt_shape=(3,3), padding='same', act='relu'):\n    x = Conv2D(depth, filt_shape, padding=padding, activation=act)(x)\n    x = BatchNormalization()(x)\n    pool_filt, pool_stride = (2,2), (2,2)\n    return MaxPooling2D(pool_filt, strides=pool_stride, padding='same')(x)\n\n# Don't forget to preprocess your images!\ninputs = Input(shape=(299,299,3))\nlayer1 = conv_block(inputs, 64) # shape = [batch_size,150,150,64]\nlayer1 = Dropout(.05)(layer1) # Note early dropout should be low\nlayer2 = conv_block(layer1, 64) # shape = [batch_size,75,75,64]\nlayer2 = Dropout(.1)(layer2)\nlayer3 = conv_block(layer2, 64) # shape = [batch_size,38,38,64]\nlayer3 = Dropout(.2)(layer3)\nlayer4 = conv_block(layer3, 64) # shape = [batch_size,19,19,64]\nlayer4 = Dropout(.25)(layer4)\nlayer5 = conv_block(layer4, 64) # shape = [batch_size,10,10,64]\n\nflat_layer = Flatten()(layer5) # shape = [batch_size, 6400]\nflat_layer = Dropout(.4)(flat_layer)\n\ndef dense_block(x, output_dim, act='relu'):\n    x = Dense(output_dim, activation=act)(x)\n    return BatchNormalization()(x)\n\nlayer6 = dense_block(flat_layer, 300)\nlayer7 = dense_block(layer6, 50)\n\nn_labels = 10 # change this number depending on the number of image classes\noutputs = dense_block(layer7, n_labels, 'softmax')\n\nmodel = Model(inputs=inputs, outputs=outputs)\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')\n\n# Be sure to make your own images and y_trues arrays!\nmodel.fit(x=images,y=y_trues,batch_size=124)\n
\n\n

If none of this helped, check out a tutorial or try the fast.ai course. The fast.ai course is probably the best thing in the world, so I'd say start there.

\n", + "system": "" + }, + { + "instruction": "Keras LSTM autoencoder with embedding layer", + "input": "", + "output": "

You can first convert the word to embeddings and pass them to the fit()

\n\n
expected_output = np.array([[embedding_matrix[word_index] for word_index in encoded_sequence] for encoded_sequence in padded_sequences])\nhistory = lstm_autoencoder.fit(padded_sequences, expected_output, epochs=15, verbose=1)\n
\n", + "system": "" + }, + { + "instruction": "Keras + Tensorflow : Debug NaNs", + "input": "", + "output": "

I've been running into the exact same problem, and found an alternative to the check_add_numerics_ops() function. Instead of going that route, I use the TensorFlow Debugger to walk through my model, following the example in https://www.tensorflow.org/guide/debugger to figure out exactly where my code produces nans. This snippet should work for replacing the TensorFlow Session that Keras is using with a debugging session, allowing you to use tfdbg.

\n\n
from tensorflow.python import debug as tf_debug\nsess = K.get_session()\nsess = tf_debug.LocalCLIDebugWrapperSession(sess)\nK.set_session(sess)\n
\n", + "system": "" + }, + { + "instruction": "Keras dot product layer by last axis", + "input": "", + "output": "

Going by the keras documentation of the merge.Dot() function, is this what you are looking for?

\n\n
from keras.layers import Input, dot\n\ninput_a = Input(shape=input_size)\ninput_b = Input(shape=input_size)\nfe_net_a = model(input_a)\nfe_net_b = model(input_b)\n\nfe_ab = dot([fe_net_a,fe_net_b], axes = -1)\n
\n\n

Does that work for you ?

\n", + "system": "" + }, + { + "instruction": "ValueError: Error when checking target: expected dense_2 to have shape (None, 2) but got array with shape (1, 1)", + "input": "", + "output": "

Specifically for \"two classes\", there are two ways of doing it:

\n\n\n\n

Each one requires a different model output:

\n\n\n\n

You seem to be in the first case, so, change your last layer.

\n\n

What does that error mean?

\n\n

Your model outputs things with shape (BatchSize, 2), but your class labels have shape (BatchSize,1).

\n", + "system": "" + }, + { + "instruction": "Warning from keras: "Update your Conv2D call to the Keras 2 API"", + "input": "", + "output": "

As it says, it's not an issue. It still works fine, although they might remove the old API at any point, at which time the code will stop working.

\n

In Keras 2 Convolution2D has been replaced by Conv2D along with some changes in the parameters.

\n
\n

Convolution* layers are renamed Conv*.

\n

Conv2D(10, 3, 3) becomes Conv2D(10, (3, 3))

\n
\n", + "system": "" + }, + { + "instruction": "Limit neural network output to subset of trained classes", + "input": "", + "output": "

First of all, I will loosely go through available options you have listed and add some viable alternatives with the pros and cons. It's kinda hard to structure this answer but I hope you'll get what I'm trying to put out:

\n\n

1. Multiply restricted before sending it through softmax.

\n\n

Obviously it may give a higher chance to the zeroed-out entries, as you have written, and it seems like a flawed approach from the beginning.

\n\n

Alternative: replace impossible values with smallest logit value. This one is similar to softmax(output[1:]), though the network will be even more uncertain about the results. Example pytorch implementation:

\n\n
import torch\n\nlogits = torch.Tensor([5.39413513, 3.81445419, 3.75369546, 1.02716988, 0.39189373])\nminimum, _ = torch.min(logits, dim=0)\nlogits[0] = minimum\nprint(torch.nn.functional.softmax(logits))\n
\n\n

which yields:

\n\n
tensor([0.0158, 0.4836, 0.4551, 0.0298, 0.0158])\n
\n\n

Discussion

\n\n\n\n

Yes, and you would be in the right when doing that. Even more so, the actual probabilities for this class are actually far lower, around 14% (tensor([0.7045, 0.1452, 0.1366, 0.0089, 0.0047])). By manually changing the output you are essentially destroying the properties this NN has learned (and its output distribution) rendering some part of your computations pointless. This points to another problem stated in the bounty this time:

\n\n

2. NN are known to be overconfident for classification problems

\n\n

I can imagine this being solved in multiple ways:

\n\n

2.1 Ensemble

\n\n

Create multiple neural networks and ensemble them by summing logits and taking argmax at the end (or softmax and then argmax). Hypothetical situation with 3 different models with different predictions:

\n\n
import torch\n\npredicted_logits_1 = torch.Tensor([5.39413513, 3.81419, 3.7546, 1.02716988, 0.39189373])\npredicted_logits_2 = torch.Tensor([3.357895, 4.0165, 4.569546, 0.02716988, -0.189373])\npredicted_logits_3 = torch.Tensor([2.989513, 5.814459, 3.55369546, 3.06988, -5.89473])\n\ncombined_logits = predicted_logits_1 + predicted_logits_2 + predicted_logits_3\nprint(combined_logits)\nprint(torch.nn.functional.softmax(combined_logits))\n
\n\n

This would give us the following probabilities after softmax:

\n\n

[0.11291057 0.7576356 0.1293983 0.00005554 0.]

\n\n

(notice the second class is now the most probable)

\n\n

You can use bootstrap aggregating and other ensembling techniques to improve predictions. This approach makes the classifying decision surface smoother and fixes mutual errors between classifiers (given their predictions vary quite a lot). It would take many posts to describe in any greater detail (or separate question with specific problem would be needed), here or here are some which might get you started.

\n\n

Still I would not mix this approach with manual selection of outputs.

\n\n

2.2 Transform the problem into binary

\n\n

This approach might yield better inference time and maybe even better training time if you can distribute it over multiple GPUs.

\n\n

Basically, each class of yours can either be present (1) or absent (0). In principle you could train N neural networks for N classes, each outputting a single unbounded number (logit). This single number tells whether the network thinks this example should be classified as its class or not.

\n\n

If you are sure a certain class won't be the outcome, you do not run the network responsible for detecting that class.\nAfter obtaining predictions from all the networks (or a subset of networks), you choose the highest value (or the highest probability if you use sigmoid activation, though it would be computationally wasteful).

\n\n

Additional benefit would be simplicity of said networks (easier training and fine-tuning) and easy switch-like behavior if needed.

\n\n
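The switch-like behaviour can be sketched in a few lines — the class names and logit values below are hypothetical, and each logit stands in for the output of one independent binary network:

```python
# one logit per class, each produced by its own binary network
class_logits = {"cat": 2.1, "dog": -0.3, "truck": 4.0, "plane": 1.2}

# suppose we know beforehand the image cannot be a vehicle:
# simply don't run (or ignore) those heads and argmax over the rest
allowed = ["cat", "dog"]
best = max(allowed, key=lambda c: class_logits[c])
print(best)  # cat
```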

Conclusions

\n\n

If I were you I would go with the approach outlined in 2.2, as it could save you some inference time and would allow you to \"choose outputs\" in a sensible manner.

\n\n

If this approach is not enough, you may consider N ensembles of networks, so a mix of 2.2 and 2.1, some bootstrap or other ensembling techniques. This should improve your accuracy as well.

\n", + "system": "" + }, + { + "instruction": "Keras does not utilize 100% cpu", + "input": "", + "output": "

It turned out my model was too simple to utilize 100% of the cpu. After adding more neurons to the network, cpu utilization increased to almost 100%.

\n\n

I don't know why Keras doesn't use the available cpu power to compute simple models faster.

\n", + "system": "" + }, + { + "instruction": "How can I clip the values returned by a layer in Keras?", + "input": "", + "output": "

It actually has to be implemented as a loss, at the model.compile step.

\n\n
from keras import backend as K\n\ndef clipped_mse(y_true, y_pred):\n    return K.mean(K.square(K.clip(y_pred, 0., 1900.) - K.clip(y_true, 0., 1900.)), axis=-1)\n\nmodel.compile(loss=clipped_mse)\n
\n\n
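Note that with both tensors clipped, anything above the ceiling is invisible to the loss — a quick numpy check with the same 1900 threshold:

```python
import numpy as np

def clipped_mse(y_true, y_pred, lo=0.0, hi=1900.0):
    # numpy version of the Keras loss above
    return np.mean(np.square(np.clip(y_pred, lo, hi) - np.clip(y_true, lo, hi)))

print(clipped_mse(np.array([1900.0]), np.array([1905.0])))   # 0.0
print(clipped_mse(np.array([1900.0]), np.array([90000.0])))  # 0.0 -> no penalty at all for overshooting
```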

EDIT: Actually, now in hindsight I think that this might not be the right approach. This actually means we do not add a penalty for values that go too high - in a way the opposite of what we want.

\n", + "system": "" + }, + { + "instruction": "Custom RMSPE loss function in keras", + "input": "", + "output": "

It's good that you're clipping the denominator. But epsilon in the tensorflow backend is 1e-7 when I check on my machine. So you can still blow up your gradient by ten million when you divide by the clipped value. What you want to do is clip your gradient which you can do with either the clipvalue or clipnorm arguments to your optimizer:

\n
optimizer = SGD(clipvalue=10.0)\n
\n

or

\n
optimizer = SGD(clipnorm=2.0)\n
\n
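For context, an RMSPE with a clipped denominator might look like the numpy sketch below (a hedged reconstruction, not necessarily the asker's exact code) — note that even with the clip, the ratio itself is unbounded, which is why the optimizer-side clipping above matters:

```python
import numpy as np

def rmspe(y_true, y_pred, eps=1e-7):
    # clip only the denominator so targets near 0 cannot produce inf
    denom = np.clip(np.abs(y_true), eps, None)
    return np.sqrt(np.mean(np.square((y_true - y_pred) / denom)))

print(rmspe(np.array([100.0]), np.array([90.0])))  # 0.1 -> a 10% error
print(rmspe(np.array([1e-7]), np.array([1.0])))    # huge -> tiny targets still explode the loss
```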

You have to play with the value a bit depending on how many output variables you have and how much noise is in your data.

\n", + "system": "" + }, + { + "instruction": "How filters are initialized in convnet", + "input": "", + "output": "

You can either initialize the filters randomly or pretrain them on some other data set.

\n\n
\n\n

Some references:

\n\n

http://deeplearning.net/tutorial/lenet.html:

\n\n
\n

Notice that a randomly initialized filter acts very much like an edge\n detector!

\n \n

Note that we use the same weight initialization formula as with the\n MLP. Weights are sampled randomly from a uniform distribution in the\n range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a\n hidden unit. For MLPs, this was the number of units in the layer\n below. For CNNs however, we have to take into account the number of\n input feature maps and the size of the receptive fields.

\n
\n\n

http://cs231n.github.io/transfer-learning/ :

\n\n
\n

Transfer Learning

\n \n

In practice, very few people train an entire Convolutional Network\n from scratch (with random initialization), because it is relatively\n rare to have a dataset of sufficient size. Instead, it is common to\n pretrain a ConvNet on a very large dataset (e.g. ImageNet, which\n contains 1.2 million images with 1000 categories), and then use the\n ConvNet either as an initialization or a fixed feature extractor for\n the task of interest. The three major Transfer Learning scenarios look\n as follows:

\n \n \n \n

Pretrained models. Since modern ConvNets take 2-3 weeks to train across multiple GPUs on ImageNet, it is common to see people release\n their final ConvNet checkpoints for the benefit of others who can use\n the networks for fine-tuning. For example, the Caffe library has a\n Model Zoo where people\n share their network weights.

\n \n

When and how to fine-tune? How do you decide what type of transfer learning you should perform on a new dataset? This is a function of\n several factors, but the two most important ones are the size of the\n new dataset (small or big), and its similarity to the original dataset\n (e.g. ImageNet-like in terms of the content of images and the classes,\n or very different, such as microscope images). Keeping in mind that\n ConvNet features are more generic in early layers and more\n original-dataset-specific in later layers, here are some common rules\n of thumb for navigating the 4 major scenarios:

\n \n
    \n
  1. New dataset is small and similar to original dataset. Since the data is small, it is not a good idea to fine-tune the ConvNet due to\n overfitting concerns. Since the data is similar to the original data,\n we expect higher-level features in the ConvNet to be relevant to this\n dataset as well. Hence, the best idea might be to train a linear\n classifier on the CNN codes.
  2. \n
  3. New dataset is large and similar to the original dataset. Since we have more data, we can have more confidence that we won't overfit\n if we were to try to fine-tune through the full network.
  4. \n
  5. New dataset is small but very different from the original dataset. Since the data is small, it is likely best to only train a\n linear classifier. Since the dataset is very different, it might not\n be best to train the classifier from the top of the network, which\n contains more dataset-specific features. Instead, it might work better\n to train the SVM classifier from activations somewhere earlier in the\n network.
  6. \n
  7. New dataset is large and very different from the original dataset. Since the dataset is very large, we may expect that we can\n afford to train a ConvNet from scratch. However, in practice it is\n very often still beneficial to initialize with weights from a\n pretrained model. In this case, we would have enough data and\n confidence to fine-tune through the entire network.
  8. \n
\n \n

Practical advice. There are a few additional things to keep in mind when performing Transfer Learning:

\n \n \n \n

\n \n

Additional References

\n \n \n
\n", + "system": "" + }, + { + "instruction": "from keras.layers import Dense -- cannot import name 'Dense'", + "input": "", + "output": "

The error is telling you that it found nothing named Dense in that module.

\n\n

Perhaps you meant from keras.layers.core import Dense?

\n", + "system": "" + }, + { + "instruction": "Loading weights in TH format when keras is set to TF format", + "input": "", + "output": "

I asked Francois Chollet about this (he doesn't have an SO account) and he kindly passed along this reply:

\n\n
\n\n

\"th\" format means that the convolutional kernels will have the shape (depth, input_depth, rows, cols)

\n\n

\"tf\" format means that the convolutional kernels will have the shape (rows, cols, input_depth, depth)

\n\n

Therefore you can convert from the former to the later via np.transpose(x, (2, 3, 1, 0)) where x is the value of the convolution kernel.

\n\n

Here's some code to do the conversion:

\n\n
import numpy as np\nfrom keras import backend as K\n\nK.set_image_dim_ordering('th')\n\n# build model in TH mode, as th_model\nth_model = ...\n# load weights that were saved in TH mode into th_model\nth_model.load_weights(...)\n\nK.set_image_dim_ordering('tf')\n\n# build model in TF mode, as tf_model\ntf_model = ...\n\n# transfer weights from th_model to tf_model\nfor th_layer, tf_layer in zip(th_model.layers, tf_model.layers):\n    if th_layer.__class__.__name__ == 'Convolution2D':\n        kernel, bias = th_layer.get_weights()\n        kernel = np.transpose(kernel, (2, 3, 1, 0))\n        tf_layer.set_weights([kernel, bias])\n    else:\n        tf_layer.set_weights(th_layer.get_weights())\n
\n\n

In case the model contains Dense layers downstream of the Convolution2D layers, then the weight matrix of the first Dense layer would need to be shuffled as well.

\n", + "system": "" + }, + { + "instruction": "Keras VGG extract features", + "input": "", + "output": "

It confused me a lot:

\n\n
    \n
  1. After layers = [K.function([model.layers[0].input],[model.layers[li].output])([instance])[0] for li in layer_indexes], layers is a list of extracted features.
  2. \n
  3. And then you send that list into feature_maps = get_features(model,layers,instance).
  4. \n
  5. In def get_features(model, layer, X_batch):, the second parameter, namely layer, is used to index into model.layers[layer].output.
  6. \n
\n\n

What you want is:

\n\n
    \n
  1. feature_maps = get_features(model,layer_indexes,instance): passing layer indices rather than extracted features.
  2. \n
  3. get_features = K.function([model.layers[0].input, K.learning_phase()], [model.layers[l].output for l in layer]): a list cannot be used to index a list.
  4. \n
\n\n

Still, your feature extracting function is horribly written. I suggest you rewrite everything rather than mixing code.

\n", + "system": "" + }, + { + "instruction": "How to use Keras' multi layer perceptron for multi-class classification", + "input": "", + "output": "

This is a pretty common beginner's mistake with Keras. Unlike other Deep Learning frameworks, Keras does not use integer labels for the usual crossentropy loss; instead it expects a binary vector (called \"one-hot\"), where the vector is just 0's and a 1 over the index of the right class.

\n\n

You can easily convert your labels to this format with the following code:

\n\n
from keras.utils.np_utils import to_categorical\ny_train = to_categorical(y_train)\ny_test = to_categorical(y_test)\n
\n\n

Do this before calling model.fit. An alternative is to change the loss to \"sparse_categorical_crossentropy\", which does expect integer labels.

\n", + "system": "" + }, + { + "instruction": "Python/Keras - Creating a callback with one prediction for each epoch", + "input": "", + "output": "

You are overwriting the prediction for each epoch, that is why it doesn't work. I would do it like this:

\n\n
class prediction_history(Callback):\n    def __init__(self):\n        self.predhis = []\n    def on_epoch_end(self, epoch, logs={}):\n        # self.model is set by Keras when the callback is attached to fit()\n        self.predhis.append(self.model.predict(predictor_train))\n
\n\n

This way self.predhis is now a list and each prediction is appended to the list at the end of each epoch.

\n", + "system": "" + }, + { + "instruction": "How Can I use my GPU on Ipython Notebook?", + "input": "", + "output": "

I'm using theano on an ipython notebook making use of my system's GPU. This configuration seems to work fine on my system (Macbook Pro with GTX 750M).

\n\n

My ~/.theanorc file :

\n\n
[global]\ncnmem = True\nfloatX = float32\ndevice = gpu0\n
\n\n

Various environment variables (I use a virtual environment, macvnev):

\n\n
echo $LD_LIBRARY_PATH\n/opt/local/lib:\n\necho $PATH\n/Developer/NVIDIA/CUDA-7.5/bin:/opt/local/bin:/opt/local/sbin:/Developer/NVIDIA/CUDA-7.0/bin:/Users/Ramana/projects/macvnev/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin\n\necho $DYLD_LIBRARY_PATH\n/Developer/NVIDIA/CUDA-7.5/lib:/Developer/NVIDIA/CUDA-7.0/lib:\n
\n\n

How I run ipython notebook (For me, the device is gpu0) :

\n\n
$THEANO_FLAGS=mode=FAST_RUN,device=gpu0,floatX=float32 ipython notebook\n
\n\n

Output of $nvcc -V :

\n\n
nvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2015 NVIDIA Corporation\nBuilt on Thu_Sep_24_00:26:39_CDT_2015\nCuda compilation tools, release 7.5, V7.5.19\n
\n\n

From your post, probably you've set the $PATH variable wrong.

\n", + "system": "" + }, + { + "instruction": "Keras loss and metrics values do not match with same function in each", + "input": "", + "output": "

This blog post suggests that Keras adds any regularisation used in training when calculating the validation loss. And obviously, no regularisation is applied when calculating the metric of choice. This is why the mismatch occurs with any loss function, as stated in the question.

\n

This is something I could not find any documentation on from Keras. However, it seems to hold up since when I remove all regularisation hyperparameters, the val_loss and val_custom_fn match exactly in each epoch.

\n

An easy workaround is to use custom_fn as a metric and save the best model based on that metric (val_custom_fn) rather than on val_loss, or else to loop through each epoch manually and calculate the correct val_loss after training each epoch. The latter seems to make more sense, since there is no reason to include custom_fn both as a metric and as a loss function.

\n

If anyone can find any evidence of this in the Keras documentation that would be helpful.

\n", + "system": "" + }, + { + "instruction": "Error during training: layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4", + "input": "", + "output": "

I have a very similar error message coming from the first epoch of my trainer:

\n
tensorflow/core/grappler/optimizers/meta_optimizer.cc:961] layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape insequential_2/efficientnetv2-b0/block2b_drop/dropout/SelectV2-2-TransposeNHWCToNCHW-LayoutOptimizer\n
\n

I learnt that in my case I can ignore it: the fit routine keeps running and works correctly.

\n", + "system": "" + }, + { + "instruction": "clipnorm vs clipvalue vs global_clipnorm in tf.keras.optimizers.Adam", + "input": "", + "output": "

You are right that for a 1-dimensional gradient, clipnorm and clipvalue do the same job. The difference appears in higher dimensions: clipvalue clips each component of the gradient independently between two bounds, while clipnorm rescales the whole gradient vector only when its norm reaches the threshold, so the relative magnitudes across dimensions (and hence the direction of the update) are preserved. Since the gradient's magnitude can be large in some dimensions and small in others, which affects learning, clipnorm can act as a kind of regularizer: it limits the overall magnitude of gradients that cause a zigzag effect during training.

\n
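To make this concrete, here is a small NumPy sketch (my own illustration, not the TF internals) of the two behaviours: clipping by value squashes each component independently and can change the gradient's direction, while clipping by norm rescales the whole vector and preserves its direction:

```python
import numpy as np

def clip_by_value(grad, clip_value):
    # each component clipped independently: direction may change
    return np.clip(grad, -clip_value, clip_value)

def clip_by_norm(grad, clip_norm):
    # whole vector rescaled only if its L2 norm exceeds the threshold:
    # direction is preserved
    norm = np.linalg.norm(grad)
    return grad if norm <= clip_norm else grad * (clip_norm / norm)

g = np.array([3.0, 0.1])
print(clip_by_value(g, 1.0))  # [1.  0.1]  -> direction changed
print(clip_by_norm(g, 1.0))   # same direction as g, norm rescaled to 1
```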

As for your other question: during training you can apply clip-by-norm or clip-by-value to the gradients yourself before passing them to the optimizer's update step (e.g. apply_gradients) in TensorFlow. This is one way to do it.

\n", + "system": "" + }, + { + "instruction": "How to train on a tensorflow_datasets dataset", + "input": "", + "output": "

Set the batch size when loading your data:

\n
import tensorflow_datasets as tfds\nimport tensorflow as tf\n\nds = tfds.load('iris', split='train', shuffle_files=True, as_supervised=True, batch_size=10)\nmodel = tf.keras.Sequential([\n    tf.keras.layers.Dense(10,activation="relu"),\n    tf.keras.layers.Dense(10,activation="relu"),\n    tf.keras.layers.Dense(3, activation="softmax")\n])\n\nmodel.compile(\n    optimizer=tf.keras.optimizers.Adam(0.001),\n    loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],\n)\nmodel.fit(ds, epochs=100)\n
\n

Also regarding model.fit, the docs state:

\n
\n

Integer or None. Number of samples per gradient update. If\nunspecified, batch_size will default to 32. Do not specify the\nbatch_size if your data is in the form of datasets, generators, or\nkeras.utils.Sequence instances (since they generate batches).

\n
\n", + "system": "" + }, + { + "instruction": "Is it possible to use a collection of hyperspectral 1x1 pixels in a CNN model purposed for more conventional datasets (CIFAR-10/MNIST)?", + "input": "", + "output": "

If the hyperspectral dataset is given to you as a large image with many channels, I suppose that the classification of each pixel should depend on the pixels around it (otherwise I would not format the data as an image, i.e. with a grid structure). Given this assumption, breaking up the input picture into 1x1 parts is not a good idea as you are losing the grid structure.

\n

I further suppose that the order of the channels is arbitrary, which implies that convolution over the channels is probably not meaningful (which you however did not plan to do anyways).

\n

Instead of reformatting the data the way you did, you may want to create a model that takes an image as input and also outputs an "image" containing the classifications for each pixel.\nI.e. if you have 10 classes and take a (145, 145, 200) image as input, your model would output a (145, 145, 10) image. In that architecture you would not have any fully-connected layers. Your output layer would also be a convolutional layer.

\n

That however means that you will not be able to keep your current architecture. That is because the tasks for MNIST/CIFAR10 and your hyperspectral dataset are not the same. For MNIST/CIFAR10 you want to classify an image in its entirety, while for the other dataset you want to assign a class to each pixel (while most likely also using the pixels around each pixel).

\n
\n

Some further ideas:

\n\n", + "system": "" + }, + { + "instruction": "Error while import keras: AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'register_clear_session_function'", + "input": "", + "output": "

you should use import tensorflow.keras instead of import keras.\nMore info here.

\n", + "system": "" + }, + { + "instruction": "pipenv - Pipfile.lock is not being generated due to the 'Could not find a version that matches keras-nightly~=2.5.0.dev' error", + "input": "", + "output": "

Looking at the PyPI page for the keras-nightly library, I could see that there is no version named 2.5.0.dev. Check which package is generating the error and try downgrading that package.

\n", + "system": "" + }, + { + "instruction": "Is there a way to ensemble two keras (h5) models trained for same classes", + "input": "", + "output": "

You can average-ensemble them like this:

\n
models = [keras_model, keras_model2]\nmodel_input = tf.keras.Input(shape=(125, 125, 3))\nmodel_outputs = [model(model_input) for model in models]\nensemble_output = tf.keras.layers.Average()(model_outputs)\nensemble_model = tf.keras.Model(inputs=model_input, outputs=ensemble_output)\n
\n

\"enter

\n
\n

Edit: If you want to create weighted ensemble you can create a custom layer like this:

\n
class WeightedAverageLayer(tf.keras.layers.Layer):\n    def __init__(self, w1, w2, **kwargs):\n        super(WeightedAverageLayer, self).__init__(**kwargs)\n        self.w1 = w1\n        self.w2 = w2\n\n    def call(self, inputs):\n        return self.w1 * inputs[0] + self.w2 * inputs[1]\n
\n

So the output layer should look like this:

\n
ensemble_output = WeightedAverageLayer(0.6, 0.4)(model_outputs)\n
\n

Here, the first model's output is scaled by a factor of 0.6. The same applies to the second model, with a factor of 0.4.

\n
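The arithmetic of the layer is easy to check with plain NumPy (a sketch with made-up model outputs):

```python
import numpy as np

# two hypothetical model outputs for the same batch of 2 samples, 2 classes
out1 = np.array([[0.2, 0.8], [0.6, 0.4]])
out2 = np.array([[0.4, 0.6], [0.8, 0.2]])

# what WeightedAverageLayer(0.6, 0.4) computes
weighted = 0.6 * out1 + 0.4 * out2
print(weighted)

# with equal weights it reduces to the plain average
assert np.allclose(0.5 * out1 + 0.5 * out2, (out1 + out2) / 2)
```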

Note: WeightedAverageLayer(0.5, 0.5) will be identical to tf.keras.layers.Average().

\n", + "system": "" + }, + { + "instruction": "Keras custom loss function to ignore false negatives of a specific class during semantic segmentation?", + "input": "", + "output": "

If I understand your question correctly, you are looking for something like this:

\n
import tensorflow as tf\n\n\n# batch of true labels\ny_true = tf.constant([5, 0, 1, 3, 4, 0, 2, 0], dtype=tf.int64)\n\n# batch of class probabilities\ny_pred = tf.constant(\n  [\n    [0.34670502, 0.04551039, 0.14020428, 0.14341979, 0.21430719, 0.10985339],\n    [0.25681055, 0.14013883, 0.19890164, 0.11124421, 0.14526634, 0.14763844],\n    [0.09199252, 0.21889475, 0.1170236 , 0.1929019 , 0.20311192, 0.17607528],\n    [0.3246354 , 0.23257554, 0.15549366, 0.17282239, 0.00000001, 0.11447308],\n    [0.16502093, 0.13163856, 0.14371352, 0.19880624, 0.23360236, 0.12721846],\n    [0.27362782, 0.21408406, 0.10917682, 0.13135742, 0.10814326, 0.16361059],\n    [0.20697299, 0.23721898, 0.06455399, 0.11071447, 0.18990229, 0.19063729],\n    [0.10320242, 0.22173141, 0.2547973 , 0.2314068 , 0.07063974, 0.11822232]\n  ], dtype=tf.float32)\n\n# find the indices in the batch where the true label is the class 0\nindices = tf.where(tf.math.equal(y_true, 0))\n\n# create a tensor with the number of updates you want to replace in `y_pred`\nupdates = tf.repeat(\n    [[1.0, 0.0, 0.0, 0.0, 0.0, 0.0]],\n    repeats=indices.shape[0],\n    axis=0)\n\n# insert the updates into `y_pred` at the specified indices\nmodified_y_pred = tf.tensor_scatter_nd_update(y_pred, indices, updates)\n\nprint(modified_y_pred)\n# tf.Tensor(\n#   [[0.34670502, 0.04551039, 0.14020428, 0.14341979, 0.21430719, 0.10985339],\n#    [1.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000],\n#    [0.09199252, 0.21889475, 0.1170236 , 0.1929019 , 0.20311192, 0.17607528],\n#    [0.3246354 , 0.23257554, 0.15549366, 0.17282239, 0.00000001, 0.11447308],\n#    [0.16502093, 0.13163856, 0.14371352, 0.19880624, 0.23360236, 0.12721846],\n#    [1.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000],\n#    [0.20697299, 0.23721898, 0.06455399, 0.11071447, 0.18990229, 0.19063729],\n#    [1.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000, 0.00000000]], \n#    shape=(8, 6), dtype=tf.float32)\n
\n

This final tensor, modified_y_pred, can be used in differentiation.

\n

EDIT:

\n

It might be easier to do this with masks.

\n

Example:

\n
# these arent normalized to 1 but you get the point\nprobs = tf.random.normal([2, 4, 4, 6])\n\n# raw labels per pixel\nlabels = tf.random.uniform(\n    shape=[2, 4, 4],\n    minval=0,\n    maxval=6,\n    dtype=tf.int64)\n\n# your labels are already one-hot encoded\nlabels = tf.one_hot(labels, 6)\n\n# boolean mask where classes are `0`\n# converting back to int labels with argmax for purposes of\n# using `tf.math.equal`. Matching on `[1, 0, 0, 0, 0, 0]` is\n# potentially buggy; matching on an integer is a lot more\n# explicit.\nmask = tf.math.equal(tf.math.argmax(labels, -1), 0)[..., None]\n\n# flip the mask to zero out the pixels across channels where\n# labels are zero\nprobs *= tf.cast(tf.math.logical_not(mask), tf.float32)\n\n# multiply the mask by the one-hot labels, and add back\n# to the already masked probabilities.\nprobs += labels * tf.cast(mask, tf.float32)\n
\n", + "system": "" + }, + { + "instruction": "How to design a neural network to predict arrays from arrays", + "input": "", + "output": "

What you are trying to build is called a De-noising autoencoder. The goal here is to be able to reconstruct a noise-less sample by artificially introducing noise in a dataset, feed it to an encoder, then try to regenerate it without noise using a decoder.

\n

\"enter

\n

This can be done with any form of data, including image and text.

\n

I would recommend reading more about this. There are various concepts that ensure proper training of the model, including the need for a bottleneck in the middle to force compression and information loss; otherwise the model just learns the identity function and copies the input to the output.

\n

Here is a sample piece of code. You can read more about this type of architecture here, written by the author of Keras himself.

\n
from tensorflow.keras import layers, Model, utils, optimizers\n\n#Encoder\nenc = layers.Input((99,))\nx = layers.Dense(128, activation='relu')(enc)\nx = layers.Dense(56, activation='relu')(x)\nx = layers.Dense(8, activation='relu')(x) #Compression happens here\n\n#Decoder\nx = layers.Dense(8, activation='relu')(x)\nx = layers.Dense(56, activation='relu')(x)\nx = layers.Dense(28, activation='relu')(x)\ndec = layers.Dense(99)(x)\n\nmodel = Model(enc, dec)\n\nopt = optimizers.Adam(learning_rate=0.01)\n\nmodel.compile(optimizer = opt, loss = 'MSE')\n\nmodel.fit(x_train, y_train, epochs = 20)\n
\n

Please be aware that autoencoders assume the input data has some underlying structure and can therefore be compressed into a lower-dimensional space, which the decoder can use to regenerate the data. Using randomly generated sequences as data may not show good results, because their compression cannot work without massive loss of information when the data has no structure.

\n

As most of the other answers suggest, you are not using the activations properly. Since the goal is to regenerate a 99-dimensional vector with continuous values, it makes sense NOT to use sigmoid, which gates values to (0, 1); instead, use tanh, which compresses to (-1, 1), or no final-layer activation at all.

\n
\n

Here is a Denoising autoencoder with conv1d and deconv1d layers. The issue here is that the input is too simple. See if you can generate more complex parametric functions for input data.

\n
from tensorflow.keras import layers, Model, utils, optimizers\n\n#Encoder with conv1d\ninp = layers.Input((99,))\nx = layers.Reshape((99,1))(inp)\nx = layers.Conv1D(5, 10)(x)\nx = layers.MaxPool1D(10)(x)\nx = layers.Flatten()(x)\nx = layers.Dense(4, activation='relu')(x) #<- Bottleneck!\n\n#Decoder with Deconv1d\nx = layers.Reshape((-1,1))(x)\nx = layers.Conv1DTranspose(5, 10)(x)\nx = layers.Conv1DTranspose(2, 10)(x)\nx = layers.Flatten()(x)\nout = layers.Dense(99)(x)\n\nmodel = Model(inp, out)\n\nopt = optimizers.Adam(learning_rate=0.001)\nmodel.compile(optimizer = opt, loss = 'MSE')\nmodel.fit(x_train, y_train, epochs = 10, validation_data=(x_test, y_test))\n
\n
Epoch 1/10\n188/188 [==============================] - 1s 7ms/step - loss: 2.1205 - val_loss: 0.0031\nEpoch 2/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0032 - val_loss: 0.0032\nEpoch 3/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0032 - val_loss: 0.0030\nEpoch 4/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0031 - val_loss: 0.0029\nEpoch 5/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0030 - val_loss: 0.0030\nEpoch 6/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0029 - val_loss: 0.0027\nEpoch 7/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0028 - val_loss: 0.0029\nEpoch 8/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0028 - val_loss: 0.0025\nEpoch 9/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0028 - val_loss: 0.0025\nEpoch 10/10\n188/188 [==============================] - 1s 5ms/step - loss: 0.0026 - val_loss: 0.0024\n
\n
utils.plot_model(model, show_layer_names=False, show_shapes=True)\n
\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Keras vertical ensemble model with condition in between", + "input": "", + "output": "

Just define your own model. I'm surprised your other models are outputting strings instead of numbers, but without more info this is about all I can give you, so I will assume the output of model A is a string.

\n
import tensorflow as tf\n\nclass ModelC(tf.keras.Model):\n\n  def __init__(self, A, B):\n    super(ModelC, self).__init__()\n    self.A = A\n    self.B = B\n\n  def call(self, inputs, training=False):\n    x = self.A(inputs, training)\n    if x == 'not-related':\n        return x\n    return self.B(inputs, training)\n
\n", + "system": "" + }, + { + "instruction": "InvalidArgumentError: Specified a list with shape [60,9] from a tensor with shape [56,9]", + "input": "", + "output": "

For a stateful LSTM, the batch size should be chosen so that the number of samples is divisible by it. See also here:

\n

Keras: What if the size of data is not divisible by batch_size?

\n

In your case, considering that you take 20% from your training data as a validation set, you have 1136 samples remaining. So you should choose a batch size by which 1136 is divisible.

\n
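A tiny helper makes the check concrete (plain Python, with 1136 as in your case):

```python
def valid_batch_sizes(n_samples):
    """Batch sizes that divide the sample count exactly."""
    return [b for b in range(1, n_samples + 1) if n_samples % b == 0]

print(valid_batch_sizes(1136))  # 1136 = 2**4 * 71
```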

Additionally, you could for example remove some samples or reuse samples to be able to choose various batch sizes.

\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'keras.backend' has no attribute 'common'", + "input": "", + "output": "

Yes. It is okay to use k.image_data_format()

\n

In Keras v2 the method has been renamed to image_data_format

\n", + "system": "" + }, + { + "instruction": "keras accuracy doesn't improve more than 59 percent", + "input": "", + "output": "

It seems to me that for a neural network your data is not varied enough. You have a lot of similar values in your dataset. That might be a reason for the low accuracy. Try a simple regressor instead of a neural network.

\n

If you want to use a neural network at any rate, you should change the followings:

\n

Generally for regression you should set the activation function of your last layer to 'relu' or 'linear'; sigmoid is usually used for the hidden layers.

\n

Try to change these first. If it does not work, try also different strategies as :

\n
    \n
  1. Increase the batch size
  2. \n
  3. Increase the number of epochs
  4. \n
  5. Apply whitening to your dataset before running (pre-processing stage).
  6. \n
7. Decrease the learning rate; you should use a scheduler.
  8. \n
\n

For whitening you can do:

\n
from sklearn.decomposition import PCA\n\npca = PCA(whiten=True)\npca.fit(X)\nX = pca.transform(X)\n\n# make here train test split ...\n\nX_test = pca.transform(X_test) # use the same pca model for the test set.\n\n
\n

You have a lot of zeros in your dataset. Here is the fraction of zero values per column (between 0 and 1):

\n
0.6611697598907094 WORK_EDUCATION\n0.5906196483663051 SHOP\n0.15968546556987515 OTHER\n0.4517919980835284 AM\n0.3695455825652879 PM\n0.449195697003247 MIDDAY\n0.8160996565242585 NIGHT\n0.03156998520561604 AVG_VEH_CNT\n1.618641571247746e-05 work_traveltime\n2.2660981997468445e-05 shop_traveltime\n0.6930343378622924 work_tripmile\n0.605410795044367 shop_tripmile\n0.185622578107549 TRPMILES_sum\n3.237283142495492e-06 TRVL_MIN_sum\n0.185622578107549 TRPMILES_mean\n0.469645614614391 HBO\n0.5744850291841075 HBSHOP\n0.8137429143965219 HBW\n0.5307266729469959 NHB\n0.2017960446874565 DWELTIME_mean\n1.618641571247746e-05 TRVL_MIN_mean\n0.6959996892208183 work_dweltime\n0.6099365168775757 shop_dweltime\n0.0009258629787537107 firsttrip_time\n0.002949164942813393 lasttrip_time\n0.7442934791405661 age_2.0\n0.7541995655566023 age_3.0\n0.7081200773063214 age_4.0\n0.9401296855626884 age_5.0\n0.3490503429901489 KNN_result\n
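For reference, the fractions above can be computed per column with one NumPy expression; a sketch on toy data:

```python
import numpy as np

# toy stand-in for the real dataset: rows x columns
X = np.array([
    [0.0, 1.2, 0.0],
    [0.0, 0.0, 3.4],
    [5.6, 7.8, 0.0],
    [0.0, 9.0, 1.1],
])

# fraction of zeros per column (between 0 and 1)
zero_fraction = (X == 0).mean(axis=0)
print(zero_fraction)  # [0.75 0.25 0.5 ]
```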
\n", + "system": "" + }, + { + "instruction": "Keras callback AttributeError: 'ModelCheckpoint' object has no attribute '_implements_train_batch_hooks'", + "input": "", + "output": "

I also encountered this problem recently.

\n

What I found out: the keras/tensorflow version was recently updated by the developers and this caused the problem.

\n

Solution: since the developers of keras require everyone to switch to tf.keras, you need to change the import section of your code.

\n

From:

\n
import keras\n
\n

To:

\n
import tensorflow.keras as keras\n
\n

After that everything worked for me.

\n", + "system": "" + }, + { + "instruction": "Java Tensorflow + Keras Equivalent of model.predict()", + "input": "", + "output": "

TensorFlow Python automatically converts your NumPy array to a tf.Tensor. In TensorFlow Java, you manipulate tensors directly.

\n\n

Now the SavedModelBundle does not have a predict method. You need to obtain the session and run it, using the SessionRunner and feeding it with input tensors.

\n\n

For example, based on the next generation of TF Java (https://github.com/tensorflow/java), your code ends up looking like this (note that I'm making a lot of assumptions here about x_test_maxabs since your code sample does not explain clearly where it comes from):

\n\n
try (SavedModelBundle model = SavedModelBundle.load(path, \"serve\")) {\n    try (Tensor<TFloat32> input = TFloat32.tensorOf(...);\n        Tensor<TFloat32> output = model.session()\n            .runner()\n            .feed(\"input_name\", input)\n            .fetch(\"output_name\")\n            .run()\n            .expect(TFloat32.class)) {\n\n        float prediction = output.data().getFloat();\n        System.out.println(\"prediction = \" + prediction);\n    }        \n}\n
\n\n

If you are not sure what is the name of the input/output tensor in your graph, you can obtain programmatically by looking at the signature definition:

\n\n
model.metaGraphDef().getSignatureDefMap().get(\"serving_default\")\n
\n", + "system": "" + }, + { + "instruction": "How to use multiple text features for NLP classifier?", + "input": "", + "output": "

Here are some of the things you could try:

\n

1.) Combine research_paper, book_name and comment into one string.

\n

2.) Treat comment_by as a categorical variable and encode it using one hot encoder\nor label encoder.

\n
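As a minimal sketch of this point without any libraries (the comment_by values here are made up):

```python
# hypothetical categorical column
comment_by = ["alice", "bob", "alice", "carol"]

# label encoding: map each category to an integer
categories = sorted(set(comment_by))
label_ids = [categories.index(c) for c in comment_by]
print(label_ids)   # [0, 1, 0, 2]

# one-hot encoding built from the label ids
one_hot = [[1 if i == cid else 0 for i in range(len(categories))]
           for cid in label_ids]
print(one_hot)     # [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```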

3.) Whatever model you are using, tune the hyperparameters to improve the results.

\n

Do let me know the results you got!

\n", + "system": "" + }, + { + "instruction": "Sequence to Sequence - for time series prediction", + "input": "", + "output": "

The attention layer in Keras is not a trainable layer (unless we use the scale parameter); it only computes matrix operations. In my opinion, this layer can lead to mistakes if applied directly to time series, but let's proceed in order...

\n\n

The most natural choice to replicate the attention mechanism for our time-series problem is to adopt the solution presented here and explained again here. It's the classical application of attention in an encoder-decoder structure in NLP.

\n\n

Following the TF implementation, for our attention layer we need query, value and key tensors in 3D format. We obtain these values directly from our recurrent layer; more specifically, we utilize the sequence output and the hidden state. These are all we need to build an attention mechanism.

\n\n

query is the output sequence [batch_dim, time_step, features]

\n\n

value is the hidden state [batch_dim, features] where we add a temporal dimension for matrix operation [batch_dim, 1, features]

\n\n

as the key, we again utilize the hidden state, so key = value

\n\n

In the above definition and implementation I found 2 problems:

\n\n\n\n

the example:

\n\n
def attention_keras(query_value):\n\n    query, value = query_value # key == value\n    score = tf.matmul(query, value, transpose_b=True) # (batch, timestamp, 1)\n    score = tf.nn.softmax(score) # softmax on -1 axis ==> score always = 1 !!!\n    print((score.numpy()!=1).any()) # False ==> score always = 1 !!!\n    score = tf.matmul(score, value) # (batch, timestamp, feat)\n    return score\n\nnp.random.seed(33)\ntime_steps = 20\nfeatures = 50\nsample = 5\n\nX = np.random.uniform(0,5, (sample,time_steps,features))\nstate = np.random.uniform(0,5, (sample,features))\nattention_keras([X,tf.expand_dims(state,1)]) # ==> the same as Attention(dtype='float64')([X,tf.expand_dims(state,1)])\n
\n\n

So, for this reason, for time-series attention I propose this solution:

\n\n
def attention_seq(query_value, scale):\n\n    query, value = query_value\n    score = tf.matmul(query, value, transpose_b=True) # (batch, timestamp, 1)\n    score = scale*score # scale with a fixed number (it can be finetuned or learned during train)\n    score = tf.nn.softmax(score, axis=1) # softmax on timestamp axis\n    score = score*query # (batch, timestamp, feat)\n    return score\n\nnp.random.seed(33)\ntime_steps = 20\nfeatures = 50\nsample = 5\n\nX = np.random.uniform(0,5, (sample,time_steps,features))\nstate = np.random.uniform(0,5, (sample,features))\nattention_seq([X,tf.expand_dims(state,1)], scale=0.05)\n
\n\n

query is the output sequence [batch_dim, time_step, features]

\n\n

value is the hidden state [batch_dim, features] where we add a temporal dimension for matrix operation [batch_dim, 1, features]

\n\n

the weights are calculated with softmax(scale*dot(sequence, hidden)). The scale parameter is a scalar value that can be used to scale the weights before applying the softmax operation. The softmax is calculated correctly on the time dimension. The attention output is the weighted product of the input sequence and the scores. I use the scale parameter as a fixed value, but it can be tuned or inserted as a learnable weight in a custom layer (like the scale parameter in Keras attention).

\n\n

In terms of network implementation, these are the two possibilities available:

\n\n
######### KERAS #########\ninp = Input((time_steps,features))\nseq, state = GRU(32, return_state=True, return_sequences=True)(inp)\natt = Attention()([seq, tf.expand_dims(state,1)])\n\n######### CUSTOM #########\ninp = Input((time_steps,features))\nseq, state = GRU(32, return_state=True, return_sequences=True)(inp)\natt = Lambda(attention_seq, arguments={'scale': 0.05})([seq, tf.expand_dims(state,1)])\n
\n\n

CONCLUSION

\n\n

I don't know how much added value introducing an attention layer brings to simple problems. If you have short sequences, I suggest you leave everything as is. What I reported here is an answer where I express my considerations; I'll accept comments or suggestions about possible mistakes or misunderstandings.

\n\n
\n\n

In your model, these solutions can be embedded in this way

\n\n
######### KERAS #########\ninp = Input((n_features, n_steps))\nseq, state = GRU(n_units, activation='relu',\n                 return_state=True, return_sequences=True)(inp)\natt = Attention()([seq, tf.expand_dims(state,1)])\nx = GRU(n_units, activation='relu')(att)\nx = Dense(64, activation='relu')(x)\nx = Dropout(0.5)(x)\nout = Dense(n_steps_out)(x)\n\nmodel = Model(inp, out)\nmodel.compile(optimizer='adam', loss='mse', metrics=['mse'])\nmodel.summary()\n\n######### CUSTOM #########\ninp = Input((n_features, n_steps))\nseq, state = GRU(n_units, activation='relu',\n                 return_state=True, return_sequences=True)(inp)\natt = Lambda(attention_seq, arguments={'scale': 0.05})([seq, tf.expand_dims(state,1)])\nx = GRU(n_units, activation='relu')(att)\nx = Dense(64, activation='relu')(x)\nx = Dropout(0.5)(x)\nout = Dense(n_steps_out)(x)\n\nmodel = Model(inp, out)\nmodel.compile(optimizer='adam', loss='mse', metrics=['mse'])\nmodel.summary()\n
\n", + "system": "" + }, + { + "instruction": "is the accuracy printed by keras model.fit function related to validation set or training set?", + "input": "", + "output": "

If you want to visualize accuracy as a plot, you can get the list of validation accuracy and loss for each epoch as follows (I ran only 5 epochs, so I get 5 values each for val_accuracy and val_loss):

\n\n
model.fit(x_train, y_train, epochs=5,validation_data=(x_test,y_test))\nmodel.evaluate(x_test, y_test)\n\n\n# access validation accuracy for each epoch\nacc = model.history.history['val_accuracy']\nprint(acc) # [0.9573, 0.9696, 0.9754, 0.9762, 0.9784]\n\n# access validation loss for each epoch\nloss_val = model.history.history['val_loss']\nprint(loss_val) # [0.13892182569280268, 0.10223265058882534, 0.08262962606661022, 0.08026109958332964, 0.07378015918848105]\n
\n", + "system": "" + }, + { + "instruction": "Ignoring... Layers with arguments in `__init__` must override `get_config`", + "input": "", + "output": "

I overrode the get_config() function as follows in my layer class and it seemed to have solved the issue:

\n
def get_config(self):\n    cfg = super().get_config()\n    return cfg    \n
\n

More concretely:

\n
class AttentionBlock(tf.keras.layers.Layer):\n    def __init__(self, units=32, **kwargs):\n        super(AttentionBlock, self).__init__(**kwargs)\n        self.units = units\n        self.conv2d = tf.keras.layers.SeparableConv2D(self.units, 1, padding='same',\n                                                      use_bias=True,\n                                                      kernel_initializer=tf.keras.initializers.VarianceScaling(),\n                                                      bias_initializer='zeros',\n                                                      bias_regularizer=None)\n        self.relu = tf.keras.layers.Activation('swish')\n        self.batchnorm = tf.keras.layers.BatchNormalization()\n\n    def get_config(self):\n        cfg = super().get_config()\n        return cfg\n\n    def call(self, inputs):\n        if type(inputs) is not list or len(inputs) <= 1:\n            raise Exception('Merge must be called on a list of tensors '\n                            '(at least 2). Got: ' + str(inputs))\n        x, y = inputs\n        w = self.conv2d(x)\n        v = self.conv2d(y)\n        out = self.relu(w + v)\n        out = self.batchnorm(out)\n        out = tf.nn.sigmoid(out)\n        return out\n
\n", + "system": "" + }, + { + "instruction": "Not able to reproduce results with Tensorflow even with random seed", + "input": "", + "output": "

In tensorflow 2.x version

\n

this code can reproduce results if you run TF on the CPU.

\n
seed_value = 42\nimport tensorflow as tf\ntf.random.set_seed(seed_value)\n
\n

But if you use TF on the GPU (the default), nondeterminism in the NVIDIA libraries will make your results unreproducible even if you write tf.random.set_seed(seed_value)

\n

So the solution is:

\n

pip install tensorflow-determinism

\n

and then use the following code

\n
def setup_seed(seed):\n    random.seed(seed)  \n    np.random.seed(seed) \n    tf.random.set_seed(seed)  # tf cpu fix seed\n    os.environ['TF_DETERMINISTIC_OPS'] = '1'  # tf gpu fix seed, please `pip install tensorflow-determinism` first\n\n\nsetup_seed(42)\n
\n", + "system": "" + }, + { + "instruction": "Why plot_model in Keras does not plot the model correctly?", + "input": "", + "output": "

The reason is that the model has not been built because it does not know its input shape. Either specify the input shape of the model on the first layer using input_shape (or input_dim) argument, or alternatively start fitting the model on some data by calling fit method (so the input shape could be automatically inferred). Also, as mentioned by @xdurch0 in the comments section, another option is to call build method of the model and pass it the input shape as argument.

\n", + "system": "" + }, + { + "instruction": "Augmentation of a tf.data.Dataset", + "input": "", + "output": "
\n

is the augmentation somehow being applied after each epoch in a way I can't understand?

\n
\n\n

No, in this tutorial the augmentation is only done once, not on every epoch. \nIf we want data augmentation that generates augmented data on every epoch during training, it is easier to use the TF Keras ImageDataGenerator. This creates an iterator that feeds the augmented data directly to the model. You can read more about it in this link.

\n\n

The tutorial just introduces you to the basic concept and the benefits of Data Augmentation.

\n\n

And do take note this part on the tutorial:

\n\n
BATCH_SIZE = 64\n# Only use a subset of the data so it's easier to overfit, for this tutorial\nNUM_EXAMPLES = 2048\n
\n\n

The tutorial deliberately uses only a subset of the data, which makes the model much more prone to overfitting; this might be why you observed that the chances of overfitting are higher.

\n\n

Augmentation is a way to get more data: we just make minor alterations to our existing dataset, such as flips, translations or rotations. You can do this with tf.image, applying it to each item in the dataset using the map method .map(). Our neural network would think these are distinct images anyway.

\n\n
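For example, random alterations applied to every item with .map() (a sketch using a dummy dataset):

```python
import tensorflow as tf

def augment(image, label):
    # tf.image ops make minor alterations; the network sees "new" images
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    return image, label

images = tf.zeros((16, 28, 28, 1))
labels = tf.zeros((16,), dtype=tf.int32)

dataset = tf.data.Dataset.from_tensor_slices((images, labels))
augmented = dataset.map(augment).batch(8)
batch_images, batch_labels = next(iter(augmented))
```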

In the tutorial, the non-augmented data and the augmented data are trained separately just to compare the two and show how small the difference is.

\n\n
\n

In this example the augmented model converges to an accuracy ~95% on the validation set. This is slightly higher (+1%) than the model trained without data augmentation.

\n
\n\n

We can clearly see that there isn't a huge difference between the two. But augmentation is normally used for its real purpose, which is to provide more altered data for your dataset, so the difference may be bigger if you combine the augmented data with your original dataset and increase the number of epochs.

\n", + "system": "" + }, + { + "instruction": "Convolutional Neural Network (CNN) input shape", + "input": "", + "output": "

In your case, Conv2D will be useful. Please refer to the description below to understand the input shape of a Convolutional Neural Network (CNN) using Conv2D.

\n\n

Let\u2019s see what the input shape looks like. The input data to a CNN will look like the following picture. We are assuming that our data is a collection of images.

\n\n

\"enter

\n\n

The input shape is (batch_size, height, width, channels). An RGB image would have 3 channels and a greyscale image would have 1 channel.

\n\n

Let\u2019s look at the following code

\n\n
import tensorflow as tf\nfrom tensorflow.keras.layers import Conv2D\n\nmodel=tf.keras.models.Sequential()\nmodel.add(Conv2D(filters=64, kernel_size=1, input_shape=(10,10,3)))\nmodel.summary()\n
\n\n

Output:

\n\n
Model: \"sequential\"\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nconv2d (Conv2D)              (None, 10, 10, 64)        256       \n=================================================================\n
\n\n

Though it looks like our input shape is 3D, you have to pass a 4D array when fitting the data, which should be like (batch_size, 10, 10, 3). Since there is no batch size value in the input_shape argument, we can go with any batch size while fitting the data.

\n\n

The output shape is (None, 10, 10, 64). The first dimension represents the batch size, which is None at the moment because the network does not know the batch size in advance.

\n\n

Note: Once you fit the data, None will be replaced by the batch size you give while fitting the data.

\n\n

Let\u2019s look at another example with the batch size specified

\n\n
import tensorflow as tf\nfrom tensorflow.keras.layers import Conv2D\n\nmodel=tf.keras.models.Sequential()\nmodel.add(Conv2D(filters=64, kernel_size=1, batch_input_shape=(16,10,10,3)))\nmodel.summary()\n
\n\n

Output:

\n\n
Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nconv2d_1 (Conv2D)            (16, 10, 10, 64)          256       \n=================================================================\n
\n\n

Here I have replaced the input_shape argument with batch_input_shape. As the name suggests, this argument asks for the batch size in advance, and you cannot provide any other batch size at the time of fitting the data.

\n", + "system": "" + }, + { + "instruction": "Keras custom metric sum is wrong", + "input": "", + "output": "

Honestly, I have run into the same problem at some point and to me, the best solution was to use Recall and Precision from the built-in metrics.

\n\n

Starting with TensorFlow 2.0, these two metrics are built into tensorflow.keras.metrics, and they work well provided that you use binary_crossentropy with a Dense(1) as the final layer (in the end they are metrics for binary classification, of course).

\n\n

The main thing(important to note) is that the implementation is completely different than what you try to achieve and what was in Keras before.

\n\n

In fact, in the Keras 1.X versions, all those metrics were available (F1-Score, Recall and Precision), but they were removed starting from Keras 2.X because batch-wise estimation is not relevant for the global estimation of these metrics.

\n\n

According to Francois Chollet (March 19th 2017) (https://github.com/keras-team/keras/issues/5794):

\n\n
\n

Basically these are all global metrics that were approximated\n batch-wise, which is more misleading than helpful. This was mentioned\n in the docs but it's much cleaner to remove them altogether. It was a\n mistake to merge them in the first place.

\n
\n\n

However, in TensorFlow 2.0 (tensorflow.keras.metrics), they use specialised built-in accumulators, and the computations are made properly, thus being relevant for your dataset. You can find a more detailed description here:

\n\n

https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Recall?version=stable

\n\n

My strong recommendation: use the built-in metrics and skip implementing them by hand, particularly since a hand-written version would naturally be batch-wise.

\n\n
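A sketch of wiring up the built-in metrics (assuming a binary classifier with a Dense(1) sigmoid output, as described above; layer sizes are made up):

```python
import tensorflow as tf

# Binary classifier: Dense(1) sigmoid output with binary_crossentropy
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

precision = tf.keras.metrics.Precision(name='precision')
recall = tf.keras.metrics.Recall(name='recall')

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[precision, recall])

# These metrics accumulate true/false positives across batches, so the
# value reported at epoch end is global, not a batch-wise average
precision.update_state([0, 1, 1], [0, 1, 0])  # TP=1, FP=0 -> precision 1.0
```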

If you have issues with loading the model, please ensure the following:

\n\n\n\n

Francois Chollet on release of Keras 2.3.0 :

\n\n
\n

Keras 2.3.0 is the first release of multi-backend Keras that supports\n TensorFlow 2.0. It maintains compatibility with TensorFlow 1.14, 1.13,\n as well as Theano and CNTK.

\n \n

This release brings the API in sync with the tf.keras API as of\n TensorFlow 2.0. However note that it does not support most TensorFlow\n 2.0 features, in particular eager execution. If you need these features, use tf.keras.

\n \n

This is also the last major release of multi-backend Keras. Going\n forward, we recommend that users consider switching their Keras code\n to tf.keras in TensorFlow 2.0. It implements the same Keras 2.3.0 API\n (so switching should be as easy as changing the Keras import\n statements), but it has many advantages for TensorFlow users, such as\n support for eager execution, distribution, TPU training, and generally\n far better integration between low-level TensorFlow and high-level\n concepts like Layer and Model. It is also better maintained.

\n \n

Development will focus on tf.keras going forward. We will keep\n maintaining multi-backend Keras over the next 6 months, but we will\n only be merging bug fixes. API changes will not be ported

\n
\n\n

Therefore, even the creator of Keras recommends that we switch to tf.keras instead of plain keras. Please also switch in your code and check if the problems still persist. If you mix tf.keras and keras, you will get all sorts of odd errors; thus change all your imports to tf.keras. For more information w.r.t TensorFlow 2.0 and more changes, you can consult this: https://www.pyimagesearch.com/2019/10/21/keras-vs-tf-keras-whats-the-difference-in-tensorflow-2-0/

\n", + "system": "" + }, + { + "instruction": "Extract intermmediate variable from a custom Tensorflow/Keras layer during inference (TF 2.0)", + "input": "", + "output": "

After a little more research I've managed to cobble together a working solution. I'll summarize here for any future weary internet travelers that come across this post.

\n\n

The first clues came from this github thread. The attention layer defined there seems to build on the attention layer in the previously mentioned Kaggle kernel. The github user adds a return_attention flag to the layer init which, when enabled, includes the attention vector in addition to the weighted RNN output vector in the layer output.

\n\n

I also added a get_config function suggested by this user in the same github thread which enables us to save and reload trained models. I had to add the return_attention flag to get_config, otherwise TF would throw a list iteration error when trying to load a saved model with return_attention=True.

\n\n

With those changes made, the model definition needed to be updated to capture the additional layer outputs.

\n\n
inputs = Input(shape=(max_sequence_length,))\nlstm = Bidirectional(LSTM(lstm1_units, return_sequences=True))(inputs)\n# Added 'attention_vector' to capture the second layer output\nattention, attention_vector = Attention(max_sequence_length, return_attention=True)(lstm)\nx = Dense(dense_units, activation=\"softmax\")(attention)\n
\n\n

The final, and most important piece of the puzzle came from this Stackoverflow answer. The method described there allows us to output multiple results while only optimizing on one of them. The code changes are subtle, but very important. I've added comments below in the spots I made changes to implement this functionality.

\n\n
model = Model(\n    inputs=inputs,\n    outputs=[x, attention_vector] # Original value:  outputs=x\n    )\n\nmodel.compile(\n    loss=['categorical_crossentropy', None], # Original value: loss='categorical_crossentropy'\n    optimizer=optimizer,\n    metrics=[BinaryAccuracy(name='accuracy')])\n
\n\n

With those changes in place, I retrained the model and voila! The output of model.predict() is now a list containing the score and its associated attention vector.

\n\n

The results of the change were pretty dramatic. Running inference on 10k examples took about 20 minutes using this new method. The old method utilizing intermediate models took ~33 minutes to perform inference on the same dataset.

\n\n

And for anyone that's interested, here is my modified Attention layer:

\n\n
from tensorflow.python.keras.layers import Layer\nfrom tensorflow.keras import initializers, regularizers, constraints\nfrom tensorflow.keras import backend as K\n\n\nclass Attention(Layer):\n    def __init__(self, step_dim,\n                W_regularizer=None, b_regularizer=None,\n                W_constraint=None, b_constraint=None,\n                bias=True, return_attention=True, **kwargs):\n        self.supports_masking = True\n        self.init = initializers.get('glorot_uniform')\n\n        self.W_regularizer = regularizers.get(W_regularizer)\n        self.b_regularizer = regularizers.get(b_regularizer)\n\n        self.W_constraint = constraints.get(W_constraint)\n        self.b_constraint = constraints.get(b_constraint)\n\n        self.bias = bias\n\n        self.step_dim = step_dim\n        self.features_dim = 0\n        self.return_attention = return_attention\n        super(Attention, self).__init__(**kwargs)\n\n    def build(self, input_shape):\n        assert len(input_shape) == 3\n\n        self.W = self.add_weight(shape=(input_shape[-1],),\n                                 initializer=self.init,\n                                 name='{}_W'.format(self.name),\n                                 regularizer=self.W_regularizer,\n                                 constraint=self.W_constraint)\n        self.features_dim = input_shape[-1]\n\n        if self.bias:\n            self.b = self.add_weight(shape=(input_shape[1],),\n                                     initializer='zero',\n                                     name='{}_b'.format(self.name),\n                                     regularizer=self.b_regularizer,\n                                     constraint=self.b_constraint)\n        else:\n            self.b = None\n\n        self.built = True\n\n    def compute_mask(self, input, input_mask=None):\n        return None\n\n    def call(self, x, mask=None):\n        features_dim = self.features_dim\n        step_dim = self.step_dim\n\n        eij = 
K.reshape(K.dot(K.reshape(x, (-1, features_dim)),\n                              K.reshape(self.W, (features_dim, 1))), (-1, step_dim))\n\n        if self.bias:\n            eij += self.b\n\n        eij = K.tanh(eij)\n\n        a = K.exp(eij)\n\n        if mask is not None:\n            a *= K.cast(mask, K.floatx())\n\n        a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())\n\n        a = K.expand_dims(a)\n        weighted_input = x * a\n        result = K.sum(weighted_input, axis=1)\n\n        if self.return_attention:\n            return [result, a]\n        return result\n\n    def compute_output_shape(self, input_shape):\n        if self.return_attention:\n            return [(input_shape[0], self.features_dim),\n                    (input_shape[0], input_shape[1])]\n        else:\n            return input_shape[0], self.features_dim\n\n    def get_config(self):\n        config = {\n            'step_dim': self.step_dim,\n            'W_regularizer': regularizers.serialize(self.W_regularizer),\n            'b_regularizer': regularizers.serialize(self.b_regularizer),\n            'W_constraint': constraints.serialize(self.W_constraint),\n            'b_constraint': constraints.serialize(self.b_constraint),\n            'bias': self.bias,\n            'return_attention': self.return_attention\n        }\n\n        base_config = super(Attention, self).get_config()\n        return dict(list(base_config.items()) + list(config.items()))\n
\n", + "system": "" + }, + { + "instruction": "Keras sees my GPU but doesn't use it when training a neural network", + "input": "", + "output": "

The logs are showing that the GPU does get used. You are almost certainly running into an IO bottleneck: your GPU is processing whatever the CPU is throwing at it way faster than the CPU can load and preprocess it. This is very common in deep learning, and there are ways to address it.

\n\n

We cannot provide a lot of help without knowing more about your data pipeline (byte size of a batch, preprocessing steps, ...), and how the data is stored. One typical way to speed things up is to store the data is a binary format, like TFRecords, so that the CPU can load it faster. See the official documentation for this.

\n\n
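Regardless of storage format, a common mitigation is to overlap CPU preprocessing with GPU compute via tf.data (a generic sketch on dummy data; swap in your own decoding and preprocessing):

```python
import tensorflow as tf

def preprocess(x):
    # placeholder for your own decoding / preprocessing logic
    return tf.cast(x, tf.float32) / 255.0

dataset = (
    tf.data.Dataset.from_tensor_slices(tf.zeros((64, 28, 28, 1), dtype=tf.uint8))
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel CPU preprocessing
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)  # prepare the next batch while the GPU works on the current one
)
first_batch = next(iter(dataset))
```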
\n\n

Edit: I quickly went through your input pipeline. The issue is indeed very likely to be IO:

\n\n\n", + "system": "" + }, + { + "instruction": "Disable Tensorflow logging completely", + "input": "", + "output": "

For anyone out there dealing with the same issue, this seems to be an issue with another library.\nThe code below should do the trick to disable Keras' deprecation warnings and Tensorflow's ugly logging; this is still ugly, so if someone has a better way to do it I'd really like to hear about it.\nThis also fixed the double logging.

\n
import os\nimport sys\nimport logging\n\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n\n# Keras outputs warnings using `print` to stderr so let's direct that to devnull temporarily\nstderr = sys.stderr\nsys.stderr = open(os.devnull, 'w')\n\nimport keras\n\n# we're done\nsys.stderr = stderr\n\nfrom keras.backend.tensorflow_backend import tf\n\nimport absl.logging\nlogging.root.removeHandler(absl.logging._absl_handler)\nabsl.logging._warn_preinit_stderr = False\n\nlogger = tf.get_logger()\nlogger.setLevel(logging.FATAL)\n\ngraph = tf.get_default_graph()\n
\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: no module named efficientnet.tfkeras", + "input": "", + "output": "

To install segmentation-models use the following command: pip install git+https://github.com/qubvel/segmentation_models

\n", + "system": "" + }, + { + "instruction": "Can Keras' model.predict return a dictionary?", + "input": "", + "output": "

Use my_model.output_names

\n\n

Given

\n\n
my_model = tf.keras.models.Model(inputs=my_inputs_dict, outputs=my_outputs_dict)\n
\n\n

create the dict yourself from my_model.output_names, which is a list of name attributes of your output layers in the order of prediction

\n\n
prediction_list = my_model.predict(my_test_input_dict)\nprediction_dict = {name: pred for name, pred in zip(my_model.output_names, prediction_list)}\n
\n", + "system": "" + }, + { + "instruction": "How to insert certain values at certain indices of a tensor in tensorflow?", + "input": "", + "output": "

Because the tensor you have is immutable, you can't assign a new value to it nor change it in place. What you have to do is modify its value using standard operations. Below is how you can do it:

\n\n
input_array = np.array([2, 4, 7, 11, 3, 8, 9, 19, 11, 7])\ninplace_array = np.array([10, 20])\nindices_array = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0])\n# [[2], [6]] \nindices = tf.cast(tf.where(tf.equal(indices_array, 1)), tf.int32)\n# [0, 0, 10, 0, 0, 0, 20, 0, 0, 0]\nscatter = tf.scatter_nd(indices, inplace_array, shape=tf.shape(input_array))\n# [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]\ninverse_mask = tf.cast(tf.math.logical_not(indices_array), tf.int32)\n# [2, 4, 0, 11, 3, 8, 0, 19, 11, 7]\ninput_array_zero_out = tf.multiply(inverse_mask, input_array)\n# [2, 4, 10, 11, 3, 8, 20, 19, 11, 7]\noutput = tf.add(input_array_zero_out, tf.cast(scatter, tf.int32))\n
\n", + "system": "" + }, + { + "instruction": "How to apply Guided BackProp in Tensorflow 2.0?", + "input": "", + "output": "

First of all, you have to change the computation of the gradient through a ReLU, i.e. \"Guided

\n\n

Here is a graphic example from the paper.\"Graphical

\n\n

This formula can be implemented with the following code:

\n\n
@tf.RegisterGradient(\"GuidedRelu\")\ndef _GuidedReluGrad(op, grad):\n   gate_f = tf.cast(op.outputs[0] > 0, \"float32\") #for f^l > 0\n   gate_R = tf.cast(grad > 0, \"float32\") #for R^l+1 > 0\n   return gate_f * gate_R * grad\n
\n\n

Now you have to override the original TF implementation of ReLU with:

\n\n
with tf.compat.v1.get_default_graph().gradient_override_map({'Relu': 'GuidedRelu'}):\n   #put here the code for computing the gradient\n
\n\n

After computing the gradient, you can visualize the result.\nHowever, one last remark: you compute a visualization for a single class. This means you take the activation of a chosen neuron and set the activations of all the other neurons to zero for the input of Guided BackProp.

\n", + "system": "" + }, + { + "instruction": "Keras - LeakyReLU has no attribute name error when saving model", + "input": "", + "output": "

Edited part (thanks @NagabhushanSN for mentioning the remaining issue)

\n\n

There is a line of code where we still have discriminator_model.add(Conv2D(64, 5, strides=2, input_shape=(28, 28, 1), padding='same', activation=LeakyReLU(alpha=0.2))); it is the second line of the code.

\n\n

If we modify that line, the final corrected code should be like this:

\n\n
discriminator_model = Sequential()\ndiscriminator_model.add(Conv2D(64, 5, strides=2, input_shape=(28, 28, 1), padding='same'))\ndiscriminator_model.add(LeakyReLU(alpha=0.2))\ndiscriminator_model.add(Dropout(0.4))\ndiscriminator_model.add(Conv2D(128, 5, strides=2, padding='same'))\ndiscriminator_model.add(LeakyReLU(alpha=0.2))\ndiscriminator_model.add(Dropout(0.4))\ndiscriminator_model.add(Conv2D(256, 5, strides=2, padding='same'))\ndiscriminator_model.add(LeakyReLU(alpha=0.2))\ndiscriminator_model.add(Dropout(0.4))\ndiscriminator_model.add(Conv2D(512, 5, strides=2, padding='same'))\ndiscriminator_model.add(LeakyReLU(alpha=0.2))\ndiscriminator_model.add(Dropout(0.4))\ndiscriminator_model.add(Flatten())\ndiscriminator_model.add(Dense(1))\ndiscriminator_model.add(Activation('sigmoid'))\ndiscriminator_model.summary()\n
\n\n

And this one should work well on the most recent version of tensorflow; I tested it on 1.8.0 and it works fine.\nHowever, if you check your code with an older version like tensorflow 1.1.0, you get the same error.

\n\n

For that case, I suggest updating tensorflow to a higher version

\n\n\n", + "system": "" + }, + { + "instruction": "Assign indexed entry of Keras tensor", + "input": "", + "output": "

It's a lot easier if you transform everything into one_hot tensors and use switch:

\n

(remember to have all operations inside a lambda layer, or you will have problems)

\n
def replace_values(x):\n    outs, indices, values = x\n\n    #this is due to a strange bug between lambda and integers....\n    indices = K.cast(indices, 'int32')\n\n\n    #create one_hot indices\n    one_hot_indices = K.one_hot(indices, size) #size is the size of gen_out\n    one_hot_indices = K.batch_flatten(one_hot_indices)\n\n    #have the desired values at their correct positions\n    values_to_use = one_hot_indices * new_values\n\n\n    #if values are 0, use gen_out, else use values\n    return K.switch(K.equal(values_to_use, 0), outs, values_to_use)\n\n\nupdated_gen_out = Lambda(replace_values)([gen_out, indices_to_reset, new_values])\n
\n
\n

Warning: new_values cannot be integer, they must be the same type as gen_out.

\n
\n
\n

Dummy example:

\n
import numpy as np\nfrom keras.layers import *\nfrom keras.models import Model\n\nsize = 5\nbatch_size = 15\n\ngen_out = Input((size,))\nindices_to_reset = Input((1,), dtype='int32')\nnew_values = Input((1,))\n\ndef replace_values(x):\n    outs, indices, values = x\n    print(K.int_shape(outs))\n    print(K.int_shape(indices))\n\n    #this is due to a strange bug between lambda and integers....\n    indices = K.cast(indices, 'int32')\n    one_hot_indices = K.one_hot(indices, size)\n    print(K.int_shape(one_hot_indices))\n    one_hot_indices = K.batch_flatten(one_hot_indices)\n    print(K.int_shape(one_hot_indices))\n\n    values_to_use = one_hot_indices * new_values\n    print(K.int_shape(values_to_use))\n\n    return K.switch(K.equal(values_to_use, 0), outs, values_to_use)\n\nupdated_gen_out = Lambda(replace_values)([gen_out, indices_to_reset, new_values])\n\nmodel = Model([gen_out,indices_to_reset,new_values], updated_gen_out)\n\ngen_outs = np.arange(batch_size * size).reshape((batch_size, size))\nindices = np.concatenate([np.arange(5)]*3, axis=0)\nnew_vals = np.arange(15).reshape((15,1))\n\nprint('\\n\\ngen outs')\nprint(gen_outs)\n\nprint('\\n\\nindices')\nprint(indices)\n\nprint('\\n\\nvalues')\nprint(new_vals)\n\nprint('\\n\\n results')\nprint(model.predict([gen_outs, indices, new_vals]))\n
\n

Outputs:

\n
(None, 5)\n(None, 1)\n(None, 1, 5)\n(None, None)\n(None, None)\n(None, 5)\n(None, 1)\n(None, 1, 5)\n(None, None)\n(None, None)\n\n\ngen outs\n[[ 0  1  2  3  4]\n [ 5  6  7  8  9]\n [10 11 12 13 14]\n [15 16 17 18 19]\n [20 21 22 23 24]\n [25 26 27 28 29]\n [30 31 32 33 34]\n [35 36 37 38 39]\n [40 41 42 43 44]\n [45 46 47 48 49]\n [50 51 52 53 54]\n [55 56 57 58 59]\n [60 61 62 63 64]\n [65 66 67 68 69]\n [70 71 72 73 74]]\n\n\nindices\n[0 1 2 3 4 0 1 2 3 4 0 1 2 3 4]\n\n\nvalues\n[[ 0]\n [ 1]\n [ 2]\n [ 3]\n [ 4]\n [ 5]\n [ 6]\n [ 7]\n [ 8]\n [ 9]\n [10]\n [11]\n [12]\n [13]\n [14]]\n\n\n results\n[[ 0.  1.  2.  3.  4.]\n [ 5.  1.  7.  8.  9.]\n [10. 11.  2. 13. 14.]\n [15. 16. 17.  3. 19.]\n [20. 21. 22. 23.  4.]\n [ 5. 26. 27. 28. 29.]\n [30.  6. 32. 33. 34.]\n [35. 36.  7. 38. 39.]\n [40. 41. 42.  8. 44.]\n [45. 46. 47. 48.  9.]\n [10. 51. 52. 53. 54.]\n [55. 11. 57. 58. 59.]\n [60. 61. 12. 63. 64.]\n [65. 66. 67. 13. 69.]\n [70. 71. 72. 73. 14.]] \n
\n

Notice that the diagonal values of gen_outs are replaced with the values in new_vals.

\n", + "system": "" + }, + { + "instruction": "Using TFRecords with keras", + "input": "", + "output": "

Change:

\n\n
\"label\": tf.FixedLenSequenceFeature([1]...\n
\n\n

into:

\n\n
\"label\": tf.FixedLenSequenceFeature([]...\n
\n\n

This is unfortunately not explained in the documentation on the website, but some explanation can be found in the docstring of FixedLenSequenceFeature on github. Basically, if your data consists of a single dimension (+ a batch dimension), you don't need to specify it.
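A sketch of the difference in practice, using TF 2.x names (tf.io.FixedLenSequenceFeature, tf.io.parse_single_sequence_example) and a hand-built SequenceExample:

```python
import tensorflow as tf

# Build a SequenceExample whose "label" feature list holds one scalar per step
example = tf.train.SequenceExample()
for v in [1, 2, 3]:
    example.feature_lists.feature_list['label'].feature.add().int64_list.value.append(v)

# [] means each step is a scalar; the sequence dimension itself is
# inferred from the data, so it must not be listed in the shape
sequence_features = {
    'label': tf.io.FixedLenSequenceFeature([], dtype=tf.int64)
}
_, parsed = tf.io.parse_single_sequence_example(
    example.SerializeToString(), sequence_features=sequence_features)
labels = parsed['label']  # shape (3,)
```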

\n", + "system": "" + }, + { + "instruction": "AttributeError: Layer has no inbound nodes, or AttributeError: The layer has never been called", + "input": "", + "output": "

TL;DR

\n

How can I fix it? Define an input layer:

\n
x = tf.keras.layers.Input(tensor=tf.ones(shape=(1, 8)))\ndense = tf.layers.Dense(units=2)\n\nout = dense(x)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    res = sess.run(fetches=out)\n    print(dense.output_shape) # shape = (1, 2)\n
\n
\n

According to the Keras documentation, if a layer has a single node, you can get its input tensor, output tensor, input shape and output shape via:

\n\n

But in the above example, when we call layer.output_shape or other attributes, it throws exceptions that seem a bit strange.

\n

If we dig into the source code, we find the error is caused by the inbound nodes.

\n
if not self._inbound_nodes:\n  raise AttributeError('The layer has never been called '\n                       'and thus has no defined output shape.')\n
\n

What are these inbound nodes?

\n
\n

A Node describes the connectivity between two layers. Each time a layer is connected to some new input,\na node is added to layer._inbound_nodes.\nEach time the output of a layer is used by another layer,\na node is added to layer._outbound_nodes.

\n
\n

As you can see above, when self._inbound_nodes is empty it throws an exception. This means that when a layer is not connected to the input layer or, more generally, when none of the previous layers are connected to an input layer, self._inbound_nodes is empty, which causes the problem.

\n

Notice that x in your example is a tensor, not an input layer. See another example for more clarification:

\n
x = tf.keras.layers.Input(shape=(8,))\ndense = tf.layers.Dense(units=2)\n\nout = dense(x)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    res = sess.run(fetches=out, feed_dict={x: np.ones(shape=(1, 8))})\n    print(res)\n    print(res.shape)  # shape = (1,2)\n    print(dense.output_shape)  # shape = (None,2)\n
\n

It is perfectly fine because the input layer is defined.

\n
\n

Note that, in your example, out is a tensor. The difference between the tf.shape() function and the .shape attribute (get_shape()) is:

\n
\n

tf.shape(x) returns a 1-D integer tensor representing the dynamic\nshape of x. A dynamic shape will be known only at graph execution time.

\n

x.shape returns a Python tuple representing the static\nshape of x. A static shape, known at graph definition time.

\n
\n
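The same distinction can be demonstrated with TF 2.x syntax (a sketch; the shapes are made up):

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=(None, 8))])
def describe(x):
    # static shape: the batch dimension is unknown (None) at graph
    # definition time, so x.shape[0] is None here
    static_batch = x.shape[0]
    # dynamic shape: resolved from the actual tensor at execution time
    dynamic_batch = tf.shape(x)[0]
    return dynamic_batch

result = describe(tf.ones((3, 8)))  # dynamic batch size resolves to 3
```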

Read more about tensor shape at: https://pgaleone.eu/tensorflow/2018/07/28/understanding-tensorflow-tensors-shape-static-dynamic/

\n", + "system": "" + }, + { + "instruction": "What does axis=[1,2,3] mean in K.sum in keras backend?", + "input": "", + "output": "

Just like in numpy, you can define the axis along you want to perform a certain operation. For example, for a 4d array, we can sum along a specific axis like this

\n\n
>>> a = np.arange(150).reshape((2, 3, 5, 5))\n>>> a.sum(axis=0).shape\n(3, 5, 5)\n>>> a.sum(axis=0, keepdims=True).shape\n(1, 3, 5, 5)\n>>> a.sum(axis=1, keepdims=True).shape\n(2, 1, 5, 5)\n
\n\n

If we feed a tuple, we can perform this operation along multiple axes.

\n\n
>>> a.sum(axis=(1, 2, 3), keepdims=True).shape\n(2, 1, 1, 1)\n
\n\n

If the argument is -1, it defaults to performing the operation over the last axis, regardless of how many there are.

\n\n
>>> a.sum(axis=-1, keepdims=True).shape\n(2, 3, 5, 1)\n
\n\n

This should have clarified points 1 and 2. Since the axis argument is (1, 2, 3), you need a minimum of 4 axes for the operation to be valid. Try changing your variables to something like val1 = np.arange(24).reshape((2, 2, 2, 3)) and it all works.

\n\n

The model seems to calculate the Binary Cross Entropy Dice loss and dice_coeff(), as the name suggests, calculates the Dice coefficient. I'm not sure what the purpose of smooth is, but if it was for the purpose of avoiding divisions by 0, you'd expect a small number, like 1e-6.

\n", + "system": "" + }, + { + "instruction": "ValueError: Cannot take the length of Shape with unknown rank", + "input": "", + "output": "

It seems this is a bug in the tensorflow.keras module. A fix that works has been suggested in the github issue below.

\n\n

https://github.com/tensorflow/tensorflow/issues/24520

\n", + "system": "" + }, + { + "instruction": "Cannot take the length of Shape with unknown rank", + "input": "", + "output": "

Could you post a longer stack-trace? I think your problem might be related to this recent tensorflow issue:

\n\n

https://github.com/tensorflow/tensorflow/issues/24520

\n\n

There's also a simple PR that fixes it (not yet merged). Maybe try it out yourself?

\n\n

EDIT

\n\n

Here is the PR:\nopen tensorflow/python/keras/engine/training_utils.py

\n\n

replace the following (line 232 at the moment):

\n\n
  if (x.shape is not None\n      and len(x.shape) == 1\n
\n\n

with this:

\n\n
  if tensor_util.is_tensor(x):\n    x_shape_ndims = x.shape.ndims if x.shape is not None else None\n  else:\n    x_shape_ndims = len(x.shape)\n\n  if (x_shape_ndims == 1\n
\n", + "system": "" + }, + { + "instruction": "AttributeError when training CNN 1D with Python Keras", + "input": "", + "output": "

If you look carefully, you will see this line right before the stack trace output:

\n\n
Train on 0 samples, validate on 1 samples\n
\n\n

There is no training data! Why? That's because you have set validation_split to 0.7, so 70% of the data points in X (and y) are first put aside for validation and the remaining 30% is used for training. Probably the number of data points in X is less than 4, and therefore its 30% would amount to less than 1, which means zero data points remain for training. Either use more than 4 data points or remove the validation_split argument (or lower it such that at least one sample remains for training).
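To see the arithmetic, here is a sketch mirroring the split logic Keras applies (the first part goes to training, the rest to validation):

```python
n_samples = 1            # suppose X contains a single data point
validation_split = 0.7

# Keras keeps the first (1 - validation_split) fraction for training
split_at = int(n_samples * (1 - validation_split))
train_samples = split_at              # 0 -> "Train on 0 samples"
val_samples = n_samples - split_at    # 1 -> "validate on 1 samples"
```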

\n", + "system": "" + }, + { + "instruction": "How to add custom summaries to tensorboard when training with tf.keras.Model.fit", + "input": "", + "output": "

The TensorFlow callback TensorBoardWithTime defined below logs the cumulative training and evaluation batch time.\nIt relies on the private properties TensorBoard._train_writer and TensorBoard._val_writer.\nIt works with TensorFlow 2.4.0rc2.

\n
import tensorflow as tf\n\n\nclass TensorBoardWithTime(tf.keras.callbacks.TensorBoard):\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n        self.timer_test_batch = Timer()\n        self.timer_train_batch = Timer()\n\n    def on_epoch_end(self, epoch, logs=None):\n        super().on_epoch_end(epoch, logs=logs)\n        with self._train_writer.as_default():\n            tf.summary.scalar('time_batch_sum', self.timer_train_batch.pop_result(), step=epoch)\n        with self._val_writer.as_default():\n            tf.summary.scalar('time_batch_sum', self.timer_test_batch.pop_result(), step=epoch)\n\n    def on_test_batch_begin(self, batch, logs=None):\n        self.timer_test_batch.begin()\n        super().on_test_batch_begin(batch, logs=logs)\n\n    def on_test_batch_end(self, batch, logs=None):\n        super().on_test_batch_end(batch, logs=logs)\n        self.timer_test_batch.end()\n\n    def on_train_batch_begin(self, batch, logs=None):\n        self.timer_train_batch.begin()\n        super().on_train_batch_begin(batch, logs=logs)\n\n    def on_train_batch_end(self, batch, logs=None):\n        super().on_train_batch_end(batch, logs=logs)\n        self.timer_train_batch.end()\n\n\nclass Timer(tf.keras.metrics.Sum):\n    def __init__(self):\n        super().__init__()\n        self.time_begin = None\n\n    def pop_result(self):\n        result = self.result()\n        self.reset_states()\n        return result\n\n    def begin(self):\n        self.time_begin = tf.timestamp()\n\n    def end(self):\n        self.update_state(tf.timestamp() - self.time_begin)\n        self.time_begin = None\n
\n", + "system": "" + }, + { + "instruction": "How to compare the similarity of two keras models", + "input": "", + "output": "

Your approach is correct.

\n

You can iterate over the two models layers and compare one by one (since you don't care about the weights or how the model is compiled and optimized).

\n

You can do this:

\n
for l1, l2 in zip(mdl.layers, mdl2.layers):\n    print(l1.get_config() == l2.get_config())\n
\n

Or just:

\n
print(mdl.get_config() == mdl2.get_config())\n
\n", + "system": "" + }, + { + "instruction": "Write custom Data Generator for Keras", + "input": "", + "output": "
from tensorflow.python.keras.utils import Sequence\nimport numpy as np   \n\nclass Mygenerator(Sequence):\n    def __init__(self, x_set, y_set, batch_size):\n        self.x, self.y = x_set, y_set\n        self.batch_size = batch_size\n\n    def __len__(self):\n        return int(np.ceil(len(self.x) / float(self.batch_size)))\n\n    def __getitem__(self, idx):\n        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]\n        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]\n\n        # read your data here using the batch lists, batch_x and batch_y\n        x = [my_readfunction(filename) for filename in batch_x] \n        y = [my_readfunction(filename) for filename in batch_y]\n        return np.array(x), np.array(y)\n
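The batch arithmetic in `__len__` and `__getitem__` can be checked independently of Keras. This is a small sketch with a hypothetical helper name, `batch_slices`, that reproduces the same index math:

```python
import numpy as np

def batch_slices(n_items, batch_size):
    # Mirrors the generator above: ceil(n / batch) batches,
    # each slice [idx*batch, (idx+1)*batch) clipped to n_items.
    n_batches = int(np.ceil(n_items / float(batch_size)))
    return [(i * batch_size, min((i + 1) * batch_size, n_items))
            for i in range(n_batches)]

# 10 files in batches of 3 -> the last batch holds the single leftover file
print(batch_slices(10, 3))  # [(0, 3), (3, 6), (6, 9), (9, 10)]
```

Note that slicing past the end of a Python list is safe, which is why the final short batch needs no special casing in the generator itself.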
\n", + "system": "" + }, + { + "instruction": "AttributeError: 'Sequential' object has no attribute 'output_names'", + "input": "", + "output": "

As @Shinva said, set the \"compile\" argument of the load_model function to \"False\".\nThen, after loading the model, compile it separately.

\n\n
from tensorflow.keras.models import save_model, load_model\nsave_model(model,'124446.model')\n
\n\n

Then for loading the model again do:

\n\n
saved_model = load_model('124446.model', compile=False)\nsaved_model.compile(optimizer='adam',\n          loss='sparse_categorical_crossentropy',\n          metrics=['accuracy'])\nsaved_model.predict([x_test])\n
\n\n

Update: For some unknown reason, I started to get the same errors as the question states. After trying different solutions, it seems that using the \"keras\" library directly instead of \"tensorflow.keras\" works properly.

\n\n

My setup is on \"Windows 10\" with python:'3.6.7', tensorflow:'1.11.0' and keras:'2.2.4'

\n\n

As per my knowledge, there are three different ways in which you can save and restore your model; provided you have used keras directly to make your model.

\n\n

Option1:

\n\n
import json\nfrom keras.models import model_from_json, load_model\n\n# Save Weights + Architecture\nmodel.save_weights('model_weights.h5')\nwith open('model_architecture.json', 'w') as f:\n    f.write(model.to_json())\n\n# Load Weights + Architecture\nwith open('model_architecture.json', 'r') as f:\n    new_model = model_from_json(f.read())\nnew_model.load_weights('model_weights.h5')\n
\n\n

Option2:

\n\n
from keras.models import save_model, load_model\n\n# Creates a HDF5 file 'my_model.h5' \nsave_model(model, 'my_model.h5') # model, [path + \"/\"] name of model\n\n# Deletes the existing model\ndel model  \n\n# Returns a compiled model identical to the previous one\nnew_model = load_model('my_model.h5')\n
\n\n

Option 3

\n\n
# using model's methods\nmodel.save(\"my_model.h5\")\n\n# deletes the existing model\ndel model\n\n# load the saved model back\nnew_model = load_model('my_model.h5')\n
\n\n

Option 1 requires the new_model to be compiled before use.

\n\n

Options 2 and 3 have almost identical syntax.

\n\n

Codes used from:
\n 1. Saving & Loading Keras Models
\n 2. https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model

\n", + "system": "" + }, + { + "instruction": "Keras floods Jupyter cell output during fit (verbose=1)", + "input": "", + "output": "

After a few tests I found that the error is related to the tqdm import. Tqdm was used in a piece of code which was later rewritten without it. Even though I was not using tqdm in this notebook, just having it imported affected the Keras output.\nTo fix it I just commented out this line:\nfrom tqdm import tqdm\nand everything went fine, with nice Keras progress bars. I'm not sure how exactly it conflicted with Keras, though...

\n", + "system": "" + }, + { + "instruction": "Why does tensorflow/keras choke when I try to fit multiple models in parallel?", + "input": "", + "output": "

Keras supports only one training process in a given session. I would try to create a different session for each model.

\n\n

I would insert this part of the code inside the %dopar%, to create a different session per model

\n\n
sess <- tf$Session(config = session_conf)\nK <- backend()\nK$set_session(sess)\n
\n", + "system": "" + }, + { + "instruction": "use tf\u2013idf in keras Tokenizer", + "input": "", + "output": "

As of now (Keras is always updating its functions), there is nothing that can produce exactly what you want.

\n\n

But it has a function that represents the sequences using the tf-idf scheme instead of raw frequencies:

\n\n
sequences = tokenizer.texts_to_matrix(df['Title'], mode='tfidf')\n
\n\n

instead of:

\n\n
sequences = tokenizer.texts_to_sequences(df['Title'])\n
\n\n

Also, as a suggestion, you can use sklearn's TfidfVectorizer to filter out low-frequency words from the text, then pass the result to your Keras model.
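For reference, the tf-idf weighting itself is simple to compute by hand. Below is a generic sketch of the scheme (not Keras's exact formula, which applies extra smoothing), using only the standard library:

```python
import math

def tfidf(docs):
    # docs: list of token lists; returns one {token: tf-idf weight} dict per doc
    n = len(docs)
    df = {}  # document frequency: in how many docs does each token appear
    for doc in docs:
        for tok in set(doc):
            df[tok] = df.get(tok, 0) + 1
    out = []
    for doc in docs:
        weights = {}
        for tok in doc:
            tf = doc.count(tok)                      # raw term frequency
            weights[tok] = tf * math.log(n / df[tok])  # scale down common terms
        out.append(weights)
    return out

docs = [["i", "love", "money"], ["i", "love", "keras"]]
print(tfidf(docs))  # words shared by both docs get weight 0
```

A word that appears in every document gets idf = log(1) = 0, which is exactly the "filter out uninformative words" effect mentioned above.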

\n", + "system": "" + }, + { + "instruction": "ReduceLROnPlateau gives error with ADAM optimizer", + "input": "", + "output": "

As discussed in the question's comments, Keras' ReduceLROnPlateau does appear to work with its default parameters:

\n\n
# keras' ReduceLROnPlateau callback default parameters:\nfrom keras.callbacks import ReduceLROnPlateau\nreduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10,\n                              verbose=0, mode='auto', min_delta=0.0001,\n                              cooldown=0, min_lr=0)\n
\n\n

I tried to recreate the error to try and identify which parameter causes it, but I couldn't. Due to this, I believe that the error doesn't appear for all input shapes or models.

\n\n

However, I can say for sure that, with the correct parameters, ReduceLROnPlateau does work with Adam.

\n", + "system": "" + }, + { + "instruction": "Using pure numpy metric as metric in Keras/TensorFlow", + "input": "", + "output": "

To use your function you have to convert between tensors and numpy arrays.\nTo convert a tensor into a numpy array, use the tensor's eval() method (see here):

\n\n
np_array = tensor.eval()\n
\n\n

If you want to convert a python object (also numpy array) into tensor you can use tf.convert_to_tensor (see here):

\n\n
tensor = tf.convert_to_tensor(np.mean(prec),dtype=tf.float32)\n
\n", + "system": "" + }, + { + "instruction": "Keras Model with Maxpooling1D and channel_first", + "input": "", + "output": "

Update: as mentioned by @HSK in the comments, the data_format argument is now supported in MaxPooling layers as a result of this PR.

\n\n
\n\n

Well, one alternative is to use the Permute layer (and remove the channels_first for the second conv layer):

\n\n
model = Sequential()\nmodel.add(Convolution1D(filters=16, kernel_size=35, activation='relu', input_shape=(1, 100), data_format='channels_first'))\nmodel.add(Permute((2, 1)))\nmodel.add(MaxPooling1D(pool_size=5))\nmodel.add(Convolution1D(filters=16, kernel_size=10, activation='relu'))\n\nmodel.summary()\n
\n\n

Model summary:

\n\n
Layer (type)                 Output Shape              Param #   \n=================================================================\nconv1d_7 (Conv1D)            (None, 16, 66)            576       \n_________________________________________________________________\npermute_1 (Permute)          (None, 66, 16)            0         \n_________________________________________________________________\nmax_pooling1d_2 (MaxPooling1 (None, 13, 16)            0         \n_________________________________________________________________\nconv1d_8 (Conv1D)            (None, 4, 16)              2096      \n=================================================================\nTotal params: 2,672\nTrainable params: 2,672\nNon-trainable params: 0\n_________________________________________________________________\n
\n", + "system": "" + }, + { + "instruction": "ImportError: cannot import name normalize_data_format", + "input": "", + "output": "

I updated the keras-contrib source code installed on my Linux machine, following these changes:

\n\n

https://github.com/ekholabs/keras-contrib/commit/0dac2da8a19f34946448121c6b9c8535bfb22ce2

\n\n

Now, it works well.

\n", + "system": "" + }, + { + "instruction": "Assign new values to certain tensor elements in Keras", + "input": "", + "output": "

This answer does not really \"assign\" values; it produces another tensor instead, but I believe it will do...

\n\n

Also, what you intend to do will totally break backpropagation for these elements.

\n\n

Knowing this:

\n\n
positives = k.backend.greater_equal(encoder_outputs, 0.5)\npositives = k.backend.cast(positives, k.backend.floatx())\n\nencoder_outputs = positives + ((1-positives)*encoder_outputs)\n
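The same arithmetic can be verified in NumPy: values at or above 0.5 become exactly 1, and everything else passes through unchanged.

```python
import numpy as np

x = np.array([0.1, 0.5, 0.7, 0.3])
positives = (x >= 0.5).astype(np.float32)   # mask of "positive" elements
out = positives + (1 - positives) * x       # 1 where mask is set, x elsewhere
print(out)  # [0.1 1.  1.  0.3]
```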
\n", + "system": "" + }, + { + "instruction": "How do I stop Keras showing "using XXX backend"?", + "input": "", + "output": "

Just use the code below.

\n\n
import os\nimport sys\nstderr = sys.stderr\nsys.stderr = open(os.devnull, 'w')\nimport keras\nsys.stderr = stderr\n
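The same idea can be packaged as a reusable context manager, so stderr is always restored even if the import fails. This is a hedged variant of the snippet above; it works because Keras prints the backend banner to stderr at import time:

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def silence_stderr():
    # Temporarily point stderr at the null device, then restore it
    saved = sys.stderr
    sys.stderr = open(os.devnull, 'w')
    try:
        yield
    finally:
        sys.stderr.close()
        sys.stderr = saved

with silence_stderr():
    print("this message is hidden", file=sys.stderr)
print("stderr is restored afterwards", file=sys.stderr)
```

Usage would be `with silence_stderr(): import keras`.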
\n", + "system": "" + }, + { + "instruction": "Keras: functional API what should the Input layer be for the embedding layer?", + "input": "", + "output": "

Try this updated code: you have to use len(vocabulary) + 1 as the input dimension of the Embedding layer, and pass weights=[word_weight_matrix].

\n\n
max_seq_length=100 #i.e., sentence has a max of 100 words \nword_weight_matrix = ... #this has a shape of 9825, 300, i.e., the vocabulary has 9825 words and each is a 300 dimension vector \ndeep_inputs = Input(shape=(max_seq_length,))\nembedding = Embedding(9826, 300, input_length=max_seq_length,\n                      weights=[word_weight_matrix], trainable=False)(deep_inputs) # line A\nhidden = Dense(targets, activation=\"softmax\")(embedding)\nmodel = Model(inputs=deep_inputs, outputs=hidden)\n
\n", + "system": "" + }, + { + "instruction": "keras LSTM layer takes too long to train", + "input": "", + "output": "

If you are using a GPU, replace all LSTM layers with CuDNNLSTM layers. You can import them from keras.layers:

\n\n
from keras.layers import  CuDNNLSTM\n\ndef create_model(self):\n    inputs = {}\n    inputs['input'] = []\n    lstm = []\n    placeholder = {}\n    for tf, v in self.env.timeframes.items():\n        inputs[tf] = Input(shape = v['shape'], name = tf)\n        lstm.append(CuDNNLSTM(8)(inputs[tf]))\n        inputs['input'].append(inputs[tf])\n    account = Input(shape = (3,), name = 'account')\n    account_ = Dense(8, activation = 'relu')(account)\n    dt = Input(shape = (7,), name = 'dt')\n    dt_ = Dense(16, activation = 'relu')(dt)\n    inputs['input'].extend([account, dt])\n\n    data = Concatenate(axis = 1)(lstm)\n    data = Dense(128, activation = 'relu')(data)\n    y = Concatenate(axis = 1)([data, account, dt])\n    y = Dense(256, activation = 'relu')(y)\n    y = Dense(64, activation = 'relu')(y)\n    y = Dense(16, activation = 'relu')(y)\n    output = Dense(3, activation = 'linear')(y)\n\n    model = Model(inputs = inputs['input'], outputs = output)\n    model.compile(loss = 'mse', optimizer = 'adam', metrics = ['mae'])\n    return model\n
\n\n

Here is more information: https://keras.io/layers/recurrent/#cudnnlstm

\n\n

This will significantly speed up the model =)

\n", + "system": "" + }, + { + "instruction": "How to detect the epoch where Keras earlyStopping occurred?", + "input": "", + "output": "

It is set (code) as a field inside the callback:

\n\n
early_stopping_monitor.stopped_epoch\n
\n\n

will give you the epoch it stopped at after training or 0 if it didn't early stop.
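For intuition, the patience logic that determines stopped_epoch can be sketched in plain Python. This is a simplified mimic, not Keras's actual implementation (which also handles min_delta, baselines, and weight restoration):

```python
def stopped_epoch(losses, patience):
    # Returns the epoch index at which training would stop,
    # or 0 if the loss never stagnates long enough (as in Keras).
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return 0

# loss improves twice, then stagnates; with patience=2 we stop at epoch 3
print(stopped_epoch([1.0, 0.8, 0.9, 0.85, 0.95, 0.9], patience=2))  # 3
```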

\n", + "system": "" + }, + { + "instruction": "Crop the center of the image in Keras ImageDataGenerator or flow_from_directory", + "input": "", + "output": "

I tried to solve it in this way:

\n\n
def crop_generator(batches, crop_length):\n  while True:\n    batch_x, batch_y = next(batches)\n    start_y = (img_height - crop_length) // 2\n    start_x = (img_width - crop_length) // 2\n    if K.image_data_format() == 'channels_last':\n        batch_crops = batch_x[:, start_x:(img_width - start_x), start_y:(img_height - start_y), :]\n    else:\n        batch_crops = batch_x[:, :, start_x:(img_width - start_x), start_y:(img_height - start_y)]\n    yield (batch_crops, batch_y)\n
\n\n
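A slightly tighter variant of the slicing above: using start:start + crop_length always returns exactly crop_length pixels, whereas start:(size - start) overshoots by one when the size difference is odd. The sketch assumes channels-last batches:

```python
import numpy as np

def center_crop(batch, crop_length):
    # batch shape: (n, height, width, channels)
    _, h, w, _ = batch.shape
    start_y = (h - crop_length) // 2
    start_x = (w - crop_length) // 2
    return batch[:, start_y:start_y + crop_length,
                    start_x:start_x + crop_length, :]

batch = np.zeros((2, 225, 224, 3))  # odd height difference on purpose
print(center_crop(batch, 100).shape)  # (2, 100, 100, 3)
```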

Still, if you have a better way, please share your suggestions.

\n", + "system": "" + }, + { + "instruction": "How To Run Two Models In Parallel On Two Different GPUs In Keras", + "input": "", + "output": "

A solution to this problem can be found here. However, the softmax activation function runs only on the CPU as of now, so it is necessary to direct the CPU to perform the dense layer:

\n\n
with tf.device('cpu:0'):\n
\n\n

Switching between the CPU and the GPU does not seem to cause a noticeable slowdown. With LSTMs, though, it may be best to run the entire model on the CPU.

\n", + "system": "" + }, + { + "instruction": "How LSTM work with word embeddings for text classification, example in Keras", + "input": "", + "output": "

Shapes with the embedding:

\n\n\n\n

In the LSTM (after the embedding, or if you didn't have an embedding)

\n\n\n\n

Your code seems to be doing everything needed, so you don't have to do anything special to train this model. Use fit with a proper X_train, paired with a y_train of shape (reviews, 1).

\n\n
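What the embedding step does to the shapes can be seen with a plain NumPy lookup (toy numbers; the real layer indexes a learned weight matrix the same way):

```python
import numpy as np

vocab_size, embedding_size = 1000, 100
reviews, words = 32, 50                  # 32 reviews, 50 word ids each

E = np.random.rand(vocab_size, embedding_size)            # embedding matrix
ids = np.random.randint(0, vocab_size, (reviews, words))  # integer word ids

vectors = E[ids]         # lookup: one 100-dim vector per word
print(vectors.shape)     # (32, 50, 100) == (reviews, words, embedding_size)
```

This (reviews, words, embedding_size) tensor is exactly the shape the LSTM expects as input.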

Questions:

\n\n
\n

If each Xt is a 100-dimension vector represent one word in a review, do I feed each word in a review to a LSTM at a time?

\n
\n\n

No, the LSTM layer is already doing everything by itself, including all recurrent steps, provided its input has shape (reviews, words, embedding_size)

\n\n
\n\n
\n

How is the 100 neurons interconnected?

\n
\n\n

They are sort of parallel (you can imagine 100 images like the one you posted, all parallel), almost the same as other kinds of usual layers.

\n\n

But during the recurrent steps there is a mathematical expression that makes them communicate (unfortunately I can't explain exactly how).

\n\n
\n\n
\n

Why can't I just use 1 cell in the figure above for classification since it is a recurrent manner so it feeds the output back to itself in the next timestamp?

\n
\n\n

You can if you want, but the more cells, the smarter the layer (as happens with every other kind of layer)

\n\n

There is nothing special about the number 100 chosen. It's probably a coincidence or a misunderstanding. It can be any number, 50 cells, 200 cells, 1000 cells...

\n\n
\n\n

Understanding LSTMs deeply:

\n\n\n", + "system": "" + }, + { + "instruction": "Regression Activation Mapping for a given CNN", + "input": "", + "output": "

It seems that there is not a very big difference between RAM and CAM. They both use the weights of the global average pooling layer.

\n", + "system": "" + }, + { + "instruction": "I cannot understand the skipgrams() function in keras", + "input": "", + "output": "

That's because your vocabulary is very small: it contains only the same three words (\"love\", \"i\", \"money\"). That's why a \"random word from the vocabulary\" is always from the same sentence, and moreover from the same context.

\n\n

As an experiment, do this:

\n\n
text = \"I love money\" #My test sentence\ntext2 = \"Lorem ipsum dolor sit amet, consectetur adipiscing elit, \" \\\n        \"sed do eiusmod tempor incididunt ut labore et dolore magna aliqua\"\ntokenizer = Tokenizer()\ntokenizer.fit_on_texts([text, text2])\n...\n
\n\n

Basically, let the tokenizer know that there are more words in the text. You should see that the negative examples are now generated mostly from the second sentence, for example:

\n\n
(i , sit ) -> 0\n(love , i ) -> 1\n(love , money ) -> 1\n(love , ut ) -> 0\n(love , sit ) -> 0\n(money , consectetur ) -> 0\n(money , love ) -> 1\n(i , love ) -> 1\n
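A simplified mimic of what skipgrams produces (a hypothetical helper, not Keras's implementation): positive pairs come from a window around each word, and negatives pair the target with a random vocabulary index.

```python
import random

def skipgram_pairs(seq, vocab_size, window=1, seed=0):
    rng = random.Random(seed)
    couples, labels = [], []
    for i, target in enumerate(seq):
        lo, hi = max(0, i - window), min(len(seq), i + window + 1)
        for j in range(lo, hi):
            if j == i:
                continue
            couples.append([target, seq[j]])            # real context word
            labels.append(1)
            couples.append([target, rng.randrange(1, vocab_size)])  # random negative
            labels.append(0)
    return couples, labels

couples, labels = skipgram_pairs([1, 2, 3], vocab_size=50)
```

With a large vocab_size, the random negatives are almost never genuine context words, which is why a bigger fitted vocabulary makes the generated negatives look sensible.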
\n", + "system": "" + }, + { + "instruction": "Pixel-wise loss weight for image segmentation in Keras", + "input": "", + "output": "

In case it is still relevant: I needed to solve this recently. You can paste the code below into a Jupyter notebook to see how it works.

\n\n
%matplotlib inline\nimport numpy as np\nfrom skimage.io import imshow\nfrom skimage.measure import label\nfrom scipy.ndimage.morphology import distance_transform_edt\nimport numpy as np\n\ndef generate_random_circles(n = 100, d = 256):\n    circles = np.random.randint(0, d, (n, 3))\n    x = np.zeros((d, d), dtype=int)\n    f = lambda x, y: ((x - x0)**2 + (y - y0)**2) <= (r/d*10)**2\n    for x0, y0, r in circles:\n        x += np.fromfunction(f, x.shape)\n    x = np.clip(x, 0, 1)\n\n    return x\n\ndef unet_weight_map(y, wc=None, w0 = 10, sigma = 5):\n\n    \"\"\"\n    Generate weight maps as specified in the U-Net paper\n    for boolean mask.\n\n    \"U-Net: Convolutional Networks for Biomedical Image Segmentation\"\n    https://arxiv.org/pdf/1505.04597.pdf\n\n    Parameters\n    ----------\n    mask: Numpy array\n        2D array of shape (image_height, image_width) representing binary mask\n        of objects.\n    wc: dict\n        Dictionary of weight classes.\n    w0: int\n        Border weight parameter.\n    sigma: int\n        Border width parameter.\n\n    Returns\n    -------\n    Numpy array\n        Training weights. 
A 2D array of shape (image_height, image_width).\n    \"\"\"\n\n    labels = label(y)\n    no_labels = labels == 0\n    label_ids = sorted(np.unique(labels))[1:]\n\n    if len(label_ids) > 1:\n        distances = np.zeros((y.shape[0], y.shape[1], len(label_ids)))\n\n        for i, label_id in enumerate(label_ids):\n            distances[:,:,i] = distance_transform_edt(labels != label_id)\n\n        distances = np.sort(distances, axis=2)\n        d1 = distances[:,:,0]\n        d2 = distances[:,:,1]\n        w = w0 * np.exp(-1/2*((d1 + d2) / sigma)**2) * no_labels\n    else:\n        w = np.zeros_like(y)\n    if wc:\n        class_weights = np.zeros_like(y)\n        for k, v in wc.items():\n            class_weights[y == k] = v\n        w = w + class_weights\n    return w\n\ny = generate_random_circles()\n\nwc = {\n    0: 1, # background\n    1: 5  # objects\n}\n\nw = unet_weight_map(y, wc)\n\nimshow(w)\n
\n", + "system": "" + }, + { + "instruction": "Test data predictions yield random results when making predictions from a saved model", + "input": "", + "output": "

I highly suspect this is due to mismatched preprocessing, i.e. you apply different preprocessing for x_train and x_test.

\n\n

Since you didn't show how x_train is obtained, I can't verify this for you. However, it is known that the pretrained VGG16 model uses caffe-like normalization (see preprocess_input), which normalizes an input image by subtracting the channel-wise means. Note that this is different from what you did for your testing images,

\n\n
\n

x_test = np.array([cv2.imread(tile) for tile in tiles], np.float16) /\n 255.

\n
\n\n

Instead, you need to do something like in the keras VGG16 example:

\n\n
#Extract features with VGG16\nfrom keras.applications.vgg16 import VGG16\nfrom keras.preprocessing import image\nfrom keras.applications.vgg16 import preprocess_input\nimport numpy as np\n\nmodel = VGG16(weights='imagenet', include_top=False)\n\nimg_path = 'elephant.jpg'\nimg = image.load_img(img_path, target_size=(224, 224))\nx = image.img_to_array(img)\nx = np.expand_dims(x, axis=0)\nx = preprocess_input(x)  # <-- use this function to preprocess your image\n\nfeatures = model.predict(x)\n
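For intuition, caffe-style preprocessing amounts to RGB to BGR reordering plus subtracting fixed per-channel means, which is quite different from dividing by 255. This is a sketch using the ImageNet channel means Keras uses; preprocess_input remains the authoritative implementation:

```python
import numpy as np

IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68])

def caffe_preprocess(x):
    # x: float array of shape (n, h, w, 3) in RGB, values 0-255
    x = x[..., ::-1].astype(np.float64)   # RGB -> BGR
    return x - IMAGENET_MEANS_BGR         # subtract per-channel mean

batch = np.zeros((1, 2, 2, 3))
print(caffe_preprocess(batch)[0, 0, 0])   # [-103.939 -116.779 -123.68]
```

A model trained on inputs in this range will produce near-random predictions when fed inputs scaled to [0, 1] instead.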
\n", + "system": "" + }, + { + "instruction": "Keras ConvLSTM2D: ValueError on output layer", + "input": "", + "output": "

If you want a prediction per frame, then you should definitely set return_sequences=True in your last ConvLSTM2D layer.

\n\n

For the ValueError on target shape, replace the GlobalAveragePooling2D() layer with AveragePooling3D((1, 135, 240)) plus Reshape((-1, 40)) to make the output shape compatible with your target array.

\n", + "system": "" + }, + { + "instruction": "Keras custom function: implementing Jaccard", + "input": "", + "output": "

Careful with Artur's answer!

\n\n
\n
intersection = K.sum(K.abs(y_true * y_pred), axis=-1)\nsum_ = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1)\n
\n
\n\n

The loss function in the link is incorrect! \n|X| denotes the cardinality of the set, not the absolute value! Also, the summation runs over the classes.

\n\n

The corrected version should look something like this: (tensorflow version, not tested yet)

\n\n
def jaccard_distance(y_true, y_pred, smooth=100):\n    \"\"\" Calculates mean of Jaccard distance as a loss function \"\"\"\n    intersection = tf.reduce_sum(y_true * y_pred, axis=(1,2))\n    sum_ = tf.reduce_sum(y_true + y_pred, axis=(1,2))\n    jac = (intersection + smooth) / (sum_ - intersection + smooth)\n    jd =  (1 - jac) * smooth\n    return tf.reduce_mean(jd)\n
\n\n

Inputs are image tensors of shape (batch, width, height, classes).\nIt calculates the Jaccard distance for each batch element and class (shape=(batch, classes)) and returns the mean value as a scalar loss.
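The formula can be sanity-checked with a NumPy transliteration (same math, hypothetical name): identical masks should give a loss of 0, and any mismatch a positive loss.

```python
import numpy as np

def jaccard_distance_np(y_true, y_pred, smooth=100):
    # Same expression as the TF version above, per (batch, class)
    intersection = np.sum(y_true * y_pred, axis=(1, 2))
    sum_ = np.sum(y_true + y_pred, axis=(1, 2))
    jac = (intersection + smooth) / (sum_ - intersection + smooth)
    return np.mean((1 - jac) * smooth)

y = np.ones((1, 4, 4, 1))
print(jaccard_distance_np(y, y))                      # 0.0 (perfect overlap)
print(jaccard_distance_np(y, np.zeros_like(y)) > 0)   # True (no overlap)
```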

\n", + "system": "" + }, + { + "instruction": "Keras - .flow_from_directory(directory)", + "input": "", + "output": "

According to the Keras documentation:

\n\n

flow_from_directory(directory): takes the path to a directory and generates batches of augmented/normalized data. Yields batches indefinitely, in an infinite loop.

\n\n

With shuffle = False, it takes the same batch indefinitely, leading to these accuracy values. I changed it to shuffle = True and it works fine now.

\n", + "system": "" + }, + { + "instruction": "Why is validation accuracy higher than training accuracy when applying data augmentation?", + "input": "", + "output": "

The following is just a theory, but it is one that you can test!

\n\n

One possible explanation for why your validation accuracy is better than your training accuracy is that the data augmentation you are applying to the training data is making the task significantly harder for the network. (It's not totally clear from your code sample, but it looks like you are applying the augmentation only to your training data, not your validation data.)

\n\n

To see why this might be the case, imagine you are training a model to recognise whether someone in the picture is smiling or frowning. Most pictures of faces have the face the \"right way up\" so the model could solve the task by recognising the mouth and measuring if it curves upwards or downwards. If you now augment the data by applying random rotations, the model can no longer focus just on the mouth, as the face could be upside down. In addition to recognising the mouth and measuring its curve, the model now also has to work out the orientation of the face as a whole and compare the two.

\n\n

In general, applying random transformations to your data is likely to make it harder to classify. This can be a good thing as it makes your model more robust to changes in the input, but it also means that your model gets an easier ride when you test it on non-augmented data.

\n\n

This explanation might not apply to your model and data, but you can test it in two ways:

\n\n
    \n
  1. If you decrease the range of the augmentation transformations you are using, you should see the training and validation loss get closer together.
  2. If you apply the exact same augmentation transformations to the validation data as you do to the training data, then you should see the validation accuracy drop below the training accuracy, as you expected.
\n", + "system": "" + }, + { + "instruction": "Keras : AttributeError: 'int' object has no attribute 'ndim' when using model.fit", + "input": "", + "output": "

If train_x and train_y are normal Python lists, they don't have the attribute .ndim. Only Numpy arrays have this attribute representing the number of dimensions.

\n\n

(https://docs.scipy.org/doc/numpy-1.12.0/reference/generated/numpy.ndarray.ndim.html)

\n", + "system": "" + }, + { + "instruction": "How do I create a Keras Embedding layer from a pre-trained word embedding dataset?", + "input": "", + "output": "

You will need to pass an embeddingMatrix to the Embedding layer as follows:

\n\n

Embedding(vocabLen, embDim, weights=[embeddingMatrix], trainable=isTrainable)

\n\n\n\n

The glove.6B.50d.txt is a list of whitespace-separated values: word token + (50) embedding values. e.g. the 0.418 0.24968 -0.41242 ...

\n\n

To create a pretrainedEmbeddingLayer from a Glove file:

\n\n
# Prepare Glove File\ndef readGloveFile(gloveFile):\n    with open(gloveFile, 'r') as f:\n        wordToGlove = {}  # map from a token (word) to a Glove embedding vector\n        wordToIndex = {}  # map from a token to an index\n        indexToWord = {}  # map from an index to a token \n\n        for line in f:\n            record = line.strip().split()\n            token = record[0] # take the token (word) from the text line\n            wordToGlove[token] = np.array(record[1:], dtype=np.float64) # associate the Glove embedding vector to a that token (word)\n\n        tokens = sorted(wordToGlove.keys())\n        for idx, tok in enumerate(tokens):\n            kerasIdx = idx + 1  # 0 is reserved for masking in Keras (see above)\n            wordToIndex[tok] = kerasIdx # associate an index to a token (word)\n            indexToWord[kerasIdx] = tok # associate a word to a token (word). Note: inverse of dictionary above\n\n    return wordToIndex, indexToWord, wordToGlove\n\n# Create Pretrained Keras Embedding Layer\ndef createPretrainedEmbeddingLayer(wordToGlove, wordToIndex, isTrainable):\n    vocabLen = len(wordToIndex) + 1  # adding 1 to account for masking\n    embDim = next(iter(wordToGlove.values())).shape[0]  # works with any glove dimensions (e.g. 50)\n\n    embeddingMatrix = np.zeros((vocabLen, embDim))  # initialize with zeros\n    for word, index in wordToIndex.items():\n        embeddingMatrix[index, :] = wordToGlove[word] # create embedding: word index to Glove word embedding\n\n    embeddingLayer = Embedding(vocabLen, embDim, weights=[embeddingMatrix], trainable=isTrainable)\n    return embeddingLayer\n\n# usage\nwordToIndex, indexToWord, wordToGlove = readGloveFile(\"/path/to/glove.6B.50d.txt\")\npretrainedEmbeddingLayer = createPretrainedEmbeddingLayer(wordToGlove, wordToIndex, False)\nmodel = Sequential()\nmodel.add(pretrainedEmbeddingLayer)\n...\n
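The matrix-building logic above can be verified with a toy vocabulary instead of the real GloVe file (a hypothetical two-word example):

```python
import numpy as np

# toy stand-in for the parsed GloVe file
wordToGlove = {"love": np.array([0.1, 0.2]), "i": np.array([0.3, 0.4])}

tokens = sorted(wordToGlove.keys())                          # ['i', 'love']
wordToIndex = {tok: i + 1 for i, tok in enumerate(tokens)}   # 0 reserved for masking

vocab_len = len(wordToIndex) + 1   # +1 for the masking row
emb_dim = 2
matrix = np.zeros((vocab_len, emb_dim))
for word, index in wordToIndex.items():
    matrix[index, :] = wordToGlove[word]

print(matrix.shape)   # (3, 2): two words + the masking row
print(matrix[0])      # row 0 stays all zeros
```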
\n", + "system": "" + }, + { + "instruction": "Keras : Why does Sequential and Model give different outputs?", + "input": "", + "output": "

You set up the random seed only in numpy and not in tensorflow (which is presumably the backend of Keras in your case). Try adding this to your code:

\n\n
from numpy.random import seed\nseed(1337)\nfrom tensorflow import set_random_seed\nset_random_seed(1337)\n
\n\n

There is a detailed article about this topic here.

\n", + "system": "" + }, + { + "instruction": "Keras: difference between test_on_batch and predict_on_batch", + "input": "", + "output": "

So basically:

\n\n\n\n

The suffix on_batch comes from the fact that the model performs all computations at once on the data you pass. Sometimes that's not feasible, so it's better to divide the data into small chunks and perform multiple computations. Keras provides functions to do this automatically. So predict is the chunked equivalent of predict_on_batch, and evaluate of test_on_batch.

\n\n

In order to choose function used during cross-validation - you need to use:

\n\n\n\n

So - as you may see - you can actually combine these two functions while evaluating your model. If you simply want metrics - try test_on_batch / evaluate as it may save you a lot of computations.
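The chunking equivalence is easy to illustrate outside Keras: applying a per-sample function batch-by-batch and concatenating gives the same result as one big call.

```python
import numpy as np

predict_on_batch = lambda x: x * 2 + 1   # stand-in for any per-sample model

x = np.arange(10, dtype=float)
all_at_once = predict_on_batch(x)
in_chunks = np.concatenate([predict_on_batch(chunk)
                            for chunk in np.array_split(x, 3)])

print(np.array_equal(all_at_once, in_chunks))  # True
```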

\n", + "system": "" + }, + { + "instruction": "Keras Multiply() layer in functional API", + "input": "", + "output": "

With keras > 2.0:

\n\n
from keras.layers import multiply\noutput = multiply([dense_all, dense_att])\n
\n", + "system": "" + }, + { + "instruction": "How to give variable size images as input in keras", + "input": "", + "output": "

You can train variable sizes, as long as you don't try to put variable sizes in a numpy array.

\n\n

But some layers do not support variable sizes, and Flatten is one of them. It's impossible to train models containing Flatten layers with variable sizes.

\n\n

You can try, though, to replace the Flatten layer with either a GlobalMaxPooling2D or a GlobalAveragePooling2D layer. But these layers may condense too much information into too little data, so it might be necessary to add more convolutions with more channels before them.

\n\n
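Why global pooling removes the size constraint: it reduces the spatial axes away entirely, so the output length depends only on the number of channels, never on height or width.

```python
import numpy as np

def global_max_pool(feature_map):
    # feature_map: (height, width, channels) -> (channels,)
    return feature_map.max(axis=(0, 1))

small = np.random.rand(10, 12, 16)
large = np.random.rand(300, 200, 16)
print(global_max_pool(small).shape, global_max_pool(large).shape)  # (16,) (16,)
```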

You must make sure that your generator will produce batches containing images of the same size, though. The generator will fail when trying to put two or more images with different sizes in the same numpy array.

\n", + "system": "" + }, + { + "instruction": "Implement perceptual loss with pretrained VGG using keras", + "input": "", + "output": "

Number of channels

\n\n

Well, the first problem is significant.

\n\n

VGG models were made for color images with 3 channels... so it's not quite the right model for your case. I'm not sure if there are pretrained models for black & white images, but you should search for them.

\n\n

A workaround, which may or may not work well, is to make 3 copies of mainModel's output:

\n\n
tripleOut = Concatenate()([mainModel.output,mainModel.output,mainModel.output])\n
\n\n

Graph disconnected

\n\n

This means that nowhere in your code, you created a connection between the input and output of fullModel. You must connect the output of mainModel to the input of lossModel

\n\n

But first, let's prepare the VGG model for multiple outputs.

\n\n

Preparing lossModel for multiple outputs

\n\n

You must select which layers of the VGG model will be used to calculate the loss. If you use only the final output, it won't really be a good perceptual loss, because the final output is made more of concepts than of features.

\n\n

So, after you select the layers, make a list of their indices or names:

\n\n
selectedLayers = [1,2,9,10,17,18] #for instance\n
\n\n

Let's make a new model from VGG16, but with multiple outputs:

\n\n
#a list with the output tensors for each selected layer:\nselectedOutputs = [lossModel.layers[i].output for i in selectedLayers]\n     #or [lossModel.get_layer(name).output for name in selectedLayers]\n\n#a new model that has multiple outputs:\nlossModel = Model(lossModel.inputs,selectedOutputs)\n
\n\n

Joining the models

\n\n

Now, here we create the connection between the two models.

\n\n

We call the lossModel (as if it were a layer) taking the output of the mainModel as input:

\n\n
lossModelOutputs = lossModel(tripleOut) #or mainModel.output if not using tripleOut\n
\n\n

Now, with the graph entirely connected from the input of mainModel to the output of lossModel, we can create the fullModel:

\n\n
fullModel = Model(mainModel.input, lossModelOutputs)\n\n#if the line above doesn't work due to a type problem, make a list with lossModelOutputs:\nlossModelOutputs = [lossModelOutputs[i] for i in range(len(selectedLayers))]\n
\n\n

Training

\n\n

Take the predictions of this new lossModel, just as you did. But for the workaround, let's make it triple channel as well:

\n\n
triple_Y_train = np.concatenate((Y_train,Y_train,Y_train),axis=-1)\nY_train_lossModel = lossModel.predict(triple_Y_train)\n#the output will be a list of numpy arrays, one for each of the selected layers   \n
\n\n
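The shape effect of the channel tripling can be checked quickly with toy data:

```python
import numpy as np

Y_train = np.random.rand(8, 64, 64, 1)   # toy grayscale targets
triple = np.concatenate((Y_train, Y_train, Y_train), axis=-1)
print(triple.shape)   # (8, 64, 64, 3): three identical channels
```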

Make sure you make each layer of lossModel non trainable before fullModel.compile().

\n\n

If you want 'mse' for all outputs, you just do:

\n\n
fullModel.compile(loss='mse', ...)\n
\n\n

If you want a different loss for each layer, pass a list of losses:

\n\n
fullModel.compile(loss=[loss1,loss2,loss3,...], ...)\n
\n\n

Additional considerations

\n\n

Since VGG is supposed to work with images in the caffe format, you might want to add a few layers after mainModel to make the output suitable. It's not absolutely required, but it would get the best performance from VGG.

\n\n

See how keras transforms an input image ranging from 0 to 255 into the caffe format here, at line 15 or 44.

\n", + "system": "" + }, + { + "instruction": "Keras crossentropy", + "input": "", + "output": "

Keras backend functions such K.categorical_crossentropy expect tensors.

\n\n

It's not obvious from your question what type label is. However, we know that model.predict always returns NumPy ndarrays, so we know label_pred is not a tensor. It is easy to convert, e.g. (assuming label is already a tensor),

\n\n
custom_entropy(label, K.constant(label_pred))\n
\n\n

Since the output of this function is a tensor, to actually evaluate it, you'd call

\n\n
K.eval(custom_entropy(label, K.constant(label_pred)))\n
\n\n

Alternatively, you can just use model as an op, and calling it on a tensor results in another tensor, i.e.

\n\n
label_pred = model(K.constant(mfsc_train[:,:,5]))\ncc = custom_entropy(label, label_pred)\nce = K.categorical_crossentropy(label, label_pred)\n
\n\n

Now label_pred, cc and ce will all be tensors.

\n", + "system": "" + }, + { + "instruction": "What is the meaning of rank 4 of data In the flow method of ImageDataGenerator (Keras) which has argument x", + "input": "", + "output": "

The fourth dimension is the number of samples in a batch.\nSee the data_format explanation at https://keras.io/preprocessing/image/
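As an illustration with made-up shapes, the rank-4 array that flow expects (channels_last format) looks like this:

```python
import numpy as np

# (samples, height, width, channels): 32 RGB images of 64x64 -- rank 4
x = np.random.rand(32, 64, 64, 3)

# flow() slices the first (samples) axis into batches;
# e.g. batch_size=8 gives 4 batches, each of shape (8, 64, 64, 3)
n_batches = x.shape[0] // 8
```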

\n", + "system": "" + }, + { + "instruction": "Use keras(TensorFlow) to build a Conv2D+LSTM model", + "input": "", + "output": "

A Conv2D layer requires four dimensions, not three:

\n\n\n\n

And the TimeDistributed will require an additional dimension:

\n\n\n\n

So, if you're really going to work with TimeDistributed+Conv2D, you need 5 dimensions. Your input_shape=(86,28,28,3), or your batch_input_shape=(batch_size,86,28,28,3), where I assumed you've got an RGB video (3 color channels).

\n\n

Usually, you just pass an input shape to the TimeDistributed.

\n\n
model.add(TimeDistributed(Dense(....), input_shape=(86,28,28,3))\n
\n\n

You will need the batch_input_shape only in the case of using stateful=True LSTM's. Then you just replace the input_shape with the batch_input_shape.

\n\n
\n\n

Notice that only the convolutional 2D layers will see images in terms of height and width. When you add the LSTM's, you will need to reshape the data to bring height, width and channels into a single dimension.

\n\n

For a shape (frames, h, w, ch):

\n\n
model.add(Reshape((frames,h*w*ch)))\n
\n\n

And you should not use TimeDistributed with these LSTMs, only with the convolutional layers.

\n\n

Your approach of using model.add(TimeDistributed(Flatten())) is ok instead of the reshape.

\n\n
\n\n

Notice also that Keras has recently implemented a ConvLSTM2D layer, which might be useful in your case: https://keras.io/layers/recurrent/#convlstm2d

\n", + "system": "" + }, + { + "instruction": "Keras-vis gives following error: AttributeError: Multiple inbound nodes", + "input": "", + "output": "

I had a similar error for a very similar network with two output nodes, dense_1_1/Relu:0 and sequential_2/dense_1/Relu:0. The solution for me was to go to losses.py and change layer_output = self.layer.output to layer_output = self.layer.get_output_at(-1). This is more of a workaround than a solution. When there is only one output node, taking the last one with [-1] is fine, and when there were two nodes, taking the last one worked for me too. But this should point you in the right direction: also try layer_output = self.layer.get_output_at(0) or other nodes if you have them.\nThere is a relevant open issue here.

\n", + "system": "" + }, + { + "instruction": "how use grid search with fit generator in keras", + "input": "", + "output": "

I'm using this implementation; I hope it helps you.

\n\n
from keras.preprocessing.image import ImageDataGenerator\nfrom keras.callbacks import EarlyStopping, ModelCheckpoint, CSVLogger\nfrom keras.wrappers.scikit_learn import KerasClassifier\n\nimport types\n\n\nclass KerasBatchClassifier(KerasClassifier):\n\n    def fit(self, X, y, **kwargs):\n\n        # taken from keras.wrappers.scikit_learn.KerasClassifier.fit ###################################################\n        if self.build_fn is None:\n            self.model = self.__call__(**self.filter_sk_params(self.__call__))\n        elif not isinstance(self.build_fn, types.FunctionType) and not isinstance(self.build_fn, types.MethodType):\n            self.model = self.build_fn(**self.filter_sk_params(self.build_fn.__call__))\n        else:\n            self.model = self.build_fn(**self.filter_sk_params(self.build_fn))\n\n        loss_name = self.model.loss\n        if hasattr(loss_name, '__name__'):\n            loss_name = loss_name.__name__\n\n        if loss_name == 'categorical_crossentropy' and len(y.shape) != 2:\n            y = to_categorical(y)\n\n        ################################################################################################################\n\n\n        datagen = ImageDataGenerator(\n            rotation_range=45,\n            width_shift_range=0.2,\n            height_shift_range=0.2,\n            shear_range=0.2,\n            zoom_range=0.2,\n            horizontal_flip=True,\n            fill_mode='nearest'\n        )\n\n        if 'X_val' in kwargs and 'y_val' in kwargs:\n            X_val = kwargs['X_val']\n            y_val = kwargs['y_val']\n\n            val_gen = ImageDataGenerator(\n                horizontal_flip=True\n            )\n            val_flow = val_gen.flow(X_val, y_val, batch_size=32)\n            val_steps = len(X_val) / 32\n\n            early_stopping = EarlyStopping( patience=5, verbose=5, mode=\"auto\")\n            model_checkpoint = ModelCheckpoint(\"results/best_weights.{epoch:02d}-{loss:.5f}.hdf5\", 
verbose=5, save_best_only=True, mode=\"auto\")\n        else:\n            val_flow = None\n            val_steps = None\n            early_stopping = EarlyStopping(monitor=\"acc\", patience=3, verbose=5, mode=\"auto\")\n            model_checkpoint = ModelCheckpoint(\"results/best_weights.{epoch:02d}-{loss:.5f}.hdf5\", monitor=\"acc\", verbose=5, save_best_only=True, mode=\"auto\")\n\n        callbacks = [early_stopping, model_checkpoint]\n\n        epochs = self.sk_params['epochs'] if 'epochs' in self.sk_params else 100\n\n        self.__history = self.model.fit_generator(\n            datagen.flow(X, y, batch_size=32),  \n            steps_per_epoch=len(X) / 32,\n            validation_data=val_flow, \n            validation_steps=val_steps, \n            epochs=epochs,\n            callbacks=callbacks\n        )\n\n        return self.__history\n\n    def score(self, X, y, **kwargs):\n        kwargs = self.filter_sk_params(Sequential.evaluate, kwargs)\n\n        loss_name = self.model.loss\n        if hasattr(loss_name, '__name__'):\n            loss_name = loss_name.__name__\n        if loss_name == 'categorical_crossentropy' and len(y.shape) != 2:\n            y = to_categorical(y)\n        outputs = self.model.evaluate(X, y, **kwargs)\n        if type(outputs) is not list:\n            outputs = [outputs]\n        for name, output in zip(self.model.metrics_names, outputs):\n            if name == 'acc':\n                return output\n        raise Exception('The model is not configured to compute accuracy. '\n                        'You should pass `metrics=[\"accuracy\"]` to '\n                        'the `model.compile()` method.')\n\n    @property\n    def history(self):\n        return self.__history\n
\n\n

As you can see, it's specific to images, but you can adapt it to your specific needs.

\n\n

I'm using it like this:

\n\n
from sklearn.model_selection import GridSearchCV\n\nmodel = KerasBatchClassifier(build_fn=create_model, epochs=epochs)\n\nlearn_rate   = [0.001, 0.01, 0.1]\nepsilon      = [None, 1e-2, 1e-3]\ndropout_rate = [0.25, 0.5]\n\nparam_grid   = dict(learn_rate=learn_rate, epsilon=epsilon, dropout_rate=dropout_rate)\n\ngrid = GridSearchCV(estimator=model, param_grid=param_grid)\n\ngrid_result = grid.fit(X_train, Y_train, X_val = X_test, y_val = Y_test)\n
\n", + "system": "" + }, + { + "instruction": "Speeding up inference of Keras models", + "input": "", + "output": "

The VGG16/VGG19 architecture is very slow since it has lots of parameters. Check this answer.

\n\n

Before any other optimization, try to use a simpler network architecture.

\n\n

Google's MobileNet seems like a good candidate since it's implemented on Keras and it was designed for more limited devices.

\n\n

If you can't use a different network, you can compress the network with pruning. This blog post specifically does pruning with Keras.

\n", + "system": "" + }, + { + "instruction": "Keras image augmentation: How to choose "steps per epoch" parameter and include specific augmentations during training?", + "input": "", + "output": "

Some time ago I asked myself the same questions, and I think a possible explanation is here:

\n\n

Consider this example:

\n\n
    aug = ImageDataGenerator(rotation_range=90, width_shift_range=0.1, \n                             height_shift_range=0.1, shear_range=0.2, \n                             zoom_range=0.2, horizontal_flip=True, \n                             fill_mode=\"nearest\")\n
\n\n

For question 1): I specify rotation_range=90, which means that while you flow (retrieve) the data, the generator will randomly rotate each image by an angle between 0 and 90 degrees. You cannot specify an exact angle, because generating random rotations is exactly what ImageDataGenerator does. This is also important for your second question.

\n\n

For question 2): Yes, it still applies to the data augmentation method. I was also confused in the beginning. The reason is that, since the images are generated randomly, in each epoch the network sees images that differ from those in the previous epoch. That's why the data is \"augmented\": the augmentation is not done within an epoch, but throughout the entire training process. However, I have seen other people specifying 2x the original steps_per_epoch value.
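In other words, the usual formula for steps_per_epoch still applies even with augmentation; a sketch with made-up numbers:

```python
n_samples = 50_000   # size of the original (un-augmented) training set
batch_size = 32

# One epoch is still one pass over n_samples images; each pass simply
# yields freshly augmented variants, so no multiplier is strictly needed.
steps_per_epoch = n_samples // batch_size
```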

\n\n

Hope this helps

\n", + "system": "" + }, + { + "instruction": "Structure a Keras Tensorboard graph", + "input": "", + "output": "

You can create a name scope to group layers in your model using with K.name_scope('name_scope').

\n\n

Example:

\n\n
with K.name_scope('CustomLayer'):\n  # add first layer in new scope\n  x = GlobalAveragePooling2D()(x)\n  # add a second fully connected layer\n  x = Dense(1024, activation='relu')(x)\n
\n\n

Thanks to \nhttps://github.com/fchollet/keras/pull/4233#issuecomment-316954784

\n", + "system": "" + }, + { + "instruction": "Why does the call method gets called at build time in Keras layers", + "input": "", + "output": "

It is called when you add the layer to the model, to check whether the shape is valid.

\n\n

If you want to see where it is called, change your code:

\n\n
import sys\nimport traceback\n\nclass MyLayer(Layer):\n    ....\n    def call(self, x):\n         traceback.print_stack(file=sys.stdout)\n         return K.dot(x, self.kernel)\n
\n", + "system": "" + }, + { + "instruction": "Keras - use part of the input at later stage in sequential model", + "input": "", + "output": "

You need to use the Functional API with a multi-input model.

\n\n

An example could be:

\n\n
from keras.layers import Input, Conv1D, Dense, concatenate\nfrom keras.models import Model\n\n#Image data\nconv_input = Input(shape=conv_input_shape)\nconv_output = Conv1D(nfilters,kernel_shape)(conv_input)\n\n#Metadata\nmetadata_input = Input(shape=metadata_shape)\n\n#Merge and add dense layer\nmerge_layer = concatenate([metadata_input, conv_output])\nmain_output = Dense(dense_size)(merge_layer)\n\n# Define model with two inputs\nmodel = Model(inputs=[conv_input, metadata_input], outputs=[main_output])\n
\n\n

Hope this helps!

\n", + "system": "" + }, + { + "instruction": "the same model converged in keras but not tensorflow, how is that possible?", + "input": "", + "output": "

Check the default values of your operations. In my case the Adadelta optimizer in Keras had an initial learning rate of 1.0, while in tf.keras it was 0.001, so on the MNIST dataset it converged much more slowly.

\n", + "system": "" + }, + { + "instruction": "Matrix multiplication in Keras", + "input": "", + "output": "

You need to use a variable instead of a place holder.

\n\n
import keras.backend as K\nimport numpy as np\nA = np.random.rand(10,500)\nB = np.random.rand(500,6000)\n\nx = K.variable(value=A)\ny = K.variable(value=B)\n\nz = K.dot(x,y)\n\n# Here you need to use K.eval() instead of z.eval() because this uses the backend session\nK.eval(z)\n
\n", + "system": "" + }, + { + "instruction": "Keras: Why my val_acc suddenly drops at Epoch 42/50?", + "input": "", + "output": "

Thanks Marcin Mo\u017cejko for pointing me in the right direction.

\n\n

This can happen at very high learning rates: the loss can start increasing after some epochs, as described here.\nReducing the learning rate worked, as described in the keras callbacks documentation.

\n\n

Example:

\n\n
 reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,\n                  patience=5, min_lr=0.001)\n    model.fit(X_train, Y_train, callbacks=[reduce_lr])\n
\n", + "system": "" + }, + { + "instruction": "Does LSTM in Keras support dynamic sentence length or not?", + "input": "", + "output": "

Of course - as you mentioned, it's possible to feed varying-length sequences to a Keras model. The catch is that within a single batch all sequences must have the same length. In this sense, it's impossible to implement fully dynamic sequence-length support.

\n\n

If you consider this case more deeply, you'll see that even though you could feed varying-length sequences in different batches, this could actually introduce some bias into your training process, as sequences of the same or similar length are always fed together.
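A common way to set this up (a sketch, not from Keras itself) is to pad sequences per batch, so each batch only pays for its own longest sequence:

```python
import numpy as np

def pad_batch(sequences, pad_value=0):
    """Pad a list of 1-D sequences to the longest length in this batch only."""
    maxlen = max(len(seq) for seq in sequences)
    out = np.full((len(sequences), maxlen), pad_value)
    for i, seq in enumerate(sequences):
        out[i, :len(seq)] = seq
    return out

batch = [[1, 2, 3], [4, 5], [6]]
padded = pad_batch(batch)   # shape (3, 3); shorter rows zero-padded at the end
```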

\n", + "system": "" + }, + { + "instruction": "Calling a stateful LSTM as a functional model?", + "input": "", + "output": "

Following the Note on using statefulness in RNNs from this link and the Keras implementation, the answer is yes if:

\n\n
    \n
  1. The batch_size in both models is the same (it's important due to the way Keras computes the inner states).
  2. \n
  3. You would first build and compile both models and then use them - for some reason Keras is resetting the inner states during the build of a layer (you can check it here by looking for reset_states method).
  4. \n
\n\n

If you want to reset states, you can call the reset_states method on each recurrent layer you want to reset.

\n", + "system": "" + }, + { + "instruction": "tensorflow theano.tensor.set_subtensor equivalent", + "input": "", + "output": "

TensorFlow tensors are read-only. In order to modify things you need to use variables and .assign (= cannot be overridden in Python).

\n\n
tensor = tf.Variable(tf.ones((3,3)))\nsess.run(tf.initialize_all_variables())\nsess.run(tensor[1:, 1:].assign(2*tensor[1:,1:]))\nprint(tensor.eval())\n
\n\n

Output

\n\n
[[ 1.  1.  1.]\n [ 1.  2.  2.]\n [ 1.  2.  2.]]\n
\n", + "system": "" + }, + { + "instruction": "Keras LSTM input dimensions with one hot text embedding", + "input": "", + "output": "

I have two suggestions.

\n\n
    \n
  1. Use a one-hot representation for the target variable (y) as well. If you give y as an integer, it becomes a regression problem. Only if you give a one-hot encoding does it become a classification problem.
  2. \n
  3. Try word2vec embeddings instead of one-hot embeddings when you have a large amount of text.
  4. \n
\n\n
\noptimizer = RMSprop(lr=0.01) \nembedding_vecor_length = 32\nmax_review_length = 28\nnb_classes= 8\nmodel = Sequential()\nmodel.add(Embedding(input_dim=900, output_dim=embedding_vecor_length,\n                    input_length=max_review_length)) \n\nmodel.add(LSTM(150))\n\n#output_dim is a categorical variable with 8 classes\nmodel.add(Dense(output_dim=nb_classes, activation='softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])\nprint(model.summary())\n\nmodel.fit(X_train, y_train, nb_epoch=3, batch_size=64)\n\n# Final evaluation of the model\nscores = model.evaluate(X_test, y_test, verbose=0)\nprint(\"Accuracy: %.2f%%\" % (scores[1]*100))\n\n
\n", + "system": "" + }, + { + "instruction": "L2 normalised output with keras", + "input": "", + "output": "

I found the problem!

\n

So I am using TensorFlow as a backend, and K.l2_normalize(x, axis) calls tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None). Notice that this method has one extra parameter, epsilon, and it looks as follows:

\n
with ops.name_scope(name, "l2_normalize", [x]) as name:\n   x = ops.convert_to_tensor(x, name="x")\n   square_sum = math_ops.reduce_sum(math_ops.square(x), dim, keep_dims=True)\n   x_inv_norm = math_ops.rsqrt(math_ops.maximum(square_sum, epsilon))\nreturn math_ops.mul(x, x_inv_norm, name=name)\n
\n

So if the output of the net contains numbers whose squared sum is lower than epsilon (set to 1e-12 by default), then it is not normalized correctly, which is what happens in my case.
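The effect is easy to reproduce with plain NumPy by re-implementing the formula above (this mirrors, but is not, the TensorFlow code):

```python
import numpy as np

def l2_normalize(x, epsilon=1e-12):
    # re-implementation of x * rsqrt(max(sum(x^2), epsilon))
    square_sum = np.sum(np.square(x))
    return x / np.sqrt(np.maximum(square_sum, epsilon))

tiny = np.array([1e-8, 1e-8])
norm_default = np.linalg.norm(l2_normalize(tiny))            # ~0.014: not unit length
norm_small_eps = np.linalg.norm(l2_normalize(tiny, 1e-20))   # ~1.0: normalized
```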

\n", + "system": "" + }, + { + "instruction": "Keras model.compile: metrics to be evaluated by the model", + "input": "", + "output": "

There are two types of metrics that you can provide.
\nFirst are the ones provided by Keras, which you can find here. You pass them as strings like 'mae', or you can define them like

\n\n
from keras import metrics\nmodel.compile(loss='mean_squared_error',\n              optimizer='sgd',\n              metrics=[metrics.mae, metrics.categorical_accuracy])\n# or equivalently:\n# metrics=['mae', 'categorical_accuracy']\n
\n

Second is custom metrics like this

\n
import keras.backend as K\n\ndef mean_pred(y_true, y_pred):\n    return K.mean(y_pred)\n\nmodel.compile(optimizer='rmsprop',\n              loss='binary_crossentropy',\n              metrics=['accuracy', mean_pred])\n
\n

Here mean_pred is the custom metric. Note the difference between specifying the readily available metrics and custom-defined metrics: fmeasure is not readily available, so you have to define it as a custom function.

\n", + "system": "" + }, + { + "instruction": "How to perform deconvolution in Keras/ Theano?", + "input": "", + "output": "

You can easily use Deconvolution2D layer.

\n\n

Here is what you are trying to achieve:

\n\n
batch_sz = 1\noutput_shape = (batch_sz, ) + X_train.shape[1:]\nconv_out = Deconvolution2D(3, 3, 3, output_shape, border_mode='same')(model.layers[0].output)\n\ndeconv_func = K.function([model.input, K.learning_phase()], [conv_out])\n\ntest_x = np.random.random(output_shape)\nX_deconv = deconv_func([test_x, 0 ])\n
\n\n

But it's better to create a functional model, which will help with both training and prediction.

\n\n
batch_sz = 10\noutput_shape = (batch_sz, ) + X_train.shape[1:]\nconv_out = Deconvolution2D(3, 3, 3, output_shape, border_mode='same')(model.layers[0].output)\n\nmodel2 = Model(model.input, [model.output, conv_out])\nmodel2.summary()\nmodel2.compile(loss=['categorical_crossentropy', 'mse'], optimizer='adam')\nmodel2.fit(X_train, [Y_train, X_train], batch_size=batch_sz)\n
\n", + "system": "" + }, + { + "instruction": "Using Keras, how can I input an X_train of images (more than a thousand images)?", + "input": "", + "output": "

This Keras blog post, Building powerful image classification models using very little data, is an excellent tutorial for training a model on images stored in directories. It also introduces the ImageDataGenerator class, which has the member function flow_from_directory referenced in @isaac-moore's answer. flow_from_directory can be used to train on images, where the directory structure is used to deduce the value of Y_train.

\n\n

The three python scripts that accompany the tutorial blog post can be found at the links below:

\n\n
    \n
  1. classifier_from_little_data_script_1.py
  2. \n
  3. classifier_from_little_data_script_2.py
  4. \n
  5. classifier_from_little_data_script_3.py
  6. \n
\n\n

(Of course, these links are in the blog post itself, but the links are not centrally located.) Note that scripts 2 and 3 build on the output of the previous. Also, note that additional files will need to be downloaded from Kaggle and Github.

\n", + "system": "" + }, + { + "instruction": "Keras Neural Networks and SKlearn SVM.SVC", + "input": "", + "output": "

This is completely possible and actually quite common. You just select the output of a layer of the neural network and use that as a feature vector to train a SVM. Generally one normalizes the feature vectors as well.

\n\n

Features learned by (Convolutional) Neural Networks are powerful enough that they generalize to different kinds of objects and even completely different images. For examples see the paper CNN Features off-the-shelf: an Astounding Baseline for Recognition.

\n\n

About implementation, you just have to train a neural network, then select one of the layers (usually the one right before the fully connected layers, or the first fully connected one), run the neural network on your dataset, store all the feature vectors, then train an SVM with a different library (e.g. sklearn).
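A sketch of that last step, with random arrays standing in for the stored feature vectors and labels (in practice these would come from running the network over your dataset):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 128))   # stand-in for stored layer activations
labels = rng.integers(0, 2, size=200)    # stand-in class labels

X = StandardScaler().fit_transform(features)   # normalize the feature vectors
clf = SVC(kernel='linear').fit(X, labels)
train_acc = clf.score(X, labels)
```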

\n", + "system": "" + }, + { + "instruction": "How can I implement max norm constraints in an MLP in tensorflow?", + "input": "", + "output": "

tf.nn.dropout does not impose any norm constraint. I believe what you're looking for is to \"process the gradients before applying them\" using tf.clip_by_norm.

\n\n

For example, instead of simply:

\n\n
# Create an optimizer + implicitly call compute_gradients() and apply_gradients()\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)\n
\n\n

You could:

\n\n
# Create an optimizer.\noptimizer = tf.train.GradientDescentOptimizer(learning_rate)\n# Compute the gradients for a list of variables.\ngrads_and_vars = optimizer.compute_gradients(loss, [weights1, weights2, ...])\n# grads_and_vars is a list of tuples (gradient, variable).\n# Do whatever you need to the 'gradient' part, for example cap them, etc.\ncapped_grads_and_vars = [(tf.clip_by_norm(gv[0], clip_norm=123.0, axes=0), gv[1])\n                         for gv in grads_and_vars]\n# Ask the optimizer to apply the capped gradients\noptimizer = optimizer.apply_gradients(capped_grads_and_vars)\n
\n\n

I hope this helps. Final notes about tf.clip_by_norm's axes parameter:

\n\n
    \n
  1. If you're calculating tf.nn.xw_plus_b(x, weights, biases), or equivalently matmul(x, weights) + biases, when the dimensions of x and weights are (batch, in_units) and (in_units, out_units) respectively, then you probably want to set axes == [0] (because in this usage each column details all incoming weights to a specific unit).
  2. \n
  3. Pay attention to the shape/dimensions of your variables above and whether/how exactly you want to clip_by_norm each of them! E.g. if some of [weights1, weights2, ...] are matrices and some aren't, and you call clip_by_norm() on the grads_and_vars with the same axes value like in the List Comprehension above, this doesn't mean the same thing for all the variables! In fact, if you're lucky, this will result in a weird error like ValueError: Invalid reduction dimension 1 for input with 1 dimensions, but otherwise it's a very sneaky bug.
  4. \n
\n", + "system": "" + }, + { + "instruction": "How to configure a very simple LSTM with Keras / Theano for Regression", + "input": "", + "output": "

In your model definition you placed a Dense layer before the LSTM layer. You need to wrap the Dense layer in a TimeDistributed layer.

\n\n

Try to change

\n\n
model = Sequential([\n    Dense(8, input_dim=nb_features),\n    Activation('softmax'),\n    LSTM(4, dropout_W=0.2, dropout_U=0.2),\n    Dense(1),\n    Activation('sigmoid')\n])\n
\n\n

to

\n\n
model = Sequential([\n    TimeDistributed(Dense(8, input_dim=nb_features, activation='softmax')),\n    LSTM(4, dropout_W=0.2, dropout_U=0.2),\n    Dense(1),\n    Activation('sigmoid')\n])\n
\n", + "system": "" + }, + { + "instruction": "How to input data into Keras? Specifically what is the x_train and y_train if I have more than 2 columns?", + "input": "", + "output": "

It all depends on your need.

\n\n

It looks like you want to predict the winner based on the parameters shown in columns A - N. Then you should define input_dim to be 14, and X_train should be an (N,14) numpy array like this:

\n\n
[\n   [9278,  37.9, ...],\n   [18594, 36.3, ...],\n   ...\n]\n
\n\n

It seems that your prediction set only contains 2 items (2 presidential candidates, LOL), so you should encode the answer Y_train as an (N,2) numpy array like this:

\n\n
[\n   [1, 0],\n   [1, 0],\n   ...\n   [0, 1],\n   [0, 1],\n   ...\n]\n
\n\n

where [1,0] indicates that Barack Obama is the winner and vice versa.
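For example, a hypothetical snippet building that (N,2) array from winner names with NumPy:

```python
import numpy as np

winners = ['Obama', 'Romney', 'Obama']   # hypothetical per-row labels
classes = {'Obama': 0, 'Romney': 1}

# one row per sample: [1, 0] means Obama won, [0, 1] means Romney won
Y_train = np.eye(2)[[classes[w] for w in winners]]
```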

\n", + "system": "" + }, + { + "instruction": "Tensorflow: The channel dimension of the inputs should be defined", + "input": "", + "output": "

Define the input shape directly in the normalization layer (or add an Input layer), since it cannot be inferred directly:

\n
import numpy as np\nimport tensorflow as tf\n\nx_train = np.random.rand(10, 128, 128, 1)\nnormalizer = tf.keras.layers.Normalization(input_shape=[128, 128, 1], axis=-1)\nnormalizer.adapt(x_train)\n\n\ndef build_and_compile_model(norm):\n    model = tf.keras.Sequential([\n        norm,\n        tf.keras.layers.Conv2D(64, 64, activation='relu'),\n        tf.keras.layers.Conv2D(3, 3, activation='relu'),\n        tf.keras.layers.Flatten(),\n        tf.keras.layers.Dense(units=32, activation='relu'),\n        tf.keras.layers.Dense(units=1)\n    ])\n\n    model.compile(loss='mean_absolute_error', optimizer=tf.keras.optimizers.Adam(0.001))\n\n    return model\n\n\ndnn_model = build_and_compile_model(normalizer)\ndnn_model.summary()\n
\n

Also, your model does not work as it is: you are using a kernel size of 128 in your first Conv2D layer and then another Conv2D layer with a kernel size of 3, but your data has the shape (10, 128, 128, 1). I changed it to make your code executable.

\n", + "system": "" + }, + { + "instruction": "module 'keras.utils.generic_utils' has no attribute 'get_custom_objects' when importing segmentation_models", + "input": "", + "output": "

Thanks, it works, but I just did this:

\n
import os\nos.environ["SM_FRAMEWORK"] = "tf.keras"\n\nfrom tensorflow import keras\nimport segmentation_models as sm\n
\n", + "system": "" + }, + { + "instruction": "Why GPU is 3.5 times slower than the CPU on Apple M1 Mac?", + "input": "", + "output": "

Last week I found the same issue and it confused me a lot too. In my case CPU training took ~7 sec and GPU ~100 sec, so the GPU was 14 times slower! It was a simple ANN, but on a CNN I found that the GPU was about 20% faster than the CPU.

\n

I think it depends on your input size. GPU cores are much slower than CPU cores, but the main advantage of a GPU is that you can run thousands of threads simultaneously. On a CPU you're limited by the number of cores, and besides, even though the M1 has 8 cores, only 4 of them can work simultaneously.

\n

So if your training batch is small enough, you won't benefit from the GPU because many threads will go unused; due to the GPU architecture, they can't each process a separate batch. I suggest you test GPU and CPU performance on a small number of epochs and then choose the faster unit.

\n

You don't need to uninstall tensorflow-metal to use only CPU. You can simply call

\n
tf.config.set_visible_devices([], 'GPU')\n
\n

before compiling the NN. This command will remove all the GPUs from visible devices for TensorFlow so training will use only CPU.

\n", + "system": "" + }, + { + "instruction": "How is the Keras Conv1D input specified? I seem to be lacking a dimension", + "input": "", + "output": "

Don't let the name confuse you. The layer tf.keras.layers.Conv1D needs the following shape: (time_steps, features). If your dataset is made of 10,000 samples with each sample having 64 values, then your data has the shape (10000, 64), which is not directly applicable to the tf.keras.layers.Conv1D layer. You are missing the time_steps dimension. What you can do is use the tf.keras.layers.RepeatVector, which repeats your array input n times, in the example 5. This way your Conv1D layer gets an input of the shape (5, 64). Check out the documentation for more information:

\n
\ntime_steps = 5\nmodel = tf.keras.Sequential()\nmodel.add(tf.keras.layers.Input(shape=(64,), name="input"))\nmodel.add(tf.keras.layers.RepeatVector(time_steps))\nmodel.add(tf.keras.layers.Conv1D(64, 2, activation="relu", padding="same", name="convLayer"))\n\n
\n

As a side note, you should ask yourself if using a tf.keras.layers.Conv1D layer is the right option for your use case. This layer is usually used for NLP and other time series tasks. For example, in sentence classification, each word in a sentence is usually mapped to a high-dimensional word vector representation, as seen in the image. This results in data with the shape (time_steps, features).

\n

[image: each word of a sentence mapped to a word-vector, giving a sample of shape (time_steps, features)]

\n

If you want to use character one hot encoded embeddings it would look something like this:

\n

[image: a single sample of one-hot encoded characters, shape (10, 10)]

\n

This is a simple example of one single sample with the shape (10, 10) --> 10 characters along the time series dimension and 10 features. It should help you understand the tutorial I mentioned a bit better.

\n", + "system": "" + }, + { + "instruction": "training = False in Keras Sequential API", + "input": "", + "output": "

You can try the following; it worked for me:

\n

Load any base model with include_top=False, then (as an example):

\n
 flat1 = Flatten()(base_model.layers[-1].output, training=False)\n    \n class1 = Dense(5, activation='relu')(flat1)\n    \n output = Dense(1, activation='sigmoid')(class1) \n\n model = Model(inputs=base_model.inputs, outputs=output) \n
\n", + "system": "" + }, + { + "instruction": "Keras - no good way to stop and resume training?", + "input": "", + "output": "

You're right, there isn't built-in support for resumability - which is exactly what motivated me to create DeepTrain. It's like Pytorch Lightning (better and worse in different regards) for TensorFlow/Keras.

\n

Why another library? Don't we have enough? You have nothing like this; if there were, I wouldn't have built it. DeepTrain is tailored for the "babysitting approach" to training: train fewer models, but train them thoroughly. Closely monitor each stage to diagnose what's wrong and how to fix it.

\n

Inspiration came from my own use; I'd see "validation spikes" throughout a long epoch, and couldn't afford to pause as it'd restart the epoch or otherwise disrupt the train loop. And forget knowing which batch you were fitting, or how many remain.

\n

How does it compare to Pytorch Lightning? Superior resumability and introspection, along with unique train debug utilities - but Lightning fares better in other regards. I have a comprehensive comparison in the works and will post it within a week.

\n

Pytorch support coming? Maybe. If I convince the Lightning dev team to make up for its shortcomings relative to DeepTrain, then not - otherwise probably. In the meantime, you can explore the gallery of Examples.

\n
\n

Minimal example:

\n
from tensorflow.keras.layers import Input, Dense\nfrom tensorflow.keras.models import Model\nfrom deeptrain import TrainGenerator, DataGenerator\n\nipt = Input((16,))\nout = Dense(10, 'softmax')(ipt)\nmodel = Model(ipt, out)\nmodel.compile('adam', 'categorical_crossentropy')\n\ndg  = DataGenerator(data_path="data/train", labels_path="data/train/labels.npy")\nvdg = DataGenerator(data_path="data/val",   labels_path="data/val/labels.npy")\ntg  = TrainGenerator(model, dg, vdg, epochs=3, logs_dir="logs/")\n\ntg.train()\n
\n

You can KeyboardInterrupt at any time, inspect the model, train state, data generator - and resume.

\n", + "system": "" + }, + { + "instruction": "How to predict a single image with Keras ImageDataGenerator?", + "input": "", + "output": "

When you wish to predict a single image, you can use the following code.

\n

It returns a list of probabilities of each class based on how the folders (classes) in your train dataset were arranged. So the first index of the returned list is the first folder (or class) in your train dataset and so on. The index with the highest probability is your predicted class.

\n
from keras.preprocessing.image import load_img, img_to_array, ImageDataGenerator\nfrom keras.applications.vgg16 import preprocess_input\n\n#load the image\nmy_image = load_img('your_single_image.jpeg', target_size=(224, 224))\n\n#preprocess the image\nmy_image = img_to_array(my_image)\nmy_image = my_image.reshape((1, my_image.shape[0], my_image.shape[1], my_image.shape[2]))\nmy_image = preprocess_input(my_image)\n\n#make the prediction\nprediction = model.predict(my_image)\n
\n

You can return a much clearer result by rounding the results to whole numbers using the list comprehension below.

\n
import numpy as np\n[np.round(x) for x in prediction]\n
\n

                          The element whose rounded value is 1 indicates your predicted class.
                          

\n", + "system": "" + }, + { + "instruction": "How to scale target values of a Keras autoencoder model using a sklearn pipeline?", + "input": "", + "output": "

You can use TransformedTargetRegressor to apply arbitrary transformations on the target values (i.e. y) by providing either a function (i.e. using func argument) or a transformer (i.e. transformer argument).

\n

                          In this case (i.e. fitting an auto-encoder model), since you want to apply the same StandardScaler instance on the target values as well, you can use the transformer argument. It could be done in one of the following ways:
                          

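                          For illustration, here is a minimal sketch of the transformer-argument approach. A plain scikit-learn LinearRegression stands in for the Keras auto-encoder (an assumption for brevity - the TransformedTargetRegressor mechanics are identical for any estimator), and the targets are the inputs themselves, auto-encoder style:
                          
                          ```python
                          import numpy as np
                          from sklearn.compose import TransformedTargetRegressor
                          from sklearn.linear_model import LinearRegression
                          from sklearn.preprocessing import StandardScaler
                          
                          X = np.random.rand(100, 3)
                          y = X.copy()  # auto-encoder style: the targets are the inputs
                          
                          # The transformer is fit on y, applied before fitting the regressor,
                          # and inverse-applied to predictions, so predict() returns values
                          # on the original target scale.
                          model = TransformedTargetRegressor(
                              regressor=LinearRegression(),
                              transformer=StandardScaler(),
                          )
                          model.fit(X, y)
                          preds = model.predict(X)
                          print(preds.shape)  # (100, 3)
                          ```
                          
                          In your case you would pass the wrapped Keras model as the regressor and your StandardScaler instance as the transformer.
                          
                          
                          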
\n\n", + "system": "" + }, + { + "instruction": "Why it's necessary to frozen all inner state of a Batch Normalization layer when fine-tuning", + "input": "", + "output": "

During training, varying batch statistics act as a regularization mechanism that can improve ability to generalize. This can help to minimize overfitting when training for a high number of iterations. Indeed, using a very large batch size can harm generalization as there is less variation in batch statistics, decreasing regularization.

\n

                          When fine-tuning on a new dataset, batch statistics are likely to be very different if fine-tuning examples have different characteristics to examples in the original training dataset. Therefore, if batch normalization is not frozen, the network will learn new batch normalization parameters (gamma and beta in the batch normalization paper) that are different to what the other network parameters have been optimised for during the original training. Relearning all the other network parameters is often undesirable during fine-tuning, either due to the required training time or small size of the fine-tuning dataset. Freezing batch normalization avoids this issue.
                          

\n", + "system": "" + }, + { + "instruction": "Tensorflow model prediction is slow", + "input": "", + "output": "

                          The whole story lies behind the implementation of the LSTM layer in Keras. The Keras LSTM layer has a default argument unroll=False, which makes the LSTM run a symbolic loop, and that loop takes more time. Try adding the extra argument unroll=True to the LSTM.
                          

\n
tf.keras.layers.LSTM(64, return_sequences=True, stateful=True, unroll=True)\n
\n

                          This may result in up to a 2x speed-up (tested on my machine, using %timeit model(vec, training=False)). However, unroll=True may use more RAM for longer sequences. For more details, please have a look at the Keras LSTM documentation.
                          

\n", + "system": "" + }, + { + "instruction": "How to create a custom PreprocessingLayer in TF 2.2", + "input": "", + "output": "

                          If you want a custom preprocessing layer, you actually don't need to use PreprocessingLayer. You can simply subclass Layer.
                          

\n\n

                          Take the simplest preprocessing layer Rescaling as an example: it lives under the tf.keras.layers.experimental.preprocessing.Rescaling namespace. However, if you check the actual implementation, it just subclasses the Layer class (Source Code Link Here) and carries @keras_export('keras.layers.experimental.preprocessing.Rescaling')
                          

\n\n
@keras_export('keras.layers.experimental.preprocessing.Rescaling')\nclass Rescaling(Layer):\n  \"\"\"Multiply inputs by `scale` and adds `offset`.\n  For instance:\n  1. To rescale an input in the `[0, 255]` range\n  to be in the `[0, 1]` range, you would pass `scale=1./255`.\n  2. To rescale an input in the `[0, 255]` range to be in the `[-1, 1]` range,\n  you would pass `scale=1./127.5, offset=-1`.\n  The rescaling is applied both during training and inference.\n  Input shape:\n    Arbitrary.\n  Output shape:\n    Same as input.\n  Arguments:\n    scale: Float, the scale to apply to the inputs.\n    offset: Float, the offset to apply to the inputs.\n    name: A string, the name of the layer.\n  \"\"\"\n\n  def __init__(self, scale, offset=0., name=None, **kwargs):\n    self.scale = scale\n    self.offset = offset\n    super(Rescaling, self).__init__(name=name, **kwargs)\n\n  def call(self, inputs):\n    dtype = self._compute_dtype\n    scale = math_ops.cast(self.scale, dtype)\n    offset = math_ops.cast(self.offset, dtype)\n    return math_ops.cast(inputs, dtype) * scale + offset\n\n  def compute_output_shape(self, input_shape):\n    return input_shape\n\n  def get_config(self):\n    config = {\n        'scale': self.scale,\n        'offset': self.offset,\n    }\n    base_config = super(Rescaling, self).get_config()\n    return dict(list(base_config.items()) + list(config.items()))\n
\n\n

So it proves that Rescaling preprocessing is just another normal layer.

\n\n

                          The main part is the def call(self, inputs) function. You can implement whatever complicated logic you need to preprocess your inputs there, and then return the result.
                          

\n\n

                          Easier documentation about custom layers can be found here
                          

\n\n

                          In a nutshell, you can do the preprocessing in a layer, either with Lambda for simple operations or by subclassing Layer for more complex goals.
                          

\n", + "system": "" + }, + { + "instruction": "Keras Dense layer Output Shape", + "input": "", + "output": "

                          According to the official documentation of Keras, for a Dense layer, when you give input as input_shape=(input_units,) the model takes as input arrays of shape (*, input_units) and outputs arrays of shape (*, output_units) [in your case input_shape=(784,) is treated as input shape=(*, 784) and output is output_shape=(*,4)]
                          

\n\n

                          In general, for an input dimension of (batch_size, ..., input_dim), the model gives output of size (batch_size, ..., units).
                          

\n\n

                          So when you give input as input_shape=(784,) the model takes as input arrays of shape (*, 784), where * is the batch size and 784 is input_dim, giving output shape (*, 4).
                          

\n\n

                          When the input is (784,1), the model takes it as (*, 784, 1) where * is the batch size, 784 is ... and 1 is input_dim =>(batch_size, ..., input_dim), and outputs (*, 784, 4) => (batch_size, ..., units).
                          

\n\n

Same goes for the input_shape=(32,28)=>(*,32,28), giving output (*,32,4) and for input with input_shape=(32,28,1)=>(*,32,28,1) where again * is the batch_size, 32,28 is ... and 1 is the input_dim =>(batch_size, ..., input_dim)

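                          The rule above can be summed up in a tiny helper (a sketch of the shape bookkeeping, not Keras' actual code): Dense maps only the last axis from input_dim to units and passes every other axis through.
                          
                          ```python
                          # Sketch of the Dense shape rule: only the last axis changes.
                          def dense_output_shape(input_shape, units):
                              """input_shape includes the batch axis (None when unspecified)."""
                              return tuple(input_shape[:-1]) + (units,)
                          
                          print(dense_output_shape((None, 784), 4))        # (None, 4)
                          print(dense_output_shape((None, 784, 1), 4))     # (None, 784, 4)
                          print(dense_output_shape((None, 32, 28), 4))     # (None, 32, 4)
                          print(dense_output_shape((None, 32, 28, 1), 4))  # (None, 32, 28, 4)
                          ```
                          
                          
                          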
\n\n

                          For what None means, please check What is the meaning of the "None" in model.summary of KERAS?
                          

\n", + "system": "" + }, + { + "instruction": "Training stability of Wasserstein GANs", + "input": "", + "output": "
    \n
                            1. You can check Inception Score and Frechet Inception Distance for now (and also here). The problem is that, since GANs don't have a unified objective function (there are two networks), there is no agreed way of evaluating and comparing GAN models. Instead, people devise metrics that relate the image distribution and the generator distribution.
                          

  2. \n
                            3. WGAN could be faster due to having more stable training procedures as opposed to the vanilla GAN (Wasserstein metric, weight clipping, and gradient penalty, if you are using it). I don't know if there's a literature analysis on speed, and WGAN may not always be faster than a simple GAN. Also, WGAN cannot find the best Nash equilibrium the way a GAN can.
                          

  4. \n
                            5. Consider two distributions, p and q. If these distributions overlap, i.e. their domains overlap, then the KL or JS divergence is differentiable. The problem arises when p and q don't overlap. As in the WGAN paper's example, say we have two pdfs in 2D space, V = (0, Z) and Q = (K, Z), where K is different from 0 and Z is sampled from a uniform distribution. If you try to take the derivative of the KL/JS divergence of these two pdfs, you cannot: these divergences reduce to a binary indicator function (equal or not), and we cannot take the derivative of such a function. However, if we use the Wasserstein loss or Earth-Mover distance, we can, since we are approximating it as a distance between two points in space. Short story: the normal GAN loss function is continuous iff the distributions overlap; otherwise, it is discrete.
                          

  6. \n
\n\n

Hope this helps

\n", + "system": "" + }, + { + "instruction": "Tensorflow, Keras: In a multi-class classification, accuracy is high, but precision, recall, and f1-score is zero for most classes", + "input": "", + "output": "

There is a possibility that you are passing binary f1-score to compile function. This should fix the problem -

\n\n
pip install tensorflow-addons\n\n...\n\nimport tensorflow_addons as tfa \n\nf1 = tfa.metrics.F1Score(36,'micro' or 'macro')\n\nmodel.compile(...,metrics=[f1])\n
\n\n

You can read more about how f1-micro and f1-macro is calculated and which can be useful here.

\n", + "system": "" + }, + { + "instruction": "Save keras model weights directly to bytes/memory?", + "input": "", + "output": "

                          weights=model.get_weights() will get the model weights, and model.set_weights(weights) will set them. One of the issues, though, is WHEN you save the model weights. Generally you want to save the weights from the epoch with the lowest validation loss.\nThe Keras callback ModelCheckpoint will save the weights with the lowest validation loss to a file. I found that saving to a file is inconvenient, so I wrote a small custom callback that just saves the weights with the lowest validation loss into a class variable; after training is complete, load those weights into the model to make predictions. Code is shown below. Just add save_best_weights to the list of callbacks when you fit the model.
                          

\n\n
                          class save_best_weights(tf.keras.callbacks.Callback):\n    best_weights = None  # filled in with the weights from the best epoch\n\n    def __init__(self):\n        super(save_best_weights, self).__init__()\n        self.best = np.Inf\n\n    def on_epoch_end(self, epoch, logs=None):\n        current_loss = logs.get('val_loss')\n        accuracy = logs.get('val_accuracy') * 100\n        if np.less(current_loss, self.best):\n            self.best = current_loss\n            save_best_weights.best_weights = self.model.get_weights()\n            print('\\nSaving weights validation loss= {0:6.4f}  validation accuracy= {1:6.3f} %\\n'.format(current_loss, accuracy))\n
                          
\n", + "system": "" + }, + { + "instruction": "In Keras, is there any function similar to the zero_grad() in Pytorch?", + "input": "", + "output": "

                          In Pytorch the gradients are accumulated for every variable, and the loss value is distributed among them all. Then the optimizer is the one in charge of making the update to the model parameters (specified at initialization), and since the gradient values are kept in memory between calls, you have to zero them at the start.
                          

\n
optimizer = torch.optim.Adam(itertools.chain(*param_list), lr=opt.lr, ...)\n...\noptimizer.zero_grad()\nloss = ...\nloss.backward()\noptimizer.step()\n
\n

                          In Keras with gradient tapes you wrap a bunch of operations whose variables you want to compute gradients for. You call the gradient method on the tape, passing the loss value and the variables for which you have to compute the gradient update. The optimizer then just applies each update to its parameter (for the entire list of update-parameter pairs you specified).
                          

\n
with tf.GradientTape() as tape:\n    loss = ...\ngrads = tape.gradient(loss, model.trainable_variables)\noptimizer.apply_gradients(zip(grads, model.trainable_variables))\n
\n

                          Alternatively, you can use the .fit() method, which does all of that under the hood.
                          

\n

                          If your aim is to accumulate the update multiple times, there is no standard method in Keras, but you can do it more easily with tapes by accumulating the update values before applying them (see this https://www.tensorflow.org/api_docs/python/tf/GradientTape#:~:text=To%20compute%20multiple%20gradients%20over%20the%20same%20computation).
                          

\n

A good solution to do it with .fit() is explained here: How to accumulate gradients for large batch sizes in Keras

\n

                          If you want to know more about how parameter gradients are tracked efficiently to distribute the loss value, and to understand the whole process better, have a look at (Wikipedia) Automatic differentiation
                          

\n", + "system": "" + }, + { + "instruction": "tensorflow 2.0, model.fit() : Your input ran out of data", + "input": "", + "output": "

                          If you don't want to use .repeat(), you need to have your model pass through your entire dataset exactly once per epoch.
                          

\n\n

                          In order to do that, you need to calculate how many steps it will take for your model to pass through the entire dataset; the calculation is easy:
                          

\n\n
steps_per_epoch = len(train_dataset) // batch_size\n
\n\n

                          So with a train_dataset of 60,000 samples and a batch_size of 128, you need to have 468 steps per epoch.
                          

\n\n

By setting this parameter like that you make sure that you do not exceed the size of your dataset.

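                          As a quick sanity check of the numbers above, in plain Python:
                          
                          ```python
                          # 60 000 samples with batches of 128: floor division gives the
                          # number of full batches the model can consume per epoch.
                          train_dataset_size = 60_000
                          batch_size = 128
                          
                          steps_per_epoch = train_dataset_size // batch_size
                          print(steps_per_epoch)  # 468
                          ```
                          
                          
                          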
\n", + "system": "" + }, + { + "instruction": "How to set environment variable TF_Keras = 1 for onnx conversion?", + "input": "", + "output": "

Environment variables should be set before importing modules, as modules check the variables at import:

\n\n
import os\nos.environ['TF_KERAS'] = '1'\nimport onnx\n
\n", + "system": "" + }, + { + "instruction": "How can I use a stateful LSTM model to predict without specifying the same batch_size as I trained it?", + "input": "", + "output": "

When stateful=True, batch_size is indeed needed for the model's logic to work properly.

\n\n

However, the weights of your model don't need to know the batch_size at all. So, it would be nice if there was some set_batch_size() method, or even nicer, if fit() and predict() could derive it from the input. But unfortunately, this is not the case.

\n\n

But there is a workaround: Just define another instance of that model and specify batch_size=1 (or whatever number you wish). Then, just assign the trained model's weights to this new model with different batch size:

\n\n
model64 = bi_lstm(vocab_size, embedding_dim, batch_size=64, rnn_units=rnn_units)\nmodel64.fit(...)\n# optional: model64.save_weights('model64_weights.hdf5')\n\nmodel1 = bi_lstm(vocab_size, embedding_dim, batch_size=1, rnn_units=rnn_units)\nmodel1.set_weights(model64.get_weights()) # or: model1.load_weights('model64_weights.hdf5')\nmodel1.predict(...)\n
\n\n

This works because batch_size does not participate in the weights' shapes at all and therefore they are interchangeable.

\n", + "system": "" + }, + { + "instruction": "ValueError: All inputs to `ConcreteFunction`s must be Tensors", + "input": "", + "output": "

                          Since TensorFlow works with Tensors only, it will not accept a Python list as input; as the error also says, you need to convert the list to a Tensor and then feed it.
                          

\n\n

What you can do is define the list as numpy array with something like

\n\n

np_list = np.asarray(sentence_list) and then convert it to tensor using

\n\n

tensor_list = tf.convert_to_tensor(np_list).

\n\n

Read more about them here, np.asarray and convert_to_tensor

\n", + "system": "" + }, + { + "instruction": "How to make sure the training phase won't be facing an OOM?", + "input": "", + "output": "

There's a known memory leak [1] that happens if you train repeatedly in a loop. The solution to that is call tf.keras.backend.clear_session() and possibly gc.collect() every now and then in the loop.

\n\n

The behavior seems to be different in TF 1.15 and 2.0 though and this might not be enough to fix it. I find that tf.keras.backend.clear_session() in my training loop on CPU resets a gradual memory leak without hurting the training.

\n\n

[1] https://github.com/tensorflow/tensorflow/issues/30324

\n", + "system": "" + }, + { + "instruction": "UnboundLocalError: local variable 'batch_index' referenced before assignment", + "input": "", + "output": "

I checked in the training_arrays.py (here) the function in which you got the error and, as I can see, I think the problem could be in these statements (from lines 177 - 205):

\n\n
batches = make_batches(num_train_samples, batch_size)\nfor batch_index, (batch_start, batch_end) in enumerate(batches): # the problem is here\n    # do stuff\n    ...\nif batch_index == len(batches) - 1:\n    # do stuff\n    ...\n
\n\n

                          If batches is an empty list, you could get this error. Could it be that your training set has some problem?
                          

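                          A minimal, framework-free reproduction of that failure mode (the function below is a hypothetical stand-in for the Keras internals quoted above):
                          
                          ```python
                          # If batches is empty, the for-loop body never runs, batch_index is
                          # never bound, and the reference after the loop raises
                          # UnboundLocalError - exactly the error in the question.
                          def train_loop(batches):
                              for batch_index, (batch_start, batch_end) in enumerate(batches):
                                  pass  # ...fit on the batch...
                              if batch_index == len(batches) - 1:  # fails when batches == []
                                  pass
                          
                          train_loop([(0, 32), (32, 64)])  # fine: batch_index gets bound
                          
                          try:
                              train_loop([])  # empty batch list
                              caught = None
                          except UnboundLocalError as err:
                              caught = err
                          print(type(caught).__name__)  # UnboundLocalError
                          ```
                          
                          
                          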
\n", + "system": "" + }, + { + "instruction": "Best practice for feeding spark dataframes for training Tensorflow network", + "input": "", + "output": "

I used tf.data.Dataset to handle this. I can buffer the data coming from spark, and then leave the job of batch creation to tensorflow dataset api. It is now much faster:

\n
                          class MyGenerator(object):\n    def __init__(\n        self, spark_df, buffer_size, feature_col="features", label_col="labels"\n    ):\n        w = Window().partitionBy(sf.lit("a")).orderBy(sf.lit("a"))\n        self.df = (\n            spark_df.withColumn("index", sf.row_number().over(w) - 1)\n            .sort("index")\n            .select([feature_col, "index", label_col])\n        )\n\n        self.feature_col = feature_col\n        self.label_col = label_col\n        self.buffer_size = buffer_size\n\n    def generate_data(self):\n        idx = 0\n        buffer_counter = 1  # window 0 is fetched below, so the next fetch starts at window 1\n        buffer = self.df.filter(\n            f"index >= {idx} and index < {self.buffer_size}"\n        ).toPandas()\n        while len(buffer) > 0:\n            if idx < len(buffer):\n                X = buffer.iloc[idx][self.feature_col].toArray() / 255.0\n                y = buffer.iloc[idx][self.label_col]\n\n                idx += 1\n\n                yield X.reshape((28, 28)), y\n            else:\n                buffer = self.df.filter(\n                    f"index >= {buffer_counter * self.buffer_size} "\n                    f"and index < {(buffer_counter + 1) * self.buffer_size}"\n                ).toPandas()\n                idx = 0\n                buffer_counter += 1\n
                          
\n
batch_size = 128\nbuffer_size = 4*1024\n\nmy_gen = MyGenerator(feature_df, buffer_size)\ndataset = tf.data.Dataset.from_generator(my_gen.generate_data, output_types=(tf.float32, tf.int32))\ndataset = dataset.prefetch(tf.contrib.data.AUTOTUNE).batch(batch_size, drop_remainder=True)\n
\n", + "system": "" + }, + { + "instruction": "Cannot Import Name 'keras_export' From 'tensorflow.python.util.tf_export'", + "input": "", + "output": "

                          TensorFlow is not uninstalled when tensorflow-gpu is installed; that is what causes the problem when importing keras_export. So if we uninstall tensorflow, the problem is solved.
                          

\n
\n

tensorflow_version 2.x
\n!pip uninstall -y tensorflow
\n!pip install tensorflow-gpu==1.14.0

\n
\n

Hope it works!!

\n", + "system": "" + }, + { + "instruction": "How is learning rate decay implemented by Adam in keras", + "input": "", + "output": "

The learning rate changes with every iteration, i.e., with every batch and not epoch. So, if you set the decay = 1e-2 and each epoch has 100 batches/iterations, then after 1 epoch your learning rate will be

\n\n
lr = init_lr * 1/(1 + 1e-2 * 100)\n
\n\n

So, if I want my learning rate to be 0.75 of the original learning rate at the end of each epoch, I would set the lr_decay to

\n\n
batches_per_epoch = dataset_size/batch_size\nlr_decay = (1./0.75 -1)/batches_per_epoch\n
\n\n

It seems to work for me. Also, since the new learning rate is calculated at every iteration, the optimizer doesn't change the value of the learning rate variable and always uses the initial learning rate to calculate the effective learning rate.

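                          That bookkeeping can be checked with a small framework-free sketch (dataset size and initial learning rate are made-up numbers for illustration):
                          
                          ```python
                          # Keras' (legacy) time-based decay: the effective lr is recomputed
                          # from the *initial* lr at every iteration, i.e. every batch.
                          def effective_lr(init_lr, decay, iterations):
                              return init_lr * 1.0 / (1.0 + decay * iterations)
                          
                          init_lr = 0.001
                          dataset_size, batch_size = 60_000, 128
                          batches_per_epoch = dataset_size // batch_size  # 468
                          
                          # Choose decay so the lr after one epoch is 0.75x the initial lr:
                          lr_decay = (1.0 / 0.75 - 1.0) / batches_per_epoch
                          
                          lr_after_one_epoch = effective_lr(init_lr, lr_decay, batches_per_epoch)
                          print(lr_after_one_epoch / init_lr)  # ~0.75
                          ```
                          
                          
                          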
\n", + "system": "" + }, + { + "instruction": "Why there is sudden drop in loss after every epoch?", + "input": "", + "output": "

Note: This answer is assuming you are using Keras -- you might want to add this information to your post or at least add a relevant tag.

\n\n

Yes, this is because the displayed values are averaged over the epoch. Consider epoch 1. At the beginning of training, the loss will usually be quite large. It will then decrease, but the displayed value for epoch 1 will still include those large values from the beginning in the average. For example, let's say the loss in the beginning is 0.75 and decreases linearly to 0.25 until the end of the first epoch; this would mean an average of 0.5 which would be the value shown for epoch 1.

\n\n

Once epoch 2 starts, the average is reset and will be computed again for this epoch. Let's continue with the example, so the loss is 0.25 at the beginning of epoch 2 and decreases linearly to 0. This means the loss displayed for epoch 2 will be 0.125! More importantly however, it will start at 0.25, and so already at the beginning of the epoch you will see a large drop from the value of 0.5 shown for epoch 1.

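                          The arithmetic in the example can be verified directly (assuming, for illustration, 101 batches per epoch and perfectly linear per-batch losses):
                          
                          ```python
                          # Displayed loss is the running average over the current epoch, so a
                          # continuous per-batch loss still shows a "drop" at epoch boundaries.
                          def epoch_average(start, end, num_batches):
                              losses = [start + (end - start) * i / (num_batches - 1)
                                        for i in range(num_batches)]
                              return sum(losses) / num_batches
                          
                          avg_epoch1 = epoch_average(0.75, 0.25, 101)  # linear 0.75 -> 0.25
                          avg_epoch2 = epoch_average(0.25, 0.00, 101)  # linear 0.25 -> 0.00
                          print(avg_epoch1, avg_epoch2)  # ~0.5 and ~0.125
                          ```
                          
                          
                          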
\n", + "system": "" + }, + { + "instruction": "AttributeError when using callback Tensorboard on Keras: 'Model' object has no attribute 'run_eagerly'", + "input": "", + "output": "

I got the same error : AttributeError: 'Model' object has no attribute 'run_eagerly'

\n\n

After two minor changes my tensorboard is running now.

\n\n
    \n
  1. make sure you import tensorboard as follows: \nfrom keras.callbacks import TensorBoard

  2. \n
  3. change the log dir like this: \ntensorboard = TensorBoard(log_dir=\"logs\")

  4. \n
\n", + "system": "" + }, + { + "instruction": "Visualize TFLite graph and get intermediate values of a particular node?", + "input": "", + "output": "
\n

Note: this answer was written for Tensorflow 1.x and, while the concept and core idea remains the same in TensorFlow 2.x, the commands in this answer might be deprecated.

\n
\n

The mechanism of TF-Lite makes the whole process of inspecting the graph and getting the intermediate values of inner nodes a bit tricky. The get_tensor(...) method suggested by the other answer does not work.

\n

How to visualize TF-Lite inference graph?

\n

TensorFlow Lite models can be visualized using the visualize.py script in the TensorFlow Lite repository. You just need to:

\n\n

Does the nodes in my TF model have a equivalent one in TF-Lite?

\n

NO! In fact, TF-Lite can modify your graph so that it become more optimal. Here are some words about it from the TF-Lite documentation:

\n
\n

A number of TensorFlow operations can be processed by TensorFlow Lite even though they have no direct equivalent. This is the case for operations that can be simply removed from the graph (tf.identity), replaced by tensors (tf.placeholder), or fused into more complex operations (tf.nn.bias_add). Even some supported operations may sometimes be removed through one of these processes.

\n
\n

                          Moreover, the TF-Lite API currently doesn't allow getting node correspondences; it's hard to interpret the inner format of TF-Lite. So, you can't get the intermediate outputs for any nodes you want - and that's even before the further issue below...
                          

\n

Can I get intermediate values of some TF-Lite nodes?

\n

                          NO! Here, I will explain why get_tensor(...) wouldn't work in TF-Lite. Suppose that in the inner representation, the graph contains 3 tensors, together with some dense operations (nodes) in-between (you can think of tensor1 as the input and tensor3 as the output of your model). During inference of this particular graph, TF-Lite only needs 2 buffers; let's show how.
                          

\n

First, use tensor1 to compute tensor2 by applying dense operation. This only requires 2 buffers to store the values:

\n
           dense              dense\n[tensor1] -------> [tensor2] -------> [tensor3]\n ^^^^^^^            ^^^^^^^\n bufferA            bufferB\n
\n

Second, use the value of tensor2 stored in bufferB to compute tensor3... but wait! We don't need bufferA anymore, so let's use it to store the value of tensor3:

\n
           dense              dense\n[tensor1] -------> [tensor2] -------> [tensor3]\n                    ^^^^^^^            ^^^^^^^\n                    bufferB            bufferA\n
\n

Now is the tricky part. The "output value" of tensor1 will still point to bufferA, which now holds the values of tensor3. So if you call get_tensor(...) for the 1st tensor, you'll get incorrect values. The documentation of this method even states:

\n
\n

This function cannot be used to read intermediate results.

\n
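                          The aliasing described above can be sketched in plain Python (the buffer plan and the "dense" op that just doubles values are made up for illustration):
                          
                          ```python
                          # Two buffers serve three tensors: tensor3 reuses tensor1's buffer,
                          # so reading "tensor1" after inference returns tensor3's values.
                          buffers = {"A": None, "B": None}
                          tensor_to_buffer = {"tensor1": "A", "tensor2": "B", "tensor3": "A"}
                          
                          def dense(x):  # stand-in for the real dense op
                              return [v * 2 for v in x]
                          
                          buffers["A"] = [1.0, 2.0]           # tensor1 written into bufferA
                          buffers["B"] = dense(buffers["A"])  # tensor2 -> bufferB
                          buffers["A"] = dense(buffers["B"])  # tensor3 overwrites bufferA!
                          
                          print(buffers[tensor_to_buffer["tensor1"]])  # [4.0, 8.0], not [1.0, 2.0]
                          ```
                          
                          
                          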
\n

How to get around this?

\n\n", + "system": "" + }, + { + "instruction": "Difference between model.compile's parameter 'weighted_metrics' and model.fit_generator's parameter 'class_weight' in keras?", + "input": "", + "output": "

Yes, you can use them for your unbalanced dataset.

\n\n
\n

weighted_metrics

\n
\n\n

is a list of metrics that take into account the

\n\n
\n

class_weights

\n
\n\n

that you pass in fit_generator.

\n\n

So in your example, you can set

\n\n
weighted_metrics=['accuracy']\n
\n\n

and

\n\n

class_weight = {0 : 3, 1: 4}

\n\n

The purpose of weighted_metrics parameter is to give a list of metrics that will take into account the class_weights that you pass in fit_generator.

\n", + "system": "" + }, + { + "instruction": "Fitting a Keras model yields error "constant folding failed: Invalid argument: Unsupported type: 21"", + "input": "", + "output": "

There is one key difference between the Tutorial mentioned in the link, https://www.tensorflow.org/beta/tutorials/load_data/text and your Dataset.

\n

In the tutorial, Labels are 0, 1 and 2, i.e., all the sentences in cowper.txt are Labelled as 0, all the sentences in derby.txt are Labelled as 1, all the sentences in butler.txt are Labelled as 2. But in your Dataset, Labels are at the end of each sentence of the Text Files.

\n

I have executed the code taking part of your Dataset, as shown below:

\n
FILE_NAMES = ['001.dev', '001.test', '001.train', '002.dev', '002.test', '002.train']\n\nparent_dir = "Issue_55902068/OC"\n\nparent_dir\n
\n

In order to handle the difference mentioned above, the function, labeler should be modified as shown below:

\n
def labeler(example, index):\n  Label = tf.strings.split(example, sep="")[-1] #It will give 0 or 1 in Str format\n  Label = tf.strings.to_number(Label)\n  return example, tf.cast(Label, tf.int64)\n
\n

After that, I changed the loss function to binary_crossentropy and the optimizer to RMSprop as shown below:

\n
model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy'])\n
\n

It is working as expected. Screenshot of the output is shown below.

\n

\"enter

\n", + "system": "" + }, + { + "instruction": "How to convert the body-pix models for tfjs to keras h5 or tensorflow frozen graph", + "input": "", + "output": "\n\n

Eg: https://storage.googleapis.com/tfjs-models/savedmodel/bodypix/resnet50/float/model-stride16.json

\n\n\n\n

https://storage.googleapis.com/tfjs-models/savedmodel/bodypix/resnet50/float/manifest.json

\n\n\n\n

from https://github.com/ajaichemmanam/tfjs-to-tf

\n\n\n\n

tfjs_graph_converter path/to/js/model path/to/frozen/model.pb

\n\n

                          Here is an example of POSENET converted to a Keras h5 model: https://github.com/tensorflow/tfjs/files/3943875/posenet.zip\nYou can convert the BodyPix models the same way.
                          

\n", + "system": "" + }, + { + "instruction": "How to combine LSTM and CNN models in Keras", + "input": "", + "output": "
                          This is how you can merge two Deep learning models.\n\nfrom keras.models import Sequential, Model\nfrom keras.layers import Dense, Activation, Dropout, Flatten, Concatenate\n\nmodel1 = Sequential()\n# first input branch\nmodel1.add(Dense(32, input_shape=(NUM_FEAT1, 1)))\nmodel1.add(Activation(\"elu\"))\nmodel1.add(Dropout(0.5))\nmodel1.add(Dense(16))\nmodel1.add(Activation(\"elu\"))\nmodel1.add(Dropout(0.25))\nmodel1.add(Flatten())\n\nmodel2 = Sequential()\n# second input branch\nmodel2.add(Dense(32, input_shape=(NUM_FEAT1, 1)))\nmodel2.add(Activation(\"elu\"))\nmodel2.add(Dropout(0.5))\nmodel2.add(Dense(16))\nmodel2.add(Activation(\"elu\"))\nmodel2.add(Dropout(0.25))\nmodel2.add(Flatten())\n\n# concatenate the two branches and add a classification head\nmerged = Concatenate()([model1.output, model2.output])\nz = Dense(128, activation=\"relu\")(merged)\nz = Dropout(0.25)(z)\nz = Dense(1024, activation=\"relu\")(z)\nz = Dense(1, activation=\"sigmoid\")(z)\n\nmodel = Model(inputs=[model1.input, model2.input], outputs=z)\n\nmodel.compile(optimizer='adam',\n              loss='binary_crossentropy',\n              metrics=['accuracy'])\n\n# note the closing bracket after the second input slice\nmodel.fit([x_train[train_index][:, :66], x_train[train_index][:, 66:132]],\n          y_train[train_index], batch_size=100, epochs=100, verbose=2)\n
                          
\n\n

By this you are able to feed 2 different types of data to your model like images in first model and Textual Data in the second model according to your need.

\n", + "system": "" + }, + { + "instruction": "Keras h5 to Tensorflow serving in 2019?", + "input": "", + "output": "

NOTE: This applies to TF 2.0+

\n\n

I'm assuming you have your Keras model in model.h5.\nFirstly, just load the model with tensorflow's implementation of Keras:

\n\n
from tensorflow import keras\nmodel = keras.models.load_model('model.h5')\n
\n\n

Then, simply export a SavedModel

\n\n
keras.experimental.export_saved_model(model, 'path_to_saved_model')\n
\n\n

                          Finally, apply any transformation you normally do to go from SavedModel to the .pb inference file (e.g.: freezing, optimizing for inference, etc.)
                          

\n\n

                          You can find more details and a full example in TF's official guide for saving and serializing models in TF 2.0
                          

\n", + "system": "" + }, + { + "instruction": "Using pretrained glove word embedding with scikit-learn", + "input": "", + "output": "

You can simply use the Zeugma library.

\n\n

You can install it with pip install zeugma, then create and train your model with the following lines of code (assuming corpus_train and corpus_test are lists of strings):

\n\n
                          from sklearn.linear_model import LogisticRegression\nfrom zeugma.embeddings import EmbeddingTransformer\n\nglove = EmbeddingTransformer('glove')\nx_train = glove.transform(corpus_train)\n\nmodel = LogisticRegression()\nmodel.fit(x_train, y_train)\n\nx_test = glove.transform(corpus_test)\nmodel.predict(x_test)\n
                          
\n\n

You can also use different pre-trained embeddings (complete list here) or train your own (see Zeugma's documentation for how to do this).

\n", + "system": "" + }, + { + "instruction": "How are the output size of MaxPooling2D, Conv2D, UpSampling2D layers calculated?", + "input": "", + "output": "

With convolutional (2D here) layers, the important points to consider are the volume of the image (Width x Height x Depth) and the four parameters you give it. Those parameters are the

\n\n\n\n

The formula for the output shape is given as

\n\n
    \n
  1. Wnew = (W - F + 2*P)/S + 1
  2. \n
  3. Hnew = (H - F + 2*P)/S + 1
  4. \n
  5. Dnew = K
  6. \n
\n\n

This is taken from this thread what is the effect of tf.nn.conv2d() on an input tensor shape? , and more information about zero padding and such can be found there.

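                          The three formulas can be wrapped in a small helper for sanity-checking shapes (a sketch; real frameworks also derive P for you from "same"/"valid" padding modes):
                          
                          ```python
                          # Output shape of a 2D convolution, per the formulas above.
                          # w, h: input width/height; k: number of filters; f: filter size;
                          # s: stride; p: zero padding.
                          def conv2d_output_shape(w, h, k, f, s, p):
                              w_new = (w - f + 2 * p) // s + 1
                              h_new = (h - f + 2 * p) // s + 1
                              return (w_new, h_new, k)
                          
                          # 28x28 input, 16 filters of size 3, stride 1, padding 1 ("same"):
                          print(conv2d_output_shape(28, 28, 16, 3, 1, 1))  # (28, 28, 16)
                          # 28x28 input, 16 filters of size 3, stride 2, no padding:
                          print(conv2d_output_shape(28, 28, 16, 3, 2, 0))  # (13, 13, 16)
                          ```
                          
                          
                          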
\n\n

                          As for maxpooling and upsampling, the size is just affected by the pool size and the stride. In your example, you had a pool size of (2,2) with no stride defined (so it defaults to the pool size, see here https://keras.io/layers/pooling/). A max-pooling layer takes a pool of 2x2 pixels, finds the maximum of them and puts it into one pixel, hence converting 2x2 pixels to 1x1 pixel and encoding the image. Upsampling works in reverse: instead of reducing the pool to one value, each value is just repeated over the pool.
                          

\n\n
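A minimal NumPy sketch of both operations (note that max pooling keeps the maximum of each window):

```python
import numpy as np

# Max pooling with pool size (2, 2) and stride 2: each 2x2 block
# collapses to its maximum value.
x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [4, 5, 6, 7]])
pooled = x.reshape(2, 2, 2, 2).max(axis=(1, 3))  # shape (2, 2)

# Upsampling with size (2, 2): each value is repeated over a 2x2 block.
upsampled = pooled.repeat(2, axis=0).repeat(2, axis=1)  # shape (4, 4)
print(pooled)
print(upsampled)
```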

The reason why you don't have a maxpooling layer and why the image dimensions mess up in your case is due to the image size at that stage. Looking at the network, the image dimensions are already [7,7,8]. With a pool size and stride of (2,2) and 2 respectively, that would lower the resolution of the image to [3,3,8] (the odd dimension is floored). After the upsampling layers, the dimensionality will go from 3 -> 6 -> 12 -> 24, and you've lost 4 pixels in each row and column.

\n\n

The significance of None is that the first dimension is the batch dimension: convolutional layers expect a batch of images, and its size is left unspecified so that the network can accept any number of images at once. The expected dimensionality goes as

\n\n
[Number of images, Width, Height, Depth]\n
\n\n

Thus the first element is given as None because the batch size is not fixed: the same network can process a single image or a whole batch without being redefined.

\n", + "system": "" + }, + { + "instruction": "LSTM having a systematic offset between predictions and ground truth", + "input": "", + "output": "

It looks like your model is overfitting and is simply always returning the value from the last timestep as a prediction. Your dataset is probably too small for a model with this number of parameters to converge. You'll need to resort to techniques that combat overfitting: aggressive dropout, adding more data, or simpler, less overparameterized methods.

\n

This phenomenon (LSTMs returning a shifted version of the input) has been a recurring theme in many stackoverflow questions. The answers there might contain some useful information:

\n

LSTM Sequence Prediction in Keras just outputs last step in the input

\n

LSTM model just repeats the past in forecasting time series

\n

LSTM NN produces \u201cshifted\u201d forecast (low quality result)

\n

Keras network producing inverse predictions

\n

Stock price predictions of keras multilayer LSTM model converge to a constant value

\n

Keras LSTM predicted timeseries squashed and shifted

\n

LSTM Time series shifted predictions on stock market close price

\n

Interesting results from LSTM RNN : lagged results for train and validation data

\n
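A quick sanity check for this phenomenon (a sketch, assuming a univariate series): compare your model's test error against a naive persistence baseline that simply repeats the previous value. If the LSTM cannot clearly beat this baseline, it has most likely only learned to copy the last timestep.

```python
import numpy as np

def persistence_mse(y):
    """MSE of predicting each value by the previous one (naive baseline)."""
    y = np.asarray(y, dtype=float)
    return float(np.mean((y[1:] - y[:-1]) ** 2))

# Hypothetical series: if your LSTM's test MSE is not well below this
# number, it is probably just returning a shifted copy of the input.
series = [1.0, 1.1, 1.3, 1.2, 1.5]
print(persistence_mse(series))
```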

Finally, be aware that, depending on the nature of your dataset, there simply might be no pattern to be discovered in your data at all. You see this a lot with people trying to predict the stock market with LSTMs (there is a question on stackoverflow on how to predict the lottery numbers).

\n", + "system": "" + }, + { + "instruction": "sklearn pipeline + keras sequential model - how to get history?", + "input": "", + "output": "

The History callback records training metrics for each epoch. This includes the loss and the accuracy (for classification problems) as well as the loss and accuracy for the validation dataset, if one is set.

\n\n

The history object is returned from calls to the fit() function used to train the model. Metrics are stored in a dictionary in the history member of the object returned.

\n\n
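Conceptually, the History callback just appends each epoch's logs to a dict of lists, which you later read as history.history['loss'] and so on. A pure-Python sketch of that mechanism (not the actual Keras class):

```python
# Minimal sketch of what Keras' History callback does internally.
class HistorySketch:
    def __init__(self):
        self.history = {}

    def on_epoch_end(self, epoch, logs=None):
        # Append each metric of this epoch to its list.
        for key, value in (logs or {}).items():
            self.history.setdefault(key, []).append(value)

h = HistorySketch()
h.on_epoch_end(0, {"loss": 0.9, "val_loss": 1.0})
h.on_epoch_end(1, {"loss": 0.7, "val_loss": 0.8})
print(h.history["loss"])  # [0.9, 0.7]
```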

This also means that the values have to be in the scope of the fit() function or the sequential model; if the model sits inside a sklearn pipeline, the pipeline doesn't have access to the final values, and it can't store or return what it can't see.

\n\n
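If the preprocessing lives in a sklearn pipeline, one workaround (a sketch; the Keras model at the end is hypothetical) is to run the transform steps yourself and then call fit() on the Keras model directly, so you keep the History object it returns:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Preprocessing-only pipeline: run it first, then fit the Keras model
# on the transformed data yourself.
pipe = Pipeline([("scale", StandardScaler())])
X = np.array([[1.0], [2.0], [3.0]])
X_t = pipe.fit_transform(X)

# history = keras_model.fit(X_t, y, ...)  # hypothetical; fit() returns History
print(X_t.ravel())
```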

As of right now I am not aware of a history callback in sklearn, so the only option I see for you is to manually record the metrics you want to track. One way to do so would be to have the pipeline return the transformed data and then simply fit your model on it. If you are not able to figure that out, leave a comment.

\n", + "system": "" + }, + { + "instruction": "Use TensorFlow loss Global Objectives (recall_at_precision_loss) with Keras (not metrics)", + "input": "", + "output": "

Similar to Martino's answer, but will infer shape from input (setting it to a fixed batch size did not work for me).

\n\n

The outside function isn't strictly necessary, but it feels a bit more natural to pass params as you configure the loss function, especially when your wrapper is defined in an external module.

\n\n
import keras.backend as K\nfrom global_objectives.loss_layers import precision_at_recall_loss\n\ndef get_precision_at_recall_loss(target_recall): \n    def precision_at_recall_loss_wrapper(y_true, y_pred):\n        y_true = K.reshape(y_true, (-1, 1)) \n        y_pred = K.reshape(y_pred, (-1, 1))   \n        return precision_at_recall_loss(y_true, y_pred, target_recall)[0]\n    return precision_at_recall_loss_wrapper\n
\n\n

Then, when compiling the model:

\n\n
TARGET_RECALL = 0.9\nmodel.compile(optimizer='adam', loss=get_precision_at_recall_loss(TARGET_RECALL))\n
\n", + "system": "" + }, + { + "instruction": "Define custom LSTM Cell in Keras?", + "input": "", + "output": "

First of all, you should define your own custom layer. If you need some intuition on how to implement your own cell, see LSTMCell in the Keras repository. E.g. your custom cell could be:

\n\n
class MinimalRNNCell(keras.layers.Layer):\n\n    def __init__(self, units, **kwargs):\n        self.units = units\n        self.state_size = units\n        super(MinimalRNNCell, self).__init__(**kwargs)\n\n    def build(self, input_shape):\n        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),\n                                      initializer='uniform',\n                                      name='kernel')\n        self.recurrent_kernel = self.add_weight(\n            shape=(self.units, self.units),\n            initializer='uniform',\n            name='recurrent_kernel')\n        self.built = True\n\n    def call(self, inputs, states):\n        prev_output = states[0]\n        h = K.dot(inputs, self.kernel)\n        output = h + K.dot(prev_output, self.recurrent_kernel)\n        return output, [output]\n
\n\n

Then, use tf.keras.layers.RNN to use your cell:

\n\n
cell = MinimalRNNCell(32)\nx = keras.Input((None, 5))\nlayer = RNN(cell)\ny = layer(x)\n\n# Here's how to use the cell to build a stacked RNN:\n\ncells = [MinimalRNNCell(32), MinimalRNNCell(64)]\nx = keras.Input((None, 5))\nlayer = RNN(cells)\ny = layer(x)\n
\n", + "system": "" + }, + { + "instruction": "`Check failed: cudnnSetTensorNdDescriptor` when transfer learning with pretrained Keras model", + "input": "", + "output": "

Check the requirements for an input to InceptionResnetV2 network:

\n\n
\n

It should have exactly 3 inputs channels, and width and height should\n be no smaller than 75

\n
\n\n

And you are trying to fit CIFAR10 images which are 32x32.
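One possible workaround (a sketch with NumPy only; in practice you might prefer tf.image.resize or an UpSampling2D layer) is to enlarge the 32x32 CIFAR10 images to at least 75x75 before feeding them to the network, here to 96x96 by simple pixel repetition:

```python
import numpy as np

# Hypothetical batch of CIFAR10-sized images: (N, 32, 32, 3).
batch = np.random.rand(4, 32, 32, 3).astype("float32")

# Repeat every pixel 3x along height and width -> (N, 96, 96, 3),
# which satisfies the minimum input size of 75.
resized = batch.repeat(3, axis=1).repeat(3, axis=2)
print(resized.shape)  # (4, 96, 96, 3)
```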

\n", + "system": "" + }, + { + "instruction": "Multiple-input multiple-output CNN with custom loss function", + "input": "", + "output": "

I think your design makes sense in general:

\n\n\n\n

Several things:

\n\n
    \n
  1. A, B, and C looking different does not necessarily mean you can't stack them together as a 3-channel input. The decision should be made upon the fact that whether the values in A, B, and C mean differently or not. For example, if A is a gray scale image, B is a depth map, C is a also a gray image captured by a different camera. Then A and B are better processed in your suggested way, but A and C can be concatenated as one single input before feeding it to your network.

  2. \n
  3. D and E are two outputs of the network and will be trained in the multi-task manner. Of course, they should share some latent feature, and one should split at this feature to apply a down-stream non-shared weight branch for each output. However, where to split is usually tricky.

  4. \n
\n", + "system": "" + }, + { + "instruction": "Keras Estimator + tf.data API", + "input": "", + "output": "

You need to add an input layer:

\n\n
model = tf.keras.models.Sequential()\nmodel.add(tf.keras.layers.InputLayer(input_shape=your_tensor_shape, name=your_feature_key))\nmodel.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))\n
\n", + "system": "" + }, + { + "instruction": "Masking zero inputs in LSTM in keras without using embedding", + "input": "", + "output": "

As @Today has suggested in the comment you can use the Masking layer. Here I added a toy problem.

\n\n
# lstm autoencoder recreate sequence\nfrom numpy import array\nfrom keras.models import Sequential\nfrom keras.layers import LSTM, Masking\nfrom keras.layers import Dense\nfrom keras.layers import RepeatVector\nfrom keras.layers import TimeDistributed\nfrom keras.utils import plot_model\nfrom keras.preprocessing.sequence import pad_sequences\n\n\n# define input sequence\nsequence = array([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], \n                  [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],\n                  [0.3, 0.4, 0.5, 0.6]])\n# make sure to use dtype='float32' in padding otherwise with floating points\nsequence = pad_sequences(sequence, padding='post', dtype='float32')\n\n\n# reshape input into [samples, timesteps, features]\nn_obs = len(sequence)\nn_in = 9\nsequence = sequence.reshape((n_obs, n_in, 1))\n\n# define model\nmodel = Sequential()\nmodel.add(Masking(mask_value=0, input_shape=(n_in, 1)))\nmodel.add(LSTM(100, activation='relu', input_shape=(n_in,1) ))\nmodel.add(RepeatVector(n_in))\nmodel.add(LSTM(100, activation='relu', return_sequences=True))\nmodel.add(TimeDistributed(Dense(1)))\nmodel.compile(optimizer='adam', loss='mse')\n# fit model\nmodel.fit(sequence, sequence, epochs=300, verbose=0)\nplot_model(model, show_shapes=True, to_file='reconstruct_lstm_autoencoder.png')\n# demonstrate recreation\nyhat = model.predict(sequence, verbose=0)\nprint(yhat[0,:,0])\n
\n", + "system": "" + }, + { + "instruction": "Implementing a batch dependent loss in Keras", + "input": "", + "output": "

This autoencoder can be easily implemented using the Keras functional API. This will allow you to have an additional input placeholder y_prec_input, which will be fed with the \"precision\" vector. The full source code can be found here.

\n\n
\n\n

Data generator

\n\n

First, let's reimplement your data generator as follows:

\n\n
class DataGenerator(Sequence):\n    def __init__(self, batch_size, y, prec, shuffle=True):\n        self.batch_size = batch_size\n        self.y = y\n        self.shuffle = shuffle\n        self.prec = prec\n        self.on_epoch_end()\n\n    def on_epoch_end(self):\n        self.indexes = np.arange(len(self.y))\n        if self.shuffle:\n            np.random.shuffle(self.indexes)\n\n    def __len__(self):\n        return int(np.floor(len(self.y) / self.batch_size))\n\n    def __getitem__(self, index):\n        indexes = self.indexes[index * self.batch_size: (index + 1) * self.batch_size]\n        y = self.y[indexes]\n        y_prec = self.prec[indexes]\n        return [y, y_prec], y\n
\n\n

Note that I got rid of the global variable. Now, instead, the precision vector P is provided as input argument (prec), and the generator yields an additional input that will be fed to the precision placeholder y_prec_input (see model definition).

\n\n
\n\n

Model

\n\n

Finally, your model can be defined and trained as follows:

\n\n
y_input = Input(shape=(input_dim,))\ny_prec_input = Input(shape=(1,))\nh_enc = Dense(dims[0], activation='relu')(y_input)\nh_enc = Dense(dims[1], activation='relu')(h_enc)\nh_enc = Dense(dims[2], activation='relu', name='bottleneck')(h_enc)\nh_dec = Dense(dims[1], activation='relu')(h_enc)\nh_dec = Dense(input_dim, activation='relu')(h_dec)\nmodel2 = Model(inputs=[y_input, y_prec_input], outputs=h_dec)\nmodel2.compile(optimizer='adam', loss=MAEpw_wrapper(y_prec_input))\n\n# Train model\nmodel2.fit_generator(DataGenerator(32, digits.data, P), epochs=100)\n
\n\n

where input_dim = digits.data.shape[1]. Note that I also changed the output dimension of the decoder to input_dim, since it must match the input dimension.

\n", + "system": "" + }, + { + "instruction": "Why do I fail to predict y=x**4 with Keras? (y=x**3 works)", + "input": "", + "output": "

The problem is that your input and output variables have too large values and hence are not compatible with the (initial) weights of the network. For Dense layer the default kernel initializer is glorot_uniform; the documentation states that:

\n\n
\n

It draws samples from a uniform distribution within [-limit, limit] where limit is sqrt(6 / (fan_in + fan_out)) where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor.

\n
\n\n

For your example the weights of the first and last layer are therefore sampled on the interval [-0.34, 0.34]. Now there are two issues that have to do with the magnitude of weights and inputs/outputs: large inputs multiplied through small weights can still saturate the sigmoid activations, and the network's initial outputs are many orders of magnitude smaller than the targets (which reach about 1e7):

\n\n\n\n
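A quick check of the stated limit for this network (first layer Dense(50) on a 1-D input, last layer Dense(1) on 50 units):

```python
import math

def glorot_uniform_limit(fan_in, fan_out):
    # limit = sqrt(6 / (fan_in + fan_out)), as quoted from the docs above
    return math.sqrt(6.0 / (fan_in + fan_out))

print(round(glorot_uniform_limit(1, 50), 2))  # first layer: 0.34
print(round(glorot_uniform_limit(50, 1), 2))  # last layer: 0.34
```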

So what can we do about it? On the one hand we could modify the weight initialization and on the other hand we can scale the inputs and outputs to a more moderate range. The latter is a much better idea since any numerical computation is much more accurate when performed in the order of magnitude 1. Also MSE loss is going to explode for orders of magnitude difference.

\n\n

Variable scaling

\n\n

The scikit-learn package provides various routines for data preparation as for example the StandardScaler. This will subtract the mean from the data and then divide by its standard deviation, i.e. x -> (x - mu) / sigma.

\n\n
x_scaler = StandardScaler()\ny_scaler = StandardScaler()\n\nx = x_scaler.fit_transform(x[:, None])  # Features are expected as columns vectors.\ny = y_scaler.fit_transform(y[:, None])\n\n... # Model definition and fitting goes here.\n\n# Invert the transformation before plotting.\nx = x_scaler.inverse_transform(x).ravel()\ny = y_scaler.inverse_transform(y).ravel()\npredictions = y_scaler.inverse_transform(predictions).ravel()\n
\n\n

After 2000 epochs of training (full batch size):

\n\n

\"Feature

\n\n

Weight initialization

\n\n

Not recommended! Feature scaling should be used instead; I provide this example only for the sake of completeness. In order to make the weights compatible with the input/output we can specify custom initializers for the first and last layer of the network:

\n\n
model.add(Dense(50, input_shape=(1,),\n                kernel_initializer=RandomUniform(-0.001, 0.001)))\n\n... # Activations and intermediate layers.\n\nmodel.add(Dense(1, kernel_initializer=RandomUniform(-1e7, 1e7)))\n
\n\n

Note the small weights for the first layer (in order to prevent saturation of the sigmoid) and the large weights of the last layer (in order to help the network scaling the outputs by the required 7 orders of magnitude).

\n\n

Again, after 2000 epochs (full batch size):

\n\n

\"Custom

\n\n

As you can see, it works as well, but not quite as well as the scaled-feature approach. Furthermore, the larger the numbers, the larger the risk of running into numerical instabilities. A good rule of thumb is to try to always stay in the region around 1 (plus/minus a (very) few orders of magnitude).

\n", + "system": "" + }, + { + "instruction": "How to get the probability percentage in keras predicting model CNN", + "input": "", + "output": "
from keras.models import load_model\n\nfrom keras.preprocessing import image\n\nmodel=load_model(\"/blah/blah/blah\")\n\nimg = image.load_img(path, color_mode = \"grayscale\", target_size=(128, 128, 1))\n\ny = image.img_to_array(img)\n\ny = np.expand_dims(y, axis=0)\n\nimages = np.vstack([y])\n\nclasses = model.predict(images/255.0, batch_size=8, verbose=0)\n
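model.predict with a softmax output returns class probabilities that sum to 1; a sketch of turning them into percentages (the numbers below are hypothetical, not from the model above):

```python
import numpy as np

# Hypothetical predict() output for one image over 3 classes:
classes = np.array([[0.1, 0.7, 0.2]])

percentages = classes * 100.0               # probability -> percentage
best = int(np.argmax(classes, axis=1)[0])   # index of the most likely class
print(percentages[0])
print(best)  # 1
```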
\n", + "system": "" + }, + { + "instruction": "TF / Keras error: InputLayer not a Checkpointable", + "input": "", + "output": "

If you import layers from Keras instead of TensorFlow like this:

\n\n
from keras import layers,models\nfrom keras.preprocessing.image import ImageDataGenerator\nimport tensorflow as tf\n
\n\n

You will get error like you mention above :

\n\n
TypeError: Checkpointable._track_checkpointable() passed type <class 'keras.engine.topology.InputLayer'>, not a Checkpointable.\n
\n\n

So, you can import layers directly from TensorFlow like my code below:

\n\n
from tensorflow.keras import layers,models\nfrom keras.preprocessing.image import ImageDataGenerator\nimport tensorflow as tf\n
\n\n

or you can see my full code here:\nhttps://gist.github.com/ilmimris/8218e397dd35ab693404e95db32dc574

\n", + "system": "" + }, + { + "instruction": "Keras "fit" input unclear", + "input": "", + "output": "
fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)\n
\n\n

In the fit function x is your training data. Perhaps it is more appropriately called X_train or something similar rather than trainData. In this case if you have 32 training images of shape (350, 350, 3) they should be stacked in a four dimensional (32, 350, 350, 3) array. If you have an array of your images as objects like you seem to have you can reshape using the following code

\n\n
X_train = np.array([x for x in trainingData])\n
\n\n

Then y in the fit function is the values that you want your network to output for the corresponding input, usually a label and in this case the larger images. You call it validateData but it's more commonly called something closer to y_train. Validation data is data that is not used during training but only during evaluation of the model after every epoch. So shape y_train in the same way that X_train was shaped. Expected shape of y_train would be (32, 1400, 1400, 3).

\n", + "system": "" + }, + { + "instruction": "loss calculation over different batch sizes in keras", + "input": "", + "output": "

The code you have posted concerns multi-output models where each output may have its own loss and weights. Hence, the loss values of different output layers are summed together. However, the individual losses are averaged over the batch as you can see in the losses.py file. For example this is the code related to binary cross-entropy loss:

\n\n
def binary_crossentropy(y_true, y_pred):\n    return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)\n
\n\n

\n\n

Update: Right after adding the second part of this answer (i.e. the loss functions), like the OP, I was baffled by the axis=-1 in the definition of the loss functions and thought to myself that it must be axis=0 to indicate the average over the batch! Then I realized that all the K.mean() calls in the definitions of the loss functions are there for the case of an output layer consisting of multiple units. So where is the loss averaged over the batch? I inspected the code to find the answer: to get the loss value for a specific loss function, a function is called taking the true and predicted labels as well as the sample weights and mask as its inputs:

\n\n
weighted_loss = weighted_losses[i]\n# ...\noutput_loss = weighted_loss(y_true, y_pred, sample_weight, mask)\n
\n\n

what is this weighted_losses[i] function? As you may find, it is an element of the list of (augmented) loss functions:

\n\n
weighted_losses = [\n    weighted_masked_objective(fn) for fn in loss_functions]\n
\n\n

fn is actually one of the loss functions defined in the losses.py file, or it may be a user-defined custom loss function. And now what is this weighted_masked_objective function? It is defined in the training_utils.py file:

\n\n
def weighted_masked_objective(fn):\n    \"\"\"Adds support for masking and sample-weighting to an objective function.\n    It transforms an objective function `fn(y_true, y_pred)`\n    into a sample-weighted, cost-masked objective function\n    `fn(y_true, y_pred, weights, mask)`.\n    # Arguments\n        fn: The objective function to wrap,\n            with signature `fn(y_true, y_pred)`.\n    # Returns\n        A function with signature `fn(y_true, y_pred, weights, mask)`.\n    \"\"\"\n    if fn is None:\n        return None\n\n    def weighted(y_true, y_pred, weights, mask=None):\n        \"\"\"Wrapper function.\n        # Arguments\n            y_true: `y_true` argument of `fn`.\n            y_pred: `y_pred` argument of `fn`.\n            weights: Weights tensor.\n            mask: Mask tensor.\n        # Returns\n            Scalar tensor.\n        \"\"\"\n        # score_array has ndim >= 2\n        score_array = fn(y_true, y_pred)\n        if mask is not None:\n            # Cast the mask to floatX to avoid float64 upcasting in Theano\n            mask = K.cast(mask, K.floatx())\n            # mask should have the same shape as score_array\n            score_array *= mask\n            #  the loss per batch should be proportional\n            #  to the number of unmasked samples.\n            score_array /= K.mean(mask)\n\n        # apply sample weighting\n        if weights is not None:\n            # reduce score_array to same ndim as weight array\n            ndim = K.ndim(score_array)\n            weight_ndim = K.ndim(weights)\n            score_array = K.mean(score_array,\n                                 axis=list(range(weight_ndim, ndim)))\n            score_array *= weights\n            score_array /= K.mean(K.cast(K.not_equal(weights, 0), K.floatx()))\n        return K.mean(score_array)\n    return weighted\n
\n\n

As you can see, first the per sample loss is computed in the line score_array = fn(y_true, y_pred) and then at the end the average of the losses is returned, i.e. return K.mean(score_array). So that confirms that the reported losses are the average of per sample losses in each batch.

\n\n

Note that K.mean(), in case of using Tensorflow as backend, calls the tf.reduce_mean() function. Now, when K.mean() is called without an axis argument (the default value of axis argument would be None), as it is called in weighted_masked_objective function, the corresponding call to tf.reduce_mean() computes the mean over all the axes and returns one single value. That's why no matter the shape of output layer and the loss function used, only one single loss value is used and reported by Keras (and it should be like this, because optimization algorithms need to minimize a scalar value, not a vector or tensor).
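The axis=None behaviour is easy to see with NumPy's mean, which tf.reduce_mean mirrors:

```python
import numpy as np

# Pretend these are per-sample, per-unit loss values (2 samples, 2 units).
scores = np.array([[1.0, 3.0],
                   [2.0, 6.0]])

print(np.mean(scores, axis=-1))  # per-sample mean over units: [2. 4.]
print(np.mean(scores))           # mean over ALL axes -> one scalar: 3.0
```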

\n", + "system": "" + }, + { + "instruction": "Get output from a non final keras model layer", + "input": "", + "output": "

There is a difference between layers and the outputs of those layers in Keras. You can think of layers as representing a computation and the outputs as the results of those computation. When you instantiate a Model object, it expects the results of a computation as it's output, instead of the computation itself, hence the error. To fix it, you can pass the output of the layer to the Model constructor:

\n\n
import numpy as np\nfrom keras.applications import vgg16, inception_v3, resnet50, mobilenet\nfrom keras import Model\n\na = np.random.rand(24, 224, 224, 3)\na = mobilenet.preprocess_input(a)\n\nmobilenet_model = mobilenet.MobileNet(weights='imagenet')\n\nmobilenet_model.summary()\n\nmodel_output = mobilenet_model.get_layer(\"conv_pw_13_relu\").output\nm = Model(inputs=mobilenet_model.input, outputs=model_output)\nprint(m.predict(a))\n
\n", + "system": "" + }, + { + "instruction": "Keras Conv1D or Convolution1D", + "input": "", + "output": "

They are just different Keras versions: before Keras 2.0 the layers were called ConvolutionND for N = 1, 2, 3, and since Keras 2.0 they are just called ConvND.

\n", + "system": "" + }, + { + "instruction": "Keras: "must compile model before using it" despite compile() is used", + "input": "", + "output": "

Found my mistake - explanation for future reference.

\n

The error originates in compile(), where the first if-statement says:

\n
if not self.built:\n    # Model is not compilable because\n    # it does not know its number of inputs\n    # and outputs, nor their shapes and names.\n    # We will compile after the first\n    # time the model gets called on training data.\n    return\n
\n

So I specified input_shape= and data_format= in the first Conv2D layer and everything works fine.
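A minimal sketch of the fix (hypothetical layer sizes), showing that a first layer with input_shape lets Keras build the model before compiling:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# With input_shape on the first Conv2D, the model knows its input
# dimensions, so self.built is True and compile() can proceed fully.
model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
print(model.built)  # True
```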

\n", + "system": "" + }, + { + "instruction": "Getting some form of keras multi-processing/threading to work on Windows", + "input": "", + "output": "

In combination with a Sequence, using use_multiprocessing=False and workers=e.g. 4 does work.

\n\n

I just realized that in the example code in the question, I was not seeing the speed-up, because the data was being generated too fast. By inserting a time.sleep(2) this becomes evident.

\n\n
import time\nimport numpy as np\nfrom keras.utils import Sequence, to_categorical\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\nclass DummySequence(Sequence):\n    def __init__(self, x_set, y_set, batch_size):\n        self.x, self.y = x_set, y_set\n        self.batch_size = batch_size\n\n    def __len__(self):\n        return int(np.ceil(len(self.x) / float(self.batch_size)))\n\n    def __getitem__(self, idx):\n        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]\n        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]\n        time.sleep(2)\n        return np.array(batch_x), np.array(batch_y)\n\nx = np.random.random((100, 3))\ny = to_categorical(np.random.random(100) > .5).astype(int)\n\nseq = DummySequence(x, y, 10)\n\nmodel = Sequential()\nmodel.add(Dense(32, input_dim=3))\nmodel.add(Dense(2, activation='softmax'))\nmodel.compile(optimizer='rmsprop',\n              loss='categorical_crossentropy',\n              metrics=['accuracy'])\n\nprint('single worker')\nmodel.fit_generator(generator=seq, \n                    steps_per_epoch = 10,\n                    epochs = 2, \n                    verbose=2,\n                    workers=1)\n\nprint('achieves speed-up!')\nmodel.fit_generator(generator=seq, \n                    steps_per_epoch = 10,\n                    epochs = 2, \n                    verbose=2,\n                    workers=4,\n                    use_multiprocessing=False)\n
\n\n

This produced on my laptop the following:

\n\n
single worker\n>>> model.fit_generator(generator=seq,\n...                     steps_per_epoch = 10,\n...                     epochs = 2,\n...                     verbose=2,\n...                     workers=1)\nEpoch 1/2\n - 20s - loss: 0.6984 - acc: 0.5000\nEpoch 2/2\n - 20s - loss: 0.6955 - acc: 0.5100\n
\n\n

and

\n\n
achieves speed-up!\n>>> model.fit_generator(generator=seq,\n...                     steps_per_epoch = 10,\n...                     epochs = 2,\n...                     verbose=2,\n...                     workers=4,\n...                     use_multiprocessing=False)\nEpoch 1/2\n - 6s - loss: 0.6904 - acc: 0.5200\nEpoch 2/2\n - 6s - loss: 0.6900 - acc: 0.5000\n
\n\n

Important notes:\nYou will probably want self.lock = threading.Lock() in __init__ and then with self.lock: in __getitem__. Try to do the absolute bare minimum required within the with self.lock: block; as far as I understand it, that would be any reference to self.xxxx (multi-threading is prevented while the with self.lock: block is running).

\n\n

Additionally, if you want multithreading to speed up calculations (i.e. CPU operations are the limit), do not expect any speed-up. The global-interpreter lock (GIL) will prevent that. Multithreading will only help you, if the limitation is in I/O operations. Apparently, to speed-up CPU computations we need true multiprocessing, which keras currently does not support on Windows 10. Perhaps it is possible to hand-craft a multi-processing generator (I have no idea).

\n", + "system": "" + }, + { + "instruction": "Keras dot/Dot layer behavior on 3D tensors", + "input": "", + "output": "

The dot layer is performing a matrix multiplication along the last axis since these are 3D tensors not 2D. So the shape you get reflects that. What you are trying to do is take the product across the last columns of each of your inputs. You can instead, take the element wise product of the two inputs and then sum along the last axis. For example,

\n\n
import keras.backend as K\nimport tensorflow as tf\n\nK.sum(tf.multiply(input_a, input_b[:tf.newaxis]), axis=-1, keepdims=True)\n
\n\n
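The intended computation (element-wise product, then a sum over the last axis with keepdims) is easy to sanity-check in plain NumPy:

```python
import numpy as np

a = np.random.rand(2, 4, 3)  # hypothetical 3D inputs: (batch, steps, features)
b = np.random.rand(2, 4, 3)

# Element-wise product summed over the last axis, keeping the reduced axis,
# mirroring K.sum(..., axis=-1, keepdims=True) from the snippet above.
out = np.sum(a * b, axis=-1, keepdims=True)
print(out.shape)  # (2, 4, 1)
```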

If you need a keras only solution, you can use keras.layers.multiply instead of tf.multiply and use K.expand_dims instead of the broadcasting with tf.newaxis.

\n", + "system": "" + }, + { + "instruction": "TensorFlow - Stop training when losses reach a defined value", + "input": "", + "output": "

You can use some method like this if you would switch to TensorFlow 2.0:

\n\n
class haltCallback(tf.keras.callbacks.Callback):\n    def on_epoch_end(self, epoch, logs={}):\n        if(logs.get('loss') <= 0.05):\n            print(\"\\n\\n\\nReached 0.05 loss value so cancelling training!\\n\\n\\n\")\n            self.model.stop_training = True\n
\n\n

You just need to create a callback like that and then add that callback to your model.fit so it becomes something like this:

\n\n
model.fit(x_train, y_train,\n      epochs=3000,\n      batch_size=128,\n      callbacks=[trainingStopCallback])\n
\n\n

This way, the fitting should stop whenever it reaches down below 0.05 (or whatever value you put on while defining it).

\n\n

Also, since you asked this question a long time ago and it still had no actual answer for TensorFlow 2.0, I updated your code snippet to TensorFlow 2.0 so everyone can now easily find and use it in their new projects.

\n\n
import tensorflow as tf\n\n# Generate dummy data\nimport numpy as np\n\n\nx_train = np.random.random((1000, 20))\ny_train = tf.keras.utils.to_categorical(\n    np.random.randint(10, size=(1000, 1)), num_classes=10)\nx_test = np.random.random((100, 20))\ny_test = tf.keras.utils.to_categorical(\n    np.random.randint(10, size=(100, 1)), num_classes=10)\n\nmodel = tf.keras.models.Sequential()\n# Dense(64) is a fully-connected layer with 64 hidden units.\n# in the first layer, you must specify the expected input data shape:\n# here, 20-dimensional vectors.\nmodel.add(tf.keras.layers.Dense(64, activation='relu', input_dim=20))\nmodel.add(tf.keras.layers.Dropout(0.5))\nmodel.add(tf.keras.layers.Dense(64, activation='relu'))\nmodel.add(tf.keras.layers.Dropout(0.5))\nmodel.add(tf.keras.layers.Dense(10, activation='softmax'))\n\n\nclass haltCallback(tf.keras.callbacks.Callback):\n    def on_epoch_end(self, epoch, logs={}):\n        if(logs.get('loss') <= 0.05):\n            print(\"\\n\\n\\nReached 0.05 loss value so cancelling training!\\n\\n\\n\")\n            self.model.stop_training = True\n\n\ntrainingStopCallback = haltCallback()\n\nsgd = tf.keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)\nmodel.compile(loss='categorical_crossentropy',\n              optimizer=sgd,\n              metrics=['accuracy'])\n\nmodel.fit(x_train, y_train,\n          epochs=3000,\n          batch_size=128,\n          callbacks=[trainingStopCallback])\nscore = model.evaluate(x_test, y_test, batch_size=128)\n
\n", + "system": "" + }, + { + "instruction": "Extract features into a dataset from keras model", + "input": "", + "output": "

After training or loading the existing trained model, you can create another model:

\n
extract = Model(model.inputs, model.layers[-3].output) # Dense(128,...)\nfeatures = extract.predict(data)\n
\n

and use the .predict method to return the vectors from a specific layer, in this case every image will become (128,), the output of the Dense(128, ...) layer.

\n

You can also train these networks jointly with 2 outputs using the functional API. Follow the guide and you'll see that you can chain models together and have multiple outputs each possibly with a separate loss. This will allow your model to learn shared features that is useful for both classifying the MNIST image and your task at the same time.
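A minimal sketch of such a jointly trained two-output model with the functional API (layer sizes and the second task are hypothetical):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Shared trunk, then one head per task; each head gets its own loss.
inputs = layers.Input(shape=(784,))
shared = layers.Dense(128, activation="relu")(inputs)      # shared features
digit = layers.Dense(10, activation="softmax", name="digit")(shared)
other = layers.Dense(1, name="other_task")(shared)         # hypothetical 2nd task

model = Model(inputs=inputs, outputs=[digit, other])
model.compile(optimizer="adam",
              loss={"digit": "categorical_crossentropy", "other_task": "mse"})
print(len(model.outputs))  # 2
```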

\n", + "system": "" + }, + { + "instruction": "Keras error "You must feed a value for placeholder tensor"", + "input": "", + "output": "

Metrics, losses, and the entire model are \"symbolic\" tensors.

\n\n

This means, they have absolutely no data (or value) until the moment you start fitting or predicting.

\n\n

When you call K.get_value you're trying to get a value that doesn't exist. (It will only exist when you \"feed the data\" to the model. What it calls a placeholder is an empty input tensor that is expecting to receive the data when fitting or predicting).

\n\n

The solution to your problem is simply not trying to get the values. (The numpy version will not work either, the values don't exist at the moment this function is compiled).

\n\n

You gotta keep all operations symbolic, and they will be performed when you feed the data.

\n\n

So:

\n\n
def symbol_acc(true, predicted):\n    isEqual = K.cast(K.equal(true,predicted),K.floatx())\n    return K.mean(isEqual)\n
\n", + "system": "" + }, + { + "instruction": "Vectorize Sequences explanation", + "input": "", + "output": "

                  The script transforms your dataset into a binary vector-space model. Let's dissect things one by one.
                  

\n\n

First, if we examine the x_train content we see that each review is represented as a sequence of word ids. Each word id corresponds to one specific word:

\n\n
print(train_data[0])  # print the first review\n[1, 14, 22, 16, 43, 530, 973, ..., 5345, 19, 178, 32]\n
\n\n

                  Now, this would be very difficult to feed to the network. The lengths of reviews vary, and fractional values between the integers have no meaning (e.g. if the output were 43.5, what would it mean?)
                  

\n\n

                  So what we can do is create a single looong vector, the size of the entire dictionary (dictionary=10000 in your example). We will then associate each element/index of this vector with one word/word_id. So the word represented by word id 14 will now be represented by the 14th element of this vector.
                  

\n\n

Each element will either be 0 (word is not present in the review) or 1 (word is present in the review). And we can treat this as a probability, so we even have meaning for values in between 0 and 1. Furthermore, every review will now be represented by this very long (sparse) vector which has a constant length for every review.

\n\n

So on a smaller scale if:

\n\n
word      word_id\nI      -> 0\nyou    -> 1\nhe     -> 2\nbe     -> 3\neat    -> 4\nhappy  -> 5\nsad    -> 6\nbanana -> 7\na      -> 8\n
\n\n

                  the sentences would then be processed in the following way.
                  

\n\n
I be happy      -> [0,3,5]   -> [1,0,0,1,0,1,0,0,0]\nI eat a banana. -> [0,4,8,7] -> [1,0,0,0,1,0,0,1,1]\n
\n\n

                  Now I highlighted the word sparse. That means there will be A LOT MORE zeros than ones. We can take advantage of that: instead of checking every word for whether it is contained in a review, we check the substantially smaller list of only those words that DO appear in our review.
                  

\n\n

Therefore, we can make things easy for us and create reviews \u00d7 vocabulary matrix of zeros right away by np.zeros((len(sequences), dimension)). And then just go through words in each review and flip the indicator to 1.0 at position corresponding to that word:

\n\n
result[review_id][word_id] = 1.0\n
\n\n

                  So instead of doing 25000 x 10000 = 250 000 000 operations, we only did number of words = 5 967 841. That's just ~2.4% of the original number of operations.
                  

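 The whole procedure described above can be sketched in a few lines of NumPy (the function and variable names here are illustrative, not taken from the original code): ```python import numpy as np def vectorize_sequences(sequences, dimension=10000): # One row per review, one column per word id; start with all zeros results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): # Flip the indicator to 1.0 only at positions of words that appear results[i, sequence] = 1.0 return results # The two toy sentences from the example, with a 9-word vocabulary x = vectorize_sequences([[0, 3, 5], [0, 4, 8, 7]], dimension=9) ``` Each row of `x` is the fixed-length multi-hot vector for one review.   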
\n", + "system": "" + }, + { + "instruction": "Reduce the size of Keras LSTM model", + "input": "", + "output": "

 Define the layers you want to save outside the function and name them. Then create two functions foo() and bar(). foo() will have the original pipeline including the embedding layer. bar() will have only the part of the pipeline AFTER the embedding layer. Instead, you will define a new Input() layer in bar() with the dimensions of your embeddings: 

\n\n
 lstm1 = LSTM(256, return_sequences=True, name='lstm1')\nlstm2 = LSTM(256, return_sequences=False, name='lstm2')\ndense = Dense(NUM_OF_LABELS, name='Susie Dense')\n\ndef foo(...):\n    sentence_indices = Input(input_shape, dtype=\"int32\")\n    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)\n    embeddings = embedding_layer(sentence_indices)\n    X = lstm1(embeddings)\n    X = Dropout(0.5)(X)\n    X = lstm2(X)\n    X = Dropout(0.5)(X)\n    X = dense(X)\n    X = Activation(\"softmax\")(X)\n    return Model(inputs=sentence_indices, outputs=X)\n\n\ndef bar(...):\n    embeddings = Input(embedding_shape, dtype=\"float32\")\n    X = lstm1(embeddings)\n    X = Dropout(0.5)(X)\n    X = lstm2(X)\n    X = Dropout(0.5)(X)\n    X = dense(X)\n    X = Activation(\"softmax\")(X)\n    return Model(inputs=embeddings, outputs=X)  # use the new Input, not sentence_indices\n\nfoo_model = foo(...)\nbar_model = bar(...)\n\nfoo_model.fit(...)\nbar_model.save_weights(...)\n
                  
\n\n

Now, you will train the original foo() model. Then you can save the weights of the reduced bar() model. When loading the model, don't forget to specify by_name=True parameter:

\n\n
foo_model.load_weights('bar_model.h5', by_name=True)\n
\n", + "system": "" + }, + { + "instruction": "Do programs continue execution after connection is lost on google-colaboratory?", + "input": "", + "output": "

                  Normally, the notebook connection expires after 12 hours, and all the data loaded into Colab also expires with the session.
                  

\n\n

                  When you run a process and the notebook gets disconnected after some time but before 12 hours, the code keeps running in the background.
                  

\n\n

                  So, if you try to reconnect to the notebook, it gets stuck on \"initializing\" until the code finishes or the session expires (whichever occurs first). After that, the connection is established again.
                  

\n", + "system": "" + }, + { + "instruction": "ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`", + "input": "", + "output": "

You are cropping your input image too much. The cropping argument is interpreted as follows:

\n\n
\n

If tuple of 2 tuples of 2 ints: interpreted as ((top_crop,\n bottom_crop), (left_crop, right_crop))

\n
\n\n

Consider the following example from the Keras docs:

\n\n
# Crop the input 2D images or feature maps\nmodel = Sequential()\nmodel.add(Cropping2D(cropping=((2, 2), (4, 4)),\n                     input_shape=(28, 28, 3)))\n# now model.output_shape == (None, 24, 20, 3)\n
\n\n

                  In your code, you are cropping 79 pixels from the top and 145 pixels from the bottom, whereas the height of your images is only 160 pixels, so nothing is left (79 + 145 = 224 > 160). With less cropping, your code runs fine, e.g.:
                  

\n\n
model.add(Cropping2D(cropping=((10, 10), (10, 10)), input_shape=(160,320,3)))\n
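                  The arithmetic behind the error can be checked directly: the output height is simply the input height minus the top and bottom crops (a plain-Python sketch, not Keras itself):

                  ```python
                  def cropped_height(height, top_crop, bottom_crop):
                      # Cropping2D removes rows from the top and the bottom of the image
                      return height - top_crop - bottom_crop

                  bad = cropped_height(160, 79, 145)  # negative: more is cropped than exists
                  ok = cropped_height(160, 10, 10)    # a valid, positive output height
                  ```

                  A negative result means the cropped feature map has no rows left, which is what triggers the undefined-dimension error.
                  
                  
                  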
\n", + "system": "" + }, + { + "instruction": "How to get results from custom loss function in Keras?", + "input": "", + "output": "
\n

Also I want to know what have to return the function.

\n
\n\n

Custom metrics can be passed at the compilation step.

\n\n

The function would need to take (y_true, y_pred) as arguments and return a single tensor value.

\n\n
\n

But I do not know how to get the value of the res for the if and else.

\n
\n\n

                  You can simply return the per-element result from the metric function:
                  

\n\n
                  def custom_metric(y_true, y_pred):\n    # K.abs takes no axis argument; reduce with K.mean instead\n    result = K.mean(K.abs((y_true - y_pred) / y_pred), axis=-1)\n    return result\n
                  
\n\n

The second step is to use a keras callback function in order to find the sum of the errors.

\n\n

The callback can be defined and passed to the fit method.

\n\n
history = CustomLossHistory()\nmodel.fit(callbacks = [history])\n
\n\n

                  The last step is to create the CustomLossHistory class in order to compute the sum of the errors you expect.
                  

\n\n

CustomLossHistory will inherit some default methods from keras.callbacks.Callback.

\n\n\n\n

You can read more in the Keras Documentation

\n\n

                  But for this example we only need the on_train_begin and on_batch_end methods.
                  

\n\n

Implementation

\n\n
                  class CustomLossHistory(keras.callbacks.Callback):\n    def on_train_begin(self, logs={}):\n        self.errors = []\n\n    def on_batch_end(self, batch, logs={}):\n        loss = logs.get('loss')\n        self.errors.append(self.loss_mapper(loss))\n\n    def loss_mapper(self, loss):\n        # Use 'and' here: '&' is the bitwise operator and binds tighter than comparisons\n        if loss <= 0.1:\n            return 0\n        elif loss > 0.1 and loss <= 0.15:\n            return 5 / 3\n        elif loss > 0.15 and loss <= 0.2:\n            return 5\n        else:\n            return 2000\n
                  
\n\n

After your model is trained you can access your errors using following statement.

\n\n
errors = history.errors\n
\n", + "system": "" + }, + { + "instruction": "Keras Top 5 predictions", + "input": "", + "output": "

                  If you are trying to get the top predictions from an image classification problem,\nthe model will return a vector of class probabilities (one score per class).
                  

\n\n
class_prob = [0.98,0.50,0.60,0.90,0.87,0.79,0.87]\ntop_values_index = sorted(range(len(class_prob)), key=lambda i: class_prob[i])[-the_top_values_you_want_to_extract:]\n
\n\n

                  You now have the indices for the top five values. You can now just loop through the indices and get the class names.
                  

\n\n

                  To extract just the top values, without their indices:
                  

\n\n
top_values= [class_prob[i] for i in np.argsort(class_prob)[-5:]]\n
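                  For example, with NumPy's argsort the same idea looks like this (np.argsort sorts in ascending order, so the last k indices point at the k largest probabilities):

                  ```python
                  import numpy as np

                  class_prob = [0.98, 0.50, 0.60, 0.90, 0.87, 0.79, 0.87]
                  k = 5
                  top_idx = np.argsort(class_prob)[-k:]          # indices of the k largest values, ascending
                  top_values = [class_prob[i] for i in top_idx]  # the probabilities themselves
                  ```

                  If you want the best class first, just reverse `top_idx` with `[::-1]`.
                  
                  
                  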
\n", + "system": "" + }, + { + "instruction": "LSTM preprocessing: Build 3d arrays from pandas data frame based on ID", + "input": "", + "output": "

                  You can achieve this with a few steps by extracting data from a pandas groupby object. In the first two steps we will create the groupby object so that we can operate on it later in the code. From the groupby object, we will find the largest group, so that we can pad with zeros accordingly
                  

\n\n
gb = df.groupby(['gameId','side']) # Create Groupby object\nmx = gb['side'].size().max() # Find the largest group\n
\n\n

The steps for creating x & y are very similar. We can use list comprehension to loop over each group, convert the dataframes into numpy arrays and pad with zeros using np.pad(). Then reshape each array to be 3d

\n\n
x = np.array([np.pad(frame['timestamp'].values,\n                     pad_width=(0,mx-len(frame)),\n                     mode='constant',\n                     constant_values=0) \n                     for _,frame in gb]).reshape(-1,mx,1)\n\ny = np.array([np.pad(frame['itemId'].values,\n                     pad_width=(0,mx-len(frame)),\n                     mode='constant',\n                     constant_values=0) \n                     for _,frame in gb]).reshape(-1,mx,1)\n
\n\n
\n\n

                  In this example, the setup is for a many-to-many LSTM. In the comments I had pointed out that your current setup would not support a 3d output value, because in the LSTM layer you did not have the argument return_sequences=True.
                  

\n\n

                  It's unclear which structure you are looking for in this problem. I like to consult the following image when deciding which LSTM network I am using. The code above will support a many-to-many network, assuming you add return_sequences=True to your LSTM layer. If you want many-to-one instead, drop .reshape(-1,mx,1) from y, and you have a network with mx outputs.
                  

\n\n

\"enter

\n\n
\n\n

For either setup, you need to modify the input_shape argument for your model. This argument must specify the shape of your 2nd and 3rd dimensions of x i.e.

\n\n
                                                        # v Use input_shape here\nmodel.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2, input_shape=x.shape[1:]))\n
\n", + "system": "" + }, + { + "instruction": "Keras reset layer numbers", + "input": "", + "output": "

The solution, from Attempting to reset tensorflow graph when using keras, failing:

\n\n
from keras import backend as K\nK.clear_session()\n
\n", + "system": "" + }, + { + "instruction": "How do I split an convolutional autoencoder?", + "input": "", + "output": "

                  Ok, I figured this out after a few hours. What worked for me was to:\n1. Create a separate model for the encoder\n2. Create a separate model for the decoder\n3. Create a general model for the autoencoder:
                  

\n\n
                  autoencoder = Model(input, Decoder()(Encoder()(input)))\n
                  
\n\n

The full working code is below:

\n\n
def Encoder():\n    input_img = Input(shape=(28, 28, 1))  # adapt this if using `channels_first` image data format   \n    e1 = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)\n    e2 = MaxPooling2D((2, 2), padding='same')(e1)\n    e3 = Conv2D(8, (3, 3), activation='relu', padding='same')(e2)\n    e4 = MaxPooling2D((2, 2), padding='same')(e3)\n    e5 = Conv2D(8, (3, 3), activation='relu', padding='same')(e4)\n    e6 = MaxPooling2D((2, 2), padding='same')(e5)\n    return Model(input_img, e6)\n\n\ndef Decoder():\n    input_img = Input(shape=(4, 4, 8))  # adapt this if using `channels_first` image data format   \n    d1 = Conv2D(8, (3, 3), activation='relu', padding='same')(input_img)\n    d2 = UpSampling2D((2, 2))(d1)\n    d3 = Conv2D(8, (3, 3), activation='relu', padding='same')(d2)\n    d4 = UpSampling2D((2, 2))(d3)\n    d5 = Conv2D(16, (3, 3), activation='relu')(d4)\n    d6 = UpSampling2D((2, 2))(d5)\n    d7 = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(d6)\n    return Model(input_img, d7)\n\n\n# define input to the model:\nx = Input(shape=(28, 28, 1))\n\n# make the model:\nautoencoder = Model(x, Decoder()(Encoder()(x)))\n\n# compile the model:\nautoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')\n
\n", + "system": "" + }, + { + "instruction": "my Keras model does not predict negative values", + "input": "", + "output": "

Yes, you are constraining the negatives to zeros. The output activation is a ReLU, which does exactly that.

\n\n

The solution is just to change the output activation to one that produces negative numbers, like the tanh. Note that the range of that activation is [-1, 1], so you will have to normalize your output labels to the same range.

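 The rescaling mentioned above is a simple min-max mapping into [-1, 1] and back; a NumPy sketch (the function names are illustrative, not from any library): ```python import numpy as np def to_tanh_range(y, y_min, y_max): # Map labels from [y_min, y_max] into tanh's output range [-1, 1] return 2.0 * (y - y_min) / (y_max - y_min) - 1.0 def from_tanh_range(y_scaled, y_min, y_max): # Invert the mapping after prediction to recover the original scale return (y_scaled + 1.0) * (y_max - y_min) / 2.0 + y_min y = np.array([-5.0, 0.0, 5.0]) # labels that include negative values y_scaled = to_tanh_range(y, -5.0, 5.0) # train against these y_back = from_tanh_range(y_scaled, -5.0, 5.0) # apply to predictions ``` You would fit the model on `y_scaled` and apply `from_tanh_range` to its predictions.   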
\n", + "system": "" + }, + { + "instruction": "LSTM Initial state from Dense layer", + "input": "", + "output": "

The first problem is that an LSTM(8) layer expects two initial states h_0 and c_0, each of dimension (None, 8). That's what it means by \"cell.state_size is (8, 8)\" in the error message.

\n\n

If you only have one initial state dense_2, maybe you can switch to GRU (which requires only h_0). Or, you can transform your feature_input into two initial states.

\n\n

The second problem is that h_0 and c_0 are of shape (batch_size, 8), but your dense_2 is of shape (batch_size, timesteps, 8). You need to deal with the time dimension before using dense_2 as initial states.

\n\n

So maybe you can change your input shape into (data.training_features.shape[1],) or take average over timesteps with GlobalAveragePooling1D.

\n\n

A working example would be:

\n\n
feature_input = Input(shape=(5,))\ndense_1_h = Dense(4, activation='relu')(feature_input)\ndense_2_h = Dense(8, activation='relu')(dense_1_h)\ndense_1_c = Dense(4, activation='relu')(feature_input)\ndense_2_c = Dense(8, activation='relu')(dense_1_c)\n\nseries_input = Input(shape=(None, 5))\nlstm = LSTM(8)(series_input, initial_state=[dense_2_h, dense_2_c])\nout = Dense(1, activation=\"sigmoid\")(lstm)\nmodel = Model(inputs=[feature_input,series_input], outputs=out)\nmodel.compile(loss='mean_squared_error', optimizer='adam', metrics=[\"mape\"])\n
\n", + "system": "" + }, + { + "instruction": "Can not save model using model.save following multi_gpu_model in Keras", + "input": "", + "output": "

To be honest, the easiest approach to this is to actually examine the multi gpu parallel model using

\n\n
 parallel_model.summary()\n
\n\n

(The parallel model is simply the model after applying the multi_gpu function). This clearly highlights the actual model (in I think the penultimate layer - I am not at my computer right now). Then you can use the name of this layer to save the model.

\n\n
 model = parallel_model.get_layer('sequential_1')\n
                  
\n\n

                  Often it's called sequential_1, but if you are using a published architecture, it may be 'googlenet' or 'alexnet'. You will see the name of the layer in the summary.
                  

\n\n

                  Then it's simple to just save
                  

\n\n
 model.save()\n
\n\n

                  Maxim's approach works, but it's overkill I think.
                  

\n\n

                  Note: you will need to compile both the model and the parallel model.
                  

\n", + "system": "" + }, + { + "instruction": "Convert image to array for CNN", + "input": "", + "output": "

For CNN, your input must be a 4-D tensor [batch_size, width, height, channels], so each image is a 3-D sub-tensor. Since your images are gray-scale, channels=1. Also for training all images must be of the same size - WIDTH and HEIGHT.

\n\n

                  skimage.io.imread returns an ndarray, which works perfectly with Keras. So you can read the data like this:
                  

\n\n
                  all_images = []\nfor fname in os.listdir(path):\n  # os.listdir returns bare filenames, so join them with the directory\n  img = io.imread(os.path.join(path, fname), as_grey=True)\n  img = img.reshape([WIDTH, HEIGHT, 1])\n  all_images.append(img)\nx_train = np.array(all_images)\n
                  
\n\n

Not sure how you store the labels, but you'll need to make an array of labels as well. I call it y_train. You can convert it to one-hot like this:

\n\n
y_train = keras.utils.to_categorical(y_train, num_classes)\n
\n\n

                  The model in Keras is pretty straightforward, here's the simplest one (uses relu and cross-entropy):
                  

\n\n
model = Sequential()\nmodel.add(Conv2D(32, kernel_size=(3, 3), activation='relu', \n                 input_shape=[WIDTH, HEIGHT, 1]))\nmodel.add(Conv2D(64, (3, 3), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Flatten())\nmodel.add(Dense(128, activation='relu'))\nmodel.add(Dense(num_classes, activation='softmax'))\n\nmodel.compile(loss=keras.losses.categorical_crossentropy,\n              optimizer=keras.optimizers.Adadelta(),\n              metrics=['accuracy'])\n\nmodel.fit(x_train, y_train, batch_size=100, epochs=10, verbose=1)\n
\n\n

A complete working MNIST example can be found here.

\n", + "system": "" + }, + { + "instruction": "Same code, very different accuracy on windows/ubuntu (Keras/Tensorflow)", + "input": "", + "output": "

                  The issue proved to be in the CSV files, which were originally ported from Windows. Though they were saved in UTF-8 format, I still needed to go to LibreOffice and save them as Linux CSV files.
                  

\n\n

In their initial state, they did not fail to load, but did not properly one-hot encode, leading to all one-hot encodings being 0. Apparently that leads to very high accuracy.

\n", + "system": "" + }, + { + "instruction": "How to provide learning rate value to tensorboard in keras", + "input": "", + "output": "

I'm not sure how to pass it to Tensorboard, but you can monitor it from python.

\n\n
                  from keras.callbacks import Callback\nclass LossHistory(Callback):\n    def on_train_begin(self, logs={}):\n        self.losses = []\n        self.lr = []\n\n    def on_epoch_end(self, epoch, logs={}):\n        self.losses.append(logs.get('loss'))\n        # initial_lr is your starting learning rate; 0.95 ** epoch mirrors your decay schedule\n        self.lr.append(initial_lr * 0.95 ** len(self.losses))\n\nloss_hist = LossHistory()\n
                  
\n\n

Then just add loss_hist to your callbacks.

\n\n

Update:

\n\n

Based on this answer:

\n\n
class LRTensorBoard(TensorBoard):\n\n    def __init__(self, log_dir='./logs', **kwargs):\n        super(LRTensorBoard, self).__init__(log_dir, **kwargs)\n\n        self.lr_log_dir = log_dir\n\n    def set_model(self, model):\n        self.lr_writer = tf.summary.FileWriter(self.lr_log_dir)\n        super(LRTensorBoard, self).set_model(model)\n\n    def on_epoch_end(self, epoch, logs=None):\n        lr = initial_lr * 0.95 ** epoch\n\n        summary = tf.Summary(value=[tf.Summary.Value(tag='lr',\n                                                     simple_value=lr)])\n        self.lr_writer.add_summary(summary, epoch)\n        self.lr_writer.flush()\n\n        super(LRTensorBoard, self).on_epoch_end(epoch, logs)\n\n    def on_train_end(self, logs=None):\n        super(LRTensorBoard, self).on_train_end(logs)\n        self.lr_writer.close()\n
\n\n

Just use it like the normal TensorBoard.

\n", + "system": "" + }, + { + "instruction": "Is it really that difficult to return a scalar dot product in Keras (tf backend)?", + "input": "", + "output": "

                  Use K.get_value(x) to get the scalar value of a tensor.
                  

\n\n

tf.keras.backend.get_value

\n", + "system": "" + }, + { + "instruction": "More than one prediction in multi-classification in Keras?", + "input": "", + "output": "

You are not doing multi-class classification, but multi-label. This is caused by the use of a sigmoid activation at the output layer. To do multi-class classification properly, use a softmax activation at the output, which will produce a probability distribution over classes.\nTaking the class with the biggest probability (argmax) will produce a single class prediction, as expected.

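 The difference is easy to see numerically: a softmax turns the raw scores into a distribution that sums to 1, and argmax then picks exactly one class (a NumPy sketch of the math, not Keras code): ```python import numpy as np def softmax(logits): # Subtract the max before exponentiating for numerical stability e = np.exp(logits - np.max(logits)) return e / e.sum() logits = np.array([2.0, 1.0, 0.1]) # raw scores for 3 classes probs = softmax(logits) # a proper probability distribution predicted_class = int(np.argmax(probs)) # exactly one class wins ``` With independent sigmoids there is no such normalization, so several classes can score high at once, which is why you were seeing more than one "prediction".   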
\n", + "system": "" + }, + { + "instruction": "Train Multi-Input Keras NN with batch of training data", + "input": "", + "output": "

 The problem is just incorrect formatting. You can't pass a plain nested list to Keras, only numpy arrays, so when you have your data structured like 

\n\n
 inputs = [[np.array([[0.1, 0.2]]), np.array([[0.3, 0.5]]), np.array([[0.1, 0.3]])],\n                     [np.array([[0.2, 0.1]]), np.array([[0.5, 0.3]]), np.array([[0.3, 0.1]])]\n                         ]\n
\n\n

 You need to pass one list element into your model at a time. You will also need to pass one output value to the model at a time. To do this, structure your outputs like this 

\n\n
outputs = [np.ones(1) for x in inputs]\n\n[array([ 1.]), array([ 1.])]\n
\n\n

 Then you can loop over the fit function like this 

\n\n
 for z in range(0,len(inputs)):\n     model.fit(inputs[z],outputs[z],batch_size=1)\n
\n\n

 You can also use model.train_on_batch() instead of model.fit; see the docs 

\n\n

 However, to avoid the loop, you could just have 3 numpy arrays stored in your inputs list and have your outputs as a single numpy array. If you only want to train on a single batch at a time, you could set your batch size to do that. 

\n\n
inputs = [np.array([[0.1, 0.2],[0.2, 0.1]]), np.array([[0.3, 0.5],[0.5, 0.3]]), np.array([[0.1, 0.3],[0.3, 0.1]])]\n\noutputs = np.ones(inputs[0].shape[0])\n\nmodel.fit(inputs,outputs,batch_size=1)\n
\n", + "system": "" + }, + { + "instruction": "Force positive weight keras", + "input": "", + "output": "

 I found the answer! 

\n\n

In Keras 1.2.2 :

\n\n
 from keras.constraints import maxnorm, nonneg\n\n x = Dense(1, bias=0, W_regularizer=regularizers.l1(0.01),W_constraint=nonneg())(input_sequences)\n
\n", + "system": "" + }, + { + "instruction": "Is it possible to have non-trainable layer in Keras?", + "input": "", + "output": "

You should be able to pass a trainable = False argument to your layer definition, or set the layer.trainable = False property after creating your layer. In the latter case you need to compile after the fact. See the FAQ here.

\n\n

You can then set constant weights for the layer by passing a kernel_initializer = initializer argument. More information on initializers can be found here. If you have a weight matrix already defined somewhere, I think you will need to define a custom initializer that sets the weights to your desired values. The link shows how to define custom initializers at the bottom. Something as simple as the following might work, assuming you have my_constant_weight_matrix defined:

\n\n
def my_init(shape, dtype=None):\n    # Note it must take arguments 'shape' and 'dtype'.\n    return my_constant_weight_matrix\nmodel.add(Conv2D(..., kernel_initializer=my_init))  # replace '...' with your args\n
\n\n

That said, I have not verified, and when I did a Google search I saw a lot of bug reports pop up about layer freezing not working correctly. Worth a shot though.

\n", + "system": "" + }, + { + "instruction": "Stateful LSTM fails to predict due to batch_size issue", + "input": "", + "output": "

I suspect that the reason for the error is that you did not specify the batch size in model.predict. As you can see in the documentation in the \"predict\" section, the default parameters are

\n\n
model.predict(self, x, batch_size=32, verbose=0)\n
\n\n

which is why 32 appears in your error message. So you need to specify batch_size=60 in model.predict.

\n", + "system": "" + }, + { + "instruction": "Keras get_weight interpretation for RNNs", + "input": "", + "output": "

 In the SimpleRNN implementation there are indeed 3 sets of weights needed. 

\n\n

weights[0] is the input matrix. It transforms the input and therefore has a shape [input_dim, output_dim]

\n\n

 weights[1] is the recurrent matrix. It transforms the recurrent state and has a shape [output_dim, output_dim] 

\n\n

weights[2] is the bias matrix. It is added to the output and has a shape [output_dim]

\n\n

 The results of the three operations are summed and then go through an activation layer. 

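 A single SimpleRNN step with those three weight sets can be sketched in NumPy (the names W, U, b are illustrative; tanh is Keras's default activation for SimpleRNN): ```python import numpy as np input_dim, output_dim = 3, 4 rng = np.random.default_rng(0) W = rng.normal(size=(input_dim, output_dim)) # weights[0]: input matrix U = rng.normal(size=(output_dim, output_dim)) # weights[1]: recurrent matrix b = np.zeros(output_dim) # weights[2]: bias x_t = rng.normal(size=(input_dim,)) # input at time t h_prev = np.zeros(output_dim) # previous recurrent state # the three results are summed, then passed through the activation h_t = np.tanh(x_t @ W + h_prev @ U + b) ``` The shapes line up exactly as described: `(input_dim,) @ (input_dim, output_dim)` and `(output_dim,) @ (output_dim, output_dim)` both give `(output_dim,)` vectors.   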
\n\n

 I hope this is clearer now. 

\n", + "system": "" + }, + { + "instruction": "Replicate MLPClassifier() of sklearn in keras", + "input": "", + "output": "

To get a bona fide scikit estimator you can use KerasClassifier from tensorflow.keras.wrappers.scikit_learn. For example:

\n\n
from sklearn.datasets import make_classification\nfrom tensorflow import keras\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.wrappers.scikit_learn import KerasClassifier\n\n\nX, y = make_classification(\n    n_samples=26000, n_features=5, n_classes=4, n_informative=3, random_state=0\n)\n\n\ndef build_fn(optimizer):\n    model = Sequential()\n    model.add(\n        Dense(200, input_dim=5, kernel_initializer=\"he_normal\", activation=\"relu\")\n    )\n    model.add(Dense(100, kernel_initializer=\"he_normal\", activation=\"relu\"))\n    model.add(Dense(100, kernel_initializer=\"he_normal\", activation=\"relu\"))\n    model.add(Dense(100, kernel_initializer=\"he_normal\", activation=\"relu\"))\n    model.add(Dense(4, kernel_initializer=\"he_normal\", activation=\"softmax\"))\n    model.compile(\n        loss=\"categorical_crossentropy\",\n        optimizer=optimizer,\n        metrics=[\n            keras.metrics.Precision(name=\"precision\"),\n            keras.metrics.Recall(name=\"recall\"),\n            keras.metrics.AUC(name=\"auc\"),\n        ],\n    )\n    return model\n\n\nclf = KerasClassifier(build_fn, optimizer=\"rmsprop\", epochs=500, batch_size=300)\nclf.fit(X, y)\nclf.predict(X)\n\n
\n", + "system": "" + }, + { + "instruction": "How can I identify the images with 'Possibly corrupt EXIF data'", + "input": "", + "output": "

Edit: To raise Warnings as errors which you can catch, take a look at Justas comment below.

\n
\n

 Even though this question is over a year old, I want to show my solution because I ran into the same problem. 

\n

 I edited the warning code inside PIL. The warning output shows where to find the file on your system and also the line number.\nFor example, I changed the following: 

\n
if len(data) != size:\n    warnings.warn("Possibly corrupt EXIF data.  "\n                  "Expecting to read %d bytes but only got %d."\n                  " Skipping tag %s" % (size, len(data), tag))\n    continue\n
\n

to

\n
if len(data) != size:\n    raise ValueError('Corrupt Exif data')\n    warnings.warn("Possibly corrupt EXIF data.  "\n                  "Expecting to read %d bytes but only got %d."\n                  " Skipping tag %s" % (size, len(data), tag))\n    continue\n
\n

 My code to catch the ValueError is shown below. The advantage is that PIL is interrupted and does not show a useless message. You can also catch the error and use it, e.g. to delete the corresponding file in the 'except' part. 

\n
 import os\nfrom PIL import Image\n\nimageFolder = "/Path/To/Image/Folder"  # the path must be a quoted string\nlistImages = os.listdir(imageFolder)\n\nfor img in listImages:\n    imgPath = os.path.join(imageFolder, img)\n\n    try:\n        img = Image.open(imgPath)\n        exif_data = img._getexif()\n    except ValueError as err:\n        print(err)\n        print("Error on image: ", imgPath)\n
                  
\n

                  I know adding the ValueError part is quick and dirty, but it's better than being confronted with all the useless warning messages.
                  

\n", + "system": "" + }, + { + "instruction": "What is the meaning of the implementation of the KL divergence in Keras?", + "input": "", + "output": "

Kullback-Leibler divergence is a measure of similarity between two probability distributions. The KL divergence implemented in Keras assumes two discrete probability distributions (hence the sum).

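 The discrete KL divergence referred to here is D_KL(P||Q) = Σ p_i · log(p_i / q_i), computed over clipped probabilities to avoid log(0); a NumPy sketch of that sum (not the Keras source itself): ```python import numpy as np def kl_divergence(p, q, eps=1e-7): # Clip both distributions away from 0, as Keras does with K.clip p = np.clip(p, eps, 1.0) q = np.clip(q, eps, 1.0) return np.sum(p * np.log(p / q)) p = np.array([0.5, 0.5]) same = kl_divergence(p, p) # identical distributions diff = kl_divergence(p, np.array([0.9, 0.1])) # different distributions ``` KL is zero only when the two distributions match, and positive otherwise; note it is not symmetric in its arguments.   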
\n\n

 The exact format of your KL loss function depends on the underlying probability distributions. A common use case is that the neural network models the parameters of a probability distribution P (e.g. a Gaussian), and the KL divergence is then used in the loss function to determine the similarity between the modelled distribution and some other, known distribution (potentially Gaussian as well). E.g. a network outputs two vectors mu and sigma^2. Mu forms the mean of a Gaussian distribution P while sigma^2 is the diagonal of the covariance matrix Sigma. A possible loss function is then the KL divergence between the Gaussian P described by mu and Sigma, and a unit Gaussian N(0, I). The exact form of the KL divergence in that case can be derived analytically, yielding a custom Keras loss function that is not at all equal to the KL divergence implemented in Keras. 

\n\n

In the original paper that introduces Variational Auto-Encoders, the loss function is summed over the samples in the minibatch and then multiplied by a factor (N/M), where N is the size of the entire dataset, and M is the size of the minibatch. See equations 8 and 10 in https://arxiv.org/abs/1312.6114.

\n", + "system": "" + }, + { + "instruction": "ValueError: Input 0 is incompatible with layer conv_1: expected ndim=3, found ndim=4", + "input": "", + "output": "

 The documentation mentions that we need to specify the input in a specific format, which is (None, NumberOfFeatureVectors). In your case it will be (None, 4) 

\n\n

https://keras.io/layers/convolutional/

\n\n
\n

When using this layer as the first layer in a model, provide an\n input_shape argument (tuple of integers or None, e.g. (10, 128) for\n sequences of 10 vectors of 128-dimensional vectors, or (None, 128) for\n variable-length sequences of 128-dimensional vectors.

\n
\n", + "system": "" + }, + { + "instruction": "VGG, perceptual loss in keras", + "input": "", + "output": "

The usual way of doing that is appending your VGG to the end of your model, making sure all its layers have trainable=False before compiling.

\n\n

Then you recalculate your Y_train.

\n\n

Suppose you have these models:

\n\n
mainModel - the one you want to apply a loss function    \nlossModel - the one that is part of the loss function you want   \n
\n\n

Create a new model appending one to another:

\n\n
from keras.models import Model\n\nlossOut = lossModel(mainModel.output) #you pass the output of one model to the other\n\nfullModel = Model(mainModel.input,lossOut) #you create a model for training following a certain path in the graph. \n
\n\n

 This model will have the exact same weights as mainModel and lossModel, and training this model will affect the other models. 

\n\n

Make sure lossModel is not trainable before compiling:

\n\n
lossModel.trainable = False\nfor l in lossModel.layers:\n    l.trainable = False\n\nfullModel.compile(loss='mse',optimizer=....)\n
\n\n

Now adjust your data for training:

\n\n
fullYTrain = lossModel.predict(originalYTrain)\n
\n\n

And finally do the training:

\n\n
fullModel.fit(xTrain, fullYTrain, ....)\n
\n", + "system": "" + }, + { + "instruction": "How to calculate vector-wise dot product in Keras?", + "input": "", + "output": "

You can try something like:

\n\n
import keras.backend as K\n\nC = K.sum(A * B,axis=-1,keepdims=True)\n
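 The same operation in NumPy, for two batches of vectors where row i of C is the dot product of row i of A with row i of B: ```python import numpy as np A = np.array([[1.0, 2.0], [3.0, 4.0]]) B = np.array([[5.0, 6.0], [7.0, 8.0]]) # Elementwise product, then sum over the last axis; # keepdims preserves the 2-D shape (2, 1) instead of (2,) C = np.sum(A * B, axis=-1, keepdims=True) ``` Here `C` is `[[17.], [53.]]`: 1·5 + 2·6 = 17 and 3·7 + 4·8 = 53, one scalar dot product per row.   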
\n", + "system": "" + }, + { + "instruction": "What does Dense do?", + "input": "", + "output": "

Dense is the only actual network layer in that model.

\n\n

A Dense layer feeds all outputs from the previous layer to all its neurons, each neuron providing one output to the next layer.

\n\n

It's the most basic layer in neural networks.

\n\n

A Dense(10) has ten neurons. A Dense(512) has 512 neurons.

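 One way to see the size of such a layer: a Dense layer with n inputs and m neurons holds an n×m weight matrix plus m biases. A quick sanity check in plain Python (the helper name is illustrative): ```python def dense_param_count(input_dim, units): # Each of the `units` neurons has one weight per input, plus one bias return input_dim * units + units # e.g. a flattened 28x28 image (784 values) feeding Dense(512) params = dense_param_count(784, 512) ``` This matches the parameter counts that `model.summary()` reports for Dense layers.   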
\n", + "system": "" + }, + { + "instruction": "How to train a reverse embedding, like vec2word?", + "input": "", + "output": "

 One of the easiest things you can do is use the nearest-neighbor word. Given a query feature fq of an unknown word and a reference feature set of known words R={fr}, you can find the nearest fr* to fq and use the word corresponding to fr* as fq's word. 

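 A minimal NumPy sketch of this nearest-neighbor lookup (Euclidean distance; the vocabulary and vectors here are made up for illustration): ```python import numpy as np words = ["king", "queen", "apple"] R = np.array([[1.0, 0.0], # reference embedding fr for each known word [0.9, 0.1], [0.0, 1.0]]) f_q = np.array([0.98, 0.02]) # query embedding fq of the unknown word dists = np.linalg.norm(R - f_q, axis=1) # distance from fq to every fr nearest_word = words[int(np.argmin(dists))] # the fr* word ``` Cosine distance is often preferred over Euclidean for word embeddings, but the lookup structure is the same.   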
\n", + "system": "" + }, + { + "instruction": "keras usage of the Activation layer instead of activation parameter", + "input": "", + "output": "

As you may see, both approaches are equivalent. I will show you a few scenarios in which having this layer might help:

\n\n
    \n
  1. Same layer - different activations- one may easily imagine a net where you want to have different activations applied to the same output. Without Activation it's impossible.
  2. \n
  3. Need for output before activation - e.g. in siamese networks you are training your network using softmax as the final activation - but in the end you want to have the so-called logits, i.e. the values fed into that softmax. Without an additional Activation layer that could be difficult.
  4. \n
  5. Saliency maps: in saliency maps - similar to the previous point - you also need the output before activation in order to compute a gradient w.r.t. it - without Activation it wouldn't be possible.
  6. \n
\n\n

As you may see - a lack of Activation would make the output of a layer before activation and the final activation strongly coupled. That's why Activation might be pretty useful - it breaks these ties.

\n", + "system": "" + }, + { + "instruction": "Text classification using Keras: How to add custom features?", + "input": "", + "output": "

Try:

\n\n
input = Input(shape=(params.maxlen,))\nembedding = Embedding(params.nb_words,\n                    params.embedding_dims,\n                    weights=[embedding_matrix],\n                    input_length=params.maxlen,\n                    trainable=params.trainable)(input)\nconv = Convolution1D(nb_filter=params.nb_filter,\n                        filter_length=params.filter_length,\n                        border_mode='valid',\n                        activation='relu')(embedding)\ndrop = Dropout(params.dropout_rate)(conv)\nseq_features = GlobalMaxPooling1D()(drop)\n\n# Adding hand-picked features\nnb_features = len(features[0][0])\nother_features = Input(shape=(nb_features,))\n\nmodel_final = merge([seq_features , other_features], mode='concat')\n\nmodel_final = Dense(len(citfunc.funcs), activation='softmax')(model_final)\n\nmodel_final = Model([input, other_features], model_final)\n\nmodel_final.compile(loss='categorical_crossentropy',\n                    optimizer='adam',\n                    metrics=['accuracy'])\n
\n\n

In this case - you are merging features from the sequence analysis with the custom features directly - without first squashing all custom features to one feature using Dense.

\n", + "system": "" + }, + { + "instruction": "Add dense layer before LSTM layer in keras or Tensorflow?", + "input": "", + "output": "

The Dense layer can take sequences as input, and it will apply the same dense transformation to every vector (along the last dimension). Example:

\n\n

You have a 2D tensor input that represents a sequence (timesteps, dim_features), if you apply a dense layer to it with new_dim outputs, the tensor that you will have after the layer will be a new sequence (timesteps, new_dim)

\n\n

If you have a 3D tensor (n_lines, n_words, embedding_dim) that can be a document, with n_lines lines, n_words words per lines and embedding_dim dimensions for each word, applying a dense layer to it with new_dim outputs will get you a new doc tensor (3D) with shape (n_lines, n_words, new_dim)

\n\n

You can see here the input and output dimensions that you can feed and get with the Dense() layer.

\n", + "system": "" + }, + { + "instruction": "Keras jupyter notebook outputs blocks during training", + "input": "", + "output": "

see this issue on github

\n\n

In short, there are 4 options:

\n\n\n\n

I hope this helps.

\n", + "system": "" + }, + { + "instruction": "Multi-Output Multi-Class Keras Model", + "input": "", + "output": "

Generally, multi-class problems correspond with models outputting a probability distribution over the set of classes (that is typically scored against the one-hot encoding of the actual class through cross-entropy). Now, independently of whether you are structuring it as one single output, two outputs, 49 outputs or 49 x 2 = 98 outputs, that would mean having 1,200 x 49 + 1,300 x 49 = 122,500 output units - not impossible for a computer to handle, but maybe not the most convenient thing to have. You could try having each class output be a single (e.g. linear) unit and rounding its value to choose the label, but, unless the labels have some numerical meaning (e.g. order, sizes, etc.), that is not likely to work.

\n\n

If the order of the elements in the input has some meaning (that is, shuffling it would affect the output), I think I'd approach the problem through an RNN, like an LSTM or a bidirectional LSTM model, with two outputs. Use return_sequences=True and TimeDistributed Dense softmax layers for the outputs, and for each 118-long input you'd have 118 pairs of outputs; then you can just use temporal sample weighting to drop, for example, the first 69 (or maybe do something like dropping the 35 first and the 34 last if you're using a bidirectional model) and compute the loss with the remaining 49 pairs of labellings. Or, if that makes sense for your data (maybe it doesn't), you could go with something more advanced like CTC, which is also implemented in Keras (thanks @indraforyou)!

\n\n
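<p>A minimal sketch of that first suggestion (the LSTM width of 32 is made up, the two heads stand for the 1,200- and 1,300-class outputs, and the temporal sample weighting is omitted):</p>

```python
import numpy as np
from tensorflow.keras import layers, models

inp = layers.Input(shape=(118, 1))  # the 118-long input, one feature per step
h = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(inp)
# Two per-timestep softmax heads, one per label set:
out1 = layers.TimeDistributed(layers.Dense(1200, activation="softmax"))(h)
out2 = layers.TimeDistributed(layers.Dense(1300, activation="softmax"))(h)
model = models.Model(inp, [out1, out2])

o1, o2 = model(np.zeros((2, 118, 1), dtype="float32"))
print(o1.shape, o2.shape)  # (2, 118, 1200) (2, 118, 1300)
```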

If the order in the input has no meaning but the order of the outputs does, then you could have an RNN where your input is the original 118-long vector plus a pair of labels (each one-hot encoded), and the output is again a pair of labels (again two softmax layers). The idea would be that you get one \"row\" of the 49x2 output on each frame, and then you feed it back to the network along with the initial input to get the next one; at training time, you would have the input repeated 49 times along with the \"previous\" label (an empty label for the first one).

\n\n

If there are no sequential relationships to exploit (i.e. the order of the input and the output do not have a special meaning), then the problem would only be truly represented by the initial 122,500 output units (plus all the hidden units you may need to get those right). You could also try some kind of middle ground between a regular network and a RNN where you have the two softmax outputs and, along with the 118-long vector, you include the \"id\" of the output that you want (e.g. as a 49-long one-hot encoded vector); if the \"meaning\" of each label at each of the 49 outputs is similar, or comparable, it may work.

\n", + "system": "" + }, + { + "instruction": "Issue with setting TensorFlow as the session in Keras", + "input": "", + "output": "

Requires .compat.v1. after the tf

\n\n

Example:

\n\n
tf.compat.v1.keras.backend.set_session(tf.compat.v1.Session(config=config));\n
\n\n

(Tested in Anaconda, Python 3.7, tensorflow 2.0.0)

\n", + "system": "" + }, + { + "instruction": "How exactly Convolution2D layer works in Keras?", + "input": "", + "output": "

Since your input image shape is (266, 266, 3)[tf]/(3, 266, 266)[th], the filter number is 64, the kernel size is 3x3, and I think the default padding is 1 and the default stride is 1.

\n\n

So, the output is 266x266x64.

\n\n
output_width = output_height = (width - filter + 2*padding)/stride + 1\n
\n\n

in your code, the width=266, the filter=3, the padding=1, and stride=1.

\n\n
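<p>Plugging the numbers into that formula (a small sketch of the arithmetic):</p>

```python
def conv_output_size(width, kernel, padding, stride):
    # output = (width - filter + 2*padding) / stride + 1
    return (width - kernel + 2 * padding) // stride + 1

print(conv_output_size(266, 3, 1, 1))  # 266 -> output is 266x266x64 with 64 filters
```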

If you have any trouble understanding the basic concepts, I think you can read this cs231n post for more information.

\n\n

For how to understand the process of convolution, click here.

\n", + "system": "" + }, + { + "instruction": "How to install TensorFlow in jupyter notebook on Azure Machine Learning Studio", + "input": "", + "output": "

As you noticed, the active user doesn't have permissions to write to the site-packages directory in Azure Machine Learning Studio notebooks. You could try installing the package to another directory where you do have write permissions (like the default working directory) and importing from there, but I recommend the following lower-hassle option.

\n\n

Azure Notebooks is a separate Jupyter Notebook service that will allow you to install tensorflow, theano, and keras. Like the notebooks in AML Studio, these notebooks will persist in your account. The primary downside is that if you want to access your workspace through e.g. the Python azureml package, you'll need to provide your workspace id/authorization token to set up the connection. (In Azure ML Studio, those values are loaded automatically from the current workspace.) Otherwise, I believe Azure Notebooks can do everything you are used to doing inside AML Studio.

\n", + "system": "" + }, + { + "instruction": "Why neural network tends to output 'mean value'?", + "input": "", + "output": "

Possibly because the data are unpredictable....? Do you know for certain that the data set has N order predictability of some kind?

\n\n

Just eyeballing your data set: it lacks periodicity, it lacks homoscedasticity, and it lacks any slope, skew, trend or pattern - so I can't really tell if there is anything wrong with your net. In the absence of any pattern, the mean is always the best prediction... and it is entirely possible (although not certain) that the neural net is doing its job.

\n\n

I suggest you find an easier data set, and see if you can tackle that first.

\n", + "system": "" + }, + { + "instruction": "keras predict is very slow", + "input": "", + "output": "

It seems the size of your NN is much too small to fully utilize the GPU. Typically a GPU is faster than a multi-core CPU only when the input/hidden/output layer sizes are larger than 200~500 (depending on the implementation code).

\n\n

However, the size of your NN is only 8/5/1, which means most of the time is spent on GPU overhead such as CUDA kernel launching, PCIe data transfer, etc. In this case, the number of calls is the main factor that determines the training time. To speed up, you probably need to train your model on the CPU, and with a programming language such as C/C++ that has much lower overhead.

\n", + "system": "" + }, + { + "instruction": "Do programs continue execution after connection is lost on google-colaboratory?", + "input": "", + "output": "

Normally, the notebook connection expires after 12 hours, and all the data loaded into Colab expires with the session.

\n\n

If you run a process and the notebook gets disconnected before the 12 hours are up, the code keeps running in the background.

\n\n

So, if you try to reconnect the notebook, it gets stuck on \"initializing\" until the code finishes or the session expires (whichever occurs first). After that, the connection is established again.

\n", + "system": "" + }, + { + "instruction": "Pre-Trained models for text Classification", + "input": "", + "output": "

The task we are talking about is called Zero-Shot Topic Classification - predicting a topic that the model has not been trained on. This paradigm is supported by the Hugging Face library; you can read more here. The most common pre-trained model is Bart Large MNLI - the checkpoint for bart-large after being trained on the MNLI dataset.\nHere is a simple example, showing the classification of the phrase &quot;I like hot dogs&quot; without any preliminary training:

\n
    \n
  1. First of all, please install the transformers library:

    \n
    pip install --upgrade transformers\n
    \n
  2. \n
  3. Then import and initialize the pipeline:

    \n
    from transformers import pipeline\n\nclassifier = pipeline('zero-shot-classification', model='facebook/bart-large-mnli')\n
    \n
  4. \n
  5. Enter our toy dataset:

    \n
     labels = ["artifacts", "animals", "food", "birds"]\n hypothesis_template = 'This text is about {}.'\n sequence = "I like hot dogs"\n
    \n
  6. \n
  7. Predict the label:

    \n
    prediction = classifier(sequence, labels, hypothesis_template=hypothesis_template, multi_class=True)\n\nprint(prediction)\n
    \n
  8. \n
\n

The output will be something like

\n
`{'sequence': 'i like hot dogs', \n'labels': ['food', 'animals', 'artifacts', 'birds'], \n'scores': [0.9971900582313538, 0.00529429130256176, 0.0020991512574255466, \n0.00023589911870658398]}`\n
\n

It can be interpreted, that the model assigns the highest probability (0.997..) to the label 'food', which is the correct answer.

\n", + "system": "" + }, + { + "instruction": "How can I add tf.keras.layers.AdditiveAttention in my model?", + "input": "", + "output": "

The following, from the previous link, should help you:

\n

How to build a attention model with keras?

\n
context_vector, attention_weights = Attention(32)(lstm, state_h)\n
\n

or

\n

This is how to use Luong-style attention:

\n
attention = tf.keras.layers.Attention()([query, value])\n
\n

And Bahdanau-style attention :

\n
attention = tf.keras.layers.AdditiveAttention()([query, value])\n
\n

The adapted version:

\n
weights = tf.keras.layers.Attention()([lstm, state_h])\n
\n", + "system": "" + }, + { + "instruction": "UnidentifiedImageError: cannot identify image file", + "input": "", + "output": "

Try this function to check whether the images are all in a correct format.

\n
import os\nfrom PIL import Image\nfolder_path = 'data\\img'\nextensions = []\nfor fldr in os.listdir(folder_path):\n    sub_folder_path = os.path.join(folder_path, fldr)\n    for filee in os.listdir(sub_folder_path):\n        file_path = os.path.join(sub_folder_path, filee)\n        print('** Path: {}  **'.format(file_path), end="\\r", flush=True)\n        im = Image.open(file_path)\n        rgb_im = im.convert('RGB')\n        if filee.split('.')[1] not in extensions:\n            extensions.append(filee.split('.')[1])\n    \n
\n", + "system": "" + }, + { + "instruction": "Add additional layers to the Huggingface transformers", + "input": "", + "output": "

It looks like pooler_output is a Roberta and Bert specific output.

\n

But instead of using pooler_output we can use a few hidden_states (so, not only the last hidden state) with all models; we want to use them because papers report that hidden_states can give more accuracy than the last_hidden_state alone.

\n
# Import the needed model(Bert, Roberta or DistilBert) with output_hidden_states=True\ntransformer_model = TFBertForSequenceClassification.from_pretrained('bert-large-cased', output_hidden_states=True)\n\ninput_ids = tf.keras.Input(shape=(128, ),dtype='int32')\nattention_mask = tf.keras.Input(shape=(128, ), dtype='int32')\n\ntransformer = transformer_model([input_ids, attention_mask])    \nhidden_states = transformer[1] # get output_hidden_states\n\nhidden_states_size = 4 # count of the last states \nhiddes_states_ind = list(range(-hidden_states_size, 0, 1))\n\nselected_hiddes_states = tf.keras.layers.concatenate(tuple([hidden_states[i] for i in hiddes_states_ind]))\n\n# Now we can use selected_hiddes_states as we want\noutput = tf.keras.layers.Dense(128, activation='relu')(selected_hiddes_states)\noutput = tf.keras.layers.Dense(1, activation='sigmoid')(output)\nmodel = tf.keras.models.Model(inputs = [input_ids, attention_mask], outputs = output)\nmodel.compile(tf.keras.optimizers.Adam(lr=1e-4), loss='binary_crossentropy', metrics=['accuracy'])\n
\n", + "system": "" + }, + { + "instruction": "cannot import name 'Input' from 'tensorflow.keras.models'", + "input": "", + "output": "

It should be from tensorflow.keras.layers import Input.

\n", + "system": "" + }, + { + "instruction": "Tensorflow Removing JFIF", + "input": "", + "output": "

Wikipedia explains that JPG files contain the string \"JFIF\" at the beginning of the file, encoded as bytes:

\n\n

\"JFIF

\n\n

So:

\n\n\n", + "system": "" + }, + { + "instruction": "How to create a Keras layer to do a 4D convolutions (Conv4D)?", + "input": "", + "output": "
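<p>A small helper along those lines (a sketch: peek(10) reads the first bytes without consuming them, and the JFIF marker sits at byte offset 6 of a JFIF file):</p>

```python
import tensorflow as tf

def is_jfif(path):
    """Return True if the file's header contains the JFIF marker."""
    with open(path, "rb") as fobj:
        return tf.compat.as_bytes("JFIF") in fobj.peek(10)
```

<p>Typical use is to delete or skip any file for which this returns False before training.</p>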

Super cool question.

\n\n

This needs a custom layer (with trainable parameters).
\nThe following accepts any number of dimensions, you control that via kernel_size.

\n\n
class Conv(Layer):\n    def __init__(self, filters, kernel_size, padding='VALID', **kwargs):\n        self.filters = filters\n        self.kernel_size = kernel_size #must be a tuple!!!!\n        self.padding=padding\n\n        super(Conv, self).__init__(**kwargs)\n\n    #using channels last!!!\n    def build(self, input_shape):\n        spatialDims = len(self.kernel_size)\n        allDims = len(input_shape)\n        assert allDims == spatialDims + 2 #spatial dimensions + batch size + channels\n\n        kernelShape = self.kernel_size + (input_shape[-1], self.filters)\n            #(spatial1, spatial2,...., spatialN, input_channels, output_channels)\n\n        biasShape = tuple(1 for _ in range(allDims-1)) + (self.filters,)\n\n\n        self.kernel = self.add_weight(name='kernel', \n                                      shape=kernelShape,\n                                      initializer='uniform',\n                                      trainable=True)\n        self.bias = self.add_weight(name='bias', \n                                    shape = biasShape, \n                                    initializer='zeros',\n                                    trainable=True)\n        self.built = True\n\n    def call(self, inputs):\n        results = tf.nn.convolution(inputs, self.kernel, padding=self.padding)\n        return results + self.bias\n\n    def compute_output_shape(self, input_shape):\n        sizes = input_shape[1:-1]\n\n        if self.padding == 'VALID' or self.padding == 'valid':\n            sizes = [s - kSize + 1 for s, kSize in zip(sizes, self.kernel_size)]\n\n        return input_shape[:1] + tuple(sizes) + (self.filters,)\n
\n", + "system": "" + }, + { + "instruction": "Google Colab - Your session crashed for an unknown reason", + "input": "", + "output": "

Google Colab is crashing because you are trying to run GPU-related code with the runtime set to CPU.

\n\n

The execution succeeds if you change the runtime to GPU. The steps for doing so are below:

\n\n

Runtime -> Change Runtime -> GPU (Select from dropdown).

\n\n

Please find the Working code in Github Gist.

\n", + "system": "" + }, + { + "instruction": "How to fit and evaluate tf.data.dataset with tf.keras at the end of every epoch?", + "input": "", + "output": "
X = np.array(X)\ny = np.array(y)\nmodel.fit(X, y, batch_size=32, epochs=3, validation_split=0.1)\n
\n

Just convert the

\n
\n

tf.data.Dataset (Mine were Python Lists)

\n
\n

to

\n
\n

Numpy.array

\n
\n

and it will work Great

\n", + "system": "" + }, + { + "instruction": "Error when running tensorflow in virtualenv: module 'tensorflow' has no attribute 'truncated_normal'", + "input": "", + "output": "

In TensorFlow v2.0 and above, &quot;tf.truncated_normal&quot; is replaced with &quot;tf.random.truncated_normal&quot;.

\n", + "system": "" + }, + { + "instruction": "how to add custom Keras model in OpenCv in python", + "input": "", + "output": "

You can do that with the help of OpenCV DNN module:

\n\n
import cv2\n\n# Load a model imported from Tensorflow\ntensorflowNet = cv2.dnn.readNetFromTensorflow('card_graph/frozen_inference_graph.pb', 'exported_pbtxt/output.pbtxt')\n\n# Input image\nimg = cv2.imread('image.jpg')\nrows, cols, channels = img.shape\n\n# Use the given image as input, which needs to be blob(s).\ntensorflowNet.setInput(cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))\n\n# Runs a forward pass to compute the net output\nnetworkOutput = tensorflowNet.forward()\n\n# Loop on the outputs\nfor detection in networkOutput[0,0]:\n\n    score = float(detection[2])\n    if score > 0.9:\n\n        left = detection[3] * cols\n        top = detection[4] * rows\n        right = detection[5] * cols\n        bottom = detection[6] * rows\n\n        #draw a red rectangle around detected objects\n        cv2.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (0, 0, 255), thickness=2)\n\n# Show the image with a rectagle surrounding the detected objects \ncv2.imshow('Image', img)\ncv2.waitKey()\ncv2.destroyAllWindows()\n
\n\n

You need a frozen inference graph and a pbtxt file to run your model in OpenCV.

\n", + "system": "" + }, + { + "instruction": "How to deploy cnn file", + "input": "", + "output": "

Concerning the first error, I think the problem is that the Flask app tries to load the complete model (i.e. with configuration):

\n\n
 model = load_model(MODEL_PATH)\n
\n\n

whereas after the training you save only weights:

\n\n
cnn.save_weights('cnn.h5')\n
\n\n

Try to use cnn.save('cnn.h5') instead.

\n\n

In the second case, your model definition does not match the trained model. Indeed, it is a completely different model with no Convolution layers at all. The corresponding model definition would be:

\n\n
def build_model():\n    model = Sequential()\n\n    model.add(Conv2D(filters=32, \n           kernel_size=(2,2), \n           strides=(1,1),\n           padding='same',\n           input_shape=(IMG_SIZE,IMG_SIZE,NB_CHANNELS),\n           data_format='channels_last'))\n\n\n    model.add(Activation('relu'))\n    model.add(MaxPooling2D(pool_size=(2,2),\n                           strides=2))\n\n    model.add(Dropout(0.4))\n\n    model.add(Conv2D(filters=64,\n                     kernel_size=(2,2),\n                     strides=(1,1),\n                     padding='valid'))\n    model.add(Activation('relu'))\n    model.add(MaxPooling2D(pool_size=(2,2),\n                           strides=2))\n\n    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])\n    return model\n
\n", + "system": "" + }, + { + "instruction": "How to convert tf.Keras model to TPU using TensorFlow 2.0 in Google Colab?", + "input": "", + "output": "

Colab with TensorFlow 2.2 (Updated Mar 2020)

\n\n

It works after I fixed this issue; there's also a Colab Notebook here.

\n\n

Convert Keras Model to TPU with TensorFlow 2.0 (Update Nov 2019)

\n\n

Using a Keras model with a Google Cloud TPU is very easy with TensorFlow 2.0; it does not need to be \"converted\" anymore.

\n\n

What we need to do is just specify a Distribution Strategy and let TensorFlow do all the heavy lifting for us.

\n\n
def create_model():\n    model = tf.keras.models.Sequential()\n\n    model.add(tf.keras.layers.Conv2D(128, (3, 3), input_shape=x_train.shape[1:]))\n    model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))\n    model.add(tf.keras.layers.Activation('elu'))\n\n    model.add(tf.keras.layers.Flatten())\n    model.add(tf.keras.layers.Dense(10))\n    model.add(tf.keras.layers.Activation('softmax'))\n\n    return model\n\nresolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])\ntf.config.experimental_connect_to_host(resolver.master())\ntf.tpu.experimental.initialize_tpu_system(resolver)\nstrategy = tf.distribute.experimental.TPUStrategy(resolver)\n\nwith strategy.scope():\n    model = create_model()\n    model.compile(\n        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),\n        loss=tf.keras.losses.sparse_categorical_crossentropy,\n        metrics=[tf.keras.metrics.sparse_categorical_accuracy])\n
\n\n

We create a cluster resolver, create the TPU strategy from it, and then, inside strategy.scope(), create our Keras model - and we are done.

\n\n

Learn more about how to create a Keras Model with TPU from my Colab notebook at https://colab.research.google.com/github/huan/tensorflow-handbook-tpu/blob/master/tensorflow-handbook-tpu-example.ipynb.

\n\n

The Official TensorFlow Distributed Training document: https://www.tensorflow.org/guide/distributed_training

\n\n

However, please notice that there are some environmental issues with the Colab that need to be fixed, so it might not be able to run in the Colab yet.

\n\n

Colab TPU is not ready with 2.0 (yet)

\n\n

TensorFlow 2.0 has been released, but Colab TPU support for it is still missing. Some Googlers have said that it should be fine after 2.1 is released.

\n\n

We have an issue for tracking this progress at: https://github.com/huan/tensorflow-handbook-tpu/issues/1#issuecomment-552807573

\n\n

My Old Answer

\n\n

It was confirmed by Googler Wolff that we cannot use TF 2.0 in Colab with TPU yet (reported on 15 April 2019):

\n\n
\n

The TPU you will get allocated via Colab is running TF 1.x. When you install the nightly 2.0 pip on your Jupyter VM, it doesn't change the TPU. You end up with a mismatch between what's running on your Jupyter instance and what the TPU has.

\n
\n\n

And according to https://github.com/tensorflow/tensorflow/issues/24412, the TPU support for TensorFlow 2.0 is incomplete yet.

\n\n

The solution will be to monitor the above issue and wait for the TF 2.0 to be released.

\n", + "system": "" + }, + { + "instruction": "Combining Keras model.fit's `steps_per_epoch` with TensorFlow's Dataset API's `batch()`", + "input": "", + "output": "

steps_per_epoch is the number of batches of your chosen batch size that are run through the network in one epoch.

\n\n

You have set your steps_per_epoch to be training_set_size//batch_size for a good reason. This ensures all data are trained upon in one epoch, provided the number divides exactly (if not, it rounds down via the // operator).

\n\n

That is to say if you had a batch size of 10 and a training set size of 30, then steps_per_epoch = 3 ensures all data are used.

\n\n

And to quote your question:

\n\n
\n

\"Are epochs larger than one pass over the training set if steps_per_epoch is set to more than training_set_size//batch_size?\"

\n
\n\n

Yes. Some data will be passed through again in the same epoch.

\n", + "system": "" + }, + { + "instruction": "Why do I get error while trying to build an architecture with multiple inputs in Keras?", + "input": "", + "output": "

The Keras functional API Model expects two arguments, namely inputs and outputs.\nThe error

\n\n
\n

TypeError: _init_subclassed_network() got an unexpected keyword\n argument 'inputs'

\n
\n\n

is thrown when the output of the model is not specified.

\n\n
input1 = keras.layers.Input(shape=(8,))\ninput2 = keras.layers.Input(shape=(8,))\nh1 = keras.layers.Concatenate()([input1, input2])\nmodel = keras.models.Model(inputs=[input1, input2])\n
\n\n

this throws the following error

\n\n
TypeError: _init_subclassed_network() got an unexpected keyword argument 'inputs'\n
\n\n

But if outputs are specified it works without error

\n\n
model = keras.models.Model(inputs=[input1, input2], outputs=h1)\n
\n\n

Similar exception is thrown when the outputs argument is passed without inputs argument.

\n\n
model = keras.models.Model(outputs=h1)\n
\n\n
TypeError: _init_subclassed_network() got an unexpected keyword argument 'outputs'\n
\n\n

I think it would be very helpful if the error message were more informative. The inputs and outputs arguments are not \"unexpected\" arguments here. It would be more informative if the error message were

\n\n
TypeError: _init_subclassed_network() missing expected keyword argument 'outputs'\n
\n\n

for the former case, where only the inputs argument is specified, and

\n\n
TypeError: _init_subclassed_network() missing expected keyword argument 'inputs'\n
\n\n

for the latter case, where only the outputs argument is specified.

\n", + "system": "" + }, + { + "instruction": "How to create a loss function parameter that is dependent on epoch number in Keras?", + "input": "", + "output": "

I understand your idea. I think the problem is that the alpha in the loss function does not refer to the member of the changeAlpha class. You can try something like this:

\n\n
instance = changeAlpha()\ndef custom_loss(x, x_pred): \n    loss1 = binary_crossentropy(x, x_pred)\n    loss2 = second_loss(x, x_pred)  # second_loss is a placeholder for your second loss\n    return instance.alpha * loss1 + (1 - instance.alpha) * loss2\n
\n\n

Or, you can make alpha a class variable instead of an instance variable, and then change the loss function as below:

\n\n
def custom_loss(x, x_pred): \n    loss1 = binary_crossentropy(x, x_pred)\n    loss2 = second_loss(x, x_pred)  # second_loss is a placeholder for your second loss\n    return changeAlpha.alpha * loss1 + (1 - changeAlpha.alpha) * loss2\n
\n\n

Hope it can help you.

\n", + "system": "" + }, + { + "instruction": "keras LSTM feeding input with the right shape", + "input": "", + "output": "

You want to set up an LSTM (stateful or stateless?) with multiple features; the features are the columns Value USD, Drop 7, Up 7, Mean Change 7 and Change in your dataframe. A similar problem is in https://github.com/keras-team/keras/issues/6471

\n\n

Keras LSTMs accept input as (batch_size (number of samples processed at a time), timesteps, features) = (batch_size, timesteps, input_dim). As you have 5 features, input_dim = features = 5. I do not know your entire data, so I cannot say more. The relation of number_of_samples (the number of rows in your dataframe) and batch_size is in http://philipperemy.github.io/keras-stateful-lstm/; batch_size is the number of samples (rows) processed at a time (doubts regarding batch size and time steps in RNN):

\n\n
\n

Said differently, whenever you train or test your LSTM, you first have\n to build your input matrix X of shape nb_samples, timesteps, input_dim\n where your batch size divides nb_samples. For instance, if\n nb_samples=1024 and batch_size=64, it means that your model will\n receive blocks of 64 samples, compute each output (whatever the number\n of timesteps is for every sample), average the gradients and propagate\n it to update the parameters vector.

\n
\n\n

source : http://philipperemy.github.io/keras-stateful-lstm/

\n\n

batch size is important for training

\n\n
\n

A batch size of 1 means that the model will be fit using online\n training (as opposed to batch training or mini-batch training). As a\n result, it is expected that the model fit will have some variance.

\n
\n\n

source : https://machinelearningmastery.com/stateful-stateless-lstm-time-series-forecasting-python/

\n\n

timesteps is the number of timesteps / past network states you want to look back on. There is a maximal value for LSTMs of about 200-500 (vanishing gradient problem); for performance reasons the practical maximum is about 200 (https://github.com/keras-team/keras/issues/2057)

\n\n

splitting is easier ( Selecting multiple columns in a pandas dataframe ) :

\n\n
y = df['Predict']\nX = df[['Value USD','Drop 7','Up 7','Mean Change 7', 'Change']]\n
\n\n

in https://www.kaggle.com/mknorps/titanic-with-decision-trees is code for modifying data types

\n\n

updated :

\n\n

To get rid of these errors you have to reshape the training data like in Error when checking model input: expected lstm_1_input to have 3 dimensions, but got array with shape (339732, 29) (which also contains reshaping code for more than 1 timestep). I post the entire code that worked for me because this question is less trivial than it appeared at first sight (note the number of [ and ] that indicate the dimension of an array when reshaping):

\n\n
import pandas as pd\nimport numpy as np\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation\nfrom keras.callbacks import EarlyStopping\nfrom keras.callbacks import ModelCheckpoint\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\nfrom keras.layers import LSTM\nfrom sklearn import preprocessing\n\ndf = pd.read_csv('/path/data_lstm.dat')\n\ny = df['Predict']\nX = df[['Value USD', 'Drop 7', 'Up 7', 'Mean Change 7', 'Change']]\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, shuffle=False)\n\nX_train_array = X_train.values  # see https://stackoverflow.com/questions/13187778/convert-pandas-dataframe-to-numpy-array-preserving-index\ny_train_array = y_train.values.reshape(4,1)\n\nX_test_array = X_test.values\ny_test_array = y_test.values\n\n\n# reshaping to fit batch_input_shape=(4,1,5) batch_size, timesteps, number_of_features , batch_size can be varied batch_input_shape=(2,1,5), = (1,1,5),... is also working\n\nX_train_array = np.reshape(X_train_array, (X_train_array.shape[0], 1, X_train_array.shape[1]))\n#>>> X_train_array    NOTE THE NUMBER OF [ and ] !!\n#array([[[ 6.480e-02,  2.000e+00,  4.000e+00, -4.290e-04, -4.200e-03]],\n\n#       [[ 6.900e-02,  1.000e+00,  5.000e+00,  2.740e-04,  4.030e-03]],\n\n#       [[ 6.497e-02,  1.000e+00,  5.000e+00,  2.290e-04,  7.000e-05]],\n\n#       [[ 6.490e-02,  1.000e+00,  5.000e+00,  5.140e-04,  2.000e-03]]])\ny_train_array = np.reshape(y_train_array, (y_train_array.shape[0], 1, y_train_array.shape[1]))\n#>>> y_train_array     NOTE THE NUMBER OF [ and ]   !!\n#array([[[4]],\n\n#       [[2]],\n\n#       [[2]],\n\n#       [[2]]])\n\n\n\nmodel = Sequential()\nmodel.add(LSTM(32, return_sequences=True, batch_input_shape=(4,1,5) ))\nmodel.add(LSTM(32, return_sequences=True ))\nmodel.add(Dense(1, activation='softmax'))\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n
\n", + "system": "" + }, + { + "instruction": "Loss not changing no matter the learning rate", + "input": "", + "output": "

relu

\n\n

Don't use relu carelessly! It has a constant zero region without gradients. It's totally normal for it to get stuck.

\n\n\n\n

.

\n\n
model.add(Conv2D(..... , activation='linear'))\nmodel.add(BatchNormalization())\nmodel.add(Activation('relu'))\n
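The dead-ReLU effect is easy to see numerically. A small NumPy sketch (made-up values) showing that the gradient is exactly zero for any negative pre-activation, so those units receive no update:

```python
import numpy as np

def relu(x):
    # elementwise max(x, 0)
    return np.maximum(x, 0.0)

def relu_grad(x):
    # derivative of relu: 1 where x > 0, 0 elsewhere
    return (x > 0).astype(float)

pre_activations = np.array([-3.0, -0.5, 0.7, 2.0])
print(relu(pre_activations))       # negative inputs are clamped to 0
print(relu_grad(pre_activations))  # and receive zero gradient, so they stay stuck
```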
\n", + "system": "" + }, + { + "instruction": "RNN: What is the use of return_sequences in LSTM layer in Keras Framework", + "input": "", + "output": "

When the return_sequences argument is set to False (default), the network will only output hn, i.e. the hidden state at the final time step. Otherwise, the network will output the full sequence of hidden states, [h1, h2, ..., hn]. The internal equations of the layer are unchanged. Refer to the documentation.

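The difference can be illustrated with a tiny hand-rolled recurrence in NumPy (a sketch of a simple RNN update, not Keras's actual LSTM math): return_sequences=True corresponds to keeping every h_t, False to keeping only the last one.

```python
import numpy as np

rng = np.random.default_rng(0)
timesteps, features, units = 4, 3, 2
x = rng.normal(size=(timesteps, features))  # one input sequence
W = rng.normal(size=(features, units))      # input weights
U = rng.normal(size=(units, units))         # recurrent weights

h = np.zeros(units)
all_states = []
for t in range(timesteps):
    h = np.tanh(x[t] @ W + h @ U)  # simple-RNN-style hidden state update
    all_states.append(h)

full_sequence = np.stack(all_states)  # return_sequences=True -> (timesteps, units)
last_state = all_states[-1]           # return_sequences=False -> (units,)
print(full_sequence.shape, last_state.shape)
```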
\n", + "system": "" + }, + { + "instruction": "Use multiple directories for flow_from_directory in Keras", + "input": "", + "output": "

The Keras ImageDataGenerator flow_from_directory method has a follow_links parameter.

\n\n

Maybe you can create one directory which is populated with symlinks to files in all the other directories.

\n\n

This stack question discusses using symlinks with Keras ImageDataGenerator: Understanding 'follow_links' argument in Keras's ImageDataGenerator?

\n", + "system": "" + }, + { + "instruction": "AttributeError: 'Model' object has no attribute 'name'", + "input": "", + "output": "

x1 and x2 point to the Reshape layers for the input and not the Input layers themselves.

\n", + "system": "" + }, + { + "instruction": "Keras SimpleRNN confusion", + "input": "", + "output": "

The documentation touches on the expected shapes of recurrent components in Keras, let's look at your case:

\n\n
    \n
  1. Any RNN layer in Keras expects a 3D shape (batch_size, timesteps, features). This means you have timeseries data.
  2. \n
  3. The RNN layer then iterates over the second, time dimension of the input using a recurrent cell, the actual recurrent computation.
  4. \n
  5. If you specify return_sequences then you collect the output for every timestep getting another 3D tensor (batch_size, timesteps, units) otherwise you only get the last output which is (batch_size, units).
  6. \n
\n\n

Now returning to your questions:

\n\n
    \n
  1. You mention vectors, but shape=(2,) is a vector, so this doesn't work. shape=(2,1) works because now you have 2 vectors of size 1; these shapes exclude batch_size. So to feed vectors of size 2 you need shape=(how_many_vectors, 2), where the first dimension is the number of vectors you want your RNN to process, the timesteps in this case.
  2. \n
  3. To chain RNN layers you need to feed 3D data because that what RNNs expect. When you specify return_sequences the RNN layer returns output at every timestep so that can be chained to another RNN layer.
  4. \n
  5. States are a collection of vectors that an RNN cell uses; LSTM uses 2, GRU has 1 hidden state, which is also the output. They default to 0s but can be specified when calling the layer using initial_state=[...] as a list of tensors.
  6. \n
\n\n

There is already a post about the difference between RNN layers and RNN cells in Keras which might help clarify the situation further.

\n", + "system": "" + }, + { + "instruction": "Report Keras model evaluation metrics every 10 epochs?", + "input": "", + "output": "

A custom Callback would be a nice solution giving you enough control over the training procedure. Something along the lines of:

\n\n
class SensitivitySpecificityCallback(Callback):\n    def on_epoch_end(self, epoch, logs=None):\n        if epoch % 10 == 1:\n            x_test = self.validation_data[0]\n            y_test = self.validation_data[1]\n            # x_test, y_test = self.validation_data\n            predictions = self.model.predict(x_test)\n            y_test = np.argmax(y_test, axis=-1)\n            predictions = np.argmax(predictions, axis=-1)\n            c = confusion_matrix(y_test, predictions)\n\n            print('Confusion matrix:\\n', c)\n            print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0]))\n            print('specificity', c[1, 1] / (c[1, 1] + c[1, 0]))\n
\n\n
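The sensitivity/specificity arithmetic used in the callback can be checked with a plain NumPy confusion matrix, using hypothetical labels and the same c[i, j] indexing as above (no Keras needed):

```python
import numpy as np

y_test = np.array([0, 0, 0, 1, 1, 1, 1, 0])
predictions = np.array([0, 0, 1, 1, 1, 0, 1, 0])

# 2x2 confusion matrix: c[i, j] = number of samples with true class i predicted as j
c = np.zeros((2, 2), dtype=int)
for t, p in zip(y_test, predictions):
    c[t, p] += 1

sensitivity = c[0, 0] / (c[0, 1] + c[0, 0])  # the ratio printed in the callback above
specificity = c[1, 1] / (c[1, 1] + c[1, 0])
print(c)
print(sensitivity, specificity)
```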

where epoch is the epoch number and logs contains the usual metrics plus the loss the model trains with.

\n\n

Then run with:

\n\n
model.fit(x_train, y_train,\n          batch_size=batch_size,\n          epochs=epochs,\n          verbose=1,\n          shuffle='batch',\n          validation_data=(x_test, y_test),\n          callbacks=[SensitivitySpecificityCallback()])\n
\n\n

NOTE: if you don't like how your model is training based on your metrics you can cut the training short with:

\n\n
self.model.stop_training = True\n
\n\n

which will stop the training for you.

\n", + "system": "" + }, + { + "instruction": "How to get back to default tensorflow version on google colab", + "input": "", + "output": "

To get back to the default versions, I had to restart the VM.
\nTo do so, just do:

\n\n
!kill -9 -1\n
\n\n

Then, wait 30 seconds, and reconnect.

\n\n

I got the information by opening an issue on the github repository.

\n", + "system": "" + }, + { + "instruction": "Python add one more channel to image", + "input": "", + "output": "
y = np.expand_dims(x, axis=-1)  # Equivalent to x[:,:,np.newaxis]\n
\n\n

As the name of the function suggests, it adds an extra dimension, which becomes the new last channel.

\n\n
\n\n\n", + "system": "" + }, + { + "instruction": "Choosing only specific classes of CIFAR-10", + "input": "", + "output": "
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()\n
\n\n

which contains all samples of 10 classes

\n\n

choose index with respect to classes

\n\n
indices = np.where(y_train == 0)[0]\nX_train = X_train[indices]\ny_train = y_train[indices]\n
\n\n

which gives you all the samples of class 0

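The same index-selection logic can be checked with mock data shaped like the CIFAR-10 labels (which load as a column vector of shape (N, 1)):

```python
import numpy as np

# mock stand-ins for the real CIFAR-10 arrays
X_train = np.arange(10 * 4).reshape(10, 4)
y_train = np.array([[0], [3], [0], [1], [0], [2], [5], [0], [9], [0]])

indices = np.where(y_train == 0)[0]  # row indices of all class-0 samples
X_sel = X_train[indices]
y_sel = y_train[indices]
print(indices, X_sel.shape)  # [0 2 4 7 9] (5, 4)
```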
\n", + "system": "" + }, + { + "instruction": "Merge two different deep learning models in Keras", + "input": "", + "output": "

Sequential models are not suited for creating models with branches.

\n\n

You can have the two independent models as Sequential models, as you did, but from the Concatenate on, you should start using the functional Model API.

\n\n

The idea is to get the output tensors of the two models and feed them in other layers to get new output tensors.

\n\n

So, considering you have model and extra:

\n\n
mergedOutput = Concatenate()([model.output, extra.output])\n
\n\n

This mergedOutput is a tensor. You can either create the last part of the model using this tensor, or create the last part independently and call it on this tensor. The second approach may be good if you want to train each model separately (which doesn't seem to be your case).

\n\n

Now, creating the new model as a functional API model:

\n\n
out = Dense(128, activation='relu')(mergedOutput)\nout = Dropout(0.8)(out)\nout = Dense(32, activation='sigmoid')(out)\nout = Dense(num_classes, activation='softmax')(out)\n\nnew_model = Model(\n    [model.input, extra.input], #model with two input tensors\n    out                         #and one output tensor\n) \n
\n\n
\n\n

An easier approach is to take all three models you have already created and use them to create a combined model:

\n\n
model = Sequential() #your first model\nextra = Sequential() #your second model    \nnew_model = Sequential() #all these three exactly as you did   \n\n#in this case, you just need to add an input shape to new_model, compatible with the concatenated output of the previous models. \nnew_model.add(FirstNewModelLayer(...,input_shape=(someValue,)))\n
\n\n

Join them like this:

\n\n
mergedOutput = Concatenate()([model.output, extra.output])\nfinalOutput = new_model(mergedOutput)    \n\nfullModel = Model([model.input,extra.input],finalOutput)\n
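Shape-wise, Concatenate along the last axis simply stacks the two per-sample output vectors, which NumPy can illustrate with made-up sizes:

```python
import numpy as np

batch = 4
model_output = np.ones((batch, 16))  # pretend output of the first model
extra_output = np.ones((batch, 8))   # pretend output of the second model

merged = np.concatenate([model_output, extra_output], axis=-1)
print(merged.shape)  # (4, 24): per sample, the two vectors are glued together
```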
\n", + "system": "" + }, + { + "instruction": "Keras initializers outside Keras", + "input": "", + "output": "

The glorot_uniform() creates a function, and later this function will be called with a shape. So you need:

\n\n
# from keras.initializers import * #(tf 1.x)\n\nfrom tensorflow.keras.initializers import *\n\nunif = glorot_uniform() #this returns a 'function(shape)'\nmat_as_tensor = unif((4,11)) #this returns a tensor - use this in keras models if needed   \nmat_as_numpy = K.eval(mat_as_tensor) #this returns a numpy array (don't use in models)\nprint(mat_as_numpy) \n
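If you only need the numbers outside Keras, Glorot uniform can also be reproduced directly: it samples uniformly from [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)). A NumPy sketch (the helper name is made up):

```python
import numpy as np

def glorot_uniform_np(shape, seed=42):
    # for a 2D kernel, fan_in/fan_out are the two matrix dimensions
    fan_in, fan_out = shape
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=shape)

mat = glorot_uniform_np((4, 11))
limit = np.sqrt(6.0 / (4 + 11))
print(mat.shape, float(np.abs(mat).max()) <= limit)
```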
\n", + "system": "" + }, + { + "instruction": "Tensorflow equivalent of the Keras function: UpSampling2D", + "input": "", + "output": "

Assuming x is of shape (BATCH_SIZE, H, W, C), you can use tf.image.resize_nearest_neighbor, which is the backend implementation used by keras:

\n\n
x = tf.image.resize_nearest_neighbor(x, (2*H,2*W))\n
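Nearest-neighbor upsampling by a factor of 2 (the default behavior of UpSampling2D) just repeats each pixel along height and width. The same operation in NumPy, on a tiny made-up batch:

```python
import numpy as np

x = np.array([[[[1.], [2.]],
               [[3.], [4.]]]])  # shape (1, 2, 2, 1): (batch, H, W, C)

# repeat rows and columns: each pixel becomes a 2x2 block
up = np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)
print(up.shape)  # (1, 4, 4, 1)
print(up[0, :, :, 0])
```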
\n", + "system": "" + }, + { + "instruction": "Why use axis=-1 in Keras metrics function?", + "input": "", + "output": "

Here's what you're looking for, inside training_utils.py:

\n\n
def weighted(y_true, y_pred, weights, mask=None):\n    \"\"\"Wrapper function.\n    # Arguments\n        y_true: `y_true` argument of `fn`.\n        y_pred: `y_pred` argument of `fn`.\n        weights: Weights tensor.\n        mask: Mask tensor.\n    # Returns\n        Scalar tensor.\n    \"\"\"\n    # score_array has ndim >= 2\n    score_array = fn(y_true, y_pred)\n    if mask is not None:\n        # Cast the mask to floatX to avoid float64 upcasting in Theano\n        mask = K.cast(mask, K.floatx())\n        # mask should have the same shape as score_array\n        score_array *= mask\n        #  the loss per batch should be proportional\n        #  to the number of unmasked samples.\n        score_array /= K.mean(mask) + K.epsilon()\n\n    # apply sample weighting\n    if weights is not None:\n        # reduce score_array to same ndim as weight array\n        ndim = K.ndim(score_array)\n        weight_ndim = K.ndim(weights)\n        score_array = K.mean(score_array,\n                             axis=list(range(weight_ndim, ndim)))\n        score_array *= weights\n        score_array /= K.mean(K.cast(K.not_equal(weights, 0), K.floatx()))\n    return K.mean(score_array)\nreturn weighted\n
\n\n

The metric function is called by score_array = fn(y_true, y_pred) (it's a nested function and fn is defined in the outer function). This array is averaged in the last line return K.mean(score_array). That's why you're seeing scalar metrics instead of tensors. The lines in between are just to introduce masks and weights if necessary.

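The two-stage averaging can be mimicked in NumPy: the metric first averages over the last axis (one score per sample), then the final K.mean collapses the per-sample scores to a scalar. A sketch using a per-element squared error as the stand-in metric:

```python
import numpy as np

y_true = np.array([[1., 0., 0.], [0., 1., 0.]])
y_pred = np.array([[0.8, 0.1, 0.1], [0.3, 0.6, 0.1]])

# averaged over axis=-1, like the Keras metric functions do
score_array = np.mean((y_true - y_pred) ** 2, axis=-1)  # shape (2,): one score per sample
scalar = np.mean(score_array)                           # final K.mean -> scalar
print(score_array.shape, float(scalar))
```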
\n", + "system": "" + }, + { + "instruction": "How to train a LSTM model with different N-dimensions labels?", + "input": "", + "output": "

This answer is for non-varying dimensions; for varying dimensions, the padding idea in Giuseppe's answer seems to be the way to go, maybe with the help of the &quot;Masking&quot; layer proposed in the Keras documentation.

\n
\n

The output shape in Keras is totally dependent on the number of "units/neurons/cells" you put in the last layer, and of course, on the type of layer.

\n

I can see that the data in your question does not match your code, which is impossible, but suppose your code is right and forget the data for a while.

\n

An input shape of (100,5) in an LSTM layer means a tensor of shape (None, 100, 5), which is (None = number of examples, 100 = timesteps, 5 = features per step).

\n\n

And the 32 cells in this same LSTM layer means that the resulting vectors will change from 5 to 32-dimension vectors. With return_sequences=True, all the 100 timesteps will appear in the result. So the result shape of the first layer is (None, 100, 32):

\n\n

Now the second LSTM layer does exactly the same thing. Keeps the 100 timesteps, and since it has also 32 cells, keeps the 32-dimension vectors, so the output is also (None, 100, 32)

\n

Finally, the time distributed Dense layer will also keep the 100 timesteps (because of TimeDistributed), and change your vectors to 5-dimension vectors again (because of the 5 units), resulting in (None, 100, 5).

\n
\n

As you can see, you cannot change the number of timesteps directly with recurrent layers, you need to use other layers to change these dimensions. And the way to do this is completely up to you, there are infinite ways of doing this.

\n

But in all of them, you need to get free of the timesteps and rebuild the data with another shape.

\n
\n

Suggestion

\n

A suggestion from me (which is just one possibility) is to reshape your result, and apply another dense layer just to achieve the final shape expected.

\n

Suppose you want a result like (None, 4, 5) (never forget, the first dimension of your data is the number of examples, it can be any number, but you must take it into account when you organize your data). We can achieve this by reshaping the data to a shape containing 4 in the second dimension:

\n
#after the Dense layer:\n\nmodel.add(Reshape((4,125))) #the batch size doesn't appear here, \n   #just make sure you have 500 elements, which is 100*5 = 4*125\n\nmodel.add(TimeDistributed(Dense(5)))\n#this layer could also be model.add(LSTM(5,return_sequences=True)), for instance\n\n#continue to the &quot;Activation&quot; layer\n
\n
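The element-count bookkeeping behind that Reshape can be verified in NumPy (batch size of 1 for illustration):

```python
import numpy as np

x = np.arange(500, dtype=float).reshape(1, 100, 5)  # (batch, 100 timesteps, 5 features)
y = x.reshape(1, 4, 125)                            # same 500 elements, now 4 timesteps of 125

print(x.size, y.shape)  # 500 (1, 4, 125)
```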

This will give you 4 timesteps (because the dimension after Reshape was (None, 4, 125)), each step being a 5-dimension vector (because of Dense(5)).

\n

Use the model.summary() command to see the shapes outputted by each layer.

\n", + "system": "" + }, + { + "instruction": "Adding metrics to existing model in Keras", + "input": "", + "output": "

You can compile a model as many times as you want. That will not change its weights.

\n\n

So you can load the model and then compile it with metrics.

\n\n

The only possible problem of recompiling the model is possibly resetting the optimizer state.

\n", + "system": "" + }, + { + "instruction": "How to interpret keras " predict_generator " output?", + "input": "", + "output": "

So, according to the documentation, you are calling your generator len(validation_generator.filenames) times, where at each call a sample of size batch_size is provided. Due to how ImageDataGenerator is implemented, if there are not enough files to complete a batch (in your case you have batch_size=16 but only 5 images in your folder), the maximum possible number of images is returned - in your case, 5. So you are getting len(validation_generator.filenames) * 5 = 25 images to evaluate, and that's why you get such results (if you look closely, you have the same values in each group of 5 numbers). In order to get these samples in the order of filenames, you need to create a new generator with the shuffle option set to False and with batch_size=5, and call it once (or e.g. with batch_size=1 and call it 5 times).

\n", + "system": "" + }, + { + "instruction": "Keras CNN, verbose training progress bar display", + "input": "", + "output": "

I gave a solution (not sure if it can work in every case, but it worked well for me) in https://stackoverflow.com/a/57475559/9531617. To quote myself:

\n\n
\n\n

The problem can be fixed without modifying the sources by simply installing ipykernel and importing it in your code:

\n\n

pip install ipykernel, then import ipykernel in your code.

\n\n

In fact, in the Keras generic_utils.py file, the problematic line was (in my case):

\n\n
            if self._dynamic_display:\n                sys.stdout.write('\\b' * prev_total_width)\n                sys.stdout.write('\\r')\n            else:\n                sys.stdout.write('\\n')\n
\n\n

Where the self._dynamic_display was False, whereas it needed to be True to work properly. But the value self._dynamic_display was initiated such as:

\n\n
        self._dynamic_display = ((hasattr(sys.stdout, 'isatty') and\n                                  sys.stdout.isatty()) or\n                                 'ipykernel' in sys.modules)\n
\n\n

So, importing ipykernel added it to sys.modules and fixed the problem for me.

\n", + "system": "" + }, + { + "instruction": "'Model' object has no attribute 'load_model' keras", + "input": "", + "output": "

Indeed, load_model() isn't an attribute of a model object. load_model() is a function imported from keras.models that takes a file name and returns a model object.

\n\n

You should use it like this :

\n\n
from keras.models import load_model\n\nmodel = load_model(path_to_model)\n
\n\n
\n

You can then use keras.models.load_model(filepath) to reinstantiate your model. load_model will also take care of compiling the model using the saved training configuration (unless the model was never compiled in the first place). from source

\n
\n", + "system": "" + }, + { + "instruction": "How do I prevent the TypeError: unsupported operand type(s) for -: 'NoneType' and 'int' when using keras EarlyStopping?", + "input": "", + "output": "

If you think about it: you are asking to monitor a validation loss without using any validation data during training.

\n\n

Use

\n\n
model.fit(X_train, Y_train, epochs=20, batch_size=20, validation_split=0.2, callbacks=[EarlyStopping(monitor='val_loss', patience=4)])\n
\n\n

for example, if you want to have validation. It will use 20% of your data as the validation set. You won't be training on those samples, just validating your model at the end of each epoch.

\n\n

And as mentioned in your other question about this code: change the last activation to a softmax to use it with categorical_crossentropy, or switch the objective to binary_crossentropy, depending on your needs.

\n", + "system": "" + }, + { + "instruction": "Keras LSTM on CPU faster than GPU?", + "input": "", + "output": "

Use Keras' CuDNNLSTM cells for accelerated compute on Nvidia GPUs: https://keras.io/layers/recurrent/#cudnnlstm

\n\n

It's simply changing the LSTM line to:

\n\n
model.add(CuDNNLSTM(20, input_shape=(50000,3), return_sequences=True))\n
\n", + "system": "" + }, + { + "instruction": "use tensorflow.GPUOptions within Keras when using tensorflow backend", + "input": "", + "output": "

You can set the Keras global tensorflow session with keras.backend.tensorflow_backend.set_session():

\n\n
import tensorflow as tf\nimport keras.backend.tensorflow_backend as ktf\n\n\ndef get_session(gpu_fraction=0.333):\n    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_fraction,\n                                allow_growth=True)\n    return tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))\n\n\nktf.set_session(get_session())\n
\n", + "system": "" + }, + { + "instruction": "binary_accuracy in keras Metrices , what's the threshold value to predicted one sample as positive and negative cases", + "input": "", + "output": "

binary_accuracy doesn't have a threshold parameter, but you can easily define one yourself.

\n\n
import keras\nfrom keras import backend as K\n\ndef threshold_binary_accuracy(y_true, y_pred):\n    threshold = 0.80\n    # predictions at or above the threshold count as the positive class\n    return K.mean(K.equal(y_true, K.cast(K.greater_equal(y_pred, threshold), y_true.dtype)))\n\na_pred = K.variable([.1, .2, .6, .79, .8, .9])\na_true = K.variable([0., 0., 0.,  0., 1., 1.])\n\nprint(K.eval(keras.metrics.binary_accuracy(a_true, a_pred)))\nprint(K.eval(threshold_binary_accuracy(a_true, a_pred)))\n
\n\n

Now you can use it as metrics=[threshold_binary_accuracy]

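The thresholding itself is plain elementwise logic, so you can sanity-check the expected value in NumPy before wiring it into Keras (assuming predictions at or above the threshold count as class 1):

```python
import numpy as np

threshold = 0.80
a_pred = np.array([.1, .2, .6, .79, .8, .9])
a_true = np.array([0., 0., 0., 0., 1., 1.])

hard_pred = (a_pred >= threshold).astype(float)  # [0, 0, 0, 0, 1, 1]
accuracy = np.mean(hard_pred == a_true)
print(accuracy)  # 1.0 on this toy data
```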
\n", + "system": "" + }, + { + "instruction": "Trained keras model much slower making its predictions than in training", + "input": "", + "output": "

I think it's because Keras's default predict behavior is with batch size 32. As a result especially if you're using a GPU, the small batch sizes destroy the performance. If you just change the batch size to predict(X_test, batch_size=128) you'll get significantly faster performance.

\n", + "system": "" + }, + { + "instruction": "How to merge two LSTM layers in Keras", + "input": "", + "output": "

The problem is not with the merge layer. You need to create two embedding layers to feed in 2 different inputs.

\n\n

The following modifications should work:

\n\n
embedding_layer_1 = Embedding(len(word_index) + 1,\n                            EMBEDDING_DIM,\n                            weights=[embedding_matrix],\n                            input_length=50,\n                            trainable=False)\n\nembedding_layer_2 = Embedding(len(word_index) + 1,\n                            EMBEDDING_DIM,\n                            weights=[embedding_matrix],\n                            input_length=50,\n                            trainable=False)\n\n\ns1rnn = Sequential()\ns1rnn.add(embedding_layer_1)\ns1rnn.add(LSTM(128, input_shape=(100, 1)))\ns1rnn.add(Dense(1))\n\ns2rnn = Sequential()\ns2rnn.add(embedding_layer_2)\ns2rnn.add(LSTM(128, input_shape=(100, 1)))\ns2rnn.add(Dense(1))\n
\n", + "system": "" + }, + { + "instruction": "print out the shape of each layer in the net architecture", + "input": "", + "output": "

Just using model.summary(), which gives you pretty print.

\n", + "system": "" + }, + { + "instruction": "Validation Loss and Accuracy in LSTM Networks with Keras", + "input": "", + "output": "
    \n
  1. When to stop training: it's usual to stop training when some metric computed on the validation data starts to grow. This is a usual indicator of overfitting. But please notice that you are using the dropout technique - which results in training a slightly different model during every epoch - so you should apply some kind of patience and stop training only when a phenomenon like this occurs for several consecutive epochs.

  2. \n
  3. The reason for the fluctuations: the same as in the first point - you are using the dropout technique, which introduces some sort of randomness to your network. In my opinion, this is the main reason for the observed fluctuations.

  4. \n
  5. What Keras models take as input to a Dense layer: if you carefully study the documentation of an LSTM/RNN layer, you will notice return_sequences=False set as a default argument. This means that only the last output from the processed sequence is taken as input to the following layer. You could change that using 1-D convolutions.

  6. \n
\n", + "system": "" + }, + { + "instruction": "Keras + IndexError", + "input": "", + "output": "

You need to Pad the imdb sequences you are using, add those lines:

\n\n
from keras.preprocessing import sequence\nTrain_X = sequence.pad_sequences(Train_X, maxlen=maxlen)\nTest_X = sequence.pad_sequences(Test_X, maxlen=maxlen)\n
\n\n
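What pad_sequences does is conceptually simple: left-pad (or left-truncate) each sequence of word indices to a fixed length so they stack into one 2-D array. A hedged NumPy re-implementation of the default behavior (the helper name is made up):

```python
import numpy as np

def pad_sequences_np(seqs, maxlen, value=0):
    # mimic keras pad_sequences defaults: pre-padding and pre-truncating
    out = np.full((len(seqs), maxlen), value, dtype=int)
    for i, s in enumerate(seqs):
        trunc = s[-maxlen:]  # keep the last maxlen items
        out[i, maxlen - len(trunc):] = trunc
    return out

Train_X = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]]
print(pad_sequences_np(Train_X, maxlen=4))
```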

Before building the actual model.

\n", + "system": "" + }, + { + "instruction": "OpenVino converted model not returning same score values as original model (Sigmoid)", + "input": "", + "output": "

Generally, TensorFlow is the only framework using the NHWC layout, while most others use NCHW. Thus, the OpenVINO Inference Engine caters to the majority of networks and uses the NCHW layout. The model must be converted to the NCHW layout in order to work with the Inference Engine.

\n

The conversion of the native model format into IR involves the process where the Model Optimizer performs the necessary transformation to convert the shape to the layout required by the Inference Engine (N,C,H,W). Using the --input_shape parameter with the correct input shape of the model should suffice.

\n

Besides, most TensorFlow models are trained with images in RGB order. In this case, inference results using the Inference Engine samples may be incorrect. By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with --reverse_input_channels argument.

\n
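Both transformations mentioned above are simple array operations. A NumPy sketch of converting an NHWC batch to NCHW and reversing the channel order:

```python
import numpy as np

x_nhwc = np.zeros((1, 128, 128, 3))          # TensorFlow-style layout: N, H, W, C
x_nchw = np.transpose(x_nhwc, (0, 3, 1, 2))  # Inference Engine layout: N, C, H, W
x_bgr = x_nhwc[..., ::-1]                    # RGB <-> BGR: reverse the channel axis

print(x_nchw.shape, x_bgr.shape)
```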

I suggest you validate this by inferring your model with the Hello Classification Python Sample instead since this is one of the official samples provided to test the model's functionality.

\n

You may refer to this "Intel Math Kernel Library for Deep Neural Network" for deeper explanation regarding the input shape.

\n", + "system": "" + }, + { + "instruction": "How to define a weighted loss function for TF2.0+ keras CNN for image classification?", + "input": "", + "output": "

You can simply wrap tf.nn.weighted_cross_entropy_with_logits inside a custom loss function.

\n

Remember also that tf.nn.weighted_cross_entropy_with_logits expects logits, so your network must produce logits and not probabilities (remove the softmax activation from the last layer).

\n

Here a dummy example:

\n
X = np.random.uniform(0,1, (10,32,32,3))\ny = np.random.randint(0,2, (10,))\ny = tf.keras.utils.to_categorical(y)\n\nmodel = tf.keras.models.Sequential([\n    tf.keras.layers.Conv2D(32, (3, 3), input_shape=(32, 32, 3), activation='relu'),\n    tf.keras.layers.Flatten(),\n    tf.keras.layers.Dense(2) ### must be logits (remove softmax)\n])\n\ndef my_loss(weight):\n    def weighted_cross_entropy_with_logits(labels, logits):\n        loss = tf.nn.weighted_cross_entropy_with_logits(\n            labels, logits, weight\n        )\n        return loss\n    return weighted_cross_entropy_with_logits\n\nmodel.compile(optimizer="nadam",\n              loss=my_loss(weight=0.8),\n              metrics=['accuracy'])\nmodel.fit(X,y, epochs=3)\n
\n

At inference time you obtain the probabilities in this way:

\n
tf.nn.softmax(model.predict(X))\n
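For intuition, the per-element quantity that tf.nn.weighted_cross_entropy_with_logits computes is, per the TensorFlow documentation, pos_weight * labels * -log(sigmoid(logits)) + (1 - labels) * -log(1 - sigmoid(logits)). A numerically naive NumPy sketch of that formula, for illustration only:

```python
import numpy as np

def weighted_bce_with_logits(labels, logits, pos_weight):
    # naive version of the TF formula; real implementations are more stable
    p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    return pos_weight * labels * -np.log(p) + (1.0 - labels) * -np.log(1.0 - p)

labels = np.array([1.0, 0.0])
logits = np.array([2.0, -1.0])
print(weighted_bce_with_logits(labels, logits, pos_weight=0.8))
```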
\n", + "system": "" + }, + { + "instruction": "Better way to concatenate ConvLSTM2D model and Tabular model", + "input": "", + "output": "

I agree with you that the huge Dense layer (which has millions of parameters) might hinder the performance of the model. Instead of inflating the tabular data with a Dense layer, you could rather choose one of the following two approaches.

\n
\n

Option 1: Tile the x_tab tensor so that it matches your desired shape. This can be achieved with the following steps:

\n

First, there is no need to flatten the ConvLSTM2D's encoded tensor:

\n
x_input = Input(shape=(3, 128, 128, 3))\nx = ConvLSTM2D(32, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x_input)\nx = BatchNormalization()(x)\nx = ConvLSTM2D(16, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)\nx = BatchNormalization()(x)\nx = ConvLSTM2D(8, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)\nx = BatchNormalization()(x)  # Shape=(None, None, 128, 128, 8) \n# Commented: x = Flatten()(x)\n
\n

Second, you can process your tabular data with one or several Dense layers. For example:

\n
dim = 10\nx_tab_input = Input(shape=(5))\nx_tab = Dense(100, activation="relu")(x_tab_input)\nx_tab = Dense(dim, activation="relu")(x_tab)\n# x_tab = Flatten()(x_tab)  # Note: Flattening a 2D tensor leaves the tensor unchanged\n
\n

Third, we wrap the tensorflow operation tf.tile in a Lambda layer, effectively creating copies of the tensor x_tab so that it matches the desired shape:

\n
def repeat_tabular(x_tab):\n    h = x_tab[:, None, None, None, :]  # Shape=(bs, 1, 1, 1, dim)\n    h = tf.tile(h, [1, 3, 128, 128, 1])  # Shape=(bs, 3, 128, 128, dim)\n    return h\nx_tab = Lambda(repeat_tabular)(x_tab)\n
\n
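The Lambda above only broadcasts the tabular vector across time and space; the shape mechanics can be checked in NumPy (small batch and dim for speed):

```python
import numpy as np

bs, dim = 2, 10
x_tab = np.random.rand(bs, dim)

h = x_tab[:, None, None, None, :]    # shape (bs, 1, 1, 1, dim)
h = np.tile(h, (1, 3, 128, 128, 1))  # shape (bs, 3, 128, 128, dim)

print(h.shape)
print(np.array_equal(h[0, 1, 50, 70], x_tab[0]))  # every position holds the same vector
```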

Finally, we concatenate the x and the tiled x_tab tensors along the last axis (you might also consider concatenating along the first axis, corresponding to the channels' dimension)

\n
concat = Concatenate(axis=-1)([x, x_tab])  # Shape=(3,128,128,8+dim)\noutput = concat\noutput = Conv3D(filters=3, kernel_size=(3, 3, 3), activation='relu', padding=&quot;same&quot;)(output)\n# ...\n
\n

Note that this solution might be a bit naive in the sense that the model is not encoding the input sequence of images into a low-dimensional representation, limiting the receptive field of the network and potentially resulting in degraded performance.

\n
\n

Option 2: Similar to autoencoders and U-Net, it might be desirable to encode your sequence of images into a low-dimensional representation in order to discard the unwanted variation (e.g. noise) while preserving the meaningful signal (e.g. required to infer the next 3 images of the sequence). This can be achieved as follows:

\n

First, encode the input sequence of images into a low-dimension 2-dimensional tensor. For example, something along the lines of:

\n
x_input = Input(shape=(None, 128, 128, 3))\nx = ConvLSTM2D(32, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x_input)\nx = BatchNormalization()(x)\nx = ConvLSTM2D(16, 3, strides = 1, padding='same', dilation_rate = 2,return_sequences=True)(x)\nx = BatchNormalization()(x)\nx = ConvLSTM2D(8, 3, strides = 1, padding='same', dilation_rate = 2, return_sequences=False)(x)\nx = BatchNormalization()(x)\nx = Flatten()(x)\nx = Dense(64, activation='relu')(x)\n
\n

Note that the last ConvLSTM2D is not returning the sequences. You might want to explore different encoders to arrive at this point (e.g. you could also use pooling layers here).

\n

Second, process your tabular data with Dense layers. For example:

\n
dim = 10\nx_tab_input = Input(shape=(5))\nx_tab = Dense(100, activation="relu")(x_tab_input)\nx_tab = Dense(dim, activation="relu")(x_tab)\n
\n

Third, concatenate the data from the previous two streams:

\n
concat = Concatenate(axis=-1)([x, x_tab])\n
\n

Fourth, use a Dense + Reshape layer to project the concatenated vectors into a sequence of low-resolution images:

\n
h = Dense(3 * 32 * 32 * 3)(concat)\noutput = Reshape((3, 32, 32, 3))(h)\n
\n

The shape of output allows up-sampling the images into a shape of (128, 128, 3), but is otherwise arbitrary (e.g. you might also want to experiment here).

\n

Finally, apply one or several Conv3DTranspose layers to get to the desired output (e.g. 3 images of shape (128, 128, 3)).

\n
output = tf.keras.layers.Conv3DTranspose(filters=50, kernel_size=(3, 3, 3),\n                                         strides=(1, 2, 2), padding='same',\n                                         activation='relu')(output)\noutput = tf.keras.layers.Conv3DTranspose(filters=3, kernel_size=(3, 3, 3),\n                                         strides=(1, 2, 2), padding='same',\n                                         activation='relu')(output)  # Shape=(None, 3, 128, 128, 3)\n
\n

The rationale behind transposed convolution layers is discussed here. Essentially, the Conv3DTranspose layer goes in the opposite direction of normal convolutions - it allows upsampling your low-resolution images into high-resolution images.

\n", + "system": "" + }, + { + "instruction": "Using gradients in a custom loss function (tensorflow+keras)", + "input": "", + "output": "

I was not able to implement the training using the automatic fit method. However, it can certainly be done by manually writing the loop. I will provide an example of learning a function using only its gradients.

\n
for epoch in range(epochs):\n    print(&quot;\\nStart of epoch %d&quot; % (epoch,))\n\n    # Iterate over the batches of the dataset.\n    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n\n        # Open a GradientTape to record the operations run\n        # during the forward pass, which enables auto-differentiation.\n        with tf.GradientTape(persistent=True) as tape:\n\n            # Create the tensor that you will watch\n            x_tensor = tf.convert_to_tensor(x_batch_train, dtype=tf.float64)\n            tape.watch(x_tensor)\n            # Feed forward\n            output = model(x_tensor, training=True)\n\n            # Gradient and the corresponding loss function\n            o_x = tape.gradient(output, x_tensor)\n            loss_value = loss_fn(y_batch_train, o_x)\n\n        # Use the gradient tape to automatically retrieve\n        # the gradients of the trainable variables with respect to the loss.\n        grads = tape.gradient(loss_value, model.trainable_weights)\n\n        # Run one step of gradient descent by updating\n        # the value of the variables to minimize the loss.\n        optimizer.apply_gradients(zip(grads, model.trainable_weights))\n\n        # Log every 200 batches.\n        if step % 200 == 0:\n            print(\n                &quot;Training loss (for one batch) at step %d: %.4f&quot;\n                % (step, float(loss_value))\n            )\n            print(&quot;Seen so far: %s samples&quot; % ((step + 1) * 64))\n
\n

loss_fn is simple in this case

\n
loss_fn = tf.keras.losses.MeanSquaredError()\n
\n

Keep in mind that you are using a second derivative to train the function, and that with tf.GradientTape(persistent=True) as tape: will produce a warning, which is okay in this case. Also, from my experience, this method is particularly sensitive to the choice of activation function. Continuously differentiable variants of ReLU might be the way to go.

\n", + "system": "" + }, + { + "instruction": "Keras custom loss function per tensor group", + "input": "", + "output": "

I ended up figuring out a solution to this, though I would like some feedback on it (specifically some parts). Here is the solution:

\n
import pandas as pd\nimport tensorflow as tf\nimport keras.backend as K\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Flatten, Dropout\nfrom tensorflow.python.ops import gen_array_ops\n\n\ndef main():\n    df = pd.DataFrame(columns=["feature_1", "feature_2", "condition_1", "condition_2", "label"],\n                      data=[[5, 10, "a", "1", 0],\n                            [30, 20, "a", "1", 1],\n                            [50, 40, "a", "1", 0],\n                            [15, 20, "a", "2", 0],\n                            [25, 30, "b", "2", 1],\n                            [35, 40, "b", "1", 0],\n                            [10, 80, "b", "1", 1]])\n    df = pd.concat([df] * 500)  # making data artificially larger\n    true_ratios = pd.DataFrame(columns=["condition_1", "condition_2", "true_ratio"],\n                               data=[["a", "1", 0.1],\n                                     ["a", "2", 0.2],\n                                     ["b", "1", 0.8],\n                                     ["b", "2", 0.9]])\n    features = ["feature_1", "feature_2"]\n    conditions = ["condition_1", "condition_2"]\n    conds_ratios_label = conditions + ["true_ratio", "label"]\n    df = pd.merge(df, true_ratios, on=conditions, how="left")\n    X = df[features]\n    Y = df[conds_ratios_label]\n    # need to convert strings to ints because tensors can't mix strings with floats/ints\n    mapping_1 = {"a": 1, "b": 2}\n    mapping_2 = {"1": 1, "2": 2}\n    Y.replace({"condition_1": mapping_1}, inplace=True)\n    Y.replace({"condition_2": mapping_2}, inplace=True)\n    X = tf.convert_to_tensor(X)\n    Y = tf.convert_to_tensor(Y)\n    model = my_model(input_shape=len(features))\n    model.fit(X, Y, epochs=1, batch_size=64)\n    print()\n    print(model.evaluate(X, Y))\n\n\ndef custom_loss(conditions, true_ratios, y_pred):\n    y_pred = tf.sigmoid((y_pred - 0.5) * 1000)\n    uniques, idx, count = gen_array_ops.unique_with_counts_v2(conditions, [0])\n    
num_unique = tf.size(count)\n    sums = tf.math.unsorted_segment_sum(data=y_pred, segment_ids=idx, num_segments=num_unique)\n    lengths = tf.cast(count, tf.float32)\n    pred_ratios = tf.divide(sums, lengths)\n    mean_pred_ratios = tf.math.reduce_mean(pred_ratios)\n    mean_true_ratios = tf.math.reduce_mean(true_ratios)\n    diff = mean_pred_ratios - mean_true_ratios\n    return K.mean(K.abs(diff))\n\n\ndef standard_loss(y_true, y_pred):\n    return tf.losses.binary_crossentropy(y_true=y_true, y_pred=y_pred)\n\n\ndef joint_loss(conds_ratios_label, y_pred):\n    y_true = conds_ratios_label[:, 3]\n    true_ratios = conds_ratios_label[:, 2]\n    conditions = tf.gather(conds_ratios_label, [0, 1], axis=1)\n    loss_1 = standard_loss(y_true=y_true, y_pred=y_pred)\n    loss_2 = custom_loss(conditions=conditions, true_ratios=true_ratios, y_pred=y_pred)\n    return 0.5 * loss_1 + 0.5 * loss_2\n\n\ndef my_model(input_shape=None):\n    model = Sequential()\n    model.add(Dropout(0, input_shape=(input_shape,)))\n    model.add(Dense(units=2, activation="relu"))\n    model.add(Dense(units=1, activation='sigmoid'))\n    model.add(Flatten())\n    model.compile(loss=joint_loss, optimizer="Adam",\n                  metrics=[joint_loss, "accuracy"],  # had to remove custom_loss because it takes 3 args now\n                  run_eagerly=True)\n    return model\n\n\nif __name__ == '__main__':\n    main()\n
\n

The main updates are to custom_loss. I removed creating the true_ratios DataFrame from custom_loss and instead appended it to my Y in main. Now custom_loss takes 3 arguments, one of which is the true_ratios tensor. I had to use gen_array_ops.unique_with_counts_v2 and unsorted_segment_sum to get sums per group of conditions. And then I got the lengths of each group in order to create pred_ratios (calculated ratios per group based on y_pred). Finally I get the mean predicted ratios and mean true ratios, and take the absolute difference to get my custom loss.

\n

Some things of note:

\n
    \n
  1. Because the last layer of my model is a sigmoid, my y_pred values are probabilities between 0 and 1. So I needed to convert them to 0s and 1s in order to calculate the ratios I need in my custom loss. At first I tried using tf.round, but I realized that is not differentiable. So instead I replaced it with y_pred = tf.sigmoid((y_pred - 0.5) * 1000) inside of custom_loss. This essentially takes all the y_pred values to 0 and 1, but in a differentiable way. It seems like a bit of a "hack" though, so please let me know if you have any feedback on this.
  2. \n
  3. I noticed that my model only works if I use run_eagerly=True in model.compile(). Otherwise I get this error: "ValueError: Dimensions must be equal, but are 1 and 2 for ...". I'm not sure why this is the case, but the error originates from the line where I use tf.unsorted_segment_sum.
  4. \n
  5. unique_with_counts_v2 does not actually exist in the TensorFlow API yet, but it exists in the source code. I needed this to be able to group by multiple conditions (not just a single one).
  6. \n
\n
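The sigmoid trick from point 1 can be sanity-checked in isolation; this is a sketch in plain NumPy rather than TensorFlow, with values and steepness chosen only for illustration:

```python
import numpy as np

def soft_round(y_pred, steepness=1000.0):
    # Steep sigmoid centered at 0.5: behaves like rounding,
    # but keeps a nonzero gradient everywhere
    return 1.0 / (1.0 + np.exp(-(y_pred - 0.5) * steepness))

probs = np.array([0.4, 0.49, 0.51, 0.6])
rounded = soft_round(probs)  # values below 0.5 go to ~0, above 0.5 to ~1
```

The steeper the multiplier, the closer the result gets to a hard round while remaining differentiable.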

Feel free to comment if you have any feedback on this, in general, or in response to the bullets above.

\n", + "system": "" + }, + { + "instruction": "Memory leak issue in tensorflow", + "input": "", + "output": "

I ran into a similar issue when I tried to use a pre-trained embedding model to generate embeddings as an input feature set. While using universal-sentence-encoder-4, the memory used to generate embeddings was not released. Neither tf.keras.backend.clear_session() nor gc.collect() helped.

\n

I ended up using the multiprocessing module to start a 'subprocess' that repeatedly does the following:

\n
    \n
  1. do something with multiprocess
  2. \n
  3. kill process (memory is released)
  4. \n
\n
import multiprocessing as mp\nimport time\n\ndef get_embedding(model_dir, text, q):\n    print("      loading encoder")\n    encoder = hub.load(model_dir)\n    print("      generate embedding")\n    encoded = encoder(text).numpy()\n    print("      returning embedding")\n    q.put(encoded)\n\n\nmanual_wait_time = 20\n\n## for iterate\nq = mp.Queue()\np = mp.Process(target=get_embedding, \nargs=(encoder_dir, some_text, q, ))\np.start()\n## p.join() ## hangs forever see reference\ntime.sleep(manual_wait_time)\nprint("manually wait %s seconds"%manual_wait_time)\nembedded_mini_batch = q.get()\nq.close()\np.terminate()\n
\n

references:

\n\n", + "system": "" + }, + { + "instruction": "Error with exporting TF2.2.0 model with tf.lookup.StaticHashTable for Serving", + "input": "", + "output": "

I had the same issue, and I solved it by creating a custom layer with the lookup transformation and then adding that layer to my model. Somebody else posted this answer here on Stack Overflow, but I cannot find it again, so I will reproduce it for you. The reason is that the variables and the other elements from outside must be trackable, and I found no other way to make them trackable than to create a custom layer, because custom layers are trackable and don't need additional assets when exporting.

\n

This is the code:

\n

Here is the custom layer that performs the transformation before the model (it includes the tokenizer as a lookup from a static table, and then the padding):

\n
class VocabLookup(tf.keras.layers.Layer):\n    def __init__(self, word_index, **kwargs):\n        self.word_index = word_index\n        self.vocab = list(word_index.keys())\n        self.indices = tf.convert_to_tensor(list(word_index.values()), dtype=tf.int64)\n        vocab_initializer = tf.lookup.KeyValueTensorInitializer(self.vocab, self.indices)\n        self.table = tf.lookup.StaticHashTable(vocab_initializer, default_value=1)\n        super(VocabLookup, self).__init__(**kwargs)\n\n    def build(self, input_shape):\n        self.built = True\n\n    def sentences_transform(self,tx):\n        x = tf.strings.lower(tx)\n        x = tf.strings.regex_replace(x,"[,.:;]", " ")\n        x = tf.strings.regex_replace(x,"\u00e1", "a")\n        x = tf.strings.regex_replace(x,"\u00e9", "e")\n        x = tf.strings.regex_replace(x,"\u00ed", "i")\n        x = tf.strings.regex_replace(x,"\u00f3", "o")\n        x = tf.strings.regex_replace(x,"\u00fa", "u")\n        x = tf.strings.regex_replace(x,"\u00fc", "u")\n        x = tf.strings.regex_replace(x,"\u00c1", "a")\n        x = tf.strings.regex_replace(x,"\u00c9", "e")\n        x = tf.strings.regex_replace(x,"\u00cd", "i")\n        x = tf.strings.regex_replace(x,"\u00d3", "o")\n        x = tf.strings.regex_replace(x,"\u00da", "u")\n        x = tf.strings.regex_replace(x,"\u00dc", "u")\n        x = tf.strings.regex_replace(x,"[?\u00bf\u00a1!@#$-_\\?+\u00bf{}*/]", "")\n        x = tf.strings.regex_replace(x," +", " ")\n        x = tf.strings.strip(x)\n        x = tf.strings.split(x)\n        x = self.table.lookup(x)\n        x_as_vector = tf.reshape(x, [-1])\n        zero_padding = tf.zeros([191] - tf.shape(x_as_vector), dtype=x.dtype)\n        x = tf.concat([x_as_vector, zero_padding], 0)\n        return x\n\n    def call(self, inputs):\n        x = tf.map_fn(lambda tx: self.sentences_transform(tx), elems = inputs,dtype=tf.int64)\n        return x\n\n    def 
get_config(self):\n        return {'word_index': self.word_index}\n
\n

In my case I created the layer to receive the word_index from a tokenizer as an input. Then you can use a layer like this one inside your model:

\n
with open(<tokenizer_path>) as f:\n    data = json.load(f)\n    tokenizer = tokenizer_from_json(data)\n\nmoderator = load_model(<final model path ('.h5')>)\nword_index = tokenizer.word_index\ntext_bytes = tf.keras.Input(shape=(), name='image_bytes', dtype=tf.string)\nx = VocabLookup(word_index)(text_bytes)\noutput = moderator(x)\nmodel = tf.keras.models.Model(text_bytes, output)\n
\n

If you print the model summary you will see something like this:

\n
model.summary()\nModel: "functional_57"\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nimage_bytes (InputLayer)     [(None,)]                 0         \n_________________________________________________________________\nvocab_lookup_60 (VocabLookup (None, None)              0         \n_________________________________________________________________\nsequential_1 (Sequential)    (None, 1)                 1354369   \n=================================================================\nTotal params: 1,354,369\nTrainable params: 1,354,369\nNon-trainable params: 0\n
\n

With these steps you can finally save the model for TF2 serving:

\n
save_path = <your_serving_model_path>\ntf.saved_model.save(model,  save_path)\n
\n", + "system": "" + }, + { + "instruction": "How can I use GPT 3 for my text classification?", + "input": "", + "output": "

(i substituted hateful language with ******** in the following samples)

\n

Given samples like:

\n
("You look like ****** *** to me *******", true)\n("**** you *********", true)\n("**** my ****", true)\n("hey my name is John can you help me?", false)\n("hey my name is John, i think you ****** ***!", true)\n("i have a problem with my network driver hpz-3332d", false)\n
\n

GPT-3 can indeed then decide whether a given input is hateful or not. GPT-3 can act as a filter that very effectively tells whether an arbitrary comment is hateful. You would just enter the message and let GPT-3 autocomplete the <code>, true|false)</code> part at the end, setting the token limit to about <code>6</code> and the temperature to about 90%.

\n
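To make the completion setup concrete, here is a minimal sketch of how such a few-shot prompt could be assembled; the sample texts and query are made up, and this only builds the string that would be sent for completion:

```python
# Few-shot examples in the same "(text, label)" format as above (samples are hypothetical)
examples = [
    ("hey my name is John can you help me?", "false"),
    ("i have a problem with my network driver hpz-3332d", "false"),
    ("you ****** ***!", "true"),
]

def build_prompt(examples, query):
    lines = ['("%s", %s)' % (text, label) for text, label in examples]
    # Leave the label off so the model autocompletes ', true)' or ', false)'
    lines.append('("%s",' % query)
    return "\n".join(lines)

prompt = build_prompt(examples, "hello, can you fix my printer?")
```

The model then only has to complete the short ` true)` or ` false)` suffix, which is why a small token limit suffices.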

Boolean-ish classification that also relies on more complex context (you can insult someone without using foul language) is doable with GPT-3 and can also be done with GPT-2.

\n", + "system": "" + }, + { + "instruction": "Using BERT Embeddings in Keras Embedding layer", + "input": "", + "output": "

Hope these links are of help:

\n\n
<pre><code>transformer_model = transformers.TFBertModel.from_pretrained('bert-large-uncased')\n\ninput_ids = tf.keras.layers.Input(shape=(128,), name='input_token', dtype='int32')\ninput_masks_ids = tf.keras.layers.Input(shape=(128,), name='masked_token', dtype='int32')\nX = transformer_model(input_ids, input_masks_ids)[0]\nX = tf.keras.layers.Dropout(0.2)(X)\nX = tf.keras.layers.Dense(6, activation='softmax')(X)\nmodel = tf.keras.Model(inputs=[input_ids, input_masks_ids], outputs=X)\n</code></pre>
\n\n
import numpy as np\nfrom transformers import AutoTokenizer, pipeline, TFDistilBertModel\nfrom scipy.spatial.distance import cosine\ndef transformer_embedding(name,inp,model_name):\n\n    model = model_name.from_pretrained(name)\n    tokenizer = AutoTokenizer.from_pretrained(name)\n    pipe = pipeline('feature-extraction', model=model, \n                tokenizer=tokenizer)\n    features = pipe(inp)\n    features = np.squeeze(features)\n    return features\nz=['The brown fox jumped over the dog','The ship sank in the Atlantic Ocean']\nembedding_features1=transformer_embedding('distilbert-base-uncased',z[0],TFDistilBertModel)\nembedding_features2=transformer_embedding('distilbert-base-uncased',z[1],TFDistilBertModel)\ndistance=1-cosine(embedding_features1[0],embedding_features2[0])\nprint(distance)\n
\n

Thanks.

\n", + "system": "" + }, + { + "instruction": "Add attention layer to Seq2Seq model", + "input": "", + "output": "

The dot products need to be computed on tensor outputs. In the encoder you correctly define encoder_outputs; in the decoder you have to add decoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)

\n

The dot products now are:

\n
attention = dot([decoder_outputs, encoder_outputs], axes=[2, 2])\nattention = Activation('softmax')(attention)\ncontext = dot([attention, encoder_outputs], axes=[2,1])\n
\n
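Shape-wise, the two dot products above correspond to the following NumPy operations; the batch size and dimensions here are illustrative, not tied to the model:

```python
import numpy as np

def softmax(a, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

B, Td, Te, H = 2, 4, 6, 8  # batch, decoder steps, encoder steps, hidden dim
decoder_outputs = np.random.rand(B, Td, H)
encoder_outputs = np.random.rand(B, Te, H)

# dot(..., axes=[2, 2]): alignment scores, shape (B, Td, Te)
attention = np.einsum("bdh,beh->bde", decoder_outputs, encoder_outputs)
attention = softmax(attention, axis=-1)

# dot(..., axes=[2, 1]): weighted sum of encoder states, shape (B, Td, H)
context = np.einsum("bde,beh->bdh", attention, encoder_outputs)
```

Each decoder step thus gets a context vector that is a convex combination of all encoder steps.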

The concatenation doesn't need initial_states; you define that in your RNN layer: decoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)

\n

Here is the full example.

\n

ENCODER + DECODER

\n
# dummy variables\nnum_encoder_tokens = 30\nnum_decoder_tokens = 10\nlatent_dim = 100\n\nencoder_inputs = Input(shape=(None,))\nenc_emb =  Embedding(num_encoder_tokens, latent_dim, mask_zero = True)(encoder_inputs)\nencoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)\nencoder_outputs, state_h, state_c = encoder_lstm(enc_emb)\n# We discard `encoder_outputs` and only keep the states.\nencoder_states = [state_h, state_c]\n\n# Set up the decoder, using `encoder_states` as initial state.\ndecoder_inputs = Input(shape=(None,))\ndec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)\ndec_emb = dec_emb_layer(decoder_inputs)\n# We set up our decoder to return full output sequences,\n# and to return internal states as well. We don't use the\n# return states in the training model, but we will use them in inference.\ndecoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)\ndecoder_outputs, _, _ = decoder_lstm(dec_emb,\n                                     initial_state=encoder_states)\ndecoder_dense = Dense(num_decoder_tokens, activation='softmax')\ndecoder_outputs = decoder_dense(decoder_outputs)\n\n# Define the model that will turn\n# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`\nmodel = Model([encoder_inputs, decoder_inputs], decoder_outputs)\nmodel.summary()\n
\n

DECODER with ATTENTION

\n
# Set up the decoder, using `encoder_states` as initial state.\ndecoder_inputs = Input(shape=(None,))\ndec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)\ndec_emb = dec_emb_layer(decoder_inputs)\n# We set up our decoder to return full output sequences,\n# and to return internal states as well. We don't use the\n# return states in the training model, but we will use them in inference.\ndecoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)\ndecoder_outputs, state_h, state_c = decoder_lstm(dec_emb, initial_state=encoder_states)\nattention = dot([decoder_outputs, encoder_outputs], axes=[2, 2])\nattention = Activation('softmax')(attention)\ncontext = dot([attention, encoder_outputs], axes=[2,1])\ndecoder_outputs = concatenate([context, decoder_outputs])\ndecoder_dense = Dense(num_decoder_tokens, activation='softmax')(decoder_outputs)\n\n# Define the model that will turn\n# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`\nmodel = Model([encoder_inputs, decoder_inputs], decoder_dense)\nmodel.summary()\n
\n", + "system": "" + }, + { + "instruction": "`y` argument is not supported when using `keras.utils.Sequence` as input. error", + "input": "", + "output": "

The data generator you are using returns both the image and its label; the label is extracted from the directory name. If you have the following directory structure

\n\n\n\n

The model will understand that image1 is of class methane, image2 is of class plastic, and image3 is of class dsci, so there is no need to pass the labels.
\nIf you don't have that directory structure, then you might need to define your own generator class based on tf.keras.utils.Sequence class. you can read more about that here

\n", + "system": "" + }, + { + "instruction": "Join metrics of every output in Keras (in multiple output)", + "input": "", + "output": "

I worked around the problem by creating a custom callback

\n\n
class MergeMetrics(Callback):\n\n    def __init__(self,**kargs):\n        super(MergeMetrics,self).__init__(**kargs)\n\n    def on_epoch_begin(self,epoch, logs={}):\n        return\n\n    def on_epoch_end(self, epoch, logs={}):\n        logs['merge_metrics'] = 0.5*logs[\"y1_mse\"]+0.5*logs[\"y2_mse\"]\n
\n\n

I use this callback to merge two metrics coming from two different outputs. I use a simple problem as an example, but you can easily integrate it into your own problem and extend it to a validation set

\n\n

This is the dummy example:

\n\n
X = np.random.uniform(0,1, (1000,10))\ny1 = np.random.uniform(0,1, 1000)\ny2 = np.random.uniform(0,1, 1000)\n\n\ninp = Input((10))\nx = Dense(32, activation='relu')(inp)\nout1 = Dense(1, name='y1')(x)\nout2 = Dense(1, name='y2')(x)\nm = Model(inp, [out1,out2])\nm.compile('adam','mae', metrics='mse')\n\n\ncheckpoint = MergeMetrics()\nm.fit(X, [y1,y2], epochs=10, callbacks=[checkpoint])\n
\n\n

the printed output

\n\n
loss: ..... y1_mse: 0.0863 - y2_mse: 0.0875 - merge_metrics: 0.0869\n
\n", + "system": "" + }, + { + "instruction": "Working of the Earth Mover Loss method in Keras and input arguments data types", + "input": "", + "output": "

You can see EML as a sort of RMSE between cumulative distribution functions (CDFs)

\n\n

Given N classes, all you need is a normalized probability score for each sample. In neural network domains, this is achieved with a softmax activation function as the output layer

\n\n

The EML simply compares the CDFs of the predictions and the ground truth

\n\n

In a classification problem with 10 classes, for a single sample, we can have these arrays

\n\n

y_true = [0,0,0,1,0,0,0,0,0,0] # the sample belongs to the 4th class

\n\n

y_pred = [0.1,0,0,0.9,0,0,0,0,0,0] # probabilities output of softmax layer

\n\n

on them we compute CDFs and get the following scores:

\n\n

CDF_y_true = [0,0,0,1,1,1,1,1,1,1]

\n\n

CDF_y_pred = [0.1,0.1,0.1,1,1,1,1,1,1,1]

\n\n

As defined above, the EML computes the RMSE on these CDFs

\n\n
y_true = np.asarray([0.,0.,0.,1.,0.,0.,0.,0.,0.,0.])\ny_pred = np.asarray([0.1,0.,0.,0.9,0.,0.,0.,0.,0.,0.])\n\ncdf_true = K.cumsum(y_true, axis=-1)\ncdf_pred = K.cumsum(y_pred, axis=-1)\nemd = K.sqrt(K.mean(K.square(cdf_true - cdf_pred), axis=-1))\n
\n\n
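The same computation in plain NumPy, matching the CDF values listed above (a sketch for checking the arithmetic, not training code):

```python
import numpy as np

y_true = np.array([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
y_pred = np.array([0.1, 0., 0., 0.9, 0., 0., 0., 0., 0., 0.])

cdf_true = np.cumsum(y_true)  # [0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
cdf_pred = np.cumsum(y_pred)  # [0.1, 0.1, 0.1, 1, 1, 1, 1, 1, 1, 1]

# RMSE between the two CDFs: three gaps of 0.1 averaged over 10 bins
emd = np.sqrt(np.mean((cdf_true - cdf_pred) ** 2))
```

Here emd equals sqrt(3 * 0.1^2 / 10) = sqrt(0.003), roughly 0.0548.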

In the specific case of the NIMA paper by Google on TID2013, N=10 and the labels are expressed in the form of float scores. In order to train the network with EML, these are the steps to follow:

\n\n\n\n

At the end of training, our NN is able to produce, for a given image, a probability score for each class.\nWe have to transform these scores into a mean quality score with an associated standard deviation, as defined in the paper.\nTo do this we follow the procedure defined in the paper

\n\n

bins = [1,2,3,4,5,6,7,8,9,10]

\n\n

y_pred = [0.1,0,0,0.9,0,0,0,0,0,0] # probabilities output of softmax layer

\n\n

mu_score = sum(bins*y_pred) = 1*0.1 + 2*0 + 3*0 + 4*0.9 + ... + 10*0

\n\n

sigma_score = sum(((bins - mu_score)**2)*y_pred)**0.5

\n\n
bins = np.arange(1,11)\ny_pred = np.asarray([0.1,0.,0.,0.9,0.,0.,0.,0.,0.,0.])\n\nmu_score = np.sum(bins*y_pred)\nstd_score = np.sum(((bins - mu_score)**2)*y_pred)**0.5\n
\n", + "system": "" + }, + { + "instruction": "Problem with KerasRegressor & multiple output", + "input": "", + "output": "

The problem is that the input dimension of the <code>Input</code> layer is not 3, but <code>3*feature_dim</code>. Below is a working example

\n
import numpy as np\nimport tensorflow as tf \nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Input,Dense,Concatenate\nfrom sklearn.model_selection import cross_val_score,KFold\nfrom tensorflow.keras.wrappers.scikit_learn import KerasRegressor\n\n\ndef create_model():\n\n    feature_dim = 10\n    input_data = Input(shape=(3*feature_dim,))\n\n    #create the layers and pass them the input tensor to get the output tensor:\n    layer = [2,2]\n    hidden1Out = Dense(units=layer[0], activation='relu')(input_data)\n    finalOut = Dense(units=layer[1], activation='relu')(hidden1Out)\n\n    u_out = Dense(1, activation='linear', name='u')(finalOut)   \n    v_out = Dense(1, activation='linear', name='v')(finalOut)   \n    p_out = Dense(1, activation='linear', name='p')(finalOut)   \n\n    output = Concatenate()([u_out,v_out,p_out])\n    #define the model's start and end points\n    model = Model(inputs=input_data,outputs=output)    \n\n    model.compile(loss='mean_squared_error', optimizer='adam')\n\n    return model\n\nx_0 = np.random.rand(100,10)\nx_1 = np.random.rand(100,10)\nx_2 = np.random.rand(100,10)\ninput_val = np.hstack([x_0,x_1,x_2])\n\nu = np.random.rand(100,1)\nv = np.random.rand(100,1)\np = np.random.rand(100,1)\noutput_val = np.hstack([u,v,p])\n\nestimator = KerasRegressor(build_fn=create_model,nb_epoch=3,batch_size=8,verbose=False)\nkfold = KFold(n_splits=3, random_state=0)\nresults = cross_val_score(estimator=estimator,X=input_val,y=output_val,cv=kfold)\nprint("Results: %.2f (%.2f) MSE" % (results.mean(), results.std()))\n
\n

As you can see, since each input block has dimension 10, I specify <code>feature_dim</code> inside <code>create_model</code>.

\n", + "system": "" + }, + { + "instruction": "Keras functional API and TensorFlow Hub", + "input": "", + "output": "

Try This

\n
sentence_encoding_layer = hub.KerasLayer("https://tfhub.dev/google/universal-sentence-encoder/4",\n                                         trainable=False,\n                                         input_shape = [],\n                                         dtype = tf.string,\n                                         name = 'U.S.E')\n\ninputs = tf.keras.layers.Input(shape = (), dtype = 'string',name = 'input_layer')\n\nx = sentence_encoding_layer(inputs)\nx = tf.keras.layers.Dense(64,activation = 'relu')(x)\n\noutputs = tf.keras.layers.Dense(1,activation = 'sigmoid',name = 'output_layer')(x)\n\nmodel = tf.keras.Model(inputs,outputs,name = 'Transfer_learning_USE')\nmodel.summary()\n
\n

model.predict([sentence])

\n", + "system": "" + }, + { + "instruction": "Can I write a keras callback that records and returns the total training time?", + "input": "", + "output": "

Try keras.callbacks.Callback(). And yes, you are right that the time will vary based on the batch size.

\n\n
from timeit import default_timer as timer\n\nclass TimingCallback(keras.callbacks.Callback):\n    def __init__(self, logs={}):\n        self.logs=[]\n    def on_epoch_begin(self, epoch, logs={}):\n        self.starttime = timer()\n    def on_epoch_end(self, epoch, logs={}):\n        self.logs.append(timer()-self.starttime)\n\ncb = TimingCallback()\n\nmodel = Sequential()\n# Your code\nmodel.fit(X, y, epochs=epochs, batch_size=batch_size, callbacks=[cb])\nprint(cb.logs)\nprint(sum(cb.logs))\n
\n\n

Read about it from here.

\n", + "system": "" + }, + { + "instruction": "How to install plaidML / plaidML-keras", + "input": "", + "output": "

Disclaimer: I'm on the PlaidML team, and we're actively working to improve the setup experience and documentation around it. We're sorry you were stuck on this. For now, here are some instructions to get you back on track.

\n\n
    \n
  1. Find out where plaidml-setup was installed. Typically, this is some variant of /usr/local/bin or a path to your virtual environment. The prefix of this path (i.e. /usr/local) is important to note for the next step.
  2. \n
  3. Find the plaidml share directory. It's within the same prefix as plaidml-setup, i.e. /usr/local/share/plaidml.
  4. \n
  5. Within the plaidml share directory, there should be a few files: at a minimum, config.json and experimental.json should be in there. If they're not in there, you can copy the files here to your plaidml share directory.
  6. \n
\n\n

After copying those json files over, you should be able to run plaidml-setup with no issue.

\n", + "system": "" + }, + { + "instruction": "Implement Causal CNN in Keras for multivariate time-series prediction", + "input": "", + "output": "

Comments about the given answer

\n

Questions:

\n
\n\n
\n

That answer is sort of strange. I'm not an expert, but I don't see the need to keep independent features with a TimeDistributed layer. But I also cannot say whether it gives a better result or not. At first I'd say it's just unnecessary. But it might bring extra intelligence though, given that it might see relations that involve distant steps between two features instead of just looking at "same steps". (This should be tested)

\n

Nevertheless, there is a mistake in that approach.

\n

The reshapes that are intended to swap lookback and feature sizes are not doing what they are expected to do. The author of the answer clearly wants to swap axes (keeps the interpretation of what is feature, what is lookback), which is different from reshape (mixes everything and data loses meaningfulness)

\n

A correct approach would need actual axis swapping, like model.add(Permute((2,1))) instead of the reshapes.

\n

So, I don't know about those answers, but nothing seems to create that need.\nOne sure thing is: you will certainly want the dependent part. A model will not get anywhere near the intelligence of your original model if it doesn't consider relations between features. (Unless you're lucky enough to have completely independent data.)

\n

Now, explaining the relation between LSTM and Conv1D

\n

An LSTM can be directly compared to a Conv1D: the shapes used are exactly the same, and they mean virtually the same thing, as long as you're using channels_last.

\n

That said, the shape (samples, input_length, features_or_channels) is the correct shape for both LSTM and Conv1D. In fact, features and channels are exactly the same thing in this case. What changes is how each layer works regarding the input length and calculations.

\n

Concept of filters and kernels

\n

The kernel is the entire tensor inside the conv layer that is multiplied with the inputs to produce the results. A kernel includes its spatial size (kernel_size) and its number of filters (output features), and it automatically matches the number of input channels.

\n

There is not a number of kernels, but there is a kernel_size. The kernel size is how many steps in the length will be joined together for each output step. (This tutorial is great for understanding 2D convolutions regarding what it does and what the kernel size is - just imagine 1D images instead -- this tutorial doesn't show the number of "filters" though, it's like 1-filter animations)

\n

The number of filters relates directly to the number of features, they're exactly the same thing.

\n
\n

What should filters and kernel be set to?

\n
\n

So, if your LSTM layer is using units=256, meaning it will output 256 features, you should use filters=256, meaning your convolution will output 256 channels/features.

\n

This is not a rule, though, you may find that using more or less filters could bring better results, since the layers do different things after all. There is no need to have all layers with the same number of filters as well!! Here you should go with a parameter tuning. Test to see which numbers are best for your goal and data.

\n

Now, kernel size is something that can't be compared to the LSTM. It's a new thing added to the model.

\n

The number 3 is a very common choice. It means the convolution will take three time steps to produce one output step, then slide one step to take another group of three steps to produce the next output step, and so on.

\n
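This sliding behaviour is easy to see with NumPy's 1D convolution; the kernel weights here are arbitrary, chosen only for illustration:

```python
import numpy as np

x = np.arange(10, dtype=float)        # a length-10 input sequence
kernel = np.array([0.25, 0.5, 0.25])  # kernel_size = 3

# 'valid' mode: each output step combines 3 consecutive input steps,
# so a length-10 input yields 10 - 3 + 1 = 8 output steps
out = np.convolve(x, kernel, mode="valid")
```

Each output element is a weighted combination of one 3-step window, and the window slides by one step at a time.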

Dilations

\n

Dilations mean how many spaces between steps the convolution filter will have.

\n\n
\n

What should dilations be set to to represent lookback of 10?

\n
\n

range = 1 + (kernel_size - 1) * dilation_rate

\n

So, with a kernel size = 3:

\n\n
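The formula above can be wrapped in a small helper to find the dilation needed for a given lookback (the function name is ours, a sketch rather than a Keras API):

```python
def receptive_field(kernel_size, dilation_rate):
    # Number of input steps one output step of a dilated convolution can see
    return 1 + (kernel_size - 1) * dilation_rate

# With kernel_size = 3, find the smallest dilation rate whose
# receptive field spans a lookback of 10
rates = [d for d in range(1, 10) if receptive_field(3, d) >= 10]
```

With kernel_size = 3, dilation_rate = 4 only covers 9 steps, while dilation_rate = 5 covers 11, so 5 is the smallest rate that spans a lookback of 10.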

My question to you

\n

If you can explain "exactly" how you're structuring your data (what the original data is, how you're transforming it into the input shape, whether you have independent sequences, whether you're creating sliding windows, etc.), a better understanding of this process could be achieved.

\n", + "system": "" + }, + { + "instruction": "InvalidArgumentError: computed output size would be negative", + "input": "", + "output": "

That's because your tensor shape is smaller than the size of the conv kernel.

\n\n
\n

e.g. Tensor shape is (None, None, 10, None), but the filter of conv is (X, 16, X, X).

\n
\n\n

10 is less than 16.

\n", + "system": "" + }, + { + "instruction": "Getting OSError: [Errno 16] Device or resource busy: ' when using tf.keras.models.Sequential.fit_generator", + "input": "", + "output": "

It\u2019s trying to remove some temporary directory used to communicate between processes, but the directory is mounted over NFS and some file in it is open. NFS doesn\u2019t support the otherwise-common Unix idiom of unlinking a file while it\u2019s open, so the NFS client emulates it by renaming the file instead and truly removes it after the file is closed.

\n\n

Recursive removal defeats this, of course, by trying to remove the renamed file as well. In some cases, it just renames it again (producing ENOTEMPTY if the recursive removal eventually tries rmdir), but you got EBUSY (perhaps indicating that the file is being executed?).

\n\n

It could be caused in some cases by interactive activity, but if that\u2019s not the problem, make sure that all your resources are cleaned up and your processes joined before letting such an automatic cleanup run.

\n", + "system": "" + }, + { + "instruction": "How to debug Keras in TensorFlow 2.0?", + "input": "", + "output": "

You see that output because the Keras model is being converted to its graph representation, and thus <code>print</code> prints the <code>tf.Tensor</code> graph description.

\n\n

To see the content of a tf.Tensor when using Tensorflow 2.0 you should use tf.print instead of print since the former gets converted to its graph representation while the latter doesn't.

\n", + "system": "" + }, + { + "instruction": "Difference between 3D-tensor and 4D-tensor for images input of DL Keras framework", + "input": "", + "output": "

If you take a look at the Keras documentation of the conv2D layer here, you will see that the shape of the input tensor must be 4D.

\n\n
\n

conv2D layer input shape
\n 4D tensor with shape: (batch, channels, rows, cols) if data_format is \"channels_first\" or 4D tensor with shape: (batch, rows, cols, channels) if data_format is \"channels_last\".

\n
\n\n

So the 4th dimension of the shape is mandatory, even if it is only "1" as for a grayscale image.
\nSo in fact, it is not a matter of performance gain or lack of simplicity; it's simply the mandatory shape of the input argument.
\nHope this answers your question.

\n", + "system": "" + }, + { + "instruction": "Invalid dimension for image data in plt.imshow()", + "input": "", + "output": "

As per @sdcbr's comment, np.squeeze removes unnecessary singleton dimensions. If the image has 2 dimensions, the imshow function works fine. If the image has 3 dimensions, you have to squeeze out the extra singleton dimension. For higher-dimensional data you will have to reduce it to 2 dimensions, so np.squeeze may need to be applied multiple times (or you may use some other dimension-reduction function for higher-dimensional data).

\n\n
import numpy as np  \nimport matplotlib.pyplot as plt\nimg_path = x_test[1]  \nprint(img_path.shape)\nif(len(img_path.shape) == 3):\n    plt.imshow(np.squeeze(img_path))\nelif(len(img_path.shape) == 2):\n    plt.imshow(img_path)\nelse:\n    print(\"Higher dimensional data\")\n
\n", + "system": "" + }, + { + "instruction": "Keras: Difference between AveragePooling1D layer and GlobalAveragePooling1D layer", + "input": "", + "output": "

GlobalAveragePooling1D is the same as AveragePooling1D with pool_size=steps: for each feature dimension, it takes the average over all time steps. The AveragePooling1D output thus has shape (batch_size, 1, features) (if data_format='channels_last'); GlobalAveragePooling1D additionally drops the second (or third, if data_format='channels_first') dimension, which is how you get an output shape of (batch_size, features).
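This equivalence is easy to check with a quick numpy sketch (shapes assume data_format='channels_last'; plain numpy rather than the Keras layers):

```python
import numpy as np

x = np.random.rand(4, 7, 3)  # (batch, steps, features), channels_last

# AveragePooling1D with pool_size=steps keeps a length-1 steps axis
avg_pool = x.mean(axis=1, keepdims=True)   # shape (4, 1, 3)

# GlobalAveragePooling1D also drops that axis
global_avg = x.mean(axis=1)                # shape (4, 3)

assert np.allclose(global_avg, avg_pool.squeeze(axis=1))
```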

\n", + "system": "" + }, + { + "instruction": "ELI5 explain_weights and explain_predictions as pandas DataFrame", + "input": "", + "output": "

Try this. It worked for me.

\n
explanation_pred = eli5.explain_prediction_df(estimator=my_model, doc=X_test.iloc[0])\n
\n", + "system": "" + }, + { + "instruction": "Change size of train and test set from MNIST Dataset", + "input": "", + "output": "

There's no such argument in mnist.load_data. Instead you can concatenate data via numpy then split via sklearn (or numpy):

\n\n
from keras.datasets import mnist\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx = np.concatenate((x_train, x_test))\ny = np.concatenate((y_train, y_test))\n\ntrain_size = 0.7\nx_train, x_test, y_train, y_test = train_test_split(x, y, train_size=train_size, random_state=2019)\n
\n\n

Set random_state for reproducibility.

\n\n

Via numpy (if you don't use sklearn):

\n\n
# do the same concatenation\nnp.random.seed(2019)\ntrain_size = 0.7\nindex = np.random.rand(len(x)) < train_size  # boolean index\nx_train, x_test = x[index], x[~index]  # index and it's negation\ny_train, y_test = y[index], y[~index]\n
\n\n

You'll get arrays of approximately the required size (~210xx instead of exactly 21000 test samples).

\n\n

The source code of mnist.load_data shows that this function just fetches the data from a URL where it is already split 60000 / 10000 (train / test), so the concatenation workaround is the only option.

\n\n

You could also download the MNIST dataset from http://yann.lecun.com/exdb/mnist/ and preprocess it manually, and then concatenate it (as you need). But, as far as I understand, it was divided into 60000 examples for training and 10000 for testing because this splitting is used in standard benchmarks.

\n", + "system": "" + }, + { + "instruction": "Tensorflow Adam optimizer vs Keras Adam optimizer", + "input": "", + "output": "

You can find decent documentation about decay in tensorflow:

\n\n
...\nglobal_step = tf.Variable(0, trainable=False)\nstarter_learning_rate = 0.1\nlearning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,\n                                           100000, 0.96, staircase=True)\n\nlearning_step = ( tf.train.GradientDescentOptimizer(learning_rate)\n    .minimize(...my loss..., global_step=global_step)\n)\n
\n\n

tf.train.exponential_decay applies exponential decay to the learning rate.

\n\n

Other decays:

\n\n\n\n

Keras implements decay in its Adam optimizer similarly to the line below, which is very close to inverse_time_decay in tensorflow:

\n\n
lr = self.lr * (1. / (1. + self.decay * self.iterations))\n
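As a sanity check, that schedule can be evaluated in plain Python (a sketch of the formula only, not Keras internals):

```python
# Effective learning rate under Keras' iteration-based decay:
#   lr_t = lr0 * 1 / (1 + decay * iterations)
def keras_decayed_lr(lr0, decay, iterations):
    return lr0 * (1.0 / (1.0 + decay * iterations))

print(keras_decayed_lr(0.001, 0.0, 1000))   # 0.001 -- decay defaults to 0
print(keras_decayed_lr(0.001, 1e-4, 1000))  # ~0.000909
```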
\n", + "system": "" + }, + { + "instruction": "Tying Autoencoder Weights in a Dense Keras Layer", + "input": "", + "output": "

Thanks Mikhail Berlinkov. \nOne important remark: this code runs under Keras, but not in eager mode in TF2.0. It runs, but it trains badly.

\n\n

The critical point is, how the object stores the transposed weight. \nself.kernel = K.transpose(self.tied_to.kernel)

\n\n

In non-eager mode this creates the graph the right way. In eager mode it fails, probably because the value of the transposed variable is stored at build time (== the first call) and then reused at subsequent calls.

\n\n

However: the solution is to store the variable unaltered at build, \nand put the transpose operation into the call method.

\n\n

I spent several days figuring this out, and I am happy if this helps anyone.

\n", + "system": "" + }, + { + "instruction": "Why not use Flatten followed by a Dense layer instead of TimeDistributed?", + "input": "", + "output": "

Imagine you have a batch of 4 time steps, each containing a 3-element vector. Let's represent that with this:

\n\n

\"Input

\n\n

Now you want to transform this batch using a dense layer, so you get 5 features per time step. The output of the layer can be represented as something like this:

\n\n

\"Output

\n\n

There are two options to consider: a TimeDistributed dense layer, or reshaping to a flat input, applying a dense layer, and reshaping back to time steps.

\n\n

In the first option, you would apply a dense layer with 3 inputs and 5 outputs to every single time step. This could look like this:

\n\n

\"TimeDistributed

\n\n

Each blue circle here is a unit in the dense layer. By doing this with every input time step you get the total output. Importantly, these five units are the same for all the time steps, so you only have the parameters of a single dense layer with 3 inputs and 5 outputs.

\n\n

The second option would involve flattening the input into a 12-element vector, applying a dense layer with 12 inputs and 20 outputs, and then reshaping that back. This is how it would look:

\n\n

\"Flat

\n\n

Here the input connections of only one unit are drawn for clarity, but every unit would be connected to every input. Here, obviously, you have many more parameters (those of a dense layer with 12 inputs and 20 outputs), and also note that each output value is influenced by every input value, so values in one time step would affect outputs in other time steps. Whether this is something good or bad depends on your problem and model, but it is an important difference with respect to the previous, where each time step input and output were independent. In addition to that, this configuration requires you to use a fixed number of time steps on each batch, whereas the previous works independently of the number of time steps.

\n\n

You could also consider the option of having four dense layers, each applied independently to each time step (I didn't draw it but hopefully you get the idea). That would be similar to the previous one, only each unit would receive input connections only from its respective time step inputs. I don't think there is a straightforward way to do that in Keras, you would have to split the input into four, apply dense layers to each part and merge the outputs. Again, in this case the number of time steps would be fixed.
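The parameter counts of the two main options above can be checked with one line of arithmetic (params of a dense layer with i inputs and o outputs = i*o + o; the 4-step, 3-feature, 5-output numbers follow the example):

```python
# Parameter count of a dense layer with i inputs and o outputs
def dense_params(i, o):
    return i * o + o  # weights + biases

time_distributed = dense_params(3, 5)    # one layer shared across all 4 steps
flattened = dense_params(4 * 3, 4 * 5)   # 12 inputs -> 20 outputs

print(time_distributed, flattened)  # 20 260
```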

\n", + "system": "" + }, + { + "instruction": "Keras: Use categorical_crossentropy without one-hot encoded array of targets", + "input": "", + "output": "

If your targets are one-hot encoded, use categorical_crossentropy.\nExamples of one-hot encodings:

\n\n
[1,0,0]\n[0,1,0]\n[0,0,1]\n
\n\n

However, if your targets are integers, use sparse_categorical_crossentropy.\nExamples of integer encodings:

\n\n
1\n2\n3\n
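A quick numpy sketch of the two encodings (note that sparse integer labels are 0-indexed class ids; np.eye(num_classes)[labels] is equivalent to keras.utils.to_categorical):

```python
import numpy as np

# Integer class ids -> use sparse_categorical_crossentropy
labels = np.array([0, 1, 2])

# One-hot targets -> use categorical_crossentropy.
one_hot = np.eye(3)[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```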
\n", + "system": "" + }, + { + "instruction": "How to use Keras generator with tf.data API", + "input": "", + "output": "

I have tried to reproduce your results with a simple example, and I found out that you get different output shapes when batching is done both within the generator function and in tf.data.

\n\n

The Keras function train_datagen.flow_from_directory(batch_size=32) already returns the data with shape [batch_size, width, height, depth]. If one uses tf.data.Dataset().batch(32) the output data is batched again into shape [batch_size, batch_size, width, height, depth].

\n\n

This double batching is most likely what caused your issue.
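A toy illustration of the nesting effect, with plain Python lists standing in for image tensors (a sketch of the batching semantics, not tf.data itself):

```python
# A generator that already yields batches of 32 "samples" each
already_batched = [["sample"] * 32 for _ in range(10)]  # 10 batches of 32

def rebatch(items, batch_size):
    # What Dataset.batch() does: group consecutive elements
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

# Batching the batched stream again nests the batch dimension
double = rebatch(already_batched, 4)
print(len(double[0]), len(double[0][0]))  # 4 32 -> shape (4, 32, ...)
```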

\n", + "system": "" + }, + { + "instruction": "Keras creating three classes instead of two", + "input": "", + "output": "

I figured it out. The problem was that I still had some hidden folders in my train and validation folders.

\n", + "system": "" + }, + { + "instruction": "How do you train GANs using multiple GPUs with Keras?", + "input": "", + "output": "

multi_gpu_model can be used in each of the functions for the generator, the discriminator, and the GAN:

\n\n
def create_generator():\n  #network architecture \n  generator = Model(inputs=input, outputs=output)\n  generator = multi_gpu_model(generator, gpus=2)\n  generator.compile()\n  return generator\n
\n\n

The same can be done for discriminator and gan.

\n", + "system": "" + }, + { + "instruction": "what is the meaning of (axis = 3) in the BatchNormalization?", + "input": "", + "output": "

It depends on how the dimensions of your "conv1" variable are ordered. First, note that batch normalization should be performed over the channels after a convolution; for example, if your dimension order is [batch, height, width, channel], you want to use axis=3. Basically, you choose the axis index that represents your channels.
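A numpy sketch of what "statistics per channel" means for axis=3 with [batch, height, width, channel] data (plain numpy, not the Keras layer, and ignoring the learned scale/shift parameters):

```python
import numpy as np

x = np.random.rand(8, 32, 32, 16)   # NHWC

# axis=3 -> one mean/variance per channel, reduced over all other axes
mean = x.mean(axis=(0, 1, 2))       # shape (16,)
var = x.var(axis=(0, 1, 2))
normed = (x - mean) / np.sqrt(var + 1e-5)

print(mean.shape, normed.shape)     # (16,) (8, 32, 32, 16)
```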

\n", + "system": "" + }, + { + "instruction": "TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type float64 of argument 'x'", + "input": "", + "output": "

I ran into the same problem and I assume the default data type used in the model is float32 while that of numpy is float64, and from_tensor_slices retains that type. To fix it, just change your code:

\n
data = np.random.random((1000,32))\nlabels = np.random.random((1000,10))\n
\n

to

\n
data = np.random.random((1000,32)).astype(np.float32)\nlabels = np.random.random((1000,10)).astype(np.float32)\n
\n

But I do think as a piece of sample code in its tutorial, tensorflow should make sure it runs.

\n

Update: There is a closed issue related to this: https://github.com/tensorflow/tensorflow/issues/22207

\n", + "system": "" + }, + { + "instruction": "Keras Layer Concatenation", + "input": "", + "output": "
input1 = Input(input_shape=...)\ninput2 = Input(...)\ninput3 = Input(...)\nvalues = Input(...)\n\nout1 = Embedding(...)(input1)\nout2 = Embedding(...)(input2)   \nout3 = Embedding(...)(input3)\n\n#make sure values has a shape compatible with the embedding outputs.\n#usually it should have shape (equal_samples, equal_length, features)   \njoinedInput = Concatenate()([out1,out2,out3,values])\n\nout = Dense(...)(joinedInput)\nout = Dense(...)(out)\nout = Dense(2, activation='softmax')(out)\n\nmodel = Model([input1,input2,input3,values], out)\n
\n", + "system": "" + }, + { + "instruction": "Reflection padding Conv2D", + "input": "", + "output": "

The accepted answer above is not working in the current Keras version. Here is the version that's working:

\n\n
class ReflectionPadding2D(Layer):\n    def __init__(self, padding=(1, 1), **kwargs):\n        self.padding = tuple(padding)\n        self.input_spec = [InputSpec(ndim=4)]\n        super(ReflectionPadding2D, self).__init__(**kwargs)\n\n    def compute_output_shape(self, s):\n        \"\"\" If you are using \"channels_last\" configuration\"\"\"\n        return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3])\n\n    def call(self, x, mask=None):\n        w_pad,h_pad = self.padding\n        return tf.pad(x, [[0,0], [h_pad,h_pad], [w_pad,w_pad], [0,0] ], 'REFLECT')\n
\n", + "system": "" + }, + { + "instruction": "keras shapes while UpSampling mismatch", + "input": "", + "output": "

This is a typical problem you face when dealing with decoders or some form of upsampling.

\n

What you might not be aware of is that upsampling or deconvolution typically results in an increased height and width that can surpass the dimensions you would expect.

\n

To be more precise:
\nIn your case you expect an output of shape (148, 84, 1); however, due to the upsampling you ended up with (150, 81, 1).

\n

The solution here is to crop your output using a cropping layer after upsampling:

\n
tf.keras.layers.Cropping2D(cropping=((top_crop, bottom_crop), (left_crop, right_crop))) \n
\n

therefore in your case for example:

\n
tf.keras.layers.Cropping2D(cropping=((1, 1), (1, 2))) \n# or\ntf.keras.layers.Cropping2D(cropping=((0, 2), (2, 1))) \n# or \ntf.keras.layers.Cropping2D(cropping=((2, 0), (1, 2))) \n
\n

This will crop the output into the expected shape from (150, 81, 1) to (148, 84, 1).
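The crop amounts themselves are just the difference between the actual and the expected size, split between the two sides; a tiny helper sketch (shown for the height dimension of the example, assuming actual >= target):

```python
# Total crop per dimension = actual size - target size;
# Cropping2D splits it into (before, after).
def crop_amounts(actual, target):
    diff = actual - target
    return (diff // 2, diff - diff // 2)

print(crop_amounts(150, 148))  # (1, 1) -> crop 1 row top, 1 row bottom
```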

\n

For more details please refer to:\ntf.keras.layers.Cropping2D

\n", + "system": "" + }, + { + "instruction": "Keras - method on_batch_end is slow but only callback I have is checkpoint", + "input": "", + "output": "

This is most probably a generator (fit_generator()) issue. When using a generator as the data source, it has to be called at the end of every batch. Consider revisiting your generator code, using multiprocessing (workers > 1), or a higher batch size (if possible).

\n", + "system": "" + }, + { + "instruction": "AttributeError: 'str' object has no attribute 'ndim'", + "input": "", + "output": "

You are feeding a list of strings to a model which is something it does not expect. You can use keras.preprocessing.text module to convert the text to an integer sequence. More specifically you can prepare data like:

\n\n
from keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import pad_sequences\ntk = Tokenizer()\ntk.fit_on_texts(texts)\nindex_list = tk.texts_to_sequences(texts)\nx_train = pad_sequences(index_list, maxlen=maxlen)\n
\n\n

Now x_train (a n_samples * maxlen ndarray of type np.int) is a legitimate input for the model.

\n", + "system": "" + }, + { + "instruction": "Access internal forget gate value in lstm node", + "input": "", + "output": "

How to get internal weights and parameters?

\n\n

If you are using Keras with the Tensorflow backend (which is the recommended backend), you can access the internal layer values (like weights & biases) using tf.global_variables(). For convenience, create a dictionary which maps the layer names to the corresponding tf.Variable:

\n\n
variable_dict = dict([])\nfor variable in tf.global_variables():\n    variable_dict[variable.name] = variable\n
\n\n

After this simple mapping, you can refer to a variable directly by its name. The LSTM layer of Keras (let's say, the model.summary() shows that the LSTM layer is named as 'lstm_1') has the following internal variables: 'lstm_1/kernel:0', 'lstm_1/recurrent_kernel:0', and 'lstm_1/bias:0'. To get their values, you will need to have some understanding of Session and Tensor objects of Tensorflow.

\n\n

Now is the tricky part: how does Keras store the variables in these tensors? The documentation does not specify any information about it, so unfortunately you will have to refer to the source code. Look at the LSTMCell class, especially the build (describes which variables are defined) and call (describes how these variables are used in calculation) methods. Seems like the internal parameters are stored in the variables, named as *_i, *_f, *_c, *_o (e.g. self.kernel_o).

\n\n
\n\n

How to get in-between values during calculation?

\n\n

Unfortunately, you will have to directly modify the calculation graph. It is easier than it sounds: just edit the Tensorflow code and store these variables somewhere in a named tensor in the global scope.

\n\n
\n\n
\n

If not then is there any other library which gives access to these\n internal gate values in a convenient manner at every step?

\n
\n\n

As you see, Keras with a Tensorflow backend is very convenient. I don't think that you can get any more convenient than that.

\n", + "system": "" + }, + { + "instruction": "Does 1D Convolutional layer support variable sequence lengths?", + "input": "", + "output": "

Keras supports variable lengths by using None in the respective dimension when defining the model.

\n\n

Notice that often input_shape refers to the shape without the batch size.

\n\n

So, the 3D tensor with shape (batch_size, steps, input_dim) suits perfectly a model with input_shape=(steps, input_dim).

\n\n

All you need to make this model accept variable lengths is use None in the steps dimension:

\n\n
input_shape=(None, input_dim)\n
\n\n

Numpy limitation

\n\n

Now, there is a numpy limitation about variable lengths. You cannot create a numpy array with a shape that suits variable lengths.

\n\n

A few solutions are available:

\n\n\n\n

Be careful with layers that don't support variable sizes

\n\n

In convolutional models using variable sizes you can't, for instance, use Flatten: if this were possible, the result of the flatten would have a variable size, and the following Dense layers would not be able to have a constant number of weights. This is impossible.

\n\n

So, instead of Flatten, you should start using GlobalMaxPooling1D or GlobalAveragePooling1D layers.
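A numpy sketch of why global pooling makes the output size independent of the sequence length (emulating GlobalMaxPooling1D with a plain max over the steps axis):

```python
import numpy as np

def global_max_pool_1d(x):   # x: (batch, steps, features)
    return x.max(axis=1)     # -> (batch, features), steps axis collapsed

short = np.random.rand(2, 5, 8)
long_ = np.random.rand(2, 50, 8)
print(global_max_pool_1d(short).shape, global_max_pool_1d(long_).shape)
# both (2, 8) -- a following Dense layer sees a fixed-size input
```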

\n", + "system": "" + }, + { + "instruction": "Keras record loss and accuracy of train and test for each batch", + "input": "", + "output": "

I have the same problem: each time after calculating the gradient on a batch, I need to compute the loss on the validation set as well as on the training set.

\n\n

In the Keras API there are remarkable parameters:

\n\n
steps_per_epoch, validation_steps\n
\n\n

They set the number of examples per epoch and per validation run, respectively. So I wanted to set the epoch size to 20 examples, thereby artificially equating it to batch_size. After that I create a callback, which is invoked every time the processing of a batch is completed:

\n\n
class LossHistory(Callback):\n    def __init__(self):\n        super(Callback, self).__init__()\n        self.losses = []\n        self.val_losses = []\n\n    def on_train_begin(self, logs=None):\n        self.losses = []\n        self.val_losses = []\n\n    def on_batch_end(self, batch, logs=None):\n        self.losses.append(logs.get('loss'))\n        self.val_losses.append(logs.get('val_loss'))\n
\n\n

I wrote about this bug here. So far I'm waiting for an answer, but the problem still requires a solution.

\n\n

Since there is no simple answer to this question, a workaround is necessary. For this you can use the members of the Callback class: self.validation_data holds the validation set, on which you can evaluate. This is done in the following way:

\n\n
class LossHistory(Callback):\n    def __init__(self):\n        super(Callback, self).__init__()\n        self.losses = []\n        self.val_losses = []\n\n    def on_train_begin(self, logs=None):\n        self.losses = []\n        self.val_losses = []\n\n    def on_batch_end(self, batch, logs=None):\n        self.losses.append(logs.get('loss'))\n        self.val_losses.append(self.model.evaluate(self.validation_data[0], self.validation_data[1]))\n
\n\n

P.S. logs.get('val_loss') is only computed after each epoch; consequently, on the first batches of the first epoch it will be None.

\n", + "system": "" + }, + { + "instruction": "ValueError at /image/ Tensor Tensor("activation_5/Softmax:0", shape=(?, 4), dtype=float32) is not an element of this graph", + "input": "", + "output": "

Your test_image and the input of the tensorflow model do not match.

\n\n
# Your image shape is (, , 3)\ntest_image = cv2.imread('media/'+request.FILES['test_image'].name)\n\nif test_image is not None:\n    test_image = cv2.resize(test_image, (128, 128))\n    test_image = np.array(test_image)\n    test_image = test_image.astype('float32')\n    test_image /= 255\n    print(test_image.shape)\nelse:\n    print('image didnt load')\n\n# Your image shape is (, , 4)\ntest_image = np.expand_dims(test_image, axis=0)\nprint(test_image)\nprint(test_image.shape)\n\npred = model.predict_classes(test_image)\n
\n\n

The above is just an assumption. If you want to debug, I guess you should print your image shape and compare it with the first layer of your model definition, and check whether the size (width, height, depth) matches.

\n", + "system": "" + }, + { + "instruction": "element-wise multiplication with broadcasting in keras custom layer", + "input": "", + "output": "

Before multiplying, you need to repeat the elements to increase the shape.\nYou can use K.repeat_elements for that. (import keras.backend as K)

\n\n
class MyLayer(Layer):\n\n    #there are some difficulties for different types of shapes   \n    #let's use a 'repeat_count' instead, increasing only one dimension\n    def __init__(self, repeat_count,**kwargs):\n        self.repeat_count = repeat_count\n        super(MyLayer, self).__init__(**kwargs)\n\n    def build(self, input_shape):\n\n        #first, let's get the output_shape\n        output_shape = self.compute_output_shape(input_shape)\n        weight_shape = (1,) + output_shape[1:] #replace the batch size by 1\n\n\n        self.kernel = self.add_weight(name='kernel',\n                                      shape=weight_shape,\n                                      initializer='ones',\n                                      trainable=True)\n\n\n        super(MyLayer, self).build(input_shape)  # Be sure to call this somewhere!\n\n    #here, we need to repeat the elements before multiplying\n    def call(self, x):\n\n        if self.repeat_count > 1:\n\n             #we add the extra dimension:\n             x = K.expand_dims(x, axis=1)\n\n             #we replicate the elements\n             x = K.repeat_elements(x, rep=self.repeat_count, axis=1)\n\n\n        #multiply\n        return x * self.kernel\n\n\n    #make sure we comput the ouptut shape according to what we did in \"call\"\n    def compute_output_shape(self, input_shape):\n\n        if self.repeat_count > 1:\n            return (input_shape[0],self.repeat_count) + input_shape[1:]\n        else:\n            return input_shape\n
\n", + "system": "" + }, + { + "instruction": "Keras: What is the difference between layers.Input and layers.InputLayer?", + "input": "", + "output": "

I think InputLayer has been deprecated together with the Graph models. I would suggest you use Input, as all the examples in the Keras documentation show.

\n", + "system": "" + }, + { + "instruction": "fit_generator in keras: where is the batch_size specified?", + "input": "", + "output": "

You will need to handle the batch size somehow inside the generator. Here is an example to generate random batches:

\n\n
import numpy as np\ndata = np.arange(100)\ndata_lab = data%2\nwholeData = np.array([data, data_lab])\nwholeData = wholeData.T\n\ndef data_generator(all_data, batch_size = 20):\n\n    while True:        \n\n        idx = np.random.randint(len(all_data), size=batch_size)\n\n        # Assuming the last column contains labels\n        batch_x = all_data[idx, :-1]\n        batch_y = all_data[idx, -1]\n\n        # Return a tuple of (Xs,Ys) to feed the model\n        yield(batch_x, batch_y)\n\nprint([x for x in data_generator(wholeData)])\n
\n", + "system": "" + }, + { + "instruction": "Define pinball loss function in keras with tensorflow backend", + "input": "", + "output": "

I\u2019ve figured this out by myself using the Keras backend:

\n\n
def pinball(y_true, y_pred):\n    global i\n    tao = (i + 1) / 10\n    pin = K.mean(K.maximum(y_true - y_pred, 0) * tao +\n                 K.maximum(y_pred - y_true, 0) * (1 - tao))\n    return pin\n
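For checking the values the Keras version produces, here is an equivalent plain-numpy sketch (tao is passed explicitly instead of being derived from the global i):

```python
import numpy as np

# Pinball (quantile) loss: under-predictions weighted by tao,
# over-predictions by (1 - tao)
def pinball_np(y_true, y_pred, tao):
    return np.mean(np.maximum(y_true - y_pred, 0) * tao +
                   np.maximum(y_pred - y_true, 0) * (1 - tao))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 1.5])
print(pinball_np(y_true, y_pred, 0.5))  # 0.41666... (half the mean abs error)
```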
\n", + "system": "" + }, + { + "instruction": "How can I add orthogonality regularization in Keras?", + "input": "", + "output": "

From the docs:

\n\n
\n

Any function that takes in a weight matrix and returns a loss\n contribution tensor can be used as a regularizer

\n
\n\n

Here is the example implemented:

\n\n
from keras import backend as K\n\ndef l1_reg(weight_matrix):\n    return 0.01 * K.sum(K.abs(weight_matrix))\n\nmodel.add(Dense(64, input_dim=64,\n                kernel_regularizer=l1_reg))\n
\n\n

the loss in your post would be:

\n\n
from keras import backend as K\ndef fro_norm(w):\n    return K.sqrt(K.sum(K.square(K.abs(w))))\n\ndef cust_reg(w):\n    m = K.dot(K.transpose(w), w) - K.eye(K.int_shape(w)[1])\n    return fro_norm(m)\n
\n\n

Here is a minimal example:

\n\n
import numpy as np\nfrom keras import backend as K\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation\n\nX = np.random.randn(100, 100)\ny = np.random.randint(2, size=(100, 1))\n\nmodel = Sequential()\n\n# apply regularization here. applies regularization to the \n# output (activation) of the layer\nmodel.add(Dense(32, input_shape=(100,), \n                activity_regularizer=fro_norm))\nmodel.add(Dense(1))\nmodel.add(Activation('softmax'))\n\nmodel.compile(loss=\"binary_crossentropy\",\n              optimizer='sgd',\n              metrics=['accuracy'])\n\nmodel.fit(X, y, epochs=1, batch_size=32)\n
\n\n

The code below wouldn't work, as hinted by @Marcin's comment: a regularizer must return a Tensor, and LA.norm() doesn't return one.

\n\n
from keras import backend as K\nimport numpy as np\nfrom numpy import linalg as LA\n\ndef orth_norm(w):\n    m = K.dot(K.transpose(w), w) - np.eye(w.shape)\n    return LA.norm(m, 'fro')  # fails: returns a numpy value, not a Tensor\n
\n\n

Keras regularizers

\n\n

Frobenius norm

\n", + "system": "" + }, + { + "instruction": "keras: issue using ImageDataGenerator and KFold for fit_generator", + "input": "", + "output": "

At the moment one cannot split a dataset held in a folder using a flow_from_directory generator. This option is simply not implemented. To get the test / train split one needs to split the main directory into a set of train / test / val directories, using e.g. the os library in Python.

\n", + "system": "" + }, + { + "instruction": "Combine to Keras functional models", + "input": "", + "output": "

I haven't tried this but according to the documentation functional models are callable, so you can do something like:

\n\n
y = model2(model1(x))\n
\n\n

where x is the data that goes to inputs and y is the result of predictions_2

\n", + "system": "" + }, + { + "instruction": "Name of input and output tensors when loading Keras model to TensorFlow", + "input": "", + "output": "

As of TensorFlow 2.0 (unfortunately they seem to change this often) you can export the model to the SavedModel format -in python- using

\n\n
model.save('MODEL-FOLDER')\n
\n\n

and then inspect the model using the saved_model_cli tool (found inside python folder <yourenv>/bin/saved_model_cli -in anaconda at least)

\n\n
saved_model_cli show --dir /path/to/model/MODEL-FOLDER/ --tag_set serve --signature_def serving_default\n
\n\n

the output will be something like

\n\n
The given SavedModel SignatureDef contains the following input(s):\n  inputs['graph_input'] tensor_info:\n      dtype: DT_DOUBLE\n      shape: (-1, 28, 28)\n      name: serving_default_graph_input:0\nThe given SavedModel SignatureDef contains the following output(s):\n  outputs['graph_output'] tensor_info:\n      dtype: DT_FLOAT\n      shape: (-1, 10)\n      name: StatefulPartitionedCall:0\nMethod name is: tensorflow/serving/predict\n
\n\n

By inspecting the output, you can see the name of the input and output tensors in this case to be, respectively: serving_default_graph_input and StatefulPartitionedCall

\n\n

This is how you find the tensor names.

\n\n

The right way to do this, though, is to define a graph path and its input and output tensors on the model using SignatureDefs. You'd then load those SignatureDefs instead of having to deal with the tensor names directly.

\n\n

This is a nice reference that explains this better than the official docs, imo:

\n\n

https://sthalles.github.io/serving_tensorflow_models/

\n", + "system": "" + }, + { + "instruction": "Keras dependencies needed for prediction only in AWS Lambda", + "input": "", + "output": "

You can strip your binaries to save space

\n\n
find . | xargs strip\n
\n", + "system": "" + }, + { + "instruction": "Accessing neural network weights and neuron activations", + "input": "", + "output": "

This is covered in the Keras FAQ, you basically want to compute the activations for each layer, so you can do it with this code:

\n\n
from keras import backend as K\n\n#The layer number\nn = 3 \n# with a Sequential model\nget_nth_layer_output = K.function([model.layers[0].input],\n                                  [model.layers[n].output])\nlayer_output = get_nth_layer_output([X])[0]\n
\n\n

Unfortunately you would need to compile and run a function for each layer, but this should be straightforward.

\n\n

To get the weights, you can call get_weights() on any layer.

\n\n
nth_weights = model.layers[n].get_weights()\n
\n", + "system": "" + }, + { + "instruction": "`y` argument is not supported when using `keras.utils.Sequence` as input. error", + "input": "", + "output": "

The data generator you are using returns both the image and its label; the label is extracted from the directory name. If you have the following directory structure

\n\n\n\n

The model will understand that image1 is of class methane and image2 is of class plastic and image3 is of class dsci, so no need to pass the labels.
\nIf you don't have that directory structure, then you might need to define your own generator class based on the tf.keras.utils.Sequence class. You can read more about that here

\n", + "system": "" + }, + { + "instruction": "Error using Keras in R. "ValueError: Only input tensors may be passed as positional arguments."", + "input": "", + "output": "

The official solution for now appears to be: uninstall keras and use keras3 instead:\nhttps://github.com/rstudio/keras/issues/1428#issuecomment-2047357912

\n

This worked for me.

\n", + "system": "" + }, + { + "instruction": "Tensorflow augmentation layers not working after importing from tf.keras.applications", + "input": "", + "output": "

I noticed the same issue with tf 2.8. It can be solved by adding training=True when you test the augmentation layer:

\n
aug = data_augmentation(image,training=True)\n
\n

The reason is that the augmentation layer behaves differently in training and predicting (inference): it will do augmentation in training but do nothing in predicting. Ideally, the layer should set the training= argument smartly according to the situation. Apparently, it is not smart in the above code: it does not know your intention is to test the layer.

\n

But I think you should still leave the training argument at its default when you build the full model, letting the augmentation layer do its job.

\n", + "system": "" + }, + { + "instruction": "WebSocket not working when trying to send generated answer by keras", + "input": "", + "output": "

I am devastated; I just wasted 2 days on the dumbest possible issue (and fix).

\n

I still had the

\n
while True:\n    question = input("")\n    ints = predict(question)\n    answer = response(ints, json_data)\n    print(answer)\n
\n

in my model file, so the server didn't start. The fix was to delete it and now it works fine.

\n", + "system": "" + }, + { + "instruction": "Expected min_ndim=2, found ndim=1. Full shape received: (None,)", + "input": "", + "output": "

I think you need to explicitly define an input layer with your input shape, since your output layer cannot infer the shape of the tensor coming from the normalization layer:

\n
import tensorflow as tf\n\nsingle_feature_normalizer = tf.keras.layers.Normalization(axis=None)\nfeature = tf.random.normal((314, 1))\nsingle_feature_normalizer.adapt(feature)\n\nsingle_feature_model = tf.keras.models.Sequential([\n    tf.keras.layers.Input(shape=(1,)),\n    single_feature_normalizer,\n    tf.keras.layers.Dense(1)\n])\n
\n

Or define the input shape directly in the normalization layer without using an input layer:

\n
single_feature_normalizer = tf.keras.layers.Normalization(input_shape=[1,], axis=None)\n
\n", + "system": "" + }, + { + "instruction": "Keras, model trains successfully but generating predictions gives ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor", + "input": "", + "output": "

I have a similar issue. My model trained just fine, but while setting up inference for a seq2seq architecture, I am not able to understand why the Embedding layer is causing issues. Here is the model architecture:

\n

Code :

\n
from keras.models import load_model, Model\nfrom keras.layers import Input, Embedding\n\nLATENT_DIM = 128\nEMBEDDING_DIM = 200\nnum_words_output = 2000\n\nmodel = load_model('s2s')\nmodel.summary()\n\nencoder_inputs = model.input[0]  # input_1\nencoder_outputs, state_h_enc, state_c_enc = model.layers[4].output  # lstm_1\nencoder_states = [state_h_enc, state_c_enc]\nencoder_model = Model(encoder_inputs, encoder_states)\n\n\ndecoder_inputs = model.input[1]  # input_2\ndecoder_state_input_h = Input(shape=(LATENT_DIM,))\ndecoder_state_input_c = Input(shape=(LATENT_DIM,))\ndecoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]\n\ndecoder_lstm = model.layers[5]\ndecoder_outputs, state_h_dec, state_c_dec = model.layers[5].output # lstm_2\ndecoder_states = [state_h_dec, state_c_dec]\ndecoder_dense = model.layers[6]\ndecoder_outputs = decoder_dense(decoder_outputs)\n\ndecoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)\n
\n

Error:

\n

line 29:\ndecoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)

\n

ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 16), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'") at layer "embedding". The following previous layers were accessed without issue: []

\n

Model Architecture image

\n", + "system": "" + }, + { + "instruction": "cannot import name 'Sequence' from 'keras.utils'", + "input": "", + "output": "

I was able to replicate your issue; it is reproduced in the code below.

\n
import talos\nimport keras\n\nprint(talos.__version__)\nprint(keras.__version__)\n\nfrom keras.utils import Sequence \n
\n

Output:

\n
1.0.0\n2.5.0\n\nImportError                               Traceback (most recent call last)\n<ipython-input-14-064232ea7706> in <module>()\n----> 1 from keras.utils import Sequence # it wont work\n\nImportError: cannot import name 'Sequence' from 'keras.utils' (/usr/local/lib/python3.7/dist-packages/keras/utils/__init__.py)\n
\n

Fixed code:

\n

From TensorFlow 2.x onward, standalone keras is no longer maintained; it became part of TensorFlow. Instead of importing Sequence from keras, I would recommend importing it from tensorflow, as shown below:

\n
from tensorflow.keras.utils import Sequence\n
\n

For more information you can refer here.

\n", + "system": "" + }, + { + "instruction": "Verifying the implementation of Multihead Attention in Transformer", + "input": "", + "output": "

In your implementation, in scaled_dot_product you scaled with the query, but according to the original paper, the key is used to normalize. Apart from that, this implementation seems OK, but it is not general.

\n
class MultiAttention(tf.keras.layers.Layer):\n    def __init__(self, num_of_heads, out_dim):\n        super(MultiAttention,self).__init__()\n        self.out_dim      = out_dim\n        self.num_of_heads = num_of_heads\n        self.depth        = self.out_dim // self.num_of_heads\n        self.wq = [tf.keras.layers.Dense(self.depth) for i in range(num_of_heads)]\n        self.wk = [tf.keras.layers.Dense(self.depth) for i in range(num_of_heads)]\n        self.wv = [tf.keras.layers.Dense(self.depth) for i in range(num_of_heads)]\n        self.wo = tf.keras.layers.Dense(self.out_dim)\n        \n    def call(self,x):\n        multi_attn = []\n        for i in range(self.num_of_heads):\n            Q = self.wq[i](x)\n            K = self.wk[i](x)\n            V = self.wv[i](x)\n            multi_attn.append(self.scaled_dot_product(Q,K,V))\n\n        multi_head = tf.concat(multi_attn, axis=-1)\n        multi_head_attention = self.wo(multi_head)\n        return multi_head_attention\n\n    def scaled_dot_product(self, q,k,v):\n        qkt = tf.matmul(q, k, transpose_b=True)\n        dk = tf.math.sqrt( tf.cast(k.shape[-1], dtype=tf.float32) )\n        scaled_qkt = qkt/dk\n        softmax = tf.nn.softmax(scaled_qkt, axis=-1)\n        z = tf.matmul(softmax, v)\n        return z\n\nmulti = MultiAttention(num_of_heads=3, out_dim=32)\nsample_ip = tf.random.normal(shape=(2, 2, 32)); print(sample_ip.shape)\nmulti(sample_ip).shape\n
\n

The general transformer architecture can be demonstrated as follows, where the first two linear layers represent the query and key and are responsible for producing the attention weight maps, followed by weighting the value in matrix-multiplication fashion.

\n\n

Image Source.

\n
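For intuition, here is a dependency-free sketch of the scaled dot-product step with tiny 2x2 matrices. The function names mirror the code above, but this toy version (plain lists, a single head) is only illustrative:

```python
import math

# Toy scaled dot-product attention for one head, using plain lists.
def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def softmax(row):
    m = max(row)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product(q, k, v):
    dk = math.sqrt(len(k[0]))          # scale by sqrt(depth of the key)
    kt = [list(col) for col in zip(*k)]  # transpose of K
    logits = [[x / dk for x in row] for row in matmul(q, kt)]
    weights = [softmax(row) for row in logits]  # each row sums to 1
    return matmul(weights, v), weights

q = k = v = [[1.0, 0.0], [0.0, 1.0]]
out, attn = scaled_dot_product(q, k, v)
```

Since q equals k here, each query attends most strongly to its own position, so the attention matrix is diagonally dominant.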

I understand you're trying to minimize the original TF tutorial code, but I think you should first add a reference to it in your original question. In the original implementation, they also return the weighted probabilities or scores along with the weighted feature maps; I think you shouldn't skip that.

\n
\n

The original code that you're following is more general and efficiently optimized.

\n
class MultiHeadAttention(tf.keras.layers.Layer):\n    def __init__(self, d_model, num_heads):\n        super(MultiHeadAttention, self).__init__()\n        self.num_heads = num_heads\n        self.d_model = d_model\n        assert d_model % self.num_heads == 0\n        self.depth = d_model // self.num_heads\n        self.wq = tf.keras.layers.Dense(d_model)\n        self.wk = tf.keras.layers.Dense(d_model)\n        self.wv = tf.keras.layers.Dense(d_model)\n        self.dense = tf.keras.layers.Dense(d_model)\n\n    def scaled_dot_product_attention(self, q, k, v, mask=None):\n        matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)\n        # scale matmul_qk\n        dk = tf.cast(tf.shape(k)[-1], tf.float32)\n        scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)\n        # add the mask to the scaled tensor.\n        if mask is not None: scaled_attention_logits += (mask * -1e9)\n        # softmax is normalized on the last axis (seq_len_k) so that the scores\n        # add up to 1.\n        attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)\n        output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)\n        return output, attention_weights\n\n    def split_heads(self, x, batch_size):\n        """Split the last dimension into (num_heads, depth).\n        Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)\n        """\n        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))\n        return tf.transpose(x, perm=[0, 2, 1, 3])\n\n    def call(self, v, k, q, mask=None):\n        batch_size = tf.shape(q)[0]\n        q = self.wq(q)  # (batch_size, seq_len, d_model)\n        k = self.wk(k)  # (batch_size, seq_len, d_model)\n        v = self.wv(v)  # (batch_size, seq_len, d_model)\n\n        q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)\n        k = self.split_heads(k, batch_size)  # 
(batch_size, num_heads, seq_len_k, depth)\n        v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)\n        # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)\n        # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)\n        scaled_attention, attention_weights = self.scaled_dot_product_attention(q, k, v, mask)\n\n        scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)\n        concat_attention = tf.reshape(scaled_attention,  (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)\n        output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)\n        return output, attention_weights\n
\n

FYI, in TF 2.4, the tf.keras.layers.MultiHeadAttention layer was officially added.

\n
layer = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=2)\ninput_tensor = tf.keras.Input(shape=[2, 2, 32]); print(input_tensor.shape)\nprint(layer(input_tensor, input_tensor).shape)\n
\n

You can test these two as follows:

\n
# custom layer MHA\nmulti = MultiHeadAttention(d_model=512, num_heads=2)\ny = tf.random.uniform((1, 60, 512))  \nout, attn = multi(y, k=y, q=y, mask=None)\nout.shape, attn.shape\n(TensorShape([1, 60, 512]), TensorShape([1, 2, 60, 60]))\n\n# built-in layer \nlayer = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=2)\ny = tf.random.uniform((1, 60, 512))  \nout, attn = layer(y, y, return_attention_scores=True)\nout.shape, attn.shape\n(TensorShape([1, 60, 512]), TensorShape([1, 2, 60, 60]))\n
\n", + "system": "" + }, + { + "instruction": "What is this flatten layer doing in my LSTM?", + "input": "", + "output": "

An LSTM layer consists of different LSTM cells that are processed sequentially. As seen in the figure below, the first cell takes an input/embedding and calculates a hidden state, and the next cell uses its own input and the hidden state from the previous time step to compute its own hidden state. Basically, the arrows between the cells also pass the hidden states. If you set return_sequences=False, the LSTM layer only outputs the very last hidden state (h_4 in the figure). So all the information from all inputs and cells is squeezed into a single fixed-size vector, which cannot contain much information. This is why your accuracy is not good when you only use the last hidden state.

\n

When you set return_sequences=True, the LSTM layer outputs every hidden state, so the next layers have access to all hidden states and naturally receive more information. However, the LSTM layer then returns a matrix. You can also see this in your model summary: it returns a matrix of size (None, 500, 128). None is basically the number of samples in your batch (you can forget about it), 500 is your input length, and 128 is your hidden state size. The Dense layer cannot process a matrix; it has to be a vector. That is why you need to apply Flatten, and what it does is basically just unroll the 2D matrix and represent it as a 1D vector. Therefore, the size of your Flatten layer is 64000, because 500*128 = 64000. And of course, with more hidden states the accuracy is better, as they contain more information.\n\"An

\n", + "system": "" + }, + { + "instruction": "Error loading Tensorflow keras Model (.h5)", + "input": "", + "output": "

As mentioned in this post, your h5 file only contains weights. You need to save your model architecture in a json file and then use model_from_json, to load model configuration, hence, you can load weights with load_weights.

\n

Another option may be to simply save your model (architecture + weights together) by replacing your last line by

\n
model.save("model.h5")\n
\n

and then to load, you can make use of

\n
model = load_model('model.h5')\n
\n", + "system": "" + }, + { + "instruction": "ExponentialDecay learning rate schedule with 'staircase=True' changes the training behavior even before it should become effective", + "input": "", + "output": "

When using ExponentialDecay, what you're basically doing is to make a decayed learning rate like:

\n
def decayed_learning_rate(step):\n  return initial_learning_rate * decay_rate ^ (step / decay_steps)\n
\n

When you set staircase=True, what happens is that step / decay_steps is an integer division and the rate follows a staircase function.\nNow, let's take a look at the source code:

\n
# ...setup for step function...\nglobal_step_recomp = math_ops.cast(step, dtype) # step is the current step count\np = global_step_recomp / decay_steps\nif self.staircase:\n  p = math_ops.floor(p)\nreturn math_ops.multiply(initial_learning_rate, math_ops.pow(decay_rate, p), name=name)\n
\n

And we can see that we have a variable p that updates at every multiple of decay_steps, so at steps 25, 50, 75, and so on. Basically, the learning rate is constant within each span of 25 steps, not epochs, which is why it updates before the first 25 epochs have passed. A good explanation of the difference can be read at What is the difference between steps and epochs in TensorFlow?

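To see the staircase effect numerically, the decayed_learning_rate formula above can be evaluated in plain Python (the hyperparameter values below are made up for illustration):

```python
import math

# Plain-Python version of the decayed_learning_rate formula above.
# Hyperparameter values are made up for illustration.
initial_learning_rate = 0.1
decay_rate = 0.5
decay_steps = 25

def decayed_learning_rate(step, staircase=True):
    p = step / decay_steps
    if staircase:
        p = math.floor(p)  # floor makes the rate piecewise constant
    return initial_learning_rate * decay_rate ** p

# Constant within [0, 25), then it drops at steps 25, 50, ...
rates = [decayed_learning_rate(s) for s in (0, 10, 24, 25, 50)]
```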
\n", + "system": "" + }, + { + "instruction": "Understanding WeightedKappaLoss using Keras", + "input": "", + "output": "

Let us separate the goal into two sub-goals: we walk through the purpose, concept, and mathematical details of Weighted Kappa first; after that, we summarize the things to note when trying to use WeightedKappaLoss in TensorFlow.

\n

PS: you can skip the understanding part if you only care about usage

\n
\n

Weighted Kappa detailed explanation

\n

Since Weighted Kappa can be seen as Cohen's kappa + weights, we need to understand Cohen's kappa first.

\n

Example of Cohen's kappa

\n

Suppose we have two classifiers (A and B) trying to classify 50 statements into two categories (True and False). The way they classify those statements w.r.t. each other is shown in a contingency table:

\n
         B\n         True False\nA True   20   5     25 statements A think is true\n  False  10   15    25 statements A think is false\n         30 statements B think is true\n              20 statements B think is false\n
\n

Now suppose we want to know: how reliable are the predictions A and B made?

\n

What we can do is simply take the percentage of classified statements on which A and B agree with each other, i.e. the proportion of observed agreement, denoted Po, so:

\n
Po = (20 + 15) / 50 = 0.7\n
\n

But this is problematic, because there is a probability that A and B agree with each other by random chance, i.e. the proportion of expected chance agreement, denoted Pe. If we use the observed percentages as the expected probabilities, then:

\n
Pe = (probability statement A think is true) * (probability statement B think is true) +\n     (probability statement A think is false) * (probability statement B think is false) \n   = (25 / 50) * (30 / 50) + \n     (25 / 50) * (20 / 50)\n   = 0.5\n
\n

Cohen's kappa coefficient, denoted K, incorporates Po and Pe to give us a more robust measure of the reliability of the predictions A and B made:

\n

\"enter

\n
K = (Po - Pe) / (1 - Pe) = 1 - (1 - Po) / (1 - Pe) = 1 - (1 - 0.7) / (1 - 0.5) = 0.4\n
\n

We can see that the more A and B agree with each other (higher Po) and the less they agree because of chance (lower Pe), the more Cohen's kappa "thinks" the result is reliable

\n
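The worked example above can be reproduced in a few lines of plain Python:

```python
# Contingency table from the example: rows = A, columns = B.
table = [[20, 5], [10, 15]]
total = sum(sum(row) for row in table)               # 50 statements

# Proportion of observed agreement (the diagonal cells).
po = (table[0][0] + table[1][1]) / total             # 0.7

# Proportion of expected chance agreement from the marginals.
a_true = sum(table[0]) / total                       # 25 / 50
b_true = (table[0][0] + table[1][0]) / total         # 30 / 50
pe = a_true * b_true + (1 - a_true) * (1 - b_true)   # 0.5

kappa = (po - pe) / (1 - pe)                         # 0.4
```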

Now assume A is the labels (ground truth) of the statements; then K tells us how reliable B's predictions are, i.e. how much the predictions agree with the labels when taking random chance into consideration.

\n

Weights for Cohen's kappa

\n

We define the contingency table with k classes formally:

\n
                                    classifier 2\n                       class.1  class.2  class... class.k  Sum over row\n               class.1   n11      n12      ...      n1k      n1+  \n               class.2   n21      n22      ...      n2k      n2+  \nclassifier 1   class...  ...      ...      ...      ...      ...  \n               class.k   nk1      nk2      ...      nkk      nk+  \n       Sum over column   n+1      n+2      ...      n+k      N   # total sum of all table cells\n
\n

The table cells contain the counts of cross-classified categories, denoted nij, with i, j the row and column index respectively.

\n

Consider those k ordinal classes as separated out from two categorical classes, e.g. separating 1, 0 into five classes 1, 0.75, 0.5, 0.25, 0, which have a smooth ordered transition. We cannot say the classes are independent (except the first and last class); e.g. with very good, good, normal, bad, very bad, the classes very good and good are not independent, and good should be closer to bad than to very bad.

\n

Since adjacent classes are interdependent, in order to calculate the quantity related to agreement we need to define this dependency, i.e. Weights, denoted Wij, assigned to each cell in the contingency table; the value of each weight (within the range [0, 1]) depends on how close the two classes are.

\n

Now let's look at the Po and Pe formulas in Weighted Kappa:

\n

\"enter

\n

And the Po and Pe formulas in Cohen's kappa:

\n

\"enter

\n

We can see the Po and Pe formulas in Cohen's kappa are a special case of the formulas in Weighted Kappa, where weight = 1 is assigned to all diagonal cells and weight = 0 elsewhere. When we calculate K (Cohen's kappa coefficient) using the Po and Pe formulas of Weighted Kappa, we also take the dependency between adjacent classes into consideration.

\n

Here are two commonly used weighting system:

\n
    \n
  1. Linear weight:
  \n
\n

\"enter

\n
    \n
  1. Quadratic weight:
  \n
\n

\"enter

\n

where |i-j| is the distance between classes and k is the number of classes.

\n
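As a sketch, both weighting systems can be built directly from |i-j|. Note that the division by (k - 1) is one common normalization that keeps weights in [0, 1]; the exact constants may differ from the formulas pictured above:

```python
# Linear and quadratic weight matrices for k = 4 ordinal classes,
# built from |i - j| and normalized by (k - 1) so weights lie in [0, 1].
k = 4
linear    = [[abs(i - j) / (k - 1)        for j in range(k)] for i in range(k)]
quadratic = [[(abs(i - j) / (k - 1)) ** 2 for j in range(k)] for i in range(k)]

# Diagonal cells (full agreement) get weight 0; the farthest-apart
# classes get weight 1, and quadratic weighting penalizes large
# disagreements relatively more than small ones.
```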

Weighted Kappa Loss

\n

This loss is used in the case we mentioned before, where one classifier is the labels. The purpose of this loss is to make the model's (the other classifier's) predictions as reliable as possible, i.e. to encourage the model to make more predictions that agree with the labels while making fewer random guesses, taking the dependency between adjacent classes into consideration.

\n

The formula of the Weighted Kappa Loss is given by:

\n

\"enter

\n

It just takes the formula of the negative Cohen's kappa coefficient, gets rid of the constant -1, then applies the natural logarithm to it, where dij = |i-j| for the linear weight and dij = (|i-j|)^2 for the quadratic weight.

\n

Following is the source code of the Weighted Kappa Loss written with TensorFlow; as you can see, it just implements the formula of the Weighted Kappa Loss above:

\n
import warnings\nfrom typing import Optional\n\nimport tensorflow as tf\nfrom typeguard import typechecked\n\nfrom tensorflow_addons.utils.types import Number\n\nclass WeightedKappaLoss(tf.keras.losses.Loss):\n    @typechecked\n    def __init__(\n        self,\n        num_classes: int,\n        weightage: Optional[str] = "quadratic",\n        name: Optional[str] = "cohen_kappa_loss",\n        epsilon: Optional[Number] = 1e-6,\n        dtype: Optional[tf.DType] = tf.float32,\n        reduction: str = tf.keras.losses.Reduction.NONE,\n    ):\n        super().__init__(name=name, reduction=reduction)\n        warnings.warn(\n            "The data type for `WeightedKappaLoss` defaults to "\n            "`tf.keras.backend.floatx()`."\n            "The argument `dtype` will be removed in Addons `0.12`.",\n            DeprecationWarning,\n        )\n        if weightage not in ("linear", "quadratic"):\n            raise ValueError("Unknown kappa weighting type.")\n\n        self.weightage = weightage\n        self.num_classes = num_classes\n        self.epsilon = epsilon or tf.keras.backend.epsilon()\n        label_vec = tf.range(num_classes, dtype=tf.keras.backend.floatx())\n        self.row_label_vec = tf.reshape(label_vec, [1, num_classes])\n        self.col_label_vec = tf.reshape(label_vec, [num_classes, 1])\n        col_mat = tf.tile(self.col_label_vec, [1, num_classes])\n        row_mat = tf.tile(self.row_label_vec, [num_classes, 1])\n        if weightage == "linear":\n            self.weight_mat = tf.abs(col_mat - row_mat)\n        else:\n            self.weight_mat = (col_mat - row_mat) ** 2\n\n    def call(self, y_true, y_pred):\n        y_true = tf.cast(y_true, dtype=self.col_label_vec.dtype)\n        y_pred = tf.cast(y_pred, dtype=self.weight_mat.dtype)\n        batch_size = tf.shape(y_true)[0]\n        cat_labels = tf.matmul(y_true, self.col_label_vec)\n        cat_label_mat = tf.tile(cat_labels, [1, self.num_classes])\n        row_label_mat = 
tf.tile(self.row_label_vec, [batch_size, 1])\n        if self.weightage == "linear":\n            weight = tf.abs(cat_label_mat - row_label_mat)\n        else:\n            weight = (cat_label_mat - row_label_mat) ** 2\n        numerator = tf.reduce_sum(weight * y_pred)\n        label_dist = tf.reduce_sum(y_true, axis=0, keepdims=True)\n        pred_dist = tf.reduce_sum(y_pred, axis=0, keepdims=True)\n        w_pred_dist = tf.matmul(self.weight_mat, pred_dist, transpose_b=True)\n        denominator = tf.reduce_sum(tf.matmul(label_dist, w_pred_dist))\n        denominator /= tf.cast(batch_size, dtype=denominator.dtype)\n        loss = tf.math.divide_no_nan(numerator, denominator)\n        return tf.math.log(loss + self.epsilon)\n\n    def get_config(self):\n        config = {\n            "num_classes": self.num_classes,\n            "weightage": self.weightage,\n            "epsilon": self.epsilon,\n        }\n        base_config = super().get_config()\n        return {**base_config, **config}\n
\n
\n

Usage of Weighted Kappa Loss

\n

We can use the Weighted Kappa Loss whenever we can frame our problem as an ordinal classification problem, i.e. the classes form a smooth ordered transition and adjacent classes are interdependent, like ranking something as very good, good, normal, bad, very bad, and the output of the model should be like Softmax results.

\n

We cannot use the Weighted Kappa Loss when we try to predict a vector of scores (0-1), even if they sum to 1, since the Weights for the elements of the vector are different, and this loss does not ask how different the values are by subtraction, but rather how large the counts are by multiplication, e.g.:

\n
import tensorflow as tf\nfrom tensorflow_addons.losses import WeightedKappaLoss\n\ny_true = tf.constant([[0.1, 0.2, 0.6, 0.1], [0.1, 0.5, 0.3, 0.1],\n                      [0.8, 0.05, 0.05, 0.1], [0.01, 0.09, 0.1, 0.8]])\ny_pred_0 = tf.constant([[0.1, 0.2, 0.6, 0.1], [0.1, 0.5, 0.3, 0.1],\n                      [0.8, 0.05, 0.05, 0.1], [0.01, 0.09, 0.1, 0.8]])\ny_pred_1 = tf.constant([[0.0, 0.1, 0.9, 0.0], [0.1, 0.5, 0.3, 0.1],\n                      [0.8, 0.05, 0.05, 0.1], [0.01, 0.09, 0.1, 0.8]])\n\nkappa_loss = WeightedKappaLoss(weightage='linear', num_classes=4)\nloss_0 = kappa_loss(y_true, y_pred_0)\nloss_1 = kappa_loss(y_true, y_pred_1)\nprint('Loss_0: {}, loss_1: {}'.format(loss_0.numpy(), loss_1.numpy()))\n
\n

Outputs:

\n
# y_pred_0 equal to y_true yet loss_1 is smaller than loss_0\nLoss_0: -0.7053321599960327, loss_1: -0.8015820980072021\n
\n

Your code in Colab is working correctly in the context of ordinal classification problems, since the function X -> Y you form is very simple (the int of X is the Y index + 1), so the model learns it fairly quickly and accurately, as we can see with K (Cohen's kappa coefficient) reaching 1.0 and the Weighted Kappa Loss dropping below -13.0 (which in practice is usually the minimum we can expect).

\n

In summary, you can use the Weighted Kappa Loss only if you can frame your problem as an ordinal classification problem with labels in one-hot fashion. If you can, and you are trying to solve LTR (learning to rank) problems, then you can check this tutorial on implementing ListNet and this tutorial on tensorflow_ranking for better results; otherwise you shouldn't use the Weighted Kappa Loss. If you can only frame your problem as a regression problem, then you should do the same as in your original solution.

\n
\n

Reference:

\n

Cohen's kappa on Wikipedia

\n

Weighted Kappa in R: For Two Ordinal Variables

\n

Source code of WeightedKappaLoss in tensorflow-addons

\n

Documentation of tfa.losses.WeightedKappaLoss

\n

Difference between categorical, ordinal and numerical variables

\n", + "system": "" + }, + { + "instruction": "What is difference between "Keras backend + Tensorflow" and "Keras from Tensorflow" using CPU(in Tensorflow 2.x)", + "input": "", + "output": "

Not exactly, it's not as simple as that. As per official documentation -

\n

intra_op_parallelism_threads - Certain operations like matrix multiplication and reductions can utilize parallel threads for speedups. A value of 0 means the system picks an appropriate number. Refer to this

\n

inter_op_parallelism_threads - Determines the number of parallel threads used by independent non-blocking operations. 0 means the system picks an appropriate number. Refer to this

\n

So technically you can not limit the number of CPUs but only the number of parallel threads, which, for the sake of limiting resource consumption, is sufficient.

\n
\n

Regarding the methods, you are using -

\n

The third approach allows you to directly set the environment variables using the os library.

\n
import os\n\nos.environ['TF_NUM_INTRAOP_THREADS'] = '2'\nos.environ['TF_NUM_INTEROP_THREADS'] = '4'\n
\n

The second approach is a method in tf2 that does exactly the same (sets environment variables), the difference being that Keras is packaged into tf2 now.

\n
import tensorflow as tf\nfrom tensorflow import keras\n\ntf.config.threading.set_intra_op_parallelism_threads(2)  \ntf.config.threading.set_inter_op_parallelism_threads(4)\n
\n

The first approach is for standalone Keras. This approach will work if keras is set to tensorflow backend. Again, it does the same thing which is set environment variables indirectly.

\n
from keras import backend as K\nimport tensorflow as tf\n\nconfig = tf.ConfigProto(intra_op_parallelism_threads=2, \\ \n                        inter_op_parallelism_threads=4, \\\n                        allow_soft_placement=True, \\\n                        device_count = {'CPU': 1})\nsession = tf.Session(config=config)\nK.set_session(session)\n
\n

If you still have doubts, you can check what happens to the environment variables after running all 3 independently and then check the specific variable using os with -

\n
print(os.environ.get('KEY_THAT_MIGHT_EXIST'))\n
\n

For a better understanding of the topic, you can check this link that details it out quite well.

\n
\n

TLDR; You can use the second or third approach if you are working with tf2. Else use the first or third approach if you are using standalone Keras with tensorflow backend.

\n", + "system": "" + }, + { + "instruction": "How to save a keras subclassed model with positional parameters in Call() method?", + "input": "", + "output": "

I think making the following change would work:

\n
 #def call(self, enc_input, dec_input, training, mask1, mask2, mask3):\n def call(self, enc_input, dec_input, training=False, mask1=None, mask2=None, mask3=None):\n
\n

After digging, I think there is a sanity check that happens on function arguments: if positional arguments are not specified, the parameters after x will be taken as **kwargs arguments (I am not really sure about this).\nBut for the sake of it, if you don't want to set default values for the arguments, you can just unpack them so each argument goes to its corresponding place, as follows:

\n

y = model(*[x,x,False,None,None,None])

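To illustrate the unpacking idea without TensorFlow, here is a toy function with the same argument layout as call (the names and values are only for demonstration):

```python
# Toy function mirroring the call signature from the answer; unpacking a
# list with * maps each element to the corresponding positional slot.
def call(enc_input, dec_input, training=False, mask1=None, mask2=None, mask3=None):
    return (enc_input, dec_input, training, mask1, mask2, mask3)

args = ['x_enc', 'x_dec', False, None, None, None]
result = call(*args)  # same as call('x_enc', 'x_dec', False, None, None, None)
```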
\n", + "system": "" + }, + { + "instruction": "'Reduction' parameter in tf.keras.losses", + "input": "", + "output": "

Your assumption is correct as far as I understand.

\n

If you check the GitHub [keras/losses_utils.py][1] lines 260-269,\nyou will see that it does perform as expected.\nSUM will sum up the losses in the batch dimension, and SUM_OVER_BATCH_SIZE will divide SUM by the total number of losses (the batch size).

\n
def reduce_weighted_loss(weighted_losses,\n                     reduction=ReductionV2.SUM_OVER_BATCH_SIZE):\n  if reduction == ReductionV2.NONE:\n     loss = weighted_losses\n  else:\n     loss = tf.reduce_sum(weighted_losses)\n     if reduction == ReductionV2.SUM_OVER_BATCH_SIZE:\n        loss = _safe_mean(loss, _num_elements(weighted_losses))\n  return loss\n
\n

You can do an easy check with your previous example just by adding one pair of outputs with 0 loss.

\n
y_true = [[0., 2.], [0., 0.],[1.,1.]]\ny_pred = [[3., 1.], [2., 5.],[1.,1.]]\n\nmae = tf.keras.losses.MeanAbsoluteError(reduction=tf.keras.losses.Reduction.SUM)\nmae(y_true, y_pred).numpy()\n> 5.5\n\nmae = tf.keras.losses.MeanAbsoluteError()\nmae(y_true, y_pred).numpy()\n> 1.8333\n
\n
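The same numbers can be reproduced by hand in plain Python: per-sample MAE is the mean over the last axis, and the reduction then combines the per-sample losses:

```python
# Reproducing the Keras numbers by hand: per-sample MAE is the mean over
# the last axis; the reduction then combines the per-sample losses.
y_true = [[0., 2.], [0., 0.], [1., 1.]]
y_pred = [[3., 1.], [2., 5.], [1., 1.]]

per_sample = [sum(abs(t - p) for t, p in zip(ts, ps)) / len(ts)
              for ts, ps in zip(y_true, y_pred)]   # [2.0, 3.5, 0.0]

loss_sum = sum(per_sample)              # Reduction.SUM         -> 5.5
loss_mean = loss_sum / len(per_sample)  # SUM_OVER_BATCH_SIZE   -> 1.8333...
```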

So, your assumption is correct.\n[1]: https://github.com/keras-team/keras/blob/v2.7.0/keras/utils/losses_utils.py#L25-L84

\n", + "system": "" + }, + { + "instruction": "Save complete tf.keras model without custom objects?", + "input": "", + "output": "

If the purpose is to prevent the loading of loss and metrics, you can use the parameter compile in load_model:

\n
model = tf.keras.models.load_model("some_model", compile=False)\n
\n

This should skip the requirement of loss and metrics/optimizers since the model is not compiled. Of course you cannot train the model now, but it should work fine for inference using model.predict()

\n", + "system": "" + }, + { + "instruction": "Keras: ValueError: logits and labels must have the same shape ((None, 2) vs (None, 1))", + "input": "", + "output": "

I finally found the solution to my question!\nI managed to train the model by setting the loss function to sparse_categorical_crossentropy instead of binary_crossentropy. Then I also changed the activation function of the last layer to softmax; it was sigmoid when I asked the question.

\n

Here's the whole code if you're interested:

\n
from keras.models import Sequential\nfrom keras.layers import Conv2D,MaxPooling2D,\\\n     Dropout,Flatten,Dense,Activation,\\\n     BatchNormalization\nmodel=Sequential()\nmodel.add(Conv2D(32,(3,3),activation='relu',input_shape=(128,128,3)))\nmodel.add(BatchNormalization())\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Dropout(0.25))\nmodel.add(Conv2D(64,(3,3),activation='relu'))\nmodel.add(BatchNormalization())\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Dropout(0.25))\nmodel.add(Conv2D(128,(3,3),activation='relu'))\nmodel.add(BatchNormalization())\nmodel.add(MaxPooling2D(pool_size=(2,2)))\nmodel.add(Dropout(0.25))\nmodel.add(Flatten())\nmodel.add(Dense(512,activation='relu'))\nmodel.add(BatchNormalization())\nmodel.add(Dropout(0.5))\nmodel.add(Dense(2,activation='softmax'))\nmodel.compile(loss = 'sparse_categorical_crossentropy',\n  optimizer='adam',metrics=['accuracy'])\n
\n", + "system": "" + }, + { + "instruction": "Cascade multiple RNN models for N-dimensional output", + "input": "", + "output": "

As this question has multiple major parts, I've dedicated a Q&A to the core challenge: stateful backpropagation. This answer focuses on implementing the variable output step length.

\n
\n

Description:

\n\n

Code: at bottom.

\n
\n

Example:

\n\n
\n

Point of uncertainty: I'm not entirely sure whether gradients interact between e.g. Out[0][1] and Out[2][0]. I did, however, verify that gradients will not flow horizontally if we write to separate TensorArrays for B's outputs per A's inputs (case 2); reimplementing for cases 4 & 5, grads will differ for both models, including the lower one with a complete single horizontal pass.

\n

Thus we must write to a unified TensorArray. For such, as there are no ops leading from e.g. IR[1] to Out[0][1], I can't see how TF would trace it as such - so it seems we're safe. Note, however, that in below example, using steps_at_t=[1]*6 will make gradient flow in the both model horizontally, as we're writing to a single TensorArray and passing hidden states.

\n

The examined case is confounded, however, with B being stateful at all steps; lifting this requirement, we might not need to write to a unified TensorArray for all Out[0], Out[1], etc, but we must still test against something we know works, which is no longer as straightforward.

\n
\n

Example [code]:

\n
import numpy as np\nimport tensorflow as tf\n\n#%%# Make data & models, then fit ###########################################\nx0 = y0 = tf.constant(np.random.randn(2, 3, 4))\nmsn = MultiStatefulNetwork(batch_shape=(2, 3, 4), steps_at_t=[3, 4, 2])\n\n#%%#############################################\nwith tf.GradientTape(persistent=True) as tape:\n    outputs = msn(x0)\n    # shape: (3, 4, 2, 4), 0-padded\n    # We can pad labels accordingly.\n    # Note the (2, 4) model_b's output shape, which is a timestep slice;\n    # model_b is a *slice model*. Careful in implementing various logics\n    # which are and aren't intended to be stateful.\n
\n
\n

Methods:

\n

Not the cleanest, nor most optimal code, but it works; room for improvement.

\n

More importantly: I implemented this in Eager, and have no idea how it'll work in Graph, and making it work for both can be quite tricky. If needed, just run in Graph and compare all values as done in the "cases".

\n
# ideally we won't `import tensorflow` at all; kept for code simplicity\nimport tensorflow as tf\nfrom tensorflow.python.util import nest\nfrom tensorflow.python.ops import array_ops, tensor_array_ops\nfrom tensorflow.python.framework import ops\n\nfrom tensorflow.keras.layers import Input, SimpleRNN, SimpleRNNCell\nfrom tensorflow.keras.models import Model\n\n#######################################################################\nclass MultiStatefulNetwork():\n    def __init__(self, batch_shape=(2, 6, 4), steps_at_t=[]):\n        self.batch_shape=batch_shape\n        self.steps_at_t=steps_at_t\n\n        self.batch_size = batch_shape[0]\n        self.units = batch_shape[-1]\n        self._build_models()\n\n    def __call__(self, inputs):\n        outputs = self._forward_pass_a(inputs)\n        outputs = self._forward_pass_b(outputs)\n        return outputs\n\n    def _forward_pass_a(self, inputs):\n        return self.model_a(inputs, training=True)\n\n    def _forward_pass_b(self, inputs):\n        return model_rnn_outer(self.model_b, inputs, self.steps_at_t)\n\n    def _build_models(self):\n        ipt = Input(batch_shape=self.batch_shape)\n        out = SimpleRNN(self.units, return_sequences=True)(ipt)\n        self.model_a = Model(ipt, out)\n\n        ipt  = Input(batch_shape=(self.batch_size, self.units))\n        sipt = Input(batch_shape=(self.batch_size, self.units))\n        out, state = SimpleRNNCell(4)(ipt, sipt)\n        self.model_b = Model([ipt, sipt], [out, state])\n\n        self.model_a.compile('sgd', 'mse')\n        self.model_b.compile('sgd', 'mse')\n\n\ndef inner_pass(model, inputs, states):\n    return model_rnn(model, inputs, states)\n\n\ndef model_rnn_outer(model, inputs, steps_at_t=[2, 2, 4, 3]):\n    def outer_step_function(inputs, states):\n        x, steps = inputs\n        x = array_ops.expand_dims(x, 0)\n        x = array_ops.tile(x, [steps, *[1] * (x.ndim - 1)])  # repeat steps times\n        output, new_states = inner_pass(model, x, 
states)\n        return output, new_states\n\n    (outer_steps, steps_at_t, longest_step, outer_t, initial_states,\n     output_ta, input_ta) = _process_args_outer(model, inputs, steps_at_t)\n\n    def _outer_step(outer_t, output_ta_t, *states):\n        current_input = [input_ta.read(outer_t), steps_at_t.read(outer_t)]\n        output, new_states = outer_step_function(current_input, tuple(states))\n\n        # pad if shorter than longest_step.\n        # model_b may output twice, but longest in `steps_at_t` is 4; then we need\n        # output.shape == (2, *model_b.output_shape) -> (4, *...)\n        # checking directly on `output` is more reliable than from `steps_at_t`\n        output = tf.cond(\n            tf.math.less(output.shape[0], longest_step),\n            lambda: tf.pad(output, [[0, longest_step - output.shape[0]],\n                                    *[[0, 0]] * (output.ndim - 1)]),\n            lambda: output)\n\n        output_ta_t = output_ta_t.write(outer_t, output)\n        return (outer_t + 1, output_ta_t) + tuple(new_states)\n\n    final_outputs = tf.while_loop(\n        body=_outer_step,\n        loop_vars=(outer_t, output_ta) + initial_states,\n        cond=lambda outer_t, *_: tf.math.less(outer_t, outer_steps))\n\n    output_ta = final_outputs[1]\n    outputs = output_ta.stack()\n    return outputs\n\n\ndef _process_args_outer(model, inputs, steps_at_t):\n    def swap_batch_timestep(input_t):\n        # Swap the batch and timestep dim for the incoming tensor.\n        # (samples, timesteps, channels) -> (timesteps, samples, channels)\n        # iterating dim0 to feed (samples, channels) slices expected by RNN\n        axes = list(range(len(input_t.shape)))\n        axes[0], axes[1] = 1, 0\n        return array_ops.transpose(input_t, axes)\n\n    inputs = nest.map_structure(swap_batch_timestep, inputs)\n\n    assert inputs.shape[0] == len(steps_at_t)\n    outer_steps = array_ops.shape(inputs)[0]  # model_a_steps\n    longest_step = 
max(steps_at_t)\n    steps_at_t = tensor_array_ops.TensorArray(\n        dtype=tf.int32, size=len(steps_at_t)).unstack(steps_at_t)\n\n    # assume single-input network, excluding states which are handled separately\n    input_ta = tensor_array_ops.TensorArray(\n        dtype=inputs.dtype,\n        size=outer_steps,\n        element_shape=tf.TensorShape(model.input_shape[0]),\n        tensor_array_name='outer_input_ta_0').unstack(inputs)\n\n    # TensorArray is used to write outputs at every timestep, but does not\n    # support RaggedTensor; thus we must make TensorArray such that column length\n    # is that of the longest outer step, # and pad model_b's outputs accordingly\n    element_shape = tf.TensorShape((longest_step, *model.output_shape[0]))\n\n    # overall shape: (outer_steps, longest_step, *model_b.output_shape)\n    # for every input / at each step we write in dim0 (outer_steps)\n    output_ta = tensor_array_ops.TensorArray(\n        dtype=model.output[0].dtype,\n        size=outer_steps,\n        element_shape=element_shape,\n        tensor_array_name='outer_output_ta_0')\n\n    outer_t = tf.constant(0, dtype='int32')\n    initial_states = (tf.zeros(model.input_shape[0], dtype='float32'),)\n\n    return (outer_steps, steps_at_t, longest_step, outer_t, initial_states,\n            output_ta, input_ta)\n\n\ndef model_rnn(model, inputs, states):\n    def step_function(inputs, states):\n        output, new_states = model([inputs, *states], training=True)\n        return output, new_states\n\n    initial_states = states\n    input_ta, output_ta, time, time_steps_t = _process_args(model, inputs)\n\n    def _step(time, output_ta_t, *states):\n        current_input = input_ta.read(time)\n        output, new_states = step_function(current_input, tuple(states))\n\n        flat_state = nest.flatten(states)\n        flat_new_state = nest.flatten(new_states)\n        for state, new_state in zip(flat_state, flat_new_state):\n            if isinstance(new_state, 
ops.Tensor):\n                new_state.set_shape(state.shape)\n\n        output_ta_t = output_ta_t.write(time, output)\n        new_states = nest.pack_sequence_as(initial_states, flat_new_state)\n        return (time + 1, output_ta_t) + tuple(new_states)\n\n    final_outputs = tf.while_loop(\n        body=_step,\n        loop_vars=(time, output_ta) + tuple(initial_states),\n        cond=lambda time, *_: tf.math.less(time, time_steps_t))\n\n    new_states = final_outputs[2:]\n    output_ta = final_outputs[1]\n    outputs = output_ta.stack()\n    return outputs, new_states\n\n\ndef _process_args(model, inputs):\n    time_steps_t = tf.constant(inputs.shape[0], dtype='int32')\n\n    # assume single-input network (excluding states)\n    input_ta = tensor_array_ops.TensorArray(\n        dtype=inputs.dtype,\n        size=time_steps_t,\n        tensor_array_name='input_ta_0').unstack(inputs)\n\n    # assume single-output network (excluding states)\n    output_ta = tensor_array_ops.TensorArray(\n        dtype=model.output[0].dtype,\n        size=time_steps_t,\n        element_shape=tf.TensorShape(model.output_shape[0]),\n        tensor_array_name='output_ta_0')\n\n    time = tf.constant(0, dtype='int32', name='time')\n    return input_ta, output_ta, time, time_steps_t\n
\n", + "system": "" + }, + { + "instruction": "Why does Keras loss drop dramatically after the first epoch?", + "input": "", + "output": "

That sounds right to me. Remember, there is an inverse relationship between loss and accuracy, so as loss decreases, accuracy typically increases.

\n

My understanding is that, during the first epoch, you basically have a neural network with a more-or-less random initial state. After the first epoch, the weights of the neural network will have been adjusted, usually by minimizing the loss function (which, as previously stated, is effectively the same as maximizing accuracy). So, at the beginning of the second epoch, your loss should be a lot better (i.e. lower). That means that your neural network is learning.

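Not from the original answer, but the dynamic described above can be sketched with plain gradient descent on a toy quadratic loss: the update is proportional to the gradient, which is largest when the weights are far from the optimum, so the biggest drop in loss comes first.

```python
# Gradient descent on a toy quadratic loss L(w) = (w - 3)^2.
# The update is lr * dL/dw, and the gradient is largest when w is far
# from the optimum, so the biggest drop in loss happens first.
w = 10.0   # more-or-less arbitrary "random" initial weight
lr = 0.1
losses = []
for epoch in range(5):
    losses.append((w - 3.0) ** 2)
    grad = 2.0 * (w - 3.0)   # dL/dw
    w -= lr * grad

drops = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]
assert all(a > b for a, b in zip(losses, losses[1:]))  # loss falls every epoch
assert drops[0] == max(drops)                          # the largest fall is the first
```

The same shape shows up in real training curves: a huge improvement between epoch 1 and epoch 2, then diminishing returns.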
\n", + "system": "" + }, + { + "instruction": "make input features map from expansion tensor in keras", + "input": "", + "output": "

For a series expansion as described, if the input has C channels and the expansion has T terms, the expanded input should have C*T channels and otherwise be the same shape. Thus, the original input and the partial sums of the approximated function up to each term should be concatenated along the channel dimension. It is a bit easier to do this with a transpose and reshape than an actual concatenate.

\n

Here is example code for a convolutional network trained on CIFAR10:

\n
inputs = tf.keras.Input(shape=(32, 32, 3))\n\nx = inputs\nn_terms = 2\nc = tf.constant([1, -1/6])\np = tf.constant([1, 3], dtype=tf.float32)\n\nterms = []\nfor i in range(n_terms):\n    m = c[i] * tf.math.pow(x, p[i])\n    terms.append(m)\nexpansion = tf.math.cumsum(terms)\nexpansion_terms_last = tf.transpose(expansion, perm=[1, 2, 3, 4, 0])\nx = tf.reshape(expansion_terms_last, tf.constant([-1, 32, 32, 3*n_terms])) \n\nx = Conv2D(32, (3, 3), input_shape=(32,32,3*n_terms))(x)\n
\n

This assumes the original network (without expansion) would have a first layer that looks like this:

\n
x = Conv2D(32, (3, 3), input_shape=(32,32,3))(inputs)\n
\n

and the rest of the network is exactly the same as it would be without expansion.

\n

terms contains a list of c_i * x ^ p_i from the original; expansion contains the sum of the terms (1st, then 1st and 2nd, etc), in a single tensor (where T is the first dimension). expansion_terms_last moves the T dimension to be last, and the reshape changes the shape from (..., C, T) to (..., C*T)

\n

The output of model.summary() then looks like this:

\n
__________________________________________________________________________________________________\nLayer (type)                    Output Shape         Param #     Connected to                     \n==================================================================================================\ninput_4 (InputLayer)            [(None, 32, 32, 3)]  0                                            \n__________________________________________________________________________________________________\ntf_op_layer_Pow_6 (TensorFlowOp [(None, 32, 32, 3)]  0           input_4[0][0]                    \n__________________________________________________________________________________________________\ntf_op_layer_Pow_7 (TensorFlowOp [(None, 32, 32, 3)]  0           input_4[0][0]                    \n__________________________________________________________________________________________________\ntf_op_layer_Mul_6 (TensorFlowOp [(None, 32, 32, 3)]  0           tf_op_layer_Pow_6[0][0]          \n__________________________________________________________________________________________________\ntf_op_layer_Mul_7 (TensorFlowOp [(None, 32, 32, 3)]  0           tf_op_layer_Pow_7[0][0]          \n__________________________________________________________________________________________________\ntf_op_layer_x_3 (TensorFlowOpLa [(2, None, 32, 32, 3 0           tf_op_layer_Mul_6[0][0]          \n                                                                 tf_op_layer_Mul_7[0][0]          \n__________________________________________________________________________________________________\ntf_op_layer_Cumsum_3 (TensorFlo [(2, None, 32, 32, 3 0           tf_op_layer_x_3[0][0]            \n__________________________________________________________________________________________________\ntf_op_layer_Transpose_3 (Tensor [(None, 32, 32, 3, 2 0           tf_op_layer_Cumsum_3[0][0]       
\n__________________________________________________________________________________________________\ntf_op_layer_Reshape_3 (TensorFl [(None, 32, 32, 6)]  0           tf_op_layer_Transpose_3[0][0]    \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D)               (None, 30, 30, 32)   1760        tf_op_layer_Reshape_3[0][0]      \n
\n

On CIFAR10, this network trains slightly better with expansion - maybe 1% accuracy gain (from 71 to 72%).

\n

Step by step explanation of the code using sample data:

\n
# create a sample input\nx = tf.convert_to_tensor([[1,2,3],[4,5,6],[7,8,9]], dtype=tf.float32) # start with H=3, W=3\nx = tf.expand_dims(x, axis=0) # add batch dimension N=1\nx = tf.expand_dims(x, axis=3) # add channel dimension C=1\n# x is now NHWC or (1, 3, 3, 1)\n\nn_terms = 2 # expand to T=2\nc = tf.constant([1, -1/6])\np = tf.constant([1, 3], dtype=tf.float32)\n\nterms = []\nfor i in range(n_terms):\n    # this simply calculates m = c_i * x ^ p_i\n    m = c[i] * tf.math.pow(x, p[i])\n    terms.append(m)\nprint(terms)\n# list of two tensors with shape NHWC or (1, 3, 3, 1)\n\n# calculate each partial sum\nexpansion = tf.math.cumsum(terms)\nprint(expansion.shape)\n# tensor with shape TNHWC or (2, 1, 3, 3, 1)\n\n# move the T dimension last\nexpansion_terms_last = tf.transpose(expansion, perm=[1, 2, 3, 4, 0])\nprint(expansion_terms_last.shape)\n# tensor with shape NHWCT or (1, 3, 3, 1, 2)\n\n# stack the last two dimensions together\nx = tf.reshape(expansion_terms_last, tf.constant([-1, 3, 3, 1*2])) \nprint(x.shape)\n# tensor with shape NHW and C*T or (1, 3, 3, 2)\n# if the input had 3 channels for example, this would be (1, 3, 3, 6)\n# now use this as though it was the input\n
\n

Key assumptions: (1) the c_i and p_i are not learned parameters, therefore the "expansion neurons" are not actually neurons, they are just multiply and sum nodes (although "neurons" sounds cooler :); (2) the expansion happens for each input channel independently, thus C input channels expanded to T terms each produce C*T input features, but the T features from each channel are calculated completely independently of the other channels (it looks like that in the diagram); and (3) the input contains all the partial sums (i.e. c_1 * x ^ p_1, then c_1 * x ^ p_1 + c_2 * x ^ p_2, and so forth) but does not contain the individual terms (again, it looks like that in the diagram)

\n", + "system": "" + }, + { + "instruction": "keras-tuner error in hyperparameter tuning", + "input": "", + "output": "

you should use objective='root_mean_squared_error'

\n\n
    tuner = RandomSearch(\n    build_model_test,\n    objective='root_mean_squared_error',\n    max_trials=20,\n    executions_per_trial=3,\n    directory='my_dir',\n    project_name='helloworld') \n
\n\n

I would rather use 'val_root_mean_squared_error', as most probably you are interested in decreasing the error on the validation dataset.

\n", + "system": "" + }, + { + "instruction": "Output layer for Binary Classification in Keras", + "input": "", + "output": "

The general tendency is to use multiple output nodes with sigmoid activations for multi-label classification. A softmax is often used for multiclass classification, where softmax predicts a probability for each class and we choose the class with the highest probability. For binary classification, we can either use a single output neuron passed through a sigmoid and then set a threshold to choose the class, or use a two-neuron output and then perform a softmax. In either case, thresholding is possible. It is rather easy to plot a ROC curve with a single-neuron output, as you only have to threshold over one value. So, you can easily go with model.add(Dense(1, activation='sigmoid'))

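As a sketch (not part of the original answer), the equivalence of the two choices can be checked numerically: a single sigmoid unit with logit z yields the same positive-class probability as a two-neuron softmax over logits [z, 0].

```python
import numpy as np

def sigmoid(z):
    # single-output formulation: P(class 1)
    return 1.0 / (1.0 + np.exp(-z))

def softmax(logits):
    # two-output formulation: probabilities over both classes
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

z = 1.7  # an arbitrary logit value, purely illustrative
p_single = sigmoid(z)
p_two = softmax(np.array([z, 0.0]))[0]
assert np.isclose(p_single, p_two)  # both formulations give the same probability
```

This is why the single-neuron version loses nothing: the two-neuron softmax only re-parameterizes the same decision.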
\n", + "system": "" + }, + { + "instruction": "Understanding Generative Adversarial Networks", + "input": "", + "output": "

If the discriminator outputs a probability of 0.5 right from the beginning of training, it means the discriminator's weights are not being updated and it plays no role in training; it is not able to differentiate between real images and the fakes coming from the generator. To solve this issue, try adding Gaussian noise to the discriminator's input, or use label smoothing; both are very simple and effective techniques.\nRegarding your point that the results are all driven by the losses other than the adversarial loss: a trick that can be used is to first train the network on all the losses except the adversarial loss, and then fine-tune with the adversarial loss included. Hope it helps.\nFor the second part of your question, the generated images seem to suffer from mode collapse, where they tend to learn color and degradation from one image and pass the same to the other images; try to solve it by either decreasing the batch size or using unrolled GANs.

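As an illustrative sketch (the 0.9 smoothing floor and 0.05 noise level are assumptions, not values from this answer), both techniques amount to small tweaks to the discriminator's targets and inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_positive_labels(y_real, low=0.9, high=1.0):
    # one-sided label smoothing: replace hard 1.0 targets for real images
    # with random values in [low, high)
    return y_real * rng.uniform(low, high, size=y_real.shape)

def add_instance_noise(images, stddev=0.05):
    # Gaussian noise on the discriminator's input makes its task harder
    # and keeps its gradients informative early in training
    return images + rng.normal(0.0, stddev, size=images.shape)

real_targets = np.ones(16)
smoothed = smooth_positive_labels(real_targets)
noisy = add_instance_noise(np.zeros((16, 28, 28, 1)))
assert smoothed.min() >= 0.9 and smoothed.max() < 1.0
assert noisy.shape == (16, 28, 28, 1)
```

You would feed `smoothed` as the discriminator's target for real batches, and `noisy` versions of both real and generated batches as its input.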
\n", + "system": "" + }, + { + "instruction": "VotingClassifier with pipelines as estimators", + "input": "", + "output": "

The problem is with the KerasClassifier. It does not provide _estimator_type, which is checked in _validate_estimator.

\n\n

It is not the problem of using pipeline. Pipeline provides this information as a property. See here.

\n\n

Hence, the quick fix is setting _estimator_type='classifier'.

\n\n

A reproducible example:

\n\n
# Define pipelines\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.preprocessing import MinMaxScaler, Normalizer\nfrom sklearn.ensemble import VotingClassifier\nfrom keras.wrappers.scikit_learn import KerasClassifier\nfrom sklearn.datasets import make_classification\nfrom keras.layers import Dense\nfrom keras.models import Sequential\n\nX, y = make_classification()\n\n# DTC pipeline\nfeaturiser = MinMaxScaler()\ndtc = DecisionTreeClassifier()\ndtc_pipe = Pipeline([('featuriser', featuriser), ('dtc', dtc)])\n\n# SVC pipeline\nscaler = Normalizer()\nsvc = SVC(C=100, gamma=0.001, kernel='rbf')\nsvc_pipe = Pipeline(\n    [('scaler', scaler), ('svc', svc)])\n\n# Keras pipeline\ndef get_model():\n    # create model\n    model = Sequential()\n    model.add(Dense(10, input_dim=20, activation='relu'))\n    model.add(Dense(1, activation='sigmoid'))\n    # Compile model\n    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n    return model\n\n\ncnn = KerasClassifier(build_fn=get_model)\ncnn._estimator_type = \"classifier\"\ncnn_pipe = Pipeline([('scaler', scaler), ('cnn', cnn)])\n\n\n# Make an ensemble\nensemble = VotingClassifier(estimators=[('dtc', dtc_pipe), \n                                        ('svc', svc_pipe),\n                                        ('cnn', cnn_pipe)], \n                            voting='hard')\n\nensemble.fit(X, y)\n\n
\n", + "system": "" + }, + { + "instruction": "How to share weights and not biases in Keras Dense layers", + "input": "", + "output": "

One of the ways you can achieve this is by defining a custom bias layer; here is how you could do it.\nPS: change the input shapes/initializer according to your needs.

\n\n
import tensorflow as tf\nprint('TensorFlow:', tf.__version__)\n\nclass BiasLayer(tf.keras.layers.Layer):\n    def __init__(self, units, *args, **kwargs):\n        super(BiasLayer, self).__init__(*args, **kwargs)\n        self.bias = self.add_weight('bias',\n                                    shape=[units],\n                                    initializer='zeros',\n                                    trainable=True)\n\n    def call(self, x):\n        return x + self.bias\n\nz1 = tf.keras.Input(shape=[1])\nz2 = tf.keras.Input(shape=[1])\nz3 = tf.keras.Input(shape=[1])\nz4 = tf.keras.Input(shape=[1])\nz5 = tf.keras.Input(shape=[1])\n\ndense_layer = tf.keras.layers.Dense(units=10, use_bias=False)\n\n\nop1 = BiasLayer(units=10)(dense_layer(z1))\nop2 = BiasLayer(units=10)(dense_layer(z2))\nop3 = BiasLayer(units=10)(dense_layer(z3))\nop4 = BiasLayer(units=10)(dense_layer(z4))\nop5 = BiasLayer(units=10)(dense_layer(z5))\n\nmodel = tf.keras.Model(inputs=[z1, z2, z3, z4, z5], outputs=[op1, op2, op3, op4, op5])\nmodel.summary()\n
\n\n

Output:

\n\n
TensorFlow: 2.1.0-dev20200107\nModel: \"model\"\n__________________________________________________________________________________________________\nLayer (type)                    Output Shape         Param #     Connected to                     \n==================================================================================================\ninput_1 (InputLayer)            [(None, 1)]          0                                            \n__________________________________________________________________________________________________\ninput_2 (InputLayer)            [(None, 1)]          0                                            \n__________________________________________________________________________________________________\ninput_3 (InputLayer)            [(None, 1)]          0                                            \n__________________________________________________________________________________________________\ninput_4 (InputLayer)            [(None, 1)]          0                                            \n__________________________________________________________________________________________________\ninput_5 (InputLayer)            [(None, 1)]          0                                            \n__________________________________________________________________________________________________\ndense (Dense)                   (None, 10)           10          input_1[0][0]                    \n                                                                 input_2[0][0]                    \n                                                                 input_3[0][0]                    \n                                                                 input_4[0][0]                    \n                                                                 input_5[0][0]                    \n__________________________________________________________________________________________________\nbias_layer (BiasLayer)          (None, 10)         
  10          dense[0][0]                      \n__________________________________________________________________________________________________\nbias_layer_1 (BiasLayer)        (None, 10)           10          dense[1][0]                      \n__________________________________________________________________________________________________\nbias_layer_2 (BiasLayer)        (None, 10)           10          dense[2][0]                      \n__________________________________________________________________________________________________\nbias_layer_3 (BiasLayer)        (None, 10)           10          dense[3][0]                      \n__________________________________________________________________________________________________\nbias_layer_4 (BiasLayer)        (None, 10)           10          dense[4][0]                      \n==================================================================================================\nTotal params: 60\nTrainable params: 60\nNon-trainable params: 0\n__________________________________________________________________________________________________\n
\n", + "system": "" + }, + { + "instruction": "WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2", + "input": "", + "output": "

tl;dr to avoid this, cast your input to float32

\n
X = tf.cast(iris[:, :3], tf.float32) \ny = tf.cast(iris[:, 3], tf.float32)\n
\n

or with numpy:

\n
X = np.array(iris[:, :3], dtype=np.float32)\ny = np.array(iris[:, 3], dtype=np.float32)\n
\n

Explanation

\n

By default, TensorFlow uses floatx, which itself defaults to float32, the standard precision for deep learning. You can verify this:

\n
import tensorflow as tf\ntf.keras.backend.floatx()\n
\n
Out[3]: 'float32'\n
\n

The input you provided (the Iris dataset) is of dtype float64, so there is a mismatch between TensorFlow's default dtype for weights and the input. TensorFlow doesn't like that, because casting (changing the dtype) is costly. TensorFlow will generally throw an error when manipulating tensors of different dtypes (e.g., comparing float32 logits and float64 labels).

\n
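A quick numpy-only illustration of the mismatch (not from the original question): numpy builds float64 arrays by default, and casting once up front also halves the memory footprint.

```python
import numpy as np

# numpy creates float64 arrays by default, which is why data prepared
# with numpy often clashes with Keras's float32 default ("floatx").
X64 = np.array([[5.1, 3.5, 1.4]])    # illustrative Iris-like row
assert X64.dtype == np.float64        # numpy's default

# Casting once, up front, silences the warning and halves memory use:
X32 = X64.astype(np.float32)
assert X32.dtype == np.float32
assert X32.nbytes == X64.nbytes // 2
```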

The "new behavior" it's talking about:

\n
\n

Layer my_model_1 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2

\n
\n

Is that it will automatically cast the input dtype to float32. Tensorflow 1.X probably threw an exception in this situation, although I can't say I've ever used it.

\n", + "system": "" + }, + { + "instruction": "What is the first parameter (gradients) of the backward method, in pytorch?", + "input": "", + "output": "

\nTo fully answer your question would require a somewhat longer explanation that revolves around the details of how backprop or, more fundamentally, the chain rule works.

\n\n

The short programmatic answer is that the backward function of a Variable computes the gradient of all variables in the computation graph attached to that Variable. (To clarify: if you have a = b + c, then the computation graph (recursively) points first to b, then to c, then to how those are computed, etc.) and cumulatively stores (sums) these gradients in the .grad attribute of these Variables. When you then call opt.step(), i.e. a step of your optimizer, it updates these Variables by a fraction of that gradient (for plain SGD, subtracting the learning rate times the gradient).

\n\n

That said, there are two answers when you look at it conceptually: If you want to train a Machine Learning model, you normally want to have the gradient with respect to some loss function. In this case, the gradients computed will be such that the overall loss (a scalar value) will decrease when applying the step function. In this special case, we want to compute the gradient with respect to a specific value, namely a unit-length step (so that the learning rate determines the fraction of the gradient that is applied). This means that if you have a loss function and call loss.backward(), this computes the same as loss.backward(torch.FloatTensor([1.])).

\n\n

While this is the common use case for backprop in DNNs, it is only a special case of general differentiation of functions. More generally, the symbolic differentiation packages (autograd in this case, as part of pytorch) can be used to compute gradients of earlier parts of the computation graph with respect to any gradient at a root of whatever subgraph you choose. This is when the keyword argument gradient comes in handy, since you can provide this \"root-level\" gradient there, even for non-scalar functions!

\n\n

To illustrate, here's a small example:

\n\n
a = nn.Parameter(torch.FloatTensor([[1, 1], [2, 2]]))\nb = nn.Parameter(torch.FloatTensor([[1, 2], [1, 2]]))\nc = torch.sum(a - b)\nc.backward(None)  # could be c.backward(torch.FloatTensor([1.])) for the same result\nprint(a.grad, b.grad) \n
\n\n

prints:

\n\n
Variable containing:\n 1  1\n 1  1\n[torch.FloatTensor of size 2x2]\n Variable containing:\n-1 -1\n-1 -1\n[torch.FloatTensor of size 2x2]\n
\n\n

While

\n\n
a = nn.Parameter(torch.FloatTensor([[1, 1], [2, 2]]))\nb = nn.Parameter(torch.FloatTensor([[1, 2], [1, 2]]))\nc = torch.sum(a - b)\nc.backward(torch.FloatTensor([[1, 2], [3, 4]]))\nprint(a.grad, b.grad)\n
\n\n

prints:

\n\n
Variable containing:\n 1  2\n 3  4\n[torch.FloatTensor of size 2x2]\n Variable containing:\n-1 -2\n-3 -4\n[torch.FloatTensor of size 2x2]\n
\n\n

and

\n\n
a = nn.Parameter(torch.FloatTensor([[0, 0], [2, 2]]))\nb = nn.Parameter(torch.FloatTensor([[1, 2], [1, 2]]))\nc = torch.matmul(a, b)\nc.backward(torch.FloatTensor([[1, 1], [1, 1]]))  # we compute w.r.t. a non-scalar variable, so the gradient supplied cannot be scalar, either!\nprint(a.grad, b.grad)\n
\n\n

prints

\n\n
Variable containing:\n 3  3\n 3  3\n[torch.FloatTensor of size 2x2]\n Variable containing:\n 2  2\n 2  2\n[torch.FloatTensor of size 2x2]\n
\n\n

and

\n\n
a = nn.Parameter(torch.FloatTensor([[0, 0], [2, 2]]))\nb = nn.Parameter(torch.FloatTensor([[1, 2], [1, 2]]))\nc = torch.matmul(a, b)\nc.backward(torch.FloatTensor([[1, 2], [3, 4]]))  # we compute w.r.t. a non-scalar variable, so the gradient supplied cannot be scalar, either!\nprint(a.grad, b.grad)\n
\n\n

prints:

\n\n
Variable containing:\n  5   5\n 11  11\n[torch.FloatTensor of size 2x2]\n Variable containing:\n 6  8\n 6  8\n[torch.FloatTensor of size 2x2]\n
\n", + "system": "" + }, + { + "instruction": "ValueError: Unexpected result of `predict_function` (Empty batch_outputs). Please use `Model.compile(..., run_eagerly=True)`", + "input": "", + "output": "

Change the axis dimension in expand_dims to 1 and slice your data like this, since it is 2D:

\n
import tensorflow as tf\nimport numpy as np\n\ntf.random.set_seed(42)\n\n# Create some regression data\nX_regression = np.expand_dims(np.arange(0, 1000, 5), axis=1)\ny_regression = np.expand_dims(np.arange(100, 1100, 5), axis=1)\n\n# Split it into training and test sets\nX_reg_train = X_regression[:150, :]\nX_reg_test = X_regression[150:, :]\n\ny_reg_train = y_regression[:150, :]\ny_reg_test = y_regression[150:, :]\n\ntf.random.set_seed(42)\n\n# Recreate the model\nmodel_3 = tf.keras.Sequential([\n  tf.keras.layers.Dense(100),\n  tf.keras.layers.Dense(10),\n  tf.keras.layers.Dense(1)\n])\n\n# Change the loss and metrics of our compiled model\nmodel_3.compile(loss=tf.keras.losses.mae, # change the loss function to be regression-specific\n                optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),\n                metrics=['mae']) # change the metric to be regression-specific\n\n# Fit the recompiled model\nmodel_3.fit(X_reg_train, y_reg_train, epochs=100)\n\nmodel_3.predict(X_reg_test)\n
\n", + "system": "" + }, + { + "instruction": "How to code a residual block using two layers of a basic CNN algorithm built with "tensorflow.keras"?", + "input": "", + "output": "

The residual block from the ResNet architecture is the following:

\n

\"Residual

\n

You need to use the Keras functional API because Sequential models are too limited. Its implementation in Keras is:

\n
from tensorflow.keras import layers\n\ndef resblock(x, kernelsize, filters):\n    fx = layers.Conv2D(filters, kernelsize, activation='relu', padding='same')(x)\n    fx = layers.BatchNormalization()(fx)\n    fx = layers.Conv2D(filters, kernelsize, padding='same')(fx)\n    out = layers.Add()([x,fx])\n    out = layers.ReLU()(out)\n    out = layers.BatchNormalization()(out)\n    return out\n
\n

The BatchNormalization() layer is not essential, but can be a solid option to increase accuracy. Note that x also needs to have the same number of filters as the filters parameter, otherwise the Add() layer will fail.

\n", + "system": "" + }, + { + "instruction": "Run into the following issue: build_tensor_flow is not supported in Eager Mode", + "input": "", + "output": "

This is because you are using TensorFlow v2. You have to use the TensorFlow v1 compatibility module and disable eager execution.

\n\n

Be careful with the TensorFlow imports that you use; for example, if you use tensorflow_core, be sure that all of your dependencies come from \"tensorflow\".\nYou have to add this before your code:

\n\n
import tensorflow as tf\nif tf.executing_eagerly():\n   tf.compat.v1.disable_eager_execution()\n
\n", + "system": "" + }, + { + "instruction": "How to solve "No Algorithm Worked" Keras Error?", + "input": "", + "output": "

Add the following to your code; it lets TensorFlow allocate GPU memory on demand instead of grabbing it all up front:

\n
from tensorflow.compat.v1 import ConfigProto\nfrom tensorflow.compat.v1 import InteractiveSession\n\nconfig = ConfigProto()\nconfig.gpu_options.allow_growth = True\nsession = InteractiveSession(config=config)\n
\n

Then restart the Python kernel.

\n", + "system": "" + }, + { + "instruction": "Accuracy no longer improving after switching to Dataset", + "input": "", + "output": "

There are some things you must check.

\n\n\n\n

These two are the most troublesome and probable things.

\n\n

Later you might want to check whether x from the dataset is within the same range you trained on before (not crucial, but it might change performance a little), whether the number of channels is the same, etc.
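A quick sanity check you can run on one batch pulled from the dataset (plain numpy, with a random array standing in for a real batch; the names and expected range are illustrative):

```python
import numpy as np

def check_batch(x, expected_channels=3, expected_range=(0.0, 1.0)):
    # report shape, channel count and value range of one batch
    lo, hi = float(x.min()), float(x.max())
    return {
        "shape": x.shape,
        "channels": x.shape[-1],
        "min": lo,
        "max": hi,
        "in_expected_range": expected_range[0] <= lo and hi <= expected_range[1],
        "has_expected_channels": x.shape[-1] == expected_channels,
    }

batch = np.random.rand(32, 64, 64, 3)   # stand-in for one batch from the Dataset
report = check_batch(batch)
```

If the report disagrees with what the model was trained on, that mismatch is a likely cause of the accuracy drop.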

\n\n
\n\n

For the relus, there are solutions like this one.

\n", + "system": "" + }, + { + "instruction": "Unable to save TensorFlow Keras LSTM model to SavedModel format", + "input": "", + "output": "

In addition to the answer of The Guy with The Hat:

\n\n

The .h5 extension is sufficient to tell Keras to store the model in the HDF5 format.

\n\n
model.save('path_to_saved_model/model.h5') \n
\n\n

should do the trick.

\n", + "system": "" + }, + { + "instruction": "How to substitute `keras.layers.merge._Merge` in `tensorflow.keras`", + "input": "", + "output": "

I have slightly modified your code to use tf.random_uniform instead of K.random_uniform and it's working fine on 1.13.1 and 1.14.0 (full snippet and resulting model.summary() below).

\n\n
import tensorflow as tf\nprint(tf.__version__)\n\n\nclass RandomWeightedAverage(tf.keras.layers.Layer):\n    def __init__(self, batch_size):\n        super().__init__()\n        self.batch_size = batch_size\n\n    def call(self, inputs, **kwargs):\n        alpha = tf.random_uniform((self.batch_size, 1, 1, 1))\n        return (alpha * inputs[0]) + ((1 - alpha) * inputs[1])\n\n    def compute_output_shape(self, input_shape):\n        return input_shape[0]\n\n\nx1 = tf.keras.layers.Input((32, 32, 1))\nx2 = tf.keras.layers.Input((32, 32, 1))\n\ny = RandomWeightedAverage(4)(inputs=[x1, x2])\n\nmodel = tf.keras.Model(inputs=[x1, x2], outputs=[y])\n\nprint(model.summary())\n
\n\n

\"model

\n", + "system": "" + }, + { + "instruction": "How to fine-tune a keras model with existing plus newer classes?", + "input": "", + "output": "

CNNs are prone to forgetting previously learned knowledge when retrained for a new task on a novel domain. This phenomenon is called catastrophic forgetting and is an active and challenging research area.

\n

Coming to the point, one obvious way to enable a model to classify new classes along with the old ones is to train it from scratch on the accumulated (old + new) dataset, which is time-consuming.

\n

In contrast, several alternative approaches have been proposed in the literature of (class-incremental) continual learning to tackle this scenario in the recent years:

\n
    \n
1. Firstly, you can use a small subset of the old dataset along with the new dataset to train your new model; this is referred to as a rehearsal-based approach. Note that you can train a GAN to generate pseudo samples of the old classes instead of storing a subset of raw samples. As depicted in the figure, during training a distillation loss is used to make the new model mimic the predictions of the old model (whose weights are frozen), which helps to avoid forgetting old knowledge:\n\"enter
  2. \n
  3. Secondly, as the contributions of each neuron in a model are not equal, while training the new model you may instead only update neurons that are less important for old classes so that we can retain old knowledge. You can check out the Elastic Weight Consolidation (EWC) paper for more details.
  4. \n
5. Thirdly, you can grow your model dynamically to extract features that are specific to new classes without harming the weights that are important for the old classes. You can check out Dynamically Expandable Networks (DEN) for more details.
  6. \n
\n", + "system": "" + }, + { + "instruction": "Is there any implementation of mtcnn face detection in tensorflow 2.0?", + "input": "", + "output": "

Here is the code for running MTCNN face_detection in real time:

\n
import cv2\nfrom mtcnn import MTCNN\n\ncap = cv2.VideoCapture(0)\ndetector = MTCNN()\n\nwhile True:\n\n    ret, frame = cap.read()\n    if not ret:\n        break\n\n    output = detector.detect_faces(frame)\n\n    for single_output in output:\n        x, y, w, h = single_output['box']\n        cv2.rectangle(frame, pt1=(x, y), pt2=(x + w, y + h), color=(255, 0, 0), thickness=2)\n    cv2.imshow('win', frame)\n\n    # ord('x') is needed here: waitKey returns an int, not a string\n    if cv2.waitKey(1) & 0xFF == ord('x'):\n        break\n\ncap.release()\ncv2.destroyAllWindows()\n
\n", + "system": "" + }, + { + "instruction": "Implementation of the latest "Lookahead Optimizer" paper in Keras?", + "input": "", + "output": "

EDIT (2020.03): There is an official implementation in tensorflow! https://www.tensorflow.org/addons/api_docs/python/tfa/optimizers/Lookahead

\n\n
\n\n

Today when I want to start implementing it, I found that somebody has already done it! (Of course, when I asked this question, it cannot be Googled.)

\n\n

Here is the link: https://github.com/bojone/keras_lookahead\n(For the non-Chinese readers, I have slightly modified the repo: https://github.com/fzyzcjy/keras_lookahead.)

\n\n

And the usage is like:

\n\n
model.compile(optimizer=Adam(1e-3), loss='mse') # Any optimizer\nlookahead = Lookahead(k=5, alpha=0.5) # Initialize Lookahead\nlookahead.inject(model) # add into model\n
\n\n

Looking into his code, the core of the implementation is the modification of the model.train_function, i.e. model.train_function = ..., to achieve the two sets of updates.
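For intuition, the two sets of updates that Lookahead maintains can be sketched in plain numpy, independent of the Keras hack (the names and the toy objective are illustrative, not from the repo): the fast weights take k inner optimizer steps, then the slow weights move a fraction alpha toward them.

```python
import numpy as np

def lookahead_sgd(grad_fn, w0, lr=0.1, k=5, alpha=0.5, steps=20):
    slow = np.array(w0, dtype=float)   # slow ("lookahead") weights
    fast = slow.copy()                 # fast (inner optimizer) weights
    for step in range(1, steps + 1):
        fast -= lr * grad_fn(fast)     # inner SGD step
        if step % k == 0:              # every k steps: interpolate and reset
            slow += alpha * (fast - slow)
            fast = slow.copy()
    return slow

# minimize f(w) = w^2, whose gradient is 2w; w should shrink toward 0
w = lookahead_sgd(lambda w: 2 * w, w0=[4.0])
```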

\n\n

In addition, it seems that the \"hacking\" trick of the repo comes from the following article (judging from his code and comments): https://kexue.fm/archives/5879/comment-page-1 (Sorry it is a non-English page)

\n", + "system": "" + }, + { + "instruction": "How to fix"NameError: name 'load_model' is not defined"", + "input": "", + "output": "

What you are missing here is that the load_model function lives in the models module, so you have to reference it through that module:

\n\n
from keras import models    \nmodel = models.load_model('filename.h5')\n
\n\n

In order to do it your way, you have to use import as following

\n\n
from keras.models import load_model\n
\n\n

PS: This next line might help you in future. If you are trying to load weights, use function:

\n\n
model.load_weights('weights_file.h5')\n
\n", + "system": "" + }, + { + "instruction": "Keras: What is the difference between model and layers?", + "input": "", + "output": "

In Keras, a network is a directed acyclic graph (DAG) of layers. A model is a network with added training and evaluation routines.

\n\n

The framework allows you to build network DAGs out of both individual layers and other DAGs. The latter is what you're seeing in the example and what seems to be causing the confusion.

\n", + "system": "" + }, + { + "instruction": "How to make it append tensorboard logs to previous runs?", + "input": "", + "output": "

To resume a previous training run, you should set the argument initial_epoch of model.fit. By doing so, the new information will be appended to the existing TensorBoard logs.

\n", + "system": "" + }, + { + "instruction": "Does Keras's LSTM really take into account the cell state and previous output?", + "input": "", + "output": "

You have to feed the previous prediction back as input to the LSTM. If you just call predict, the LSTM state will be initialized every time; it will not remember the state from previous predictions.

\n\n

Typically (e.g if you generate text with an lstm) you have a loop where you do something like this:

\n\n
# pick a random seed\nstart = numpy.random.randint(0, len(dataX)-1)\npattern = dataX[start]\nprint(\"Seed:\")\nprint(\"\\\"\", ''.join([int_to_char[value] for value in pattern]), \"\\\"\")\n# generate characters\nfor i in range(1000):\n    x = numpy.reshape(pattern, (1, len(pattern), 1))\n    x = x / float(n_vocab)\n    prediction = model.predict(x, verbose=0)\n    index = numpy.argmax(prediction)\n    result = int_to_char[index]\n    seq_in = [int_to_char[value] for value in pattern]\n    sys.stdout.write(result)\n    pattern.append(index)\n    pattern = pattern[1:len(pattern)]\nprint(\"\\nDone.\")\n
\n\n

(example copied from machinelearningmastery.com)

\n\n

The important thing are this lines:

\n\n
pattern.append(index)\npattern = pattern[1:len(pattern)]\n
\n\n

Here they append the next character to the pattern and then drop the first character, so the input length matches what the LSTM expects. Then they convert it to a numpy array (x = np.reshape(...)) and predict from the model with the generated output. So, to answer your first question, you need to feed the output back in.
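That append-then-drop bookkeeping is just a fixed-length sliding window; a tiny sketch with made-up integer-encoded characters:

```python
pattern = [3, 7, 1, 9]            # toy integer-encoded input window

def slide(pattern, predicted_index):
    pattern = pattern + [predicted_index]  # append the new prediction...
    return pattern[1:]                     # ...and drop the oldest element

pattern = slide(pattern, 5)       # window length stays fixed at 4
```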

\n\n

For the second question the 10 corresponds to the number of lstm cells that you have in a layer. If you don't use \"return_sequences=True\" it corresponds to the output size of that layer.

\n", + "system": "" + }, + { + "instruction": "Session keyword arguments are not support during eager execution. You passed: {'learning_rate': 1e-05}", + "input": "", + "output": "

Mentioning the Solution in this Section (even though it is present in Comment's section), for the benefit of the community.

\n\n

The issue is resolved by removing learning_rate = learning_rate in model.compile.

\n\n

Correct Code is mentioned below:

\n\n
def my_model(learning_rate = 5e-3):\n    model = Sequential([\n             Flatten(input_shape = (28, 28)),\n             BatchNormalization(), \n             Dropout(rate = 0.2), \n             Dense(300, activation = \"elu\", kernel_initializer = \"he_normal\"),\n             Dropout(rate = 0.2), \n             BatchNormalization(), \n             Dense(300, activation = \"elu\", kernel_initializer = \"he_normal\"),\n             Dropout(rate = 0.2), \n             BatchNormalization(), \n             Dense(10, activation = \"softmax\",kernel_initializer = \"he_normal\")])\n    opt = Adam(lr = learning_rate)\n    model.summary()\n    model.compile(loss = \"sparse_categorical_crossentropy\", optimizer = opt, metrics = [\"accuracy\"]) \n    return model\n
\n", + "system": "" + }, + { + "instruction": "Macro metrics (recall/F1...) for multiclass CNN", + "input": "", + "output": "

Let me answer both questions, but in the opposite order:

\n\n

You can't use Recall as a base for a custom loss: it is not differentiable (it is piecewise constant, so its gradient is zero almost everywhere)! If you do not fully understand why recall, precision or F1 can't be used as a loss, please take the time to review the role of the loss (it is, after all, a huge parameter in your model).

\n\n

Indeed, the round is intended for a binary problem: as they say, if it's not you then it's the other. But in your case it's wrong. Let's go through the code:

\n\n
val_predict = (np.asarray(self.model.predict(self.validation_data[0]))).round()\n
\n\n

From the inside out: they take the data (self.validation_data[0]) and predict a number (1 neuron as output), i.e. the probability of the sample being a 1. If this probability is over 0.5, the round transforms it into a 1; if it is under, it transforms it into a 0. As you can see, this is wrong for your multiclass case: sometimes you won't predict any class at all. Following this mistake, the rest is also wrong.

\n\n

Now, the solution. You want to compute the mean recall at every step. By the way, you said \"but it only works on validation set\": yes, that is intended. You use the validation set to validate the model, not the training set, otherwise it is cheating.

\n\n

So recall is equal to true positives over all positives. Let's do that!

\n\n
def recall(y_true, y_pred):\n    recall = 0\n    pred = K.argmax(y_pred)\n    true = K.argmax(y_true)\n    for i in range(16):  # 16 = number of classes\n        p = K.cast(K.equal(pred, i), 'int32')\n        t = K.cast(K.equal(true, i), 'int32')\n        # Compute the true positives\n        common = K.sum(K.dot(K.reshape(t, (1, -1)), K.reshape(p, (-1, 1))))\n        # divide by all positives in t; K.epsilon() avoids division by zero\n        recall += common / (K.sum(t) + K.epsilon())\n    return recall / 16\n
\n\n

This gives you the mean recall over all classes.\nYou could also print the value for every class.
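To check the arithmetic outside of Keras, here is the same macro recall written in plain numpy over integer labels (an illustrative helper, not part of the callback):

```python
import numpy as np

def macro_recall(y_true, y_pred, n_classes):
    # y_true, y_pred: 1-D arrays of integer class labels
    recalls = []
    for c in range(n_classes):
        positives = y_true == c
        if positives.sum() == 0:
            continue                      # skip classes absent from y_true
        tp = np.sum(positives & (y_pred == c))
        recalls.append(tp / positives.sum())
    return float(np.mean(recalls))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
score = macro_recall(y_true, y_pred, 3)   # (0.5 + 1.0 + 0.5) / 3
```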

\n\n

Tell me if you have any question!

\n\n

For an implementation of the binary recall, see this question, from which the code is adapted.

\n", + "system": "" + }, + { + "instruction": "Keras/Tensorflow Conv1D expected input shape", + "input": "", + "output": "

Why is Keras expecting 3 dimensions?

\n\n
\n
\n
\n

The three dimensions are (batch_size, feature_size, channels).

\n
\n
\n
\n\n

Define a 1D Conv layer

\n\n
Conv1D(32, (3), activation='relu' , input_shape=( 29, 1 ))\n
\n\n

Feed (4000, 29, 1) samples to this layer.
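If your data currently has shape (4000, 29), one way to add the trailing channels axis (assuming that is your array's layout):

```python
import numpy as np

x = np.ones((4000, 29))           # 4000 samples, 29 features, no channel axis
x = np.expand_dims(x, axis=-1)    # add the channels axis Conv1D expects
```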

\n\n

Simple example:

\n\n
from keras import models, layers\nimport numpy as np\n\nx = np.ones((10, 29, 1))\ny = np.zeros((10,))\nmodel = models.Sequential()\nmodel.add(layers.Conv1D(32, (3), activation='relu' , input_shape=( 29,1)))\nmodel.add(layers.Flatten())\nmodel.add(layers.Dense(1, activation='sigmoid'))\nmodel.compile(loss='binary_crossentropy', optimizer= \"adam\", metrics=['accuracy'])\nprint(model.summary())\nmodel.fit(x,y)\n
\n", + "system": "" + }, + { + "instruction": "Integrate Keras to SKLearn Pipeline?", + "input": "", + "output": "

It looks like you are passing your 11 columns of original data through your various column transformers and the number of dimensions is expanding to 30,513 (after count vectorizing your text, one hot encoding etc). Your neural network architecture is set up to accept only 11 input features but is being passed your (now transformed) 30,513 features, which is what error 1 is explaining.

\n\n

You therefore need to amend the input_dim of your neural network to match the number of features being created in the feature extraction pipeline.

\n\n

One thing you could do is add an intermediate step between them with something like SelectKBest and set that to something like 20,000 so that you know exactly how many features will eventually be passed to the classifier.

\n\n

This is a good guide and flowchart on the Google machine learning website - link - look at the flow chart - here you can see they have a 'select top k features' step in the pipeline before training a model.

\n\n

So, try updating these parts of your code to:

\n\n
def keras_classifier_wrapper():\n    clf = Sequential()\n    clf.add(Dense(32, input_dim=20000, activation='relu'))\n    clf.add(Dense(2, activation='softmax'))\n    clf.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[\"accuracy\"])\n    return clf\n
\n\n

and

\n\n
from sklearn.feature_selection import SelectKBest\nselect_best_features = SelectKBest(k=20000)\n\nPL = Pipeline(steps=[('feature_engineer', FE),\n                     ('select_k_best', select_best_features),\n                     ('keras_clf', clf)])\n
\n", + "system": "" + }, + { + "instruction": "What is the difference between "Dropout", "Monte-Carlo Dropout" and "Channel-wise Dropout"?", + "input": "", + "output": "

You are correct that MC Dropout is applied during inference as well, unlike regular dropout. If you google it you could easily find plenty of information regarding both.

\n\n

Regarding channel-wise dropout, my understanding is that instead of dropping particular neurons, it drops the entire channels.
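A plain numpy picture of that difference: instead of one Bernoulli draw per element, channel-wise dropout makes one draw per (batch, channel) pair and broadcasts it over the spatial dimensions (a sketch with illustrative shapes, using inverted-dropout scaling):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((2, 4, 4, 3))                    # (batch, H, W, channels)
rate = 0.5

# one Bernoulli draw per (batch, channel), broadcast over H and W;
# surviving channels are scaled by 1/keep_prob as in inverted dropout
mask = (rng.random((2, 1, 1, 3)) >= rate) / (1.0 - rate)
dropped = x * mask
```

Each channel slice ends up either entirely zero or entirely rescaled, which is exactly what the noise_shape argument of tf.nn.dropout achieves below.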

\n\n

Now the implementation in Keras (I'm going to use tf.keras).

\n\n

MC Dropout

\n\n

As usual in Keras, you define a custom layer that applies dropout regardless of whether it is training or testing, so we can just use tf.nn.dropout() with a constant dropout rate:

\n\n
import tensorflow as tf\n\nclass MCDropout(tf.keras.layers.Layer):\n    def __init__(self, rate):\n        super(MCDropout, self).__init__()\n        self.rate = rate\n\n    def call(self, inputs):\n        return tf.nn.dropout(inputs, rate=self.rate)\n
\n\n

Usage example:

\n\n
import tensorflow as tf\nimport numpy as np\n\nmodel = tf.keras.models.Sequential()\nmodel.add(tf.keras.layers.Conv2D(filters=6, kernel_size=3))\nmodel.add(MCDropout(rate=0.5))\nmodel.add(tf.keras.layers.Flatten())\nmodel.add(tf.keras.layers.Dense(2))\nmodel.compile(optimizer=tf.keras.optimizers.SGD(0.001),\n              loss='binary_crossentropy',\n              metrics=['accuracy'])\n\n# generate dummy data for illustration\nx_train = np.random.normal(size=(10, 4, 4, 3))\nx_train = np.vstack([x_train, 2*np.random.normal(size=(10, 4, 4, 3))])\ny_train = [[1, 0] for _ in range(10)] + [[0, 1] for _ in range(10)]\ny_train = np.array(y_train)\n\nmodel.fit(x_train,\n          y_train,\n          epochs=2,\n          batch_size=10,\n          validation_data=(x_train, y_train))\n
\n\n

Channel-Wise Dropout

\n\n

Here you can use the same tf.nn.dropout() function, however, you have to specify the noise shape. The documentation of tf.nn.dropout() gives the exact example of how to achieve dropped channels:

\n\n
\n

shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.

\n
\n\n

This is what we're going to do in the call() method:

\n\n
class ChannelWiseDropout(tf.keras.layers.Layer):\n    def __init__(self, rate):\n        super(ChannelWiseDropout, self).__init__()\n        self.rate = rate\n\n    def call(self, inputs):\n        shape = tf.keras.backend.shape(inputs)\n        noise_shape = (shape[0], 1, 1, shape[-1])\n        return tf.nn.dropout(inputs,\n                             rate=self.rate,\n                             noise_shape=noise_shape)\n
\n\n

Applying it to some example:

\n\n
model = tf.keras.models.Sequential()\nmodel.add(tf.keras.layers.InputLayer(input_shape=(4, 4, 3)))\nmodel.add(tf.keras.layers.Conv2D(filters=3, kernel_size=3))\nmodel.add(ChannelWiseDropout(rate=0.5))\n\nx_train = np.random.normal(size=(1, 4, 4, 3))\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    res = sess.run(model.output, feed_dict={model.inputs[0]:x_train})\n    print(res[:, :, :, 0])\n    print(res[:, :, :, 1])\n    print(res[:, :, :, 2])\n# [[[2.5495746  1.3060737 ]\n#   [0.47009617 1.0427766 ]]]\n# [[[-0.  0.]\n#   [-0. -0.]]]                <-- second and third channels were dropped\n# [[[-0. -0.]\n#   [-0. -0.]]]\n
\n\n

Note

\n\n

I'm using tf.__version__ == '1.13.1'. Older versions of tf use a keep_prob argument, where keep_prob = 1 - rate, instead of the rate argument.

\n", + "system": "" + }, + { + "instruction": "Does normalizing images by dividing by 255 leak information between train and test set?", + "input": "", + "output": "

Your guess is correct: dividing an image by 255 simply rescales it from 0-255 to 0-1 (converting it from int to float also makes computation convenient), and since no statistic is estimated from the data, nothing can leak between train and test. When zero-centering the data, however, you must make sure the training mean does not leak into the test set, which is why the train mean is used for both splits: (http://cs231n.github.io/neural-networks-2/#datapre)

\n\n

x_train = (x_train - x_train_mean)

\n\n

x_test = (x_test - x_train_mean)
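A minimal numpy illustration that only the training statistic is reused, so nothing about the test set influences the transform:

```python
import numpy as np

x_train = np.array([0.0, 50.0, 100.0])
x_test = np.array([25.0, 75.0])

x_train_mean = x_train.mean()       # statistic computed on train data only
x_train_c = x_train - x_train_mean
x_test_c = x_test - x_train_mean    # the same train mean is reused: no leakage
```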

\n\n

Moreover, you can use sklearn's Pipeline class (https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) and its fit() and/or fit_transform() methods to simplify the process. If you're using Keras, there's a wrapper for it.

\n", + "system": "" + }, + { + "instruction": "How to use featurewise_center=True together with flow_from_directory in ImageDataGenerator?", + "input": "", + "output": "

featurewise_center transforms the images to zero mean. This is done using the formula

\n\n
\n

X = X - mean(X)

\n
\n\n

But for the ImageDataGenerator to do this transformation, it needs to know the mean of the dataset, and the fit method of the ImageDataGenerator does exactly this: it calculates these statistics.

\n\n

As the keras docs explain

\n\n
\n

Fits the data generator to some sample data. This computes the\n internal data stats related to the data-dependent transformations,\n based on an array of sample data.

\n
\n\n

If the dataset can be fully loaded into the memory, we can do so by loading all the images into a numpy array and running the fit on it.

\n\n

Sample code (RGB images of 256x256) :

\n\n\n\n
from keras.layers import Input, Dense, Flatten, Conv2D\nfrom keras.models import Sequential\nfrom keras.preprocessing.image import ImageDataGenerator\nimport numpy as np\nfrom pathlib import Path\nfrom PIL import Image\n\nheight = width = 256 \n\ndef read_pil_image(img_path, height, width):\n        with open(img_path, 'rb') as f:\n            return np.array(Image.open(f).convert('RGB').resize((width, height)))\n\ndef load_all_images(dataset_path, height, width, img_ext='png'):\n    return np.array([read_pil_image(str(p), height, width) for p in \n                                    Path(dataset_path).rglob(\"*.\"+img_ext)]) \n\ntrain_datagen = ImageDataGenerator(featurewise_center=True)\ntrain_datagen.fit(load_all_images('./images/', height, width))\n\ntrain_generator = train_datagen.flow_from_directory(\n        './images/',\n        target_size=(height, width),\n        batch_size=32,\n        class_mode='binary',\n        color_mode='rgb')\n\nmodel = Sequential()\nmodel.add(Conv2D(1,(3,3), input_shape=(height,width,3)))\nmodel.add(Flatten())\nmodel.add(Dense(1))\nmodel.compile('adam', 'binary_crossentropy')\n\nmodel.fit_generator(train_generator)\n
\n\n

But what if the data cannot be fully loaded into memory ? One approach is to sample the images randomly from the dataset.

\n\n

Normally we use the mean of the training data only to do mean normalization, and use the same mean to normalize the validation/test data. It will be a bit tricky to do the same via the data generator.

\n", + "system": "" + }, + { + "instruction": "Different accuracy by fit() and evaluate() in Keras with the same dataset", + "input": "", + "output": "

After some digging into Keras issues, I found this.

\n\n

The reason for this is that when you use fit, the weights are updated at each batch of the training data. The loss value returned by the fit method is not the loss of the final model, but the mean of the losses of all the slightly different models used on each batch.

\n\n

On the other hand, when you use evaluate, the same model is used on the whole dataset. And this model never actually appears in the loss reported by fit, since even at the last batch of training, the loss computed is used to update the model's weights.

\n\n

To sum everything up, fit and evaluate have two completely different behaviours.
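A toy illustration of why the two numbers differ (the loss values here are made up): fit reports an average over batches measured on a changing model, while evaluate measures the finished model once.

```python
# per-batch losses recorded while the weights change after every batch
batch_losses = [1.0, 0.8, 0.6, 0.4]

# what fit() reports: an average over snapshots of an improving model
fit_reported = sum(batch_losses) / len(batch_losses)

# what evaluate() might report: the final, fixed model measured once
final_model_loss = 0.35
```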

\n\n

References:

\n\n
    \n
  1. Keras_issues_thread
  2. \n
  3. Keras_official_doc
  4. \n
\n", + "system": "" + }, + { + "instruction": "How to define ConvLSTM encoder_decoder in Keras?", + "input": "", + "output": "

Below is my approach for encoder-decoder-based solution with ConvLSTM.

\n
def convlstm(input_shape):\n\n    print(np.shape(input_shape))\n\n    inpTensor = Input((input_shape))\n\n    #encoder\n    net1 = ConvLSTM2D(filters=32, kernel_size=3,\n                   padding='same', return_sequences=True)(inpTensor)\n\n    max_pool1 = MaxPooling3D(pool_size=(2, 2, 2), strides=2, \n    padding='same')(net1)\n\n    bn1 = BatchNormalization(axis=1)(max_pool1)\n\n    dp1 = Dropout(0.2)(bn1)\n\n    net2 = ConvLSTM2D(filters=64, kernel_size=3,\n                    padding='same', return_sequences=True)(dp1)\n\n    max_pool2 = MaxPooling3D(pool_size=(2, 2, 2), strides=2, \n    padding='same')(net2)\n\n    bn2 = BatchNormalization(axis=1)(max_pool2)\n\n    dp2 = Dropout(0.2)(bn2)\n\n    net3 = ConvLSTM2D(filters=128, kernel_size=3,\n                   padding='same', return_sequences=True)(dp2)\n\n    max_pool3 = MaxPooling3D(pool_size=(2, 2, 2), strides=2, \n    padding='same')(net3)\n\n    bn3 = BatchNormalization(axis=1)(max_pool3)\n\n    dp3 = Dropout(0.2)(bn3)\n\n\n    #decoder\n    net4 = ConvLSTM2D(filters=128, kernel_size=3,\n                    padding='same', return_sequences=True)(dp3)\n\n    up1 = UpSampling3D((2, 2, 2))(net4)\n\n    net5= ConvLSTM2D(filters=64, kernel_size=3,\n                    padding='same', return_sequences=True)(up1)\n\n    up2 = UpSampling3D((2, 2, 2))(net5)\n\n    net6 = ConvLSTM2D(filters=32, kernel_size=3,\n                    padding='same', return_sequences=False)(up2)\n\n    up3 = UpSampling2D((2, 2))(net6)\n\n    out = Conv2D(filters=1, kernel_size=(3, 3), activation='sigmoid',\n                  padding='same', data_format='channels_last')(up3)\n\n    #or use only return out\n    return Model(inpTensor, out)\n
\n", + "system": "" + }, + { + "instruction": "How to preprocess training set for VGG16 fine tuning in Keras?", + "input": "", + "output": "

ImageDataGenerator has a preprocessing_function argument which allows you to pass the same preprocess_input function that you are using during inference. This function will do the rescaling for you, so you can omit your own scaling:

\n\n
from keras.applications.vgg16 import preprocess_input\ntrain_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)\n
\n\n

Most of the pretrained models in keras_applications use the same preprocessing function. You can inspect the docstring to see what it does:

\n\n
def preprocess_input(x, data_format=None, mode='caffe', **kwargs):\n    \"\"\"Preprocesses a tensor or Numpy array encoding a batch of images.\n    # Arguments\n        x: Input Numpy or symbolic tensor, 3D or 4D.\n            The preprocessed data is written over the input data\n            if the data types are compatible. To avoid this\n            behaviour, `numpy.copy(x)` can be used.\n        data_format: Data format of the image tensor/array.\n        mode: One of \"caffe\", \"tf\" or \"torch\".\n            - caffe: will convert the images from RGB to BGR,\n                then will zero-center each color channel with\n                respect to the ImageNet dataset,\n                without scaling.\n            - tf: will scale pixels between -1 and 1,\n                sample-wise.\n            - torch: will scale pixels between 0 and 1 and then\n                will normalize each channel with respect to the\n                ImageNet dataset.\n    # Returns\n        Preprocessed tensor or Numpy array.\n
\n", + "system": "" + }, + { + "instruction": "Explaining CNN (Keras) outputs with LIME", + "input": "", + "output": "

I managed to solve the problem now. \nHere comes the solution for those who might have an interest in it.

\n\n

In short, the trick was to pick the right columns from the numpy arrays in get_predict_proba_fn_of_class. Since I had five independent classification scores that do not add up to one, I had to add the complementary score for every label in a new column (e.g. for a score of 0.67 I added 1 - 0.67 = 0.33) and then pick the original and new column.

\n\n
from lime.lime_text import LimeTextExplainer, TextDomainMapper\nprint(encoder.classes_)\n##https://lime-ml.readthedocs.io/en/latest/lime.html#module-lime.lime_text\n\n\ndef flatten_predict(i):\n    global model   \n    # catch single string input and convert to list\n    if i.__class__ != list:\n        i = [i]\n    # list for predictions\n    predStorage = []\n    # loop through input list and predict\n    for textInput in i:\n        textInput = preprocess(textInput)\n        textInput = make_predictable(textInput)\n        pred = model.predict(textInput)\n        pred = np.append(pred, 1-pred, axis=1)\n        predStorage.extend(pred)\n\n    return np.asarray(predStorage)\n\n\ndef get_predict_proba_fn_of_class(strings):\n    def rewrapped_predict(strings): \n        pred = flatten_predict(strings)\n        index = np.where(pred==label)[1][0]\n        preds = pred[:, index::5]\n        return preds\n\n    return rewrapped_predict\n\n\nstring=\"Der Arzt wei\u00df, was er tut und hat mir alles genau erkl\u00e4rt.\"\nprint(\"Simple model prediction:\", model.predict(make_predictable(preprocess(string))))\n\nlabels_to_explain = flatten_predict(string)\nprint(\"labels_to_explain:\", labels_to_explain)\n\nexplanation_for_label = {}\nfor index, label in enumerate(labels_to_explain[0]):\n    if index < (len(labels_to_explain[0])/2):\n        actual_classes = [encoder.classes_[index], 'None']\n        explainer = LimeTextExplainer(class_names=actual_classes)\n        wrapped = get_predict_proba_fn_of_class(string)  # function returns function!\n        explanation_for_label[label] = explainer.explain_instance(string, wrapped)\n        explanation_for_label[label].show_in_notebook()\n
\n", + "system": "" + }, + { + "instruction": "Why am I receive AlreadyExistsError?", + "input": "", + "output": "

Adding the following code before the line model = Sequential() will stop this error.

\n\n
from tensorflow.core.protobuf import rewriter_config_pb2\nfrom tensorflow.keras.backend import set_session\ntf.keras.backend.clear_session()  # For easy reset of notebook state.\n\nconfig_proto = tf.ConfigProto()\noff = rewriter_config_pb2.RewriterConfig.OFF\nconfig_proto.graph_options.rewrite_options.arithmetic_optimization = off\nsession = tf.Session(config=config_proto)\nset_session(session)\n
\n", + "system": "" + }, + { + "instruction": "How to add Attention layer between two LSTM layers in Keras", + "input": "", + "output": "

Attention is meant to iteratively take one decoder output value (the last hidden state) and then using this 'query', 'attend' to all the 'values' which is nothing but the entire list of encoder output.

\n

So input1 = decoder hidden state of prev timestep: the 'key'

\n

input2 = all encoder hidden states: the 'value's

\n

output = the context: weighted sum of all the encoder hidden states

\n

Use the context, prev hidden state of decoder and the prev translated output to generate the next word and a new hidden output state and then repeat the above process all over again until 'EOS' is encountered.
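The 'weighted sum of all the encoder hidden states' can be sketched in plain numpy as dot-product attention (illustrative shapes and values, not the exact flavour used in the question):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(query, values):
    # query: (d,) decoder hidden state; values: (T, d) encoder hidden states
    scores = values @ query      # one alignment score per encoder timestep
    weights = softmax(scores)    # attention distribution over timesteps
    return weights @ values      # context = weighted sum of encoder states

values = np.eye(3)                                   # 3 encoder states, d = 3
context = attend(np.array([5.0, 0.0, 0.0]), values)  # query aligned with state 0
```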

\n

Your attention logic itself is perfect (excluding the last line involving the decoder). But the rest of your code is missing. If you can share the complete code, I can help you with the error. I see no mistake in the attention logic you have defined.

\n

For more specific details, please refer https://towardsdatascience.com/create-your-own-custom-attention-layer-understand-all-flavours-2201b5e8be9e

\n", + "system": "" + }, + { + "instruction": "Speeding up TFRecords feed into Keras model on CloudML for GPU", + "input": "", + "output": "

In my case, I was using a custom setup.py file which used a CPU-only Tensorflow version.

\n\n

I am kicking myself, please install tensorflow-gpu instead.

\n", + "system": "" + }, + { + "instruction": "Tensorflow: ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`", + "input": "", + "output": "

If you set the shape after batching, you will need to set it to [None, 100] to include the batch axis:

\n\n
x['reviews'].set_shape([None, 100])\n
\n", + "system": "" + }, + { + "instruction": "I trained a keras model on google colab. Now not able to load it locally on my system.", + "input": "", + "output": "
    \n
1. Make sure you have the newest versions of Keras and TensorFlow (at the time of writing, 2.2.4 and 1.11.0) by running either pip install keras tensorflow or conda install keras tensorflow.

  2. \n
  3. In case it is Google Colab that uses deprecated objects, you may need to use custom objects:

  4. \n
\n\n
from keras.utils import CustomObjectScope\nfrom keras.initializers import glorot_uniform\n\nwith CustomObjectScope({'GlorotUniform': glorot_uniform()}):\n    model = load_model('my_model.h5')\n
\n\n

Not sure if this is your case though.

\n", + "system": "" + }, + { + "instruction": "Early Stopping with a Cross-Validated Metric in Keras", + "input": "", + "output": "

Keras really does have greatly customizable callback functionality, as can be seen here.

\n\n

If you are not satisfied with Keras's current EarlyStopping callback, which for me does the job of monitoring the validation loss during training, you can create a custom callback function. Custom callback functions can also be chained.

\n\n

If your issue is accessing the model inside the callback, then self is the variable you want to use, as can be seen in this answer. I don't completely understand why you want to "redevelop" the model during the test phase, but even then you could still use callbacks: after the EarlyStopping callback, you could add another callback function that "redevelops" the model.

\n\n

If you want to access the model's deeper variables, you can use the Keras backend.

\n\n

I hope I helped.

\n", + "system": "" + }, + { + "instruction": "Tensorflow Keras - AttributeError: Layer features has no inbound nodes", + "input": "", + "output": "

When defining a network in Keras, the first layer added needs to specify its input_shape.

\n\n

See the docs here: https://keras.io/getting-started/sequential-model-guide/#specifying-the-input-shape

\n\n

So for MNIST, you should have something like input_shape=(28,28,1)

\n\n

There's a nice example here: https://www.kaggle.com/adityaecdrid/mnist-with-keras-for-beginners-99457

\n", + "system": "" + }, + { + "instruction": "How to install Keras if the required version of setuptools is older than default?", + "input": "", + "output": "
\n

tensorflow 1.9.0 has requirement setuptools<=39.1.0, but you'll have setuptools 39.2.0 which is incompatible

\n
\n\n

I got the same error. To downgrade your setuptools version you can use

\n\n
pip install setuptools==39.1.0\n
\n\n

Hope this helps with further Keras installations.

\n", + "system": "" + }, + { + "instruction": "Getting error "Could not import PIL.Image. The use of `array_to_img` requires PIL."", + "input": "", + "output": "

You will want to install the PIL package.

\n\n

How do I install PIL/Pillow for Python 3.6?

\n\n
pip install pillow\n
\n\n

or

\n\n
pip3 install pillow\n
\n", + "system": "" + }, + { + "instruction": "Keras model (tensorflow backend) trains perfectly well in python 3.5 but very bad in python 2.7", + "input": "", + "output": "

I found the solution for my problem (thanks to user1735003 for the tip regarding my data).
\nThe reason for my bad results was incorrect data handling due to the differences between Python 2.x and Python 3.x. When loading my image data I use

\n\n
for i in range(len(evaluation_files)):\n    input_evaluation = np.divide(ndimage.imread(evaluation_files[i]), 255)\n
\n\n

But here is the problem: in Python 3.x this works perfectly fine, since a division of two integers results in a float, but in Python 2.x the result is an integer as well, so my input_evaluation list consists only of zeros. I need to divide by 255.0 (to make the result a float).

\n\n
input_evaluation = np.divide(ndimage.imread(evaluation_files[i]), 255.0)\n
\n\n

Alternatively, import division from __future__ to get floating-point results from integer divisions in Python 2 as well.

\n\n
from __future__ import division\n
\n\n
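To see the difference concretely, here is a small NumPy sketch (the image values are made up for illustration; Python 3 semantics shown):

```python
import numpy as np

# Tiny fake image; the concrete numbers are just for illustration.
img = np.array([[0, 128, 255]], dtype=np.uint8)

scaled = np.divide(img, 255.0)   # explicit float divisor: safe in Python 2 and 3
assert scaled.dtype == np.float64
assert float(scaled.min()) == 0.0 and float(scaled.max()) == 1.0

# Under Python 3 (or with `from __future__ import division`), an integer
# divisor also performs true division and returns floats rather than zeros:
assert np.allclose(np.divide(img, 255), scaled)
```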

There are some major differences between Python 2.x and Python 3.x, which are laid out nicely, for example, at http://sebastianraschka.com/Articles/2014_python_2_3_key_diff.html .

\n\n

I also managed training my model on Python 3.5, save it using model.save('my_model') and load it in Python 2.7 using keras.models.load_model('my_model'), which works perfectly fine.

\n\n

One could also just save the weights using model.save_weights('my_weights'), create a new model of the same architecture(!) in Python 2.7 and load the weights into that model using model.load_weights('my_weights'), but since loading the whole model works perfectly fine, that way is much easier.

\n", + "system": "" + }, + { + "instruction": "Nested while loop in tensorflow", + "input": "", + "output": "

tf.while_loop can be tricky to use, so make sure to read the documentation carefully. The return value of the body must have the same structure as the loop variables, and the return value of the tf.while_loop operation is the final value of the variables. In order to accumulate a computation, you should pass an additional loop variable to store the partial results. You could do something like this:

\n\n
def body1(i, result):\n    global data\n    N = len(data) * positive_samples\n    j = tf.constant(0)\n    condition2 = lambda j, i, result: tf.less(j, N)\n\n    def body2(j, i, result):\n        global similarity_matrix, U, V\n        result_j = (tf.transpose(U[:, i]) * V[:, j])\n        return j + 1, i, result + result_j\n\n    j, i, result = tf.while_loop(condition2, body2, loop_vars=[j, i, result])\n    return i + 1, result\n\ndef loss_function(x):\n    global data\n    N = len(data)*positive_samples\n    i = tf.constant(0)\n    result = tf.constant(0, dtype=tf.float32)\n    condition1 = lambda i, result: tf.less(i, N)\n    i, result = tf.while_loop(condition1, body1, [i, result])\n    return result\n
\n\n

It is not clear from your code where x is to be used. In this case, though, the result of the operation should be equal to simply:

\n\n
result = tf.reduce_sum(tf.linalg.matmul(U, V, transpose_a=True))\n
\n\n

Which will also be much faster.

\n", + "system": "" + }, + { + "instruction": "Deep Learning implementation in Tensorflow or Keras give drastic different results", + "input": "", + "output": "
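For intuition, tf.while_loop's contract (the body returns new values for all loop variables, and the operation's result is their final values) can be mimicked in plain Python. This is only a rough sketch, not how TensorFlow actually executes the loop:

```python
# Minimal pure-Python model of tf.while_loop's contract: the body returns
# new values for *all* loop variables; the final values are the result.
def while_loop(cond, body, loop_vars):
    while cond(*loop_vars):
        loop_vars = body(*loop_vars)
    return loop_vars

# Accumulate partial results in an extra loop variable, as in the answer:
i, result = while_loop(lambda i, r: i < 5,
                       lambda i, r: (i + 1, r + i),
                       (0, 0))
# i ends at 5, result = 0 + 1 + 2 + 3 + 4 = 10
```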

The answer was in the Keras implementation of softmax where they subtract an unexpected max:

\n\n
def softmax(x, axis=-1):\n    # when x is a 2 dimensional tensor\n    e = K.exp(x - K.max(x, axis=axis, keepdims=True))\n    s = K.sum(e, axis=axis, keepdims=True)\n    return e / s\n
\n\n
\n\n
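Note that subtracting the max is a standard numerical-stability trick: it cancels out in the division, so the softmax output is unchanged, but it prevents exp() from overflowing. A quick NumPy check (function names here are my own):

```python
import numpy as np

def softmax_stable(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))  # the "max hack"
    return e / e.sum(axis=axis, keepdims=True)

def softmax_naive(x, axis=-1):
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

x = np.array([[1.0, 2.0, 3.0]])
# on well-scaled inputs both versions agree: the subtraction cancels out
assert np.allclose(softmax_stable(x), softmax_naive(x))

big = np.array([[1000.0, 1001.0]])
with np.errstate(over='ignore', invalid='ignore'):
    naive = softmax_naive(big)       # exp(1000) overflows -> nan
assert np.isnan(naive).any()
assert np.allclose(softmax_stable(big).sum(), 1.0)
```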

Here is the Tensorflow implementation updated with the max hack, and the good results that come with it:

\n\n
import tensorflow as tf\n\ntf.reset_default_graph()\nx = inputs = tf.placeholder(tf.float32, shape=[None, shape[1], shape[0], 3])\ntargets = tf.placeholder(tf.float32, shape=[None, shape[1], shape[0], 2])\n\nfor d in range(4):\n    x = tf.layers.conv2d(x, filters=np.exp2(d+4), kernel_size=[3,3], strides=[1,1], padding=\"SAME\", activation=tf.nn.relu)\n    x = tf.layers.max_pooling2d(x, strides=[2,2], pool_size=[2,2], padding=\"SAME\")\n\nx = tf.layers.conv2d(x, filters=2, kernel_size=[1,1])\nlogits = tf.image.resize_images(x, [shape[1], shape[0]], align_corners=True)\n# The misterious hack took from Keras\nlogits = logits - tf.expand_dims(tf.reduce_max(logits, axis=-1), -1)\nprediction = tf.nn.softmax(logits)\n\nloss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=logits))\noptimizer = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(loss)\n\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n\ndef run(mode, x_batch, y_batch):\n    if mode == 'TRAIN':\n        return sess.run([loss, optimizer], feed_dict={inputs: x_batch, targets: y_batch})\n    else:\n        return sess.run([loss, prediction], feed_dict={inputs: x_batch, targets: y_batch})\n
\n\n

\"Tensorflow\n\"Tensorflow

\n\n

Huge thanks to Simon for pointing this out on the Keras implementation :-)

\n", + "system": "" + }, + { + "instruction": "Converting Tensor to Numpy Array - Custom Loss function In keras", + "input": "", + "output": "

EDIT 1: Here is an untested substitution:

\n\n

(I took the liberty of normalizing the variable names)

\n\n
def get_balance(x_true, x_pred):\n\n    x_true = K.tf.round(x_true)\n    x_pred = K.tf.round(x_pred)\n\n    # didn't see the need for these\n    # NumberOfBars = (x_true)\n    # NumberOfHours = NumberOfBars/60\n\n    trade_index = K.tf.not_equal(x_pred[:,1], 0 )\n\n    ##remove predictions that are not tradable\n    x_true_tradeable = K.tf.boolean_mask(x_true[:,0], trade_index)\n    x_pred_tradeable = K.tf.boolean_mask(x_pred[:,0], trade_index)\n\n    cm = K.tf.confusion_matrix(x_true_tradeable, x_pred_tradeable)\n\n    correct_predictions = cm[0,0]+cm[1,1]\n    wrong_predictions = cm[1,0]+cm[0,1]\n    total_trades = correct_predictions + wrong_predictions\n    accuracy = (correct_predictions/total_trades)*100\n\n    return accuracy \n
\n\n

Original Answer

\n\n

Welcome to SO. As you might know, we need to compute the gradient of the loss function. We can't compute the gradient correctly on numpy arrays (they're just constants).

\n\n

What is done (in tf/theano, which are the backends one uses with keras) is automatic differentiation on tensors (e.g. tf.placeholder()). This is not the entire story, but what you should know at this point is that tf/theano gives us gradients by default on operators like tf.max and tf.sum.

\n\n

What that means for you is all the operations on tensors (y_true and y_pred) should be rewritten to use tf / theano operators.

\n\n

I'll comment with what I think would be rewritten and you can substitute accordingly and test.

\n\n

See tf.round used as K.tf.round where K is the reference to the keras backend imported as \nimport keras.backend as K

\n\n
x_true = np.round(x_true)  \nx_pred = np.round(x_pred)\n
\n\n

Grab the shape of the tensor x_true with K.shape. Computing the ratio over a constant can remain as it is here:

\n\n
NumberOfBars = len(x_true) \nNumberOfHours = NumberOfBars/60\n
\n\n

See tf.where used as K.tf.where

\n\n
TradeIndex = np.where( x_pred[:,1] == 0 )[0] \n
\n\n

You could mask the tensor w/ a condition instead of deleting - see masking

\n\n
##remove predictions that are not tradable\nx_true = np.delete(x_true[:,0], TradeIndex) \nx_pred = np.delete(x_pred[:,0], TradeIndex)\n
\n\n

See tf.confusion_matrix

\n\n
CM = confusion_matrix(x_true, x_pred)\n
\n\n

The computations that follow are computations over constants and so remain essentially the same (conditioned on whatever changes have to be made given the new API).

\n\n

Hopefully I can update this answer with a valid substitution that runs. But I hope this sets you on the right path.

\n\n

A suggestion on coding style: I see you use three versions of variable naming in your code; choose one and stick with it.

\n", + "system": "" + }, + { + "instruction": "First training epoch is very slow", + "input": "", + "output": "

Based on this issue:

\n\n
\n

The first epoch takes the same time, but the counter also takes into\n account the time taken by building the part of the computational graph\n that deals with training (a few seconds). This used to be done during\n the compile step, but now it is done lazily on demand to avoid\n unnecessary work.

\n
\n", + "system": "" + }, + { + "instruction": "1d CNN audio in keras", + "input": "", + "output": "

Thank you very much, I modified my code in this way:

\n\n
input_shape = (1500,40)                             \nmodel_input = Input(shape=input_shape, name='input')                    \nlayer = model_input                                                     \nlayer = Conv1D(filters=40,\n               kernel_size=10,\n               padding='valid',\n               activation='relu')(layer)\nlayer = BatchNormalization(axis=2)(layer)                           \nlayer = MaxPooling1D(pool_size=40,\n                     padding='same')(layer)                             \nlayer = Dropout(self.params.drop_rate)(layer)                           \nlayer = LSTM(40, return_sequences=True,\n             activation=self.params.lstm_activation)(layer)         \nlayer = Dropout(self.params.lstm_dropout)(layer)\nlayer = Dense(40, activation = 'relu')(layer)\nlayer = BatchNormalization(axis = 2)(layer)                \nmodel_output = TimeDistributed(Dense(2,\n                                     activation='sigmoid'))(layer) \n
\n\n

I was actually thinking that maybe I have to permute my axes in order to make the max-pooling layer work on my 40-mel-feature axis...

\n", + "system": "" + }, + { + "instruction": "How to implement a custom layer wit multiple outputs in Keras?", + "input": "", + "output": "

In the call method of your layer, where you perform the layer calculations, you can return a list of tensors:

\n\n
def call(self, inputTensor):\n\n    #calculations with inputTensor and the weights you defined in \"build\"\n    #inputTensor may be a single tensor or a list of tensors\n\n    #output can also be a single tensor or a list of tensors\n    return [output1,output2,output3]\n
\n\n

Take care of the output shapes:

\n\n
def compute_output_shape(self,inputShape):\n\n    #calculate shapes from input shape    \n    return [shape1,shape2,shape3]\n
\n\n

The result of using the layer is a list of tensors. \nNaturally, some kinds of keras layers accept lists as inputs, others don't.
\nYou have to manage the outputs properly using a functional API Model. You're probably going to have problems using a Sequential model while having multiple outputs.

\n\n

I tested this code on my machine (Keras 2.0.8) and it works perfectly:

\n\n
from keras.layers import *\nfrom keras.models import *\nimport numpy as np\n\nclass Lay(Layer):\n    def __init__(self):\n        super(Lay,self).__init__()\n\n    def build(self,inputShape):\n        super(Lay,self).build(inputShape)\n\n    def call(self,x):\n        return [x[:,:1],x[:,-1:]]\n\n    def compute_output_shape(self,inputShape):\n        return [(None,1),(None,1)]\n\n\ninp = Input((2,))\nout = Lay()(inp)\nprint(type(out))\n\nout = Concatenate()(out)\nmodel = Model(inp,out)\nmodel.summary()\n\ndata = np.array([[1,2],[3,4],[5,6]])\nprint(model.predict(data))\n\nimport keras\nprint(keras.__version__)\n
\n", + "system": "" + }, + { + "instruction": "Keras Callback instance has no attribute 'set_model'", + "input": "", + "output": "

set_model is a method defined in keras.callbacks.Callback. To write a custom callback, you'll have to subclass keras.callbacks.Callback. Otherwise, your callback will lack some necessary methods used by Keras internally.

\n\n

Changing the first line into the following line should work.

\n\n
class TimingCallback(keras.callbacks.Callback):\n
\n", + "system": "" + }, + { + "instruction": "Import statments when using Tensorflow contrib keras", + "input": "", + "output": "

Try this with recent versions of tensorflow:

\n\n
from tensorflow.python.keras.models import Sequential\nfrom tensorflow.python.keras.layers import LSTM, TimeDistributed, Dense, ...\n
\n", + "system": "" + }, + { + "instruction": "Big HDF5 dataset, how to efficienly shuffle after each epoch", + "input": "", + "output": "

Yeah, shuffling improves performance since running the data in the same order each time may get you stuck in suboptimal areas.

\n\n

Don't shuffle the entire data. Create a list of indices into the data, and shuffle that instead. Then move sequentially over the index list and use its values to pick data from the data set.

\n", + "system": "" + }, + { + "instruction": "Audio classification with Keras: presence of human voice", + "input": "", + "output": "
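The idea can be sketched with plain NumPy (the array here is a stand-in for a large HDF5 dataset; note that h5py's fancy indexing requires the indices within a read to be in increasing order, hence the per-batch sort):

```python
import numpy as np

# Stand-in for a big HDF5 dataset: shuffle an index array, not the data.
data = np.arange(100).reshape(50, 2)       # 50 samples, 2 features
indices = np.arange(len(data))

rng = np.random.default_rng(0)
rng.shuffle(indices)                        # cheap: shuffles 50 ints, not the dataset

batch_size = 10
batches = []
for start in range(0, len(indices), batch_size):
    # sorting each batch's indices keeps HDF5 reads valid and fast
    batch_idx = np.sort(indices[start:start + batch_size])
    batches.append(data[batch_idx])

# every sample is visited exactly once per epoch
seen = np.concatenate(batches)
```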

Sounds can be seen as 1D images and processed with 1D convolutions. \nOften, dilated convolutions do a good job here; see WaveNet.

\n\n

Sounds can also be seen as sequences and processed with RNN layers (but the amount of data per sample may make that too heavy).

\n\n

For your case, you need only one output with a 'sigmoid' activation at the end and a 'binary_crossentropy' loss.

\n\n\n\n
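For the single-output case, here is a plain NumPy sketch of what the sigmoid activation and binary cross-entropy loss compute (the logits and labels are made up; function names are my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_crossentropy(y_true, p, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)   # avoid log(0)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

logits = np.array([-2.0, 0.0, 3.0])   # hypothetical network outputs
p = sigmoid(logits)                    # each independently in (0, 1)
y = np.array([0.0, 0.0, 1.0])          # 1 = human voice present

assert np.all((p > 0) & (p < 1))
# the loss is lower when predictions agree with the labels
assert binary_crossentropy(y, p) < binary_crossentropy(y, 1 - p)
```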

When to use 'softmax'?

\n\n

The softmax function is good for multiclass problems (not your case) where you want exactly one class as a result. All the outputs of a softmax function sum to 1; it's intended to work like a probability for each class.

\n\n

It's mainly used at the final layer, because you only get classes as the final result.

\n\n

It's good for cases when only one class is correct. And in this case, it goes well with the loss categorical_crossentropy.

\n\n

Relu and other activations in the middle of the model

\n\n

There are no strict rules for these; there are lots of possibilities. I often see relu in convolutional image models.

\n\n

An important thing to know is their \"ranges\": what are the limits of their outputs?

\n\n\n", + "system": "" + }, + { + "instruction": "How do I combine two keras generator functions", + "input": "", + "output": "

Using zip() to combine the generators can hang: in Python 2, zip() tries to exhaust its arguments to build a list, which never finishes with infinite generators.\nUse this instead:

\n\n
def combine_generator(gen1, gen2):\n    while True:\n        yield(next(gen1), next(gen2))\n
\n\n

Modified code would look something like this:

\n\n
datagen_args = dict(rotation_range=10,\n                    width_shift_range=0.1,\n                    height_shift_range=0.1,\n                    horizontal_flip=True)\n\nin_gen1 = ImageDataGenerator(**datagen_args)\nin_gen2 = ImageDataGenerator(**datagen_args)\n\ndef combine_generator(gen1, gen2):\n    while True:\n        yield(next(gen1), next(gen2))\n\n# combine the two flow iterators, then train on the combined generator\ntrain_generator = combine_generator(\n    in_gen1.flow(pair_df[:, 0,::], y_train, batch_size=16),\n    in_gen2.flow(pair_df[:, 1,::], y_train, batch_size=16))\n\nmodel.fit_generator(train_generator, steps_per_epoch=len(pair_df) // 16,\n                    epochs=epochs, verbose=1)\n
\n\n

See this thread for further reference.

\n", + "system": "" + }, + { + "instruction": "MemoryError in keras.utils.np_utils.to_categorical", + "input": "", + "output": "

You do not actually need one-hot encoded labels: you can train with integer labels directly by using the sparse_categorical_crossentropy loss.

\n\n

This way there should not be an out of memory error. Another alternative is to make a generator (to use with fit_generator) and one-hot encode labels on the fly.

\n", + "system": "" + }, + { + "instruction": "Calculating gradient norm wrt weights with keras", + "input": "", + "output": "
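To see why one-hot encoding blows up memory, here is a small NumPy illustration (the sample and class counts are arbitrary):

```python
import numpy as np

n_samples, n_classes = 1000, 5000
labels = np.random.randint(0, n_classes, size=n_samples)   # integer labels: a few KB

# what a one-hot encoding (as produced by to_categorical) materializes:
one_hot = np.zeros((n_samples, n_classes), dtype=np.float32)
one_hot[np.arange(n_samples), labels] = 1.0

assert one_hot.shape == (n_samples, n_classes)
assert one_hot.nbytes == n_samples * n_classes * 4   # 20 MB for this tiny example
assert np.array_equal(one_hot.argmax(axis=1), labels)  # no information gained
```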

There are several placeholders related to the gradient computation process in Keras:

\n\n
    \n
  1. Input x
  2. \n
  3. Target y
  4. \n
  5. Sample weights: even if you don't provide it in model.fit(), Keras still generates a placeholder for sample weights, and feed np.ones((y.shape[0],), dtype=K.floatx()) into the graph during training.
  6. \n
  7. Learning phase: this placeholder will be connected to the gradient tensor only if there's any layer using it (e.g. Dropout).
  8. \n
\n\n

So, in your provided example, in order to compute the gradients, you need to feed x, y and sample_weights into the graph. That's the underlying reason of the error.

\n\n

Inside Model._make_train_function() there are the following lines showing how to construct the necessary inputs to K.function() in this case:

\n\n
inputs = self._feed_inputs + self._feed_targets + self._feed_sample_weights\nif self.uses_learning_phase and not isinstance(K.learning_phase(), int):\n    inputs += [K.learning_phase()]\n\nwith K.name_scope('training'):\n    ...\n    self.train_function = K.function(inputs,\n                                     [self.total_loss] + self.metrics_tensors,\n                                     updates=updates,\n                                     name='train_function',\n                                     **self._function_kwargs)\n
\n\n

By mimicking this function, you should be able to get the norm value:

\n\n
def get_gradient_norm_func(model):\n    grads = K.gradients(model.total_loss, model.trainable_weights)\n    summed_squares = [K.sum(K.square(g)) for g in grads]\n    norm = K.sqrt(sum(summed_squares))\n    inputs = model.model._feed_inputs + model.model._feed_targets + model.model._feed_sample_weights\n    func = K.function(inputs, [norm])\n    return func\n\ndef main():\n    x = np.random.random((128,)).reshape((-1, 1))\n    y = 2 * x\n    model = Sequential(layers=[Dense(2, input_shape=(1,)),\n                               Dense(1)])\n    model.compile(loss='mse', optimizer='rmsprop')\n    get_gradient = get_gradient_norm_func(model)\n    history = model.fit(x, y, epochs=1)\n    print(get_gradient([x, y, np.ones(len(y))]))\n
\n\n

Execution output:

\n\n
Epoch 1/1\n128/128 [==============================] - 0s - loss: 2.0073     \n[4.4091368]\n
\n\n

Note that since you're using Sequential instead of Model, model.model._feed_* is required instead of model._feed_*.

\n", + "system": "" + }, + { + "instruction": "Is this basic convolution performed in ordinary convolutional neural network?", + "input": "", + "output": "

Yes they are just using the usual convolution. It may be easier for you to look at page 54, where they simplified the equation. Here is the relevant screenshot:

\n

\"enter

\n

As you can see, what they describe here is just the typical convolution operator.

\n", + "system": "" + }, + { + "instruction": "Keras LSTM: Injecting already-known *future* values into prediction", + "input": "", + "output": "

I think a clean way of doing this is to introduce 2*look_ahead new features: for each 0 <= i < look_ahead, the 2*i-th feature is an indicator of whether the value of the i-th future time step is known, and the (2*i+1)-th is the value itself (0 if not known). You can then generate training data with these features so that your model takes the known future values into account.

\n", + "system": "" + }, + { + "instruction": "when do you use Input shape vs batch_shape in keras?", + "input": "", + "output": "
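A tiny sketch of building such features with NumPy (here nan marks an unknown future value; the numbers are hypothetical):

```python
import numpy as np

look_ahead = 3
future = np.array([5.0, np.nan, 7.0])   # nan = unknown future value

features = np.zeros(2 * look_ahead)
for i, v in enumerate(future):
    known = not np.isnan(v)
    features[2 * i] = 1.0 if known else 0.0       # indicator: is step i known?
    features[2 * i + 1] = v if known else 0.0     # the value itself (0 if unknown)

# -> [1, 5, 0, 0, 1, 7]
```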

From the Keras source code:

\n\n
\n

Arguments

\n\n
    shape: A shape tuple (integer), not including the batch size.\n        For instance, `shape=(32,)` indicates that the expected input\n        will be batches of 32-dimensional vectors.\n    batch_shape: A shape tuple (integer), including the batch size.\n        For instance, `batch_shape=(10, 32)` indicates that\n        the expected input will be batches of 10 32-dimensional vectors.\n        `batch_shape=(None, 32)` indicates batches of an arbitrary number\n        of 32-dimensional vectors.\n
\n
\n\n

The batch size is how many examples you have in your training data.

\n\n

You can use either. Personally, I never use \"batch_shape\". When you use \"shape\", your batch can be any size; you don't have to care about it.

\n\n

shape=(32,) means exactly the same as batch_shape=(None,32)

\n", + "system": "" + }, + { + "instruction": "Understanding Keras weight matrix of each layers", + "input": "", + "output": "

You are creating a Dense() layer of 32 units. Dense layers are (as your comment in the code indicates) \"fully-connected layers\", which means each feature in the data is connected to every unit. You also have 2000 features in your data elements.

\n\n

Therefore the array you are getting has 2000 rows, one for each input feature, and each row has 32 weights, one for each unit of the Dense layer, hence the shape you get.

\n\n

From the keras docs we can see the example:

\n\n
model = Sequential()\nmodel.add(Dense(32, input_shape=(16,)))\n# now the model will take as input arrays of shape (*, 16)\n# and output arrays of shape (*, 32)\n\n# after the first layer, you don't need to specify\n# the size of the input anymore:\nmodel.add(Dense(32))\n
\n\n

In your case * is 2000, so your weight matrix has shape (2000, 32), as you are getting. This is the convention Keras uses for kernel weights: (input_dim, units). You could always transpose the array if you need the other orientation, since an (N, M) array has the same number of elements as an (M, N) array.

\n", + "system": "" + }, + { + "instruction": "Keras 1D CNN: How to specify dimension correctly?", + "input": "", + "output": "
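The shape convention can be checked with a plain NumPy version of the Dense forward pass (zero weights used only for the shape check):

```python
import numpy as np

n_features, n_units = 2000, 32
W = np.zeros((n_features, n_units))   # kernel: (input_dim, units), as Keras stores it
b = np.zeros(n_units)

x = np.ones((1, n_features))          # one sample with 2000 features
y = x @ W + b                          # the Dense layer's forward pass (pre-activation)
assert W.shape == (2000, 32)
assert y.shape == (1, 32)
```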
    \n
  1. Yes, your dataset should be a 3d tensor.

  2. \n
  3. The correct input shape (for tensorflow backend) is (sample_number,sample_size,channel_number). You can check that from your error message, \"the expected dimension was (None, 3197, 1)\".

  4. \n
\n\n

'None' refers to an arbitrary-size dimension; it is expected to be the number of samples used in training.

\n\n

So in your situation the correct shape is (570, 3197, 1).

\n\n

If you happen to use the theano backend you should put your channel dimension first:\n(sample_number, channel_number, sample_size), or in your particular case

\n\n

(570,1, 3197)

\n", + "system": "" + }, + { + "instruction": "concatenate (merge) layer keras with tensorflow", + "input": "", + "output": "
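Adding the channel axis is a one-liner with NumPy; a quick sketch of both layouts:

```python
import numpy as np

X = np.random.rand(570, 3197)     # 570 samples, 3197 time steps each
X = X[..., np.newaxis]            # add the channel axis for a 1D CNN
assert X.shape == (570, 3197, 1)  # tensorflow-style channels-last

X_cf = np.moveaxis(X, -1, 1)      # theano-style channels-first, for comparison
assert X_cf.shape == (570, 1, 3197)
```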

You cannot use a Sequential model for creating branches, that doesn't work.

\n\n

You must use the functional API for that:

\n\n
from keras.models import Model    \nfrom keras.layers import *\n
\n\n

It's ok to have each branch as a sequential model, but the fork must be in a Model.

\n\n
#in the functional API you create layers and call them passing tensors to get their output:\n\nconc = Concatenate()([model1.output, model2.output])\n\n    #notice you concatenate outputs, which are tensors.     \n    #you cannot concatenate models\n\n\nout = Dense(100, activation='relu')(conc)\nout = Dropout(0.5)(out)\nout = Dense(10, activation='softmax')(out)\n\nmodelall = Model([model1.input, model2.input], out)\n
\n\n

It wasn't necessary here, but usually you create Input layers in the functional API:

\n\n
inp = Input((shape of the input))\nout = SomeLayer(blbalbalba)(inp)\n....\nmodel = Model(inp,out)\n
\n", + "system": "" + }, + { + "instruction": "dump weights of cnn in json using keras", + "input": "", + "output": "

Keras does not have any built-in way to export the weights to JSON.

\n\n

Solution 1:

\n\n

For now you can easily do it by iterating over the weights and saving it to the JSON file.

\n\n
weights_list = model.get_weights()\n
\n\n

will return a list of all weight tensors in the model, as Numpy arrays.

\n\n

Then, all you have to do next is to iterate over this list and write to the file:

\n\n
for i, weights in enumerate(weights_list):\n    writeJSON(weights)\n
\n\n

Solution 2:

\n\n
import json\nweights_list = model.get_weights()\n# get_weights() returns a Python list of numpy arrays, so convert each one\nprint json.dumps([w.tolist() for w in weights_list])\n
\n", + "system": "" + }, + { + "instruction": "Keras custom metric iteration", + "input": "", + "output": "

If you are comfortable using tensorflow, then you can try using this code instead:

\n\n
def cindex_score(y_true, y_pred):\n\n    g = tf.subtract(tf.expand_dims(y_pred, -1), y_pred)\n    g = tf.cast(g == 0.0, tf.float32) * 0.5 + tf.cast(g > 0.0, tf.float32)\n\n    f = tf.subtract(tf.expand_dims(y_true, -1), y_true) > 0.0\n    f = tf.matrix_band_part(tf.cast(f, tf.float32), -1, 0)\n\n    g = tf.reduce_sum(tf.multiply(g, f))\n    f = tf.reduce_sum(f)\n\n    return tf.where(tf.equal(g, 0), 0.0, g/f)\n
\n\n

Here is some code that verifies that both approaches are equivalent:

\n\n
def _ref(J, K):\n    _sum = 0\n    _pair = 0\n    for _i in range(1, len(J)):\n        for _j in range(0, _i):\n            if _i is not _j:\n                if(J[_i] > J[_j]):\n                  _pair +=1\n                  _sum +=  1* (K[_i] > K[_j]) + 0.5 * (K[_i] == K[_j])\n    return 0 if _pair == 0 else _sum / _pair\n\ndef _raw(J, K):\n\n    g = tf.subtract(tf.expand_dims(K, -1), K)\n    g = tf.cast(g == 0.0, tf.float32) * 0.5 + tf.cast(g > 0.0, tf.float32)\n\n    f = tf.subtract(tf.expand_dims(J, -1), J) > 0.0\n    f = tf.matrix_band_part(tf.cast(f, tf.float32), -1, 0)\n\n    g = tf.reduce_sum(tf.multiply(g, f))\n    f = tf.reduce_sum(f)\n\n    return tf.where(tf.equal(g, 0), 0.0, g/f)\n\n\nfor _ in range(100):\n    with tf.Session() as sess:\n        inputs = [tf.placeholder(dtype=tf.float32),\n                  tf.placeholder(dtype=tf.float32)]\n        D = np.random.randint(low=10, high=1000)\n        data = [np.random.rand(D), np.random.rand(D)]\n\n        r1 = sess.run(_raw(inputs[0], inputs[1]),\n                      feed_dict={x: y for x, y in zip(inputs, data)})\n        r2 = _ref(data[0], data[1])\n\n        assert np.isclose(r1, r2)\n
\n\n

Please note that this only works for 1D-tensors (rarely a case you will have in keras).

\n", + "system": "" + }, + { + "instruction": "ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory", + "input": "", + "output": "

If changing your path variable does not work, running the following in your terminal is what usually solves this for me:

\n\n
sudo ldconfig /usr/local/cuda/lib64\n
\n\n

(sometimes cuda may need to be something like cuda-8.0, depending on your installation)

\n", + "system": "" + }, + { + "instruction": "Keras merge layer warning", + "input": "", + "output": "

Answering my own question:

\n\n
# Custom Merge\ndef euclid_dist(v):\n    return (v[0] - v[1])**2\n\ndef out_shape(shapes):\n    return shapes[0]\n\nmerged_vector = Lambda(euclid_dist, output_shape=out_shape)([l1, l2])\n
\n", + "system": "" + }, + { + "instruction": "How can I use tensorflow metric function within keras models?", + "input": "", + "output": "

OK - I think I found the error. This function returns a tuple (auc, update_op). You need to work around this by defining your own auc metric:

\n\n
def auc_tf_metric(y_true, y_pred):\n    return tensorflow.metrics.auc(y_true, y_pred)[0]\n
\n", + "system": "" + }, + { + "instruction": "How to get the batch size inside lambda layer", + "input": "", + "output": "

Just as @pitfall says, the second argument of K.tile should be a tensor.\nAnd according to the doc of keras backend, K.shape returns a tensor and K.int_shape returns a tuple of int or None entries. So the correct way is to use K.shape. Following is the MWE:

\n\n
import keras.backend as K\nfrom keras.layers import Input, Lambda\nfrom keras.models import Model\nimport numpy as np\n\nbatch_size = 8\nop_len = ip_len = 10\n\ndef func(X):\n    return K.tile(K.mean(X, axis=0, keepdims=True), (K.shape(X)[0], 1))\n\nip = Input((ip_len,))\nlbd = Lambda(lambda x:func(x))(ip)\n\nmodel = Model(ip, lbd)\nmodel.summary()\n\nmodel.compile('adam', loss='mse')\n\nX = np.random.randn(batch_size*100, ip_len)\nY = np.random.randn(batch_size*100, op_len)\n#no parameters to train!\n#model.fit(X,Y,batch_size=batch_size)\n\n#prediction\nnp_result = np.tile(np.mean(X[:batch_size], axis=0, keepdims=True), \n                    (batch_size,1))\npred_result = model.predict(X[:batch_size])\nprint(np.allclose(np_result, pred_result))\n
\n", + "system": "" + }, + { + "instruction": "Keras - Pop and re-add layers, but layer does not disconnect", + "input": "", + "output": "

When you drop layers, you need to recompile your model in order for it to have any effect.

\n\n

So use

\n\n
model.compile(loss=...,optimizer=..., ...)\n
\n\n

before printing the summary and it should integrate the changes correctly.

\n\n

Edit :

\n\n

What you are trying to do is actually really complex with a Sequential model. This is the solution I can come up with (if there is a better one, please tell me) for your Sequential model:

\n\n
model = load_model('model1.h5')\nlayer1 = model.layers.pop() # Copy activation_6 layer\nlayer2 = model.layers.pop() # Copy classification layer (dense_2)\n\nmodel.add(Dense(512, name='dense_3'))\nmodel.add(Activation('softmax', name='activation_7'))\n\n# get layer configs\nlayer1_config = layer1.get_config()\nlayer2_config = layer2.get_config()\n# change the name of the layers otherwise it complains\nlayer1_config['name'] = layer1_config['name'] + '_new'\nlayer2_config['name'] = layer2_config['name'] + '_new'\n\n# import the magic function\nfrom keras.utils.layer_utils import layer_from_config\n# re-add new layers from the config of the old ones\nmodel.add(layer_from_config({'class_name': type(layer2), 'config': layer2_config}))\nmodel.add(layer_from_config({'class_name': type(layer1), 'config': layer1_config}))\n\nmodel.compile(...)\n\nprint(model.summary())\n
\n\n

The hack is in the fact that your layers have layer1.input and layer1.output properties that I couldn't change.

\n\n

A way around that is to use a Functionnal API model. This allows you to define what comes in and what goes out of your layers.

\n\n

First you need to define your pop() function, to properly relink the layers every time you pop one, the function comes from this github issue:

\n\n
def pop_layer(model):\n    if not model.outputs:\n        raise Exception('Sequential model cannot be popped: model is empty.')\n\n    popped_layer = model.layers.pop()\n    if not model.layers:\n        model.outputs = []\n        model.inbound_nodes = []\n        model.outbound_nodes = []\n    else:\n        model.layers[-1].outbound_nodes = []\n        model.outputs = [model.layers[-1].output]\n    model.built = False\n    return popped_layer\n
\n\n

It removes every output link of the last layer and changes the outputs of the model to be the new last layer. Now you can use it in:

\n\n
model = load_model('model1.h5')\nlayer1 = pop_layer(model) # pop activation_6 layer\nlayer2 = pop_layer(model) # pop classification layer (dense_2)\n\n# take the model's output and feed a Dense layer\nh = Dense(512, name='dense_3')(model.outputs[0])\nh = Activation('relu', name='activation_7')(h)\n# re-apply the popped layers\nh = layer2(h)\noutput = layer1(h)\n\nmodel = Model(input=model.input, output=output)\nmodel.compile(...)\nmodel.summary()\n
\n\n

There are probably better solutions than this, but this is what I would do.

\n\n

I hope this helps.

\n", + "system": "" + }, + { + "instruction": "How do stateful bidirectional RNNs work in Keras", + "input": "", + "output": "

One may think about a Bidirectional layer in a following manner:

\n\n
forward = Recurrent(..)(input)\nbackward = Recurrent(..., reverse_input=True)(input)\noutput = merge([forward, backward], ...)\n
\n\n

So - as you can see - you are losing the temporal orientation. You are analysing the input both from its beginning and its end. In this case, setting stateful=True simply makes each branch take its starting state from the previous sample, according to the direction of the bidirectional branch (forward takes from forward, backward takes from backward).

\n\n
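To make the lost orientation concrete, here is a toy numpy illustration (not Keras code, just a sketch of what each branch reads):

```python
import numpy as np

seq = np.arange(5)          # one input sequence: 0 1 2 3 4
forward_view = seq          # what the forward branch processes
backward_view = seq[::-1]   # what the backward branch processes: 4 3 2 1 0

# with stateful=True, the forward branch's final state seeds the forward
# branch on the next batch, and likewise for the backward branch
print(backward_view.tolist())
```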

This makes your model lose the interpretation that samples from consecutive batches can be read as one compact sequence divided into batches.

\n", + "system": "" + }, + { + "instruction": "Accuracy from callback and progress bar in Keras doesnt match", + "input": "", + "output": "

An on_batch_end() callback gets the accuracy of the batch that was just trained on, whereas the logs printed by Keras show the average over all the batches seen so far in the current epoch. You can easily observe that in your logs: say the accuracies of the first 2 batches were 0.0 and 1.0, which makes the overall accuracy over the 2 batches seen 0.5000. That is exactly where the average is calculated.

\n\n
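The progress-bar number can be reproduced with a running mean; a quick pure-Python sketch (the per-batch accuracies below are made up):

```python
# hypothetical accuracies reported by on_batch_end() for each batch
batch_acc = [0.0, 1.0, 0.5, 0.75]

# what the Keras progress bar shows after each batch: the running mean
running = [sum(batch_acc[:i + 1]) / (i + 1) for i in range(len(batch_acc))]
print(running)  # [0.0, 0.5, 0.5, 0.5625]
```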

Also, accuracy as a metric is usually reported from epoch to epoch, so you can change the callback to on_epoch_end().

\n", + "system": "" + }, + { + "instruction": "Hacky way to augment multichannel images in Keras", + "input": "", + "output": "

I tried that anyway and it seems to work without too many side effects. I didn't explore every possible problem, though. So if you have multichannel data (like satellite imagery, etc.) you can try this hack. There are a couple of places where you need to adjust the augmentation code :)

\n", + "system": "" + }, + { + "instruction": "Can't install Keras", + "input": "", + "output": "

Thanks for the support, guys!

\n\n

For whatever reason my conda env was stuck on setuptools-27 even after running

\n\n
pip install --upgrade setuptools\n
\n\n

However, running

\n\n
pip install --upgrade setuptools --ignore-installed\n
\n\n

Got my env up to setuptools-28.8.0 and now have no Keras issues

\n", + "system": "" + }, + { + "instruction": "How to solve loss = Nan issue in Keras LSTM network?", + "input": "", + "output": "

I changed the activation function of the dense layer to 'softmax' (in my case it's a multi-class classification problem), and it works.

\n", + "system": "" + }, + { + "instruction": "Backpropagation through time in stateful RNNs", + "input": "", + "output": "

The backpropagation horizon is limited to the second dimension of the input sequence, i.e. if your data has shape (num_sequences, num_time_steps_per_seq, data_dim), then backprop is done over a time horizon of num_time_steps_per_seq. Take a look at:

\n\n
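For example, splitting one long series into sub-sequences fixes that horizon (the numbers here are illustrative):

```python
import numpy as np

series = np.arange(12.0).reshape(12, 1)   # 12 time steps, data_dim = 1

# reshape to (num_sequences, num_time_steps_per_seq, data_dim)
num_time_steps_per_seq = 4
x = series.reshape(-1, num_time_steps_per_seq, 1)
print(x.shape)  # (3, 4, 1) -> gradients flow across at most 4 steps
```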
\n

https://github.com/fchollet/keras/issues/3669

\n
\n", + "system": "" + }, + { + "instruction": "How can you load all batch data into GPU memory in Keras (Theano backend)?", + "input": "", + "output": "

Just hard-wire your data into the model as a non-trainable embedding matrix (Embedding layer with your custom initializer). Then instead of the training data you pass a bunch of indices to the model.fit method.

\n", + "system": "" + }, + { + "instruction": "How to recover original values after a model predict in keras?", + "input": "", + "output": "

OK, I think I understood your problem correctly, so I will try to explain how to deal with data normalization:

\n\n

1. Assumption about the distribution of inputs and outputs: usually in neural network training you assume that your data (both input and output) comes from some probability distributions: let's call them X for the input and Y for the output. There are good reasons to make these distributions zero mean and unit standard deviation during the training phase.

\n\n

2. Statistical part of data normalization and recovery: because of that, you have to solve another task while training your network: estimating the mean and standard deviation of both the input distribution X and the output distribution Y. You do that by simply computing the empirical mean and standard deviation of your training data.

\n\n

3. Application phase - inputs: when you apply your model to new input you are also assuming that it comes from distribution X, so you need to standardize it to zero mean and unit standard deviation as well. Here is the subtle part: you could use both the training set and the new data to obtain an even better estimate of the mean and standard deviation of X, but to avoid overfitting in the validation case you usually reuse the mean and standard deviation obtained during the training phase to standardize new data.

\n\n

4. Application phase - outputs: this part is trickier, because when you apply your network to new standardized inputs you get outputs from Y* ~ (Y - mean'(Y)) / sd'(Y), where mean'(Y) and sd'(Y) are the estimates of the mean and standard deviation obtained empirically from your training set and Y is the original distribution of your output. This is because during training you fed your optimizer output data from this standardized distribution. So to restore your outputs you need to apply the transformation Y* * sd'(Y) + mean'(Y), which is the inverse of the standardization transformation.

\n\n

SUMMARY:

\n\n

Your training and application phases look as follows:

\n\n
    \n
  1. You obtain the statistics needed for both the training phase and the application phase by computing the empirical mean and standard deviation of your training inputs (mean'(X) and sd'(X)) and of your outputs (mean'(Y) and sd'(Y)). It's important to store them because they will be needed in the application phase.
  2. \n
  3. You standardize both your input and output data to zero mean and unit standard deviation and train your model on them.
  4. \n
  5. During the application phase you standardize your inputs by subtracting the stored mean'(X) and dividing by the stored sd'(X), and feed them to the network to obtain new outputs Y*.
  6. \n
  7. You destandardize your outputs using the stored mean'(Y) and sd'(Y) - obtained during the training phase - via the transformation Y* * sd'(Y) + mean'(Y).
  8. \n
\n\n
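The whole standardize/destandardize round trip can be sketched in numpy (all names and numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
y_train = rng.normal(loc=5.0, scale=2.0, size=100)

# training phase: estimate and store mean'(Y) and sd'(Y)
mean_y, sd_y = y_train.mean(), y_train.std()
y_std = (y_train - mean_y) / sd_y        # what the network is trained on

# application phase: destandardize (fake) network outputs
y_pred_std = y_std[:5]                   # pretend these came from the model
y_pred = y_pred_std * sd_y + mean_y      # Y* * sd'(Y) + mean'(Y)
print(np.allclose(y_pred, y_train[:5]))  # True: the transform is inverted
```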

I hope that this answer solves your problem and leaves you with no doubts about the details of standardization and destandardization of your data :)

\n", + "system": "" + }, + { + "instruction": "TypeError: 'numpy.float64' object is not iterable Keras", + "input": "", + "output": "

As you can see here :

\n\n

https://github.com/fchollet/keras/blob/master/keras/engine/training.py

\n\n
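The unpacking failure itself can be reproduced in plain Python, with a toy scalar standing in for the returned loss:

```python
value = 0.25  # a scalar, like evaluate() returns when no extra metrics are configured

try:
    loss, acc = value   # the failing pattern from the question
    unpacked = True
except TypeError:
    unpacked = False    # a single float cannot be unpacked into two names

print(unpacked)  # False
```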

The evaluate method returns only the test loss (or losses). So assigning the result of this method to a pair results in an error.

\n", + "system": "" + }, + { + "instruction": "Keras CNN images and kernel size mismatch even after image transformation to fit", + "input": "", + "output": "

You specified the input as (depth, width, height). So the array you pass must have dimensions (N, depth, width, height), where N is the number of training examples.

\n\n
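Reordering the axes is a transpose; here is a small numpy sketch (resizing each image to the target width/height is a separate step, and the shapes are only illustrative):

```python
import numpy as np

x = np.zeros((4, 232, 300, 3))      # channels-last batch, as in the question

# move the channel axis to the front: (N, depth, width, height)
x_cf = np.transpose(x, (0, 3, 1, 2))
print(x_cf.shape)  # (4, 3, 232, 300)
```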

The input you are actually passing, (4000, 232, 300, 3), doesn't match. It should be reshaped to be (4000, depth, width, height). This means you'll have to resize each image, and reorder the axes.

\n", + "system": "" + }, + { + "instruction": "How can you load all batch data into GPU memory in Keras (Theano backend)?", + "input": "", + "output": "

Just hard-wire your data into the model as a non-trainable embedding matrix (Embedding layer with your custom initializer). Then instead of the training data you pass a bunch of indices to the model.fit method.

\n", + "system": "" + }, + { + "instruction": "keras AssertionError: Duplicate registrations for type 'experimentalOptimizer'", + "input": "", + "output": "

Same problem here: I completely uninstalled the libraries (nightly...) and reinstalled the latest stable version (pip3 install tensorflow).\nI'm on Linux, don't have your configuration, and don't know which libraries are installed.\nAnyway, I hope this helps.

\n

So: uninstall keras, tensorboard, tensorflow, and tensorflow-gpu if it's installed; likewise tb-nightly, keras-nightly, tf-nightly.

\n

Reinstall tensorflow without version specifiers; it will take care of the needed libraries.

\n", + "system": "" + }, + { + "instruction": "Filling up shuffle buffer (this may take a while)", + "input": "", + "output": "

Note that this is not an error, but a log message: https://github.com/tensorflow/tensorflow/blob/42b5da6659a75bfac77fa81e7242ddb5be1a576a/tensorflow/core/kernels/data/shuffle_dataset_op.cc#L138

\n

It seems your shuffle buffer may be too large if it's taking this long: https://github.com/tensorflow/tensorflow/issues/30646

\n

You can address this by lowering your buffer size: https://support.huawei.com/enterprise/en/doc/EDOC1100164821/2610406b/what-do-i-do-if-training-times-out-due-to-too-many-dataset-shuffle-operations

\n", + "system": "" + }, + { + "instruction": "Create a weighted MSE loss function in Tensorflow", + "input": "", + "output": "

I want to underline that you have two possibilities depending on your problem:

\n

[1] If the weights are equal for all your samples:

\n

You can build a loss wrapper. Here a dummy example:

\n
n_sample = 200\nX = np.random.uniform(0,1, (n_sample,10))\ny = np.random.uniform(0,1, (n_sample,100))\nW = np.random.uniform(0,1, (100,)).astype('float32')\n\ndef custom_loss_wrapper(weights):\n    def loss(true, pred):\n        sum_weights = tf.reduce_sum(weights) * tf.cast(tf.shape(pred)[0], tf.float32)\n        resid = tf.sqrt(tf.reduce_sum(weights * tf.square(true - pred)))\n        return resid/sum_weights\n    return loss\n\ninp = Input((10,))\nx = Dense(256)(inp)\npred = Dense(100)(x)\n\nmodel = Model(inp, pred)\nmodel.compile('adam', loss=custom_loss_wrapper(W))\n\nmodel.fit(X, y, epochs=3)\n
\n

[2] If the weights are different between samples:

\n

You should build your model using add_loss in order to dynamically take into account the weights for each sample. Here a dummy example:

\n
n_sample = 200\nX = np.random.uniform(0,1, (n_sample,10))\ny = np.random.uniform(0,1, (n_sample,100))\nW = np.random.uniform(0,1, (n_sample,100))\n\ndef custom_loss(true, pred, weights):\n    sum_weights = tf.reduce_sum(weights)\n    resid = tf.sqrt(tf.reduce_sum(weights * tf.square(true - pred)))\n    return resid/sum_weights\n\ninp = Input((10,))\ntrue = Input((100,))\nweights = Input((100,))\nx = Dense(256)(inp)\npred = Dense(100)(x)\n\nmodel = Model([inp,true,weights], pred)\nmodel.add_loss(custom_loss(true, pred, weights))\nmodel.compile('adam', loss=None)\n\nmodel.fit([X,y,W], y=None, epochs=3)\n
\n

When using add_loss you should pass all the tensors involved in the loss as input layers and pass them inside the loss for the computation.

\n

At inference time you can compute predictions as always, simply removing the true and weights as input:

\n
final_model = Model(model.input[0], model.output)\nfinal_model.predict(X)\n
\n", + "system": "" + }, + { + "instruction": "AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'", + "input": "", + "output": "

Imports work perfectly with TensorFlow 2.4.1. Try this way:

\n
from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, LSTM\n
\n", + "system": "" + }, + { + "instruction": "How to suppress all autograph warnings from Tensorflow?", + "input": "", + "output": "

Try this:

\n
tf.autograph.set_verbosity(0)\n
\n

If that doesn't work, perhaps the logging module could be of help here:

\n
import logging\nlogging.getLogger("tensorflow").setLevel(logging.ERROR) # this goes *before* tf import\nimport tensorflow as tf\n
\n

Thanks @Shahrokh Bah for adding that comment.

\n", + "system": "" + }, + { + "instruction": "model_config = json_utils.decode(model_config.decode('utf-8')) AttributeError: 'str' object has no attribute 'decode'", + "input": "", + "output": "

There seems to be an issue with h5py and it has to be installed in a certain version to ensure compatibility with tensorflow. This is what did the trick for me:

\n
pip uninstall h5py\npip install h5py==2.10.0\n
\n", + "system": "" + }, + { + "instruction": "What is the index argument from the __getitem__(...) method in tf.keras.utils.Sequence?", + "input": "", + "output": "

I think the index argument of __getitem__ is related to the total number of samples and the batch_size you assigned.

\n

For example, I'm playing with fer2013plus dataset now and for testing I have 3944 images. My test_generator is created like this:

\n
test_generator = ImageDataGenerator().flow_from_directory(test_dir,\n                                                  target_size=(48,48),\n                                                  color_mode='grayscale',\n                                                  batch_size=32,\n                                                  class_mode='categorical')\n
\n
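The valid index range follows directly from the sample count and batch size:

```python
import math

n_samples, batch_size = 3944, 32
n_batches = math.ceil(n_samples / batch_size)
print(n_batches)  # 124 -> valid __getitem__ indices are 0..123
```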

When I call test_generator.__getitem__(), the valid index range is 0 to 123. Otherwise an error pops out: ValueError: Asked to retrieve element 124, but the Sequence has length 124

\n", + "system": "" + }, + { + "instruction": "Error using shap with SimpleRNN sequential model", + "input": "", + "output": "

The owner of the shap repo said:

\n
\n

The fundamental issue here is that DeepExplainer does not yet support TF 2.0.

\n
\n

That was on 11 Dec 2019. Is this still the case? Try it with Tensorflow 1.15 and see if that works.

\n

Another issue on the shap repo about this (2 Jun 2020) says:

\n
\n

Alright, thank you. I did not see the post by Lundberg. I will stick to the workaround of using TF 1.15 until a new version of SHAP is released.

\n
\n", + "system": "" + }, + { + "instruction": "TF 2.3.0 training keras model using tf dataset with sample weights does not apply to metrics", + "input": "", + "output": "

Upon some debugging and doc reading, I found there's a weighted_metrics argument in .compile, which I should use instead of metrics=. I confirmed this fixed my test case in the shared colab.

\n
model.compile(optimizer = Adam(learning_rate=1e-4),\n             loss = SparseCategoricalCrossentropy(),\n             weighted_metrics = [SparseCategoricalAccuracy()])\n
\n", + "system": "" + }, + { + "instruction": "How to identify the activation function at a specific layer of a loaded keras model?", + "input": "", + "output": "

You need to access the .activation attribute of each layer (if it has one). Try this code sample:

\n
for i, layer in enumerate (model.layers):\n    print (i, layer)\n    try:\n        print ("    ",layer.activation)\n    except AttributeError:\n        print('   no activation attribute')\n
\n

Output example:

\n
0 <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x000001E72A499C88>\n     <function relu at 0x000001E727D9E558>\n1 <tensorflow.python.keras.layers.pooling.MaxPooling2D object at 0x000001E72A49C388>\n   no activation attribute\n2 <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x000001E72A49AB48>\n     <function relu at 0x000001E727D9E558>\n3 <tensorflow.python.keras.layers.pooling.MaxPooling2D object at 0x000001E72A49A3C8>\n   no activation attribute\n4 <tensorflow.python.keras.layers.core.Flatten object at 0x000001E72A48CD88>\n   no activation attribute\n5 <tensorflow.python.keras.layers.core.Dropout object at 0x000001E72A484D88>\n   no activation attribute\n6 <tensorflow.python.keras.layers.core.Dense object at 0x000001E72A484A48>\n     <function softmax at 0x000001E727D95E58>\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow DecodeJPEG: Expected image (JPEG, PNG, or GIF), got unknown format starting with '\\000\\000\\000\\000\\000\\000\\000\\00'", + "input": "", + "output": "

It's hard to know exactly what is going on without having the file at hand, but what is probably happening here is that you have files in your dataset that have either a .jpg, .jpeg, .png or .gif extension but that are not actually JPEG, PNG of GIF images. Thus, TensorFlow isn't able to properly load them.

\n

One way to overcome this problem would be to check your files that are supposedly images and get rid of the ones that aren't actual JPEG, PNG or GIF images.

\n

Checking if a file is a valid JPEG, PNG or GIF image is definitely more complicated than it seems, but checking for the file signature / magic number (that is, the first few bytes of your file) is a good start and should most of the time solve your problems.

\n

So, practically, you could do so in many different ways, one of which being checking for each picture individually if it is valid or not, with some function of this sort:

\n
def is_image(filename, verbose=False):\n\n    data = open(filename,'rb').read(10)\n\n    # check if file is JPG or JPEG\n    if data[:3] == b'\\xff\\xd8\\xff':\n        if verbose == True:\n             print(filename+" is: JPG/JPEG.")\n        return True\n\n    # check if file is PNG\n    if data[:8] == b'\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a':\n        if verbose == True:\n             print(filename+" is: PNG.")\n        return True\n\n    # check if file is GIF\n    if data[:6] in [b'\\x47\\x49\\x46\\x38\\x37\\x61', b'\\x47\\x49\\x46\\x38\\x39\\x61']:\n        if verbose == True:\n             print(filename+" is: GIF.")\n        return True\n\n    return False\n
\n

You would then be able to get rid of your non valid images by doing something like this (this would delete your non valid images):

\n
import os\n\n# go through all files in the desired folder\nfor filename in os.listdir(folder):\n    path = os.path.join(folder, filename)\n    # check if the file is actually an image file\n    if is_image(path, verbose=False) == False:\n        # if the file is not valid, remove it\n        os.remove(path)\n
\n

Now, as I said, this will probably solve your problem, but please note that the is_image function will not be able to tell for sure whether a file can or cannot be read as a JPG, JPEG, PNG or GIF image. It is only a quick and dirty solution that will get rid of the vast majority of such errors, but not all of them.

\n", + "system": "" + }, + { + "instruction": "How to set custom weights in layers?", + "input": "", + "output": "

You can do it using set_weights method.

\n\n

For example, if you want to set the weights of your LSTM layer, it can be accessed using model.layers[0], and if your custom weights are, say, in an array named my_weights_matrix, then you can set them on the first layer (LSTM) using the code shown below:

\n\n
model.layers[0].set_weights([my_weights_matrix])\n
\n\n

If you don't want your weights to be modified during training, then you have to freeze that layer using model.layers[0].trainable = False.

\n\n

Please let me know if you face any other issue and I will be Happy to Help you.

\n\n

Hope this helps. Happy Learning!

\n", + "system": "" + }, + { + "instruction": "Machine learning regression model predicts same value for every image", + "input": "", + "output": "

People saying regression and CNN are two completely different things clearly have missed some basic learnings in their ML course. Yes they are completely different! But should not be compared ;)

\n

A CNN is a type of deep neural network that became quite famous for its use on images. It is a framework for solving problems, and it can solve both regression AND classification problems.

\n

Regression refers to the type of output you are predicting. So comparing the two directly is quite stupid to be honest.

\n

I can't comment on the specific people misleading you in this section, since I need a certain number of reputation points to do so.

\n

However, back to the problem. Do you encounter this problem before or after saving the model? If you encounter it before, I would try scaling your output values to an easier distribution. If it happens after you save, I would look into the versions of your framework and the documentation of how they save models.

\n

It could also just be that there is no information in the pictures.

\n", + "system": "" + }, + { + "instruction": "Tensorflow 2: Getting "WARNING:tensorflow:9 out of the last 9 calls to <function> triggered tf.function retracing. Tracing is expensive"", + "input": "", + "output": "

TL;DR: The root cause of this error is a change in the shape of train_data, which varies from batch to batch. Fixing the size/shape of train_data resolves this tracing warning. I changed the following line and then everything worked as expected. The full gist is here

\n
padded_shapes = ([9000], ())#None.\n
\n

Details:

\n

As mentioned in the warning message

\n
\n

WARNING:tensorflow:10 out of the last 11 calls to <function train_step\nat 0x7f4825f6d400> triggered tf.function retracing. Tracing is\nexpensive and the excessive number of tracings could be due to (1)\ncreating @tf.function repeatedly in a loop, (2) passing tensors with\ndifferent shapes, (3) passing Python objects instead of tensors. For\n(1), please define your @tf.function outside of the loop. For (2),\n@tf.function has experimental_relax_shapes=True option that relaxes\nargument shapes that can avoid unnecessary retracing.

\n
\n

this retracing warning happens for one of the three reasons mentioned in the warning message. Reason (1) is not the root cause because @tf.function is not called in a loop, and reason (3) is not the root cause either because both arguments of train_step and test_step are tensor objects. So the root cause is reason (2) mentioned in the warning.

\n

When I printed the size of train_data, it printed different sizes. So I tried to pad train_data so that the shape is the same for all batches.

\n
 padded_shapes = ([9000], ())#None.  # this line throws tracing error as the shape of text is varying for each step in an epoch.\n    # as the data size is varying, tf.function will start retracing it\n    # For the demonstration, I used 9000 as max length, but please change it accordingly \n
\n", + "system": "" + }, + { + "instruction": "Keras: Validation accuracy stays the exact same but validation loss decreases", + "input": "", + "output": "

A couple of observations for your problem:

\n
    \n
  1. Though not particularly familiar with the dataset, I trust that it is used in many circumstances without problems. However, you could check its balance. In train_test_split() there is a parameter called stratify which, if fed the y, will ensure that each class is proportionally represented in both the training set and the test set.
  2. \n
  3. Your phenomenon with validation loss and validation accuracy is not out of the ordinary. Imagine that in the first epochs the neural network considers some ground-truth positive examples (with GT == 1) with 55% confidence. As training advances, the network learns better and is now 90% confident for the same ground-truth positive examples. Since the threshold for calculating the accuracy is 50%, in both situations you have the same accuracy. Nevertheless, the loss has changed significantly, since 90% >> 55%.
  4. \n
  5. Your training seems to advance (slowly but surely). Have you considered using Adam as an off-the-shelf optimizer?
  6. \n
  7. If the low accuracy is maintained over many epochs, you may well be suffering from a well-known phenomenon called underfitting, in which your model is unable to capture the dependencies in your data. To mitigate/avoid underfitting altogether, you may want to use a more complex model (e.g. 2 stacked LSTMs/GRUs).
  8. \n
  9. At this stage, remove the Dropout() layer, since you have underfitting, not overfitting.
  10. \n
  11. Decrease the batch_size. A very big batch_size can lead to local minima, rendering your network unable to properly learn/generalize.
  12. \n
  13. If none of these work, try starting with a lower learning rate, say 0.00001 instead of 0.0001.
  14. \n
  15. Reiterate over the dataset preprocessing steps. Ensure the sentences are converted properly.
  16. \n
\n", + "system": "" + }, + { + "instruction": "ValueError: ('Unrecognized keyword arguments:', dict_keys(['ragged'])) while loading keras model", + "input": "", + "output": "

Just adding onto what Justin mentioned already. This issue was encountered when I tried to run a inference script using a model trained using TensorFlow 2.3.1 in an environment that had TensorFlow 1.15 installed.

\n", + "system": "" + }, + { + "instruction": "Question about Permutation Importance on LSTM Keras", + "input": "", + "output": "

eli5's scikit-learn implementation for determining permutation importance can only process 2d arrays, while Keras' LSTM layers require 3d arrays. This error is a known issue, but there appears to be no solution yet.

\n\n

I understand this does not really answer your question of getting eli5 to work with LSTM (because it currently can't), but I encountered the same problem and used another library called SHAP to get the feature importance of my LSTM model. Here is some of my code to help you get started:

\n\n
import shap\nDE = shap.DeepExplainer(model, X_train) # X_train is a 3d numpy.ndarray\nshap_values = DE.shap_values(X_validate, check_additivity=False) # X_validate is a 3d numpy.ndarray\n\nshap.initjs()\nshap.summary_plot(\n    shap_values[0], \n    X_validate,\n    feature_names=list_of_your_columns_here,\n    max_display=50,\n    plot_type='bar')\n\n
\n\n

Here is an example of the graph which you can get:

\n\n

\"enter

\n\n

Hope this helps.

\n", + "system": "" + }, + { + "instruction": "Difficulty in GAN training", + "input": "", + "output": "

Mode collapse results in the generator finding a few values, or a small range of values, that do best at fooling the discriminator. Since your range of generated values is fairly narrow, I believe you are experiencing mode collapse. You can train for different durations and plot the results to see when collapse occurs. Sometimes, if you train long enough, it will fix itself and start learning again. There are a billion recommendations on how to train GANs; I collected a bunch and then brute-forced my way through them for each GAN. You could try training the discriminator only every other cycle, in order to give the generator a chance to learn. Also, several people recommend not training the discriminator on real and fake data at the same time (I haven't done it, so I can't say what, if any, the impact is). You might also want to try adding some batch normalization layers. Jason Brownlee has a bunch of good articles on training GANs; you may want to start there.

\n", + "system": "" + }, + { + "instruction": "experimental_list_devices attribute missing in tensorflow_core._api.v2.config", + "input": "", + "output": "

I found the answer here - https://github.com/keras-team/keras/issues/13684.\nI had the same issue for load_model() from keras under Anaconda:

\n\n
\n

AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'experimental_list_devices'

\n
\n\n

I found the source of the problem in

\n\n
\n

...\\anaconda3\\envs\\tf_env\\Lib\\site-packages\\keras\\backend\\tensorflow_backend.py

\n
\n\n

In line 506 I changed the line

\n\n
_LOCAL_DEVICES = tf.config.experimental_list_devices()\n
\n\n

to

\n\n
devices = tf.config.list_logical_devices()\n\n_LOCAL_DEVICES = [x.name for x in devices]\n
\n\n

And it works

\n", + "system": "" + }, + { + "instruction": "How to apply l2 normalization to a layer in keras?", + "input": "", + "output": "

Do as the example you mentioned. It's ok.

\n\n

You need a Layer for every operation in the model, backend operations are no exception, and that's the reason for the Lambda layer. (Keras needs layers to do its magic).

\n\n
import keras.backend as K\n\nautoencoder.add(Lambda(lambda x: K.l2_normalize(x,axis=1)))\n
\n\n

If you are planning to get the encoder's output later for other things, I suggest you create the encoder and decoder as separate models:

\n\n
encoder = Sequential()\n#.... add encoder layers ....\n#encoder.add(....)\n\ndecoder = Sequential()\n#.... add decoder layers ....\n#decoder.add(....)\n
\n\n

Finally create the autoencoder as another model, for training:

\n\n
from keras.layers import Input\nfrom keras.models import Model\n\nautoInput = Input(shape_of_the_encoder_input)\nencodedData = encoder(autoInput)\ndecodedData = decoder(encodedData)\n\nautoencoder = Model(autoInput, decodedData)\n
\n", + "system": "" + }, + { + "instruction": "What is the reason of taking batch size or number of neurons as a power of 2 in neural networks?", + "input": "", + "output": "

If you use NVIDIA GPUs (the most popular choice for deep learning), the choice of channel size for convolutions and fully-connected layers mostly has to do with enabling Tensor cores, which as the name implies are used for efficient Tensor and matrix operations (and therefore for convolutions). To quote the NVIDIA guide on performance for deep learning:

\n\n
\n

Choose the number of input and output channels to be divisible by 8 to enable Tensor Cores

\n
\n\n

Similar guidelines are given regarding batch size, however the reason for those is quantization.

\n", + "system": "" + }, + { + "instruction": "ModuleNotFoundError: No module named 'tensorflow.keras'", + "input": "", + "output": "

That's because you have an older version of TensorFlow.

\n
pip install tensorflow==1.14.0\n
\n

This will uninstall the previous version and will install 1.14.0.

\n

This worked for me :)

\n", + "system": "" + }, + { + "instruction": "Batch normalization destroys validation performances", + "input": "", + "output": "

I'm not sure if this is what you are asking, but batch normalization is still active during validation, it's just that the parameters are defined and set during training and not altered during validation.

\n\n

As for why batch normalization is not good for your model/problem in general: it's like any hyperparameter, some work well in some scenarios and not well in others. Do you know if this is the best placement for BN within your network? Other than that, I would need to know more about your data and problem to make any further guesses.

\n", + "system": "" + }, + { + "instruction": "How to visualize RNN/LSTM gradients in Keras/TensorFlow?", + "input": "", + "output": "

Gradients can be fetched w.r.t. weights or outputs - we'll be needing latter. Further, for best results, an architecture-specific treatment is desired. Below code & explanations cover every possible case of a Keras/TF RNN, and should be easily expandable to any future API changes.

\n\n
\n\n

Completeness: code shown is a simplified version - the full version can be found at my repository, See RNN (this post included w/ bigger images); included are:

\n\n\n\n
\n\n

I/O dimensionalities (all RNNs):

\n\n\n\n
\n\n

Visualization methods:

\n\n\n\n
# for below examples\ngrads = get_rnn_gradients(model, x, y, layer_idx=1) # return_sequences=True\ngrads = get_rnn_gradients(model, x, y, layer_idx=2) # return_sequences=False\n
\n\n
\n\n

EX 1: one sample, uni-LSTM, 6 units -- return_sequences=True, trained for 20 iterations
\nshow_features_1D(grads[0], n_rows=2)

\n\n\n\n

\"enter

\n\n
\n\n

EX 2: all (16) samples, uni-LSTM, 6 units -- return_sequences=True, trained for 20 iterations
\nshow_features_1D(grads, n_rows=2)
\nshow_features_2D(grads, n_rows=4, norm=(-.01, .01))

\n\n\n\n

\"enter\n\"enter

\n\n
\n\n

EX 3: all (16) samples, uni-LSTM, 6 units -- return_sequences=True, trained for 200 iterations
\nshow_features_1D(grads, n_rows=2)
\nshow_features_2D(grads, n_rows=4, norm=(-.01, .01))

\n\n\n\n

\"enter\n\"enter

\n\n
\n\n

EX 4: 2D vs. 1D, uni-LSTM: 256 units, return_sequences=True, trained for 200 iterations
\nshow_features_1D(grads[0])
\nshow_features_2D(grads[:, :, 0], norm=(-.0001, .0001))

\n\n\n\n

\"enter

\n\n
\n\n

EX 5: bi-GRU, 256 units (512 total) -- return_sequences=True, trained for 400 iterations
\nshow_features_2D(grads[0], norm=(-.0001, .0001), reflect_half=True)

\n\n\n\n

\n\n
\n\n

EX 6: 0D, all (16) samples, uni-LSTM, 6 units -- return_sequences=False, trained for 200 iterations
\nshow_features_0D(grads)

\n\n\n\n

\n\n
\n\n

EX 7: LSTM vs. GRU vs. SimpleRNN, unidir, 256 units -- return_sequences=True, trained for 250 iterations
\nshow_features_2D(grads, n_rows=8, norm=(-.0001, .0001), show_xy_ticks=[0,0], show_title=False)

\n\n\n\n

\"enter

\n\n
\n\n

Visualization functions:

\n\n
def get_rnn_gradients(model, input_data, labels, layer_idx=None, layer_name=None, \n                      sample_weights=None):\n    layer = _get_layer(model, layer_idx, layer_name)\n\n    grads_fn = _make_grads_fn(model, layer)\n    sample_weights = sample_weights or np.ones(len(input_data))\n    grads = grads_fn([input_data, sample_weights, labels, 1])\n\n    while type(grads) == list:\n        grads = grads[0]\n    return grads\n\ndef _make_grads_fn(model, layer):\n    grads = model.optimizer.get_gradients(model.total_loss, layer.output)\n    return K.function(inputs=[model.inputs[0],  model.sample_weights[0],\n                              model._feed_targets[0], K.learning_phase()], outputs=grads) \n\ndef _get_layer(model, layer_idx=None, layer_name=None):\n    if layer_idx is not None:\n        return model.layers[layer_idx]\n\n    layer = [layer for layer in model.layers if layer_name in layer.name]\n    if len(layer) > 1:\n        print(\"WARNING: multiple matching layer names found; \"\n              + \"picking earliest\")\n    return layer[0]\n\n\ndef show_features_1D(data, n_rows=None, label_channels=True,\n                     equate_axes=True, max_timesteps=None, color=None,\n                     show_title=True, show_borders=True, show_xy_ticks=[1,1], \n                     title_fontsize=14, channel_axis=-1, \n                     scale_width=1, scale_height=1, dpi=76):\n    def _get_title(data, show_title):\n        if len(data.shape)==3:\n            return \"((Gradients vs. Timesteps) vs. Samples) vs. Channels\"\n        else:        \n            return \"((Gradients vs. Timesteps) vs. 
Channels\"\n\n    def _get_feature_outputs(data, subplot_idx):\n        if len(data.shape)==3:\n            feature_outputs = []\n            for entry in data:\n                feature_outputs.append(entry[:, subplot_idx-1][:max_timesteps])\n            return feature_outputs\n        else:\n            return [data[:, subplot_idx-1][:max_timesteps]]\n\n    if len(data.shape)!=2 and len(data.shape)!=3:\n        raise Exception(\"`data` must be 2D or 3D\")\n\n    if len(data.shape)==3:\n        n_features = data[0].shape[channel_axis]\n    else:\n        n_features = data.shape[channel_axis]\n    n_cols = int(n_features / n_rows)\n\n    if color is None:\n        n_colors = len(data) if len(data.shape)==3 else 1\n        color = [None] * n_colors\n\n    fig, axes = plt.subplots(n_rows, n_cols, sharey=equate_axes, dpi=dpi)\n    axes = np.asarray(axes)\n\n    if show_title:\n        title = _get_title(data, show_title)\n        plt.suptitle(title, weight='bold', fontsize=title_fontsize)\n    fig.set_size_inches(12*scale_width, 8*scale_height)\n\n    for ax_idx, ax in enumerate(axes.flat):\n        feature_outputs = _get_feature_outputs(data, ax_idx)\n        for idx, feature_output in enumerate(feature_outputs):\n            ax.plot(feature_output, color=color[idx])\n\n        ax.axis(xmin=0, xmax=len(feature_outputs[0]))\n        if not show_xy_ticks[0]:\n            ax.set_xticks([])\n        if not show_xy_ticks[1]:\n            ax.set_yticks([])\n        if label_channels:\n            ax.annotate(str(ax_idx), weight='bold',\n                        color='g', xycoords='axes fraction',\n                        fontsize=16, xy=(.03, .9))\n        if not show_borders:\n            ax.set_frame_on(False)\n\n    if equate_axes:\n        y_new = []\n        for row_axis in axes:\n            y_new += [np.max(np.abs([col_axis.get_ylim() for\n                                     col_axis in row_axis]))]\n        y_new = np.max(y_new)\n        for row_axis in axes:\n     
       [col_axis.set_ylim(-y_new, y_new) for col_axis in row_axis]\n    plt.show()\n\n\ndef show_features_2D(data, n_rows=None, norm=None, cmap='bwr', reflect_half=False,\n                     timesteps_xaxis=True, max_timesteps=None, show_title=True,\n                     show_colorbar=False, show_borders=True, \n                     title_fontsize=14, show_xy_ticks=[1,1],\n                     scale_width=1, scale_height=1, dpi=76):\n    def _get_title(data, show_title, timesteps_xaxis, vmin, vmax):\n        if timesteps_xaxis:\n            context_order = \"(Channels vs. %s)\" % \"Timesteps\"\n        if len(data.shape)==3:\n            extra_dim = \") vs. Samples\"\n            context_order = \"(\" + context_order\n        return \"{} vs. {}{} -- norm=({}, {})\".format(context_order, \"Timesteps\",\n                                                     extra_dim, vmin, vmax)\n\n    vmin, vmax = norm or (None, None)\n    n_samples = len(data) if len(data.shape)==3 else 1\n    n_cols = int(n_samples / n_rows)\n\n    fig, axes = plt.subplots(n_rows, n_cols, dpi=dpi)\n    axes = np.asarray(axes)\n\n    if show_title:\n        title = _get_title(data, show_title, timesteps_xaxis, vmin, vmax)\n        plt.suptitle(title, weight='bold', fontsize=title_fontsize)\n\n    for ax_idx, ax in enumerate(axes.flat):\n        img = ax.imshow(data[ax_idx], cmap=cmap, vmin=vmin, vmax=vmax)\n        if not show_xy_ticks[0]:\n            ax.set_xticks([])\n        if not show_xy_ticks[1]:\n            ax.set_yticks([])\n        ax.axis('tight')\n        if not show_borders:\n            ax.set_frame_on(False)\n\n    if show_colorbar:\n        fig.colorbar(img, ax=axes.ravel().tolist())\n\n    plt.gcf().set_size_inches(8*scale_width, 8*scale_height)\n    plt.show()\n\n\ndef show_features_0D(data, marker='o', cmap='bwr', color=None,\n                     show_y_zero=True, show_borders=False, show_title=True,\n                     title_fontsize=14, markersize=15, markerwidth=2,\n     
                channel_axis=-1, scale_width=1, scale_height=1):\n    if color is None:\n        cmap = cm.get_cmap(cmap)\n        cmap_grad = np.linspace(0, 256, len(data[0])).astype('int32')\n        color = cmap(cmap_grad)\n        color = np.vstack([color] * data.shape[0])\n    x = np.ones(data.shape) * np.expand_dims(np.arange(1, len(data) + 1), -1)\n\n    if show_y_zero:\n        plt.axhline(0, color='k', linewidth=1)\n    plt.scatter(x.flatten(), data.flatten(), marker=marker,\n                s=markersize, linewidth=markerwidth, color=color)\n    plt.gca().set_xticks(np.arange(1, len(data) + 1), minor=True)\n    plt.gca().tick_params(which='minor', length=4)\n\n    if show_title:\n        plt.title(\"(Gradients vs. Samples) vs. Channels\",\n                  weight='bold', fontsize=title_fontsize)\n    if not show_borders:\n        plt.box(None)\n    plt.gcf().set_size_inches(12*scale_width, 4*scale_height)\n    plt.show()\n
\n\n
\n\n

Full minimal example: see repository's README

\n\n
\n\n

Bonus code:

\n\n\n\n
rnn_cell = model.layers[1].cell          # unidirectional\nrnn_cell = model.layers[1].forward_layer # bidirectional; also `backward_layer`\nprint(rnn_cell.__dict__)\n
\n\n

For more convenient code, see repo's rnn_summary

\n\n
\n\n

Bonus fact: if you run above on GRU, you may notice that bias has no gates; why so? From docs:

\n\n
\n

There are two variants. The default one is based on 1406.1078v3 and has reset gate applied to hidden state before matrix multiplication. The other one is based on original 1406.1078v1 and has the order reversed.

\n \n

The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU. Thus it has separate biases for kernel and recurrent_kernel. Use 'reset_after'=True and recurrent_activation='sigmoid'.

\n
\n", + "system": "" + }, + { + "instruction": "Keras: TPU models must have constant shapes for all operations", + "input": "", + "output": "

I agree with the comments - to get it to work you would need to adjust the various variable output shapes (e.g. None, None, 768) to fixed sizes (other than the first batch dimension). Maybe you could do this with simple padding. If you can loop through the saved model layers and load the weights to a new model that you write with padded dimensions, it may even work. I would say that's more trouble than it's worth, considering TPU-ready versions are already available.

\n\n

I suggest moving away from Keras for this model. The official TensorFlow XLNet implementation should work with TPUs without modification. It also comes with pre-trained checkpoints. https://github.com/zihangdai/xlnet

\n\n

It uses the standard TPUEstimator class to send the model function to the TPU worker, so you won't need to mess around with tf.contrib.tpu.keras_to_tpu_model.

\n\n

The example given in the repository can be run in colab where $TPU_NAME is $COLAB_TPU_ADDR and you upload the pretrained checkpoints and the imdb data to a bucket colab can access.

\n\n
python run_classifier.py \\\n  --use_tpu=True \\\n  --tpu=${TPU_NAME} \\\n  --do_train=True \\\n  --do_eval=True \\\n  --eval_all_ckpt=True \\\n  --task_name=imdb \\\n  --data_dir=${IMDB_DIR} \\\n  --output_dir=${GS_ROOT}/proc_data/imdb \\\n  --model_dir=${GS_ROOT}/exp/imdb \\\n  --uncased=False \\\n  --spiece_model_file=${LARGE_DIR}/spiece.model \\\n  --model_config_path=${GS_ROOT}/${LARGE_DIR}/model_config.json \\\n  --init_checkpoint=${GS_ROOT}/${LARGE_DIR}/xlnet_model.ckpt \\\n  --max_seq_length=512 \\\n  --train_batch_size=32 \\\n  --eval_batch_size=8 \\\n  --num_hosts=1 \\\n  --num_core_per_host=8 \\\n  --learning_rate=2e-5 \\\n  --train_steps=4000 \\\n  --warmup_steps=500 \\\n  --save_steps=500 \\\n  --iterations=500\n
\n", + "system": "" + }, + { + "instruction": "TensorFlow 2.0 Keras layers with custom tensors as variables", + "input": "", + "output": "

Answer based on the comment below

\n

Given you have:

\n
kernel = createTheKernelVarBasedOnWhatYouWant() #shape (784, 64)\nbias = createTheBiasVarBasedOnWhatYouWant() #shape (64,)\n
\n

Make a simple function copying the code from Dense:

\n
def custom_dense(x):\n    inputs, kernel, bias = x\n\n    outputs = K.dot(inputs, kernel)\n    outputs = K.bias_add(outputs, bias, data_format='channels_last')\n    return outputs\n
\n

Use the function in a Lambda layer:

\n
layer = Lambda(custom_dense)\nhiddens = layer([x, kernel, bias])\n
\n
\n

Warning: kernel and bias must be produced from a Keras layer, or come from a kernel = Input(tensor=the_kernel_var) and bias = Input(tensor=bias_var)

\n
\n
\n

If the warning above is bad for you, you can always use kernel and bias "from outside", like:

\n
def custom_dense(inputs):\n    outputs = K.dot(inputs, kernel) #where kernel is not part of the arguments anymore\n    outputs = K.bias_add(outputs, bias, data_format='channels_last')\n    return outputs\n\nlayer = Lambda(custom_dense)\nhiddens = layer(x)\n
\n

This last option makes it a bit more complicated to save/load models.

\n

Old answer

\n

You should probably use a Keras Dense layer and set its weights in a standard way:

\n
layer = tf.keras.layers.Dense(64, name='the_layer')\nlayer.set_weights([np.random.rand(784, 64), np.random.rand(64)])\n
\n

If you need these weights to be non-trainable, then before compiling the Keras model you set:

\n
model.get_layer('the_layer').trainable=False\n
\n

If you want direct access to the variables as tensors, they are:

\n
kernel = layer.kernel    \nbias = layer.bias\n
\n
\n

There are plenty of other options, but that depends on your exact intention, which is not clear in your question.

\n", + "system": "" + }, + { + "instruction": "What does "model.trainable = False" mean in Keras?", + "input": "", + "output": "

A Keras Model is trainable by default - you have two means of freezing all the weights:

\n\n
    \n
  1. model.trainable = False before compiling the model
  2. \n
  3. for layer in model.layers: layer.trainable = False - works before & after compiling
  4. \n
\n\n

(1) must be done before compilation since Keras treats model.trainable as a boolean flag at compiling, and performs (2) under the hood. After doing either of the above, you should see:

\n\n
print(model.trainable_weights)\n# [] \n
\n\n
\n\n

Regarding the docs, likely outdated - see linked source code above, up-to-date.

\n", + "system": "" + }, + { + "instruction": "Variable tf.Variable has 'None' for gradient in TensorFlow Probability", + "input": "", + "output": "

I expect it's because you're using TensorFlow 2, are you? It isn't fully supported yet. If so, downgrading to 1.14 should get it working.

\n", + "system": "" + }, + { + "instruction": "AttributeError: 'Conv2D' object has no attribute 'shape'", + "input": "", + "output": "

You forgot to pass an input argument to x2 in the fourth line; the same goes for x3 and x4. So instead of writing

\n\n
x2= layers.Conv2D(32,(3,3), activation='relu')\n
\n\n

You should have

\n\n
x2= layers.Conv2D(32,(3,3), activation='relu')(x1)\n
\n", + "system": "" + }, + { + "instruction": "Keras: How to obtain confidence of prediction class?", + "input": "", + "output": "

In Keras, model.predict() actually returns you the confidence(s). So in the code snippet, you may want to print q to see the entire array with all confidence levels.

\n

np.argmax(x) gives you the argument (position) in the array where x has the max value.
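For example, with NumPy (here q stands in for the probability vector that model.predict would return for one sample):

```python
import numpy as np

# Hypothetical softmax output for one sample over three classes.
q = np.array([0.1, 0.7, 0.2])

predicted_class = np.argmax(q)   # position of the highest probability
confidence = q[predicted_class]  # the confidence of that prediction
```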

\n", + "system": "" + }, + { + "instruction": "Reversing the image preprocessing of VGG in Keras to return original image", + "input": "", + "output": "

The \"mode\" of the preprocess_input function depends on the framework that the pretrained network weights were trained on. The VGG19 network in Keras uses the weights from the original VGG19 model in caffe, and for this reason, the argument in preprocess_input should be the default (mode='caffe'). See this question: Keras VGG16 preprocess_input modes

\n\n

For your purposes, use the preprocess_input function that is found in keras.applications.vgg19 and reverse engineer it from there.

\n\n

The original preprocessing is found here: https://github.com/keras-team/keras-applications/blob/master/keras_applications/imagenet_utils.py#L21

\n\n

This involves 1) Converting the image(s) from RGB to BGR 2) Subtracting the dataset mean from the image(s)

\n\n

Here is the code to restore the original image:

\n\n
def restore_original_image_from_array(x, data_format='channels_first'):\n    mean = [103.939, 116.779, 123.68]\n\n    # Zero-center by mean pixel\n    if data_format == 'channels_first':\n        if x.ndim == 3:\n            x[0, :, :] += mean[0]\n            x[1, :, :] += mean[1]\n            x[2, :, :] += mean[2]\n        else:\n            x[:, 0, :, :] += mean[0]\n            x[:, 1, :, :] += mean[1]\n            x[:, 2, :, :] += mean[2]\n    else:\n        x[..., 0] += mean[0]\n        x[..., 1] += mean[1]\n        x[..., 2] += mean[2]\n\n    if data_format == 'channels_first':\n        # 'BGR'->'RGB'\n        if x.ndim == 3:\n            x = x[::-1, ...]\n        else:\n            x = x[:, ::-1, ...]\n    else:\n        # 'BGR'->'RGB'\n        x = x[..., ::-1]\n\n    return x\n
\n", + "system": "" + }, + { + "instruction": "GRU/LSTM in Keras with input sequence of varying length", + "input": "", + "output": "

The concept you've drawn in your question is a pretty good start already. I'll add a few things to make it work, as well as a code example below:

\n\n\n\n

The following is a sample implementation w.r.t. the above considerations. Note that I modified your sample data a bit, in order to provide more \"reasoning\" behind group choices. Also, each person needs to perform at least one interaction before choosing a group (i.e. the input sequence cannot be empty); if this is not the case for your data, then introducing an additional no-op interaction (e.g. 0) can help.

\n\n
import pandas as pd\nimport tensorflow as tf\n\nmodel = tf.keras.models.Sequential()\nmodel.add(tf.keras.layers.LSTM(10, input_shape=(None, 2)))  # LSTM for arbitrary length series.\nmodel.add(tf.keras.layers.Dense(3, activation='softmax'))   # Softmax for class probabilities.\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')\n\n# Example interactions:\n#   * 1: Likes the group,\n#   * 2: Dislikes the group,\n#   * 3: Chooses the group.\ndf = pd.DataFrame([\n    [1, 1, 3],\n    [1, 1, 3],\n    [1, 2, 2],\n    [1, 3, 3],\n    [2, 2, 1],\n    [2, 2, 3],\n    [2, 1, 2],\n    [2, 3, 2],\n    [3, 1, 1],\n    [3, 1, 1],\n    [3, 1, 1],\n    [3, 2, 3],\n    [3, 2, 2],\n    [3, 3, 1]],\n    columns=['person', 'interaction', 'group']\n)\ndata = [person[1][['interaction', 'group']].values for person in df.groupby('person')]\nx_train = [x[:-1] for x in data]\ny_train = tf.keras.utils.to_categorical([x[-1, 1]-1 for x in data])  # Expects class labels from 0 to n (-> subtract 1).\nprint(x_train)\nprint(y_train)\n\nclass TrainGenerator(tf.keras.utils.Sequence):\n    def __init__(self, x, y):\n        self.x = x\n        self.y = y\n\n    def __len__(self):\n        return len(self.x)\n\n    def __getitem__(self, index):\n        # Need to expand arrays to have batch size 1.\n        return self.x[index][None, :, :], self.y[index][None, :]\n\nmodel.fit_generator(TrainGenerator(x_train, y_train), epochs=1000)\npred = [model.predict(x[None, :, :]).ravel() for x in x_train]\nfor p, y in zip(pred, y_train):\n    print(p, y)\n
\n\n

And the corresponding sample output:

\n\n
[...]\nEpoch 1000/1000\n3/3 [==============================] - 0s 40ms/step - loss: 0.0037\n[0.00213619 0.00241093 0.9954529 ] [0. 0. 1.]\n[0.00123938 0.99718493 0.00157572] [0. 1. 0.]\n[9.9632275e-01 7.5039308e-04 2.9268670e-03] [1. 0. 0.]\n
\n\n
\n\n

Using custom generator expressions: According to the documentation we can use any generator to yield the data. The generator is expected to yield batches of the data and loop over the whole data set indefinitely. When using tf.keras.utils.Sequence we do not need to specify the parameter steps_per_epoch as this will default to len(train_generator). Hence, when using a custom generator, we shall provide this parameter as well:

\n\n
import itertools as it\n\nmodel.fit_generator(((x_train[i % len(x_train)][None, :, :],\n                      y_train[i % len(y_train)][None, :]) for i in it.count()),\n                    epochs=1000,\n                    steps_per_epoch=len(x_train))\n
\n", + "system": "" + }, + { + "instruction": "keras.load_model() can't recognize Tensorflow's activation functions", + "input": "", + "output": "

You need to add custom objects

\n\n
tf.keras.models.load_model('model', custom_objects={'leaky_relu': tf.nn.leaky_relu})\n
\n", + "system": "" + }, + { + "instruction": "How to support masking in custom tf.keras.layers.Layer", + "input": "", + "output": "
    \n
  1. To support masking one should implement the compute_mask method inside the custom layer

  2. \n
  3. To access the mask, simply add as the second positional argument in the call method the argument mask, and it will be accessible (ex. call(self, inputs, mask=None))

  4. \n
  5. This cannot be guessed; it is the responsibility of the layer before to calculate the mask

  6. \n
  7. Once you have implemented compute_mask, passing the mask to the next layer happens automatically - except in the case of model subclassing, where it is up to you to calculate masks and pass them on.

  8. \n
\n\n

Example:

\n\n
class MyCustomKerasLayers(tf.keras.layers.Layer):\n    def __init__(self, .......):\n        ...\n\n    def compute_mask(self, inputs, mask=None):\n        # Just pass the received mask from previous layer, to the next layer or \n        # manipulate it if this layer changes the shape of the input\n        return mask\n\n    def call(self, input, mask=None):\n        # using 'mask' you can access the mask passed from the previous layer\n
\n\n

Notice that this example just passes on the mask, if the layer will output a shape different than the one received, you should change the mask accordingly in compute_mask to pass on the correct one

\n\n

EDIT

\n\n

Now explanation is also included in the tf.keras masking and padding documentation.

\n", + "system": "" + }, + { + "instruction": "Multiple metrics to specific inputs", + "input": "", + "output": "

Yes, you can pass the losses/metrics as a dictionary that maps layer name to a loss/metrics.

\n\n

A quote from the documentation:

\n\n
\n

loss: ... If the model has multiple outputs, you can use a different\n loss on each output by passing a dictionary or a list of losses. The\n loss value that will be minimized by the model will then be the sum of\n all individual losses.

\n
\n\n

and

\n\n
\n

metrics: ... To specify different metrics for different\n outputs of a multi-output model, you could also pass a dictionary,\n such as metrics={'output_a': 'accuracy'}.

\n
\n\n

Example:

\n\n
model.compile(\n   optimizer='rmsprop',\n   loss={'output_1': 'loss_1', 'output_2': 'loss_2'},\n   loss_weights={'output_1': 1., 'output_2': 0.2},\n   metrics={'output_1': 'metric_1', 'output_2': ['metric_2', 'metric_3']})\n
\n\n

You can read more about multi-output model with Keras in: https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models

\n", + "system": "" + }, + { + "instruction": "Modify ResNet50 output layer for regression", + "input": "", + "output": "

Your code is throwing the error because you're using the original fully-connected top layer that was trained to classify images into one of 1000 classes. To make the network work, you need to replace this top layer with your own, which should have a shape compatible with your dataset and task.

\n\n

Here is a small snippet I was using to create an ImageNet pre-trained model for the regression task (face landmarks prediction) with Keras:

\n\n
NUM_OF_LANDMARKS = 136\n\ndef create_model(input_shape, top='flatten'):\n    if top not in ('flatten', 'avg', 'max'):\n        raise ValueError('unexpected top layer type: %s' % top)\n\n    # connects base model with new \"head\"\n    BottleneckLayer = {\n        'flatten': Flatten(),\n        'avg': GlobalAvgPooling2D(),\n        'max': GlobalMaxPooling2D()\n    }[top]\n\n    base = InceptionResNetV2(input_shape=input_shape,\n                             include_top=False, \n                             weights='imagenet')\n\n    x = BottleneckLayer(base.output)\n    x = Dense(NUM_OF_LANDMARKS, activation='linear')(x)\n    model = Model(inputs=base.inputs, outputs=x)\n    return model\n
\n\n

In your case, I guess you only need to replace InceptionResNetV2 with ResNet50. Essentially, you are creating a pre-trained model without top layers:

\n\n
base = ResNet50(input_shape=input_shape, include_top=False)\n
\n\n

And then attaching your custom layer on top of it:

\n\n
x = Flatten()(base.output)\nx = Dense(NUM_OF_LANDMARKS, activation='sigmoid')(x)\nmodel = Model(inputs=base.inputs, outputs=x)\n
\n\n

That's it.

\n\n

You also can check this link from the Keras repository that shows how ResNet50 is constructed internally. I believe it will give you some insights about the functional API and layers replacement.

\n\n
\n\n

Also, I would say that both regression and classification tasks are not that different if we're talking about fine-tuning pre-trained ImageNet models. The type of task mostly depends on your loss function and the top layer's activation function. Otherwise, you still have a fully-connected layer with N outputs but they are interpreted in a different way.

\n", + "system": "" + }, + { + "instruction": "Deep learnin on Google Colab: loading large image dataset is very long, how to accelerate the process?", + "input": "", + "output": "

Not sure if you solved the issue; I was having the same problem. \nCalling os.listdir on the particular data folder before running the CNN worked for me.

\n\n
print(os.listdir(\"./drive/My Drive/Colab Notebooks/dataset\"))\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow CPU memory problem (allocation exceeds 10% of system memory)", + "input": "", + "output": "

In my experience, a common cause of this type of issue is that we are using a reasonable batch size in training but trying to use a larger batch size (usually the whole dataset) when evaluating.

\n\n

I have found myself doing this sort of thing in error:

\n\n
model.fit(x_train, y_train, epochs=5, batch_size=10)\nmodel.evaluate(x_test, y_test)\n
\n\n

whereas we really need to do this:

\n\n
\nmodel.fit(x_train, y_train, epochs=5, batch_size=10)\nmodel.evaluate(x_test, y_test, batch_size=10)\n
\n", + "system": "" + }, + { + "instruction": "Add SVM to last layer", + "input": "", + "output": "

This should work for adding svm as last layer.

\n\n
inp = Input((train_X.shape[1], train_X.shape[2]))\nlstm = LSTM(1, return_sequences=False)(inp)\noutput = Dense(train_Y.shape[1], activation='softmax', W_regularizer=l2(0.01))(lstm)\n\nmodel = Model(inputs=inp, outputs=output)\nmodel.compile(loss='hinge', optimizer='adam', metrics=['accuracy'])\nmodel.fit(train_X, train_Y, validation_split=.20, epochs=2, batch_size=50)\n
\n\n

Here I have used hinge as the loss, considering a binary categorical target. But if you have more than two classes, then you can consider using categorical_hinge

\n", + "system": "" + }, + { + "instruction": "Multiple inputs to Keras Sequential model", + "input": "", + "output": "

You can't do this using Sequential API. That's because of two reasons:

\n\n
    \n
  1. Sequential models, as their name suggests, are a sequence of layers where each layer is connected directly to its previous layer and therefore they cannot have branches (e.g. merge layers, multiple input/output layers, skip connections, etc.).

  2. \n
  3. The add() method of Sequential API accepts a Layer instance as its argument and not a Tensor instance. In your example merged is a Tensor (i.e. concatenation layer's output).

  4. \n
\n\n

Further, the correct way of using Concatenate layer is like this:

\n\n
merged = Concatenate()([x, y])\n
\n\n

However, you can also use concatenate (note the lowercase \"c\"), its equivalent functional interface, like this:

\n\n
merged = concatenate([x, y])\n
\n\n

Finally, to be able to construct that third model you also need to use the functional API.

\n", + "system": "" + }, + { + "instruction": "Understanding multivariate time series classification with Keras", + "input": "", + "output": "

I believe the input shape for Keras should be:

\n\n

input_shape=(number_of_samples, nb_time_steps, max_nb_features).

\n\n

And most often nb_time_steps = 1

\n\n
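As a sketch, reshaping a flat table of multivariate observations into the (samples, timesteps, features) layout expected by Keras recurrent layers can be done with NumPy (the variable names here are illustrative):

```python
import numpy as np

n_samples, nb_time_steps, max_nb_features = 4, 1, 3

# Flat data: one row per sample, one column per feature.
flat = np.arange(n_samples * max_nb_features, dtype=float).reshape(n_samples, max_nb_features)

# Add the time-step axis so each sample becomes a (timesteps, features) sequence.
x = flat.reshape(n_samples, nb_time_steps, max_nb_features)
print(x.shape)  # (4, 1, 3)
```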

P.S.: I tried solving a very similar problem for an internship position (but my results turned out to be wrong). You may take a look here: https://github.com/AbbasHub/Deep_Learning_LSTM/blob/master/2018-09-22_Multivariate_LSTM.ipynb (see if you can spot my mistake!)

\n", + "system": "" + }, + { + "instruction": "ReduceLROnPlateau fallback to the previous weights with the minimum acc_loss", + "input": "", + "output": "

Here's a working example following @nuric's direction:

\n
from tensorflow.python.keras.callbacks import ReduceLROnPlateau\nfrom tensorflow.python.platform import tf_logging as logging\n\nclass ReduceLRBacktrack(ReduceLROnPlateau):\n    def __init__(self, best_path, *args, **kwargs):\n        super(ReduceLRBacktrack, self).__init__(*args, **kwargs)\n        self.best_path = best_path\n\n    def on_epoch_end(self, epoch, logs=None):\n        current = logs.get(self.monitor)\n        if current is None:\n            logging.warning('Reduce LR on plateau conditioned on metric `%s` '\n                            'which is not available. Available metrics are: %s',\n                             self.monitor, ','.join(list(logs.keys())))\n        elif not self.monitor_op(current, self.best): # not new best\n            if not self.in_cooldown(): # and we're not in cooldown\n                if self.wait+1 >= self.patience: # going to reduce lr\n                    # load best model so far\n                    print(\"Backtracking to best model before reducing LR\")\n                    self.model.load_weights(self.best_path)\n\n        super().on_epoch_end(epoch, logs) # actually reduce LR\n
\n

The ModelCheckpoint callback can be used to update the best model dump, e.g. pass the following two callbacks to model fit:

\n
model_checkpoint_path = <path to checkpoint>\nc1 = ModelCheckpoint(model_checkpoint_path, \n                     save_best_only=True,\n                     monitor=...)\nc2 = ReduceLRBacktrack(best_path=model_checkpoint_path, monitor=...)\n
\n", + "system": "" + }, + { + "instruction": "Do I need a stateful or stateless LSTM?", + "input": "", + "output": "

Stateful LSTM is used when the whole sequence plays a part in forming the output. Taking an extreme case; you might have 1000-length sequence, and the very first character of that sequence is what actually defines the output:

\n\n

Stateful\nIf you were to batch this into 10 x 100 length sequences, then with a stateful LSTM the connections (state) between sequences in the batch would be retained, and it would (with enough examples) learn that the first character plays a significant role in determining the output. In effect, sequence length is immaterial because the network's state is persisted across the whole stretch of data; you simply batch it as a means of supplying the data.

\n\n

Stateless\nDuring training, the state is reset after each sequence. So in the example I've given, the network wouldn't learn that it's the first character of the 1000-length sequences that defines the output, because it would never see the long-term dependency because the first character and the final output value are in separate sequences, and the state isn't retained between the sequences.

\n\n

Summary\nWhat you need to determine is whether there is likely to be dependency of data at the end of your time-series being affected by what potentially happened right at the start.

\n\n

I would say that it's actually quite rare that there are such long-term dependencies like that, and what you're probably better off doing is using a stateless LSTM, while setting sequence length as a hyperparameter to find which sequence length best models the data, i.e. provides the most accurate validation results.
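The difference can be sketched with a toy recurrence in plain Python (illustrative only, not an actual LSTM): a "stateful" run carries the hidden state across sequence chunks, while a "stateless" run resets it for every chunk:

```python
def run_chunks(chunks, stateful):
    """Toy recurrence h = h + x; returns the final hidden state after each chunk."""
    h = 0.0
    outputs = []
    for chunk in chunks:
        if not stateful:
            h = 0.0  # stateless: reset state at the start of every sequence
        for x in chunk:
            h = h + x
        outputs.append(h)
    return outputs

chunks = [[1.0, 1.0], [1.0, 1.0]]
print(run_chunks(chunks, stateful=True))   # state carried over between chunks
print(run_chunks(chunks, stateful=False))  # state reset for each chunk
```

With state carried over, the second chunk "sees" the effect of the first; without it, each chunk is processed in isolation, mirroring why long-range dependencies are invisible to a stateless model.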

\n", + "system": "" + }, + { + "instruction": "How do I determine the binary class predicted by a convolutional neural network on Keras?", + "input": "", + "output": "

You are doing binary classification, so you have a Dense layer consisting of one unit with a sigmoid activation function. The sigmoid function outputs a value in the range [0,1], which corresponds to the probability of the given sample belonging to the positive class (i.e. class one). Everything below 0.5 is labeled zero (i.e. the negative class) and everything above 0.5 is labeled one. So to find the predicted class you can do the following:

\n\n
preds = model.predict(data)\nclass_one = preds > 0.5\n
\n\n

The true elements of class_one correspond to samples labeled with one (i.e. positive class).

\n\n

Bonus: to find the accuracy of your predictions you can easily compare class_one with the true labels:

\n\n
acc = np.mean(class_one == true_labels)\n
\n\n

Note that I have assumed that true_labels consists of zeros and ones.

\n\n
\n\n

Further, if your model were defined using Sequential class, then you could easily use predict_classes method:

\n\n
pred_labels = model.predict_classes(data)\n
\n\n

However, since you are using the Keras functional API to construct your model (which is a very good thing to do, in my opinion), you can't use the predict_classes method since it is ill-defined for such models.

\n", + "system": "" + }, + { + "instruction": "Opening Keras model with embedding layer in Tensorflow in Golang", + "input": "", + "output": "

So, it turns out that I did not need to specify the inputs for the embedding layers. I was actually structuring the input incorrectly. It should look like this:

\n\n
tensor1, _ := tf.NewTensor([][]int32{tokes_text})\ntensor2, _ := tf.NewTensor([][]int32{tokes_title})\n\n\nresult := model.Exec([]tf.Output{\n            model.Op(\"dense_18/Sigmoid\", 0),\n        }, map[tf.Output]*tf.Tensor{\n            model.Op(\"input_3\", 0): tensor1,\n            model.Op(\"input_4\", 0): tensor2,\n        })\n
\n", + "system": "" + }, + { + "instruction": "Tensorflow, Keras: How to create a trainable variable that only update in specific positions?", + "input": "", + "output": "

You have two different tools to address this problem.

\n\n
    \n
  1. You can create the variables you need and rearrange them into the desired form.
  2. \n
  3. You can create more variables than you need then discard some to reach the desired form.
  4. \n
\n\n

Both approaches are not exclusive, and you could use a mix of successive steps of type #1 and #2.

\n\n

For example, for your first example (diagonal matrix), we can use approach #1.

\n\n
w = tf.Variable(tf.zeros(n))\nA = tf.diag(w) # creates a diagonal matrix with elements of w\n
\n\n

For your second, more complex example, we could use approach #2.

\n\n
A = tf.Variable(tf.zeros((n, n)))\nA = tf.matrix_band_part(A, 1, 1) # keep only the central band of width 3\nA = tf.matrix_set_diag(A, tf.ones(n)) # set diagonal to 1\n
\n", + "system": "" + }, + { + "instruction": "Keras, TensorFlow : "TypeError: Cannot interpret feed_dict key as Tensor"", + "input": "", + "output": "

I found that there are a couple of workarounds, depending on various context:

\n\n
    \n
  1. Using clear_session() function:

    \n\n
    from keras import backend as K\n
    \n\n

    Then do following at the beginning or at the end of the function, after predicting all the data:

    \n\n
    K.clear_session()\n
  2. \n
  3. Calling _make_predict_function():

    \n\n

    After you load your trained model call:

    \n\n
    model._make_predict_function()\n
    \n\n

    See explanation

  4. \n
  5. Disable threading:

    \n\n

    If you are running django server use this command:

    \n\n
    python manage.py runserver --nothreading \n
    \n\n

    For flask use this:

    \n\n
    flask run --without-threads\n
  6. \n
\n\n

If none of the above solutions work, check these links keras issue#6462, keras issue#2397

\n", + "system": "" + }, + { + "instruction": "Why is ReLU used in regression with Neural Networks?", + "input": "", + "output": "

The general reason for using non-linear activation functions in hidden layers is that, without them, no matter how many layers or how many units per layer, the network would behave just like a single linear unit. This is nicely explained in this short video by Andrew Ng: Why do you need non-linear activation functions?

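A quick numerical check of this point (a NumPy sketch, independent of any Keras model): composing two linear layers is exactly one linear layer, since the weight matrices simply multiply together:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # first "layer" weights
W2 = rng.standard_normal((2, 4))   # second "layer" weights
x = rng.standard_normal(3)

# Two stacked linear layers...
two_layers = W2 @ (W1 @ x)
# ...equal a single linear layer with weights W2 @ W1.
one_layer = (W2 @ W1) @ x
assert np.allclose(two_layers, one_layer)
```

Only a non-linearity between the layers breaks this collapse.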
\n

In your case, looking more closely, you'll see that the activation function of your final layer is not the relu as in your hidden layers, but the linear one (which is the default activation when you don't specify anything, like here):

\n\n
keras.layers.Dense(1)\n
\n

From the Keras docs:

\n
\n

Dense

\n

[...]

\n

Arguments

\n

[...]

\n

activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x).

\n
\n

which is indeed what is expected for a regression network with a single continuous output.

\n", + "system": "" + }, + { + "instruction": "How to implement Merge from Keras.layers", + "input": "", + "output": "

Merge cannot be used with a sequential model. In a sequential model, layers can only have one input and one output. \nYou have to use the functional API, something like this. I assumed you use the same input layer for modela and modelb, but you could create another Input() if that is not the case and give both of them as inputs to the model.

\n\n
def linear_model_combined(optimizer='Adadelta'):    \n\n    # declare input\n    inlayer =Input(shape=(100, 34))\n    flatten = Flatten()(inlayer)\n\n    modela = Dense(1024)(flatten)\n    modela = Activation('relu')(modela)\n    modela = Dense(512)(modela)\n\n    modelb = Dense(1024)(flatten)\n    modelb = Activation('relu')(modelb)\n    modelb = Dense(512)(modelb)\n\n    model_concat = concatenate([modela, modelb])\n\n\n    model_concat = Activation('relu')(model_concat)\n    model_concat = Dense(256)(model_concat)\n    model_concat = Activation('relu')(model_concat)\n\n    model_concat = Dense(4)(model_concat)\n    model_concat = Activation('softmax')(model_concat)\n\n    model_combined = Model(inputs=inlayer,outputs=model_concat)\n\n    model_combined.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])\n\n    return model_combined\n
\n", + "system": "" + }, + { + "instruction": "How to stack layers in Keras without using Sequential()?", + "input": "", + "output": "

Not sure if I've got your question right, but I guess that you could use the functional API and concatenate or add layers as shown in Keras applications, like ResNet50 or InceptionV3, to build \"non-sequential\" networks.

\n\n

UPDATE

\n\n

In one of my projects, I was using something like this. I had a custom layer (it was not implemented in my version of Keras, so I've just manually \"backported\" the code into my notebook).

\n\n
class LeakyReLU(Layer):\n    \"\"\"Leaky version of a Rectified Linear Unit backported from newer Keras \n    version.\"\"\"\n\n    def __init__(self, alpha=0.3, **kwargs):\n        super(LeakyReLU, self).__init__(**kwargs)\n        self.supports_masking = True\n        self.alpha = K.cast_to_floatx(alpha)\n\n    def call(self, inputs):\n        return tf.maximum(self.alpha * inputs, inputs)\n\n    def get_config(self):\n        config = {'alpha': float(self.alpha)}\n        base_config = super(LeakyReLU, self).get_config()\n        return dict(list(base_config.items()) + list(config.items()))\n\n    def compute_output_shape(self, input_shape):\n        return input_shape\n
\n\n

Then, the model:

\n\n
def create_model(input_shape, output_size, alpha=0.05, reg=0.001):\n    inputs = Input(shape=input_shape)\n\n    x = Conv2D(16, (3, 3), padding='valid', strides=(1, 1), \n               kernel_regularizer=l2(reg), kernel_constraint=maxnorm(3),\n               activation=None)(inputs)\n    x = BatchNormalization()(x)\n    x = LeakyReLU(alpha=alpha)(x)\n    x = MaxPooling2D(pool_size=(2, 2))(x)\n\n    x = Conv2D(32, (3, 3), padding='valid', strides=(1, 1),\n               kernel_regularizer=l2(reg), kernel_constraint=maxnorm(3),\n               activation=None)(x)\n    x = BatchNormalization()(x)\n    x = LeakyReLU(alpha=alpha)(x)\n    x = MaxPooling2D(pool_size=(2, 2))(x)\n\n    x = Conv2D(64, (3, 3), padding='valid', strides=(1, 1),\n               kernel_regularizer=l2(reg), kernel_constraint=maxnorm(3),\n               activation=None)(x)\n    x = BatchNormalization()(x)\n    x = LeakyReLU(alpha=alpha)(x)\n    x = MaxPooling2D(pool_size=(2, 2))(x)\n\n    x = Conv2D(128, (3, 3), padding='valid', strides=(1, 1),\n               kernel_regularizer=l2(reg), kernel_constraint=maxnorm(3),\n               activation=None)(x)\n    x = BatchNormalization()(x)\n    x = LeakyReLU(alpha=alpha)(x)\n    x = MaxPooling2D(pool_size=(2, 2))(x)\n\n    x = Conv2D(256, (3, 3), padding='valid', strides=(1, 1),\n               kernel_regularizer=l2(reg), kernel_constraint=maxnorm(3),\n               activation=None)(x)\n    x = BatchNormalization()(x)\n    x = LeakyReLU(alpha=alpha)(x)\n    x = MaxPooling2D(pool_size=(2, 2))(x)\n\n    x = Flatten()(x)\n    x = Dense(500, activation='relu', kernel_regularizer=l2(reg))(x)\n    x = Dense(500, activation='relu', kernel_regularizer=l2(reg))(x)\n    x = Dense(500, activation='relu', kernel_regularizer=l2(reg))(x)\n    x = Dense(500, activation='relu', kernel_regularizer=l2(reg))(x)\n    x = Dense(500, activation='relu', kernel_regularizer=l2(reg))(x)\n    x = Dense(500, activation='relu', kernel_regularizer=l2(reg))(x)\n    x = 
Dense(output_size, activation='linear', kernel_regularizer=l2(reg))(x)\n\n    model = Model(inputs=inputs, outputs=x)\n\n    return model\n
\n\n

Finally, a custom metric:

\n\n
def root_mean_squared_error(y_true, y_pred):\n    return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))\n
\n\n

I was using the following snippet to create and compile the model:

\n\n
model = create_model(input_shape=X.shape[1:], output_size=y.shape[1])\nmodel.compile(loss=root_mean_squared_error, optimizer='adamax')\n
\n\n

As usual, I was using a checkpoint callback to save the model. To load the model, you need to pass the custom layers classes and metrics as well into load_model function:

\n\n
def load_custom_model(path):\n    return load_model(path, custom_objects={\n        'LeakyReLU': LeakyReLU,\n        'root_mean_squared_error': root_mean_squared_error\n    })\n
\n\n

Does it help?

\n", + "system": "" + }, + { + "instruction": "Keras Word2Vec implementation", + "input": "", + "output": "

I followed the same tutorial, and the loss drops once the algorithm goes through a sample again. Note that the loss function is calculated only for the current target and context word pair. In the code example from the tutorial, one epoch covers only one sample, so you would need more epochs than the number of target and context words before reaching a point where the loss drops.

\n\n

I implemented the training part with the following line

\n\n
model.fit([word_target, word_context], labels, epochs=5)\n
\n\n

Be warned that this can take a long time depending on how large the corpus is. The train_on_batch function gives you more control during training: you can vary the batch size or select specific samples at every step of the training.

\n", + "system": "" + }, + { + "instruction": "Activity regularizer with softmax?", + "input": "", + "output": "

Due to the Softmax, the contribution of the L1-Regularization to the total cost is in fact constant.

\n\n

However, the gradient of the regularization term is non-zero and equals the number of non-zero activations (the gradient of abs is sign, so we have a sum of signs of activations, which are positive due to the softmax).

\n\n

You can try to run with and without the L1-term and check how many non-zero elements you end up with.

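You can verify the constant-contribution claim numerically with a small NumPy sketch (illustrative only, independent of any particular model):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

# Softmax outputs are all positive and sum to 1, so the L1 penalty
# sum(|a_i|) is exactly 1 for ANY input -- a constant added to the cost.
for x in [np.array([1.0, -2.0, 3.0]), np.zeros(5), np.random.randn(10)]:
    a = softmax(x)
    assert np.isclose(np.abs(a).sum(), 1.0)
```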
\n", + "system": "" + }, + { + "instruction": "Difference between shape and int_shape in Keras", + "input": "", + "output": "
import keras.backend as K \nimport numpy as np\n\nx = K.variable(np.eye(2, 2)) \n\nprint('shape:', K.shape(x))\nprint('int_shape:', K.int_shape(x))\n
\n

gives

\n
\n

shape: Tensor("Shape_4:0", shape=(2,), dtype=int32)

\n

int_shape: (2, 2)

\n
\n

As you can see, the first one is a tensor (which, when evaluated, contains the two dimension sizes), whereas the second one is a plain Python tuple.

\n", + "system": "" + }, + { + "instruction": "How to conceptually think about relationship between tokenized words and word embeddings?", + "input": "", + "output": "

In short

\n\n

Conceptually, the keras::layer_embedding() takes a 2D matrix [samples, word_sequences], where the values are the integer word identifiers (word index), and replaces said value with their word vector, so that it becomes a 3D matrix [samples, word_sequences, embeddings] -- in other words, where the values are word vectors, not word identifiers. The word vectors that are glued on can come from somewhere else, as in your example above, or they can be randomly initialized and updated during training.

\n\n
\n\n

In less short

\n\n

You pass keras::layer_embedding() word sequences. train_x is a 2D matrix where the rows are samples (documents) and the columns are word sequences. The values in train_x are the integer identifiers (word index) for each word, corresponding to their position in your separately stored list of words (vocabulary). We can stylize train_x as:

\n\n

\"enter

\n\n

Here, the value 75 corresponds to the word in the 75th position of your vocabulary.

\n\n

The embedding_matrix you are passing to keras::set_weights() is a 2D matrix whose rows correspond to the words in your vocabulary. For instance, the values in the 75th row of embedding_matrix are the word vector for the word in the 75th position of your vocabulary.

\n\n

So if you are gluing on pre-trained embeddings, as in your example above, then keras::layer_embedding() simply replaces the word index with the word vectors in that row of embedding_matrix. We can stylize the operation as

\n\n
for (x in 1:nrow(train_x)) {\n  for (y in 1:ncol(train_x)) {\n    train_x[x, y] <- embedding_matrix[train_x[x, y], ]\n  }\n}\n
\n\n

We therefore end up with a 3D matrix (a cube), which we can stylize as:

\n\n

\"enter

\n", + "system": "" + }, + { + "instruction": "Understanding tensorboard images", + "input": "", + "output": "

It looks like that is exactly what you are getting. The grayscale of the image shows the weights. The slider on top can be used to go back and forth in epochs and hence look at the training progression.

\n", + "system": "" + }, + { + "instruction": "Second derivative in Keras", + "input": "", + "output": "

In order for a K.gradients() layer to work like that, you have to enclose it in a Lambda() layer, because otherwise a full Keras layer is not created, and you can't chain it or train through it. So this code will work (tested):

\n\n
import keras\nfrom keras.models import *\nfrom keras.layers import *\nfrom keras import backend as K\nimport tensorflow as tf\n\ndef grad( y, x ):\n    return Lambda( lambda z: K.gradients( z[ 0 ], z[ 1 ] ), output_shape = [1] )( [ y, x ] )\n\ndef network( i, d ):\n    m = Add()( [ i, d ] )\n    a = Lambda(lambda x: K.log( x ) )( m )\n    return a\n\nfixed_input = Input(tensor=tf.constant( [ 1.0 ] ) )\ndouble = Input(tensor=tf.constant( [ 2.0 ] ) )\n\na = network( fixed_input, double )\n\nb = grad( a, fixed_input )\nc = grad( b, fixed_input )\nd = grad( c, fixed_input )\ne = grad( d, fixed_input )\n\nmodel = Model( inputs = [ fixed_input, double ], outputs = [ a, b, c, d, e ] )\n\nprint( model.predict( x=None, steps = 1 ) )\n
\n\n

def network models f( x ) = log( x + 2 ) at x = 1. def grad is where the gradient calculation is done. This code outputs:

\n\n
\n

[array([1.0986123], dtype=float32), array([0.33333334], dtype=float32), array([-0.11111112], dtype=float32), array([0.07407408], dtype=float32), array([-0.07407409], dtype=float32)]

\n
\n\n

which are the correct values for log(3), 1/3, -1/3^2, 2/3^3, -6/3^4.

\n\n
\n\n

Reference TensorFlow code

\n\n

For reference, the same code in plain TensorFlow (used for testing):

\n\n
import tensorflow as tf\n\na = tf.constant( 1.0 )\na2 = tf.constant( 2.0 )\n\nb = tf.log( a + a2 )\nc = tf.gradients( b, a )\nd = tf.gradients( c, a )\ne = tf.gradients( d, a )\nf = tf.gradients( e, a )\n\nwith tf.Session() as sess:\n    print( sess.run( [ b, c, d, e, f ] ) )\n
\n\n

outputs the same values:

\n\n
\n

[1.0986123, [0.33333334], [-0.11111112], [0.07407408], [-0.07407409]]

\n
\n\n

Hessians

\n\n

tf.hessians() does return the second derivative; it's a shorthand for chaining two tf.gradients() calls. The Keras backend doesn't have hessians though, so you do have to chain two K.gradients() calls.

\n\n

Numerical approximation

\n\n

If for some reason none of the above works, you might want to consider numerically approximating the second derivative by taking differences over a small ε distance. This basically triples the network for each input, so this solution introduces serious efficiency considerations, besides lacking in accuracy. Anyway, the code (tested):

\n\n
import keras\nfrom keras.models import *\nfrom keras.layers import *\nfrom keras import backend as K\nimport tensorflow as tf\n\ndef network( i, d ):\n    m = Add()( [ i, d ] )\n    a = Lambda(lambda x: K.log( x ) )( m )\n    return a\n\nfixed_input = Input(tensor=tf.constant( [ 1.0 ], dtype = tf.float64 ) )\ndouble = Input(tensor=tf.constant( [ 2.0 ], dtype = tf.float64 ) )\n\nepsilon = Input( tensor = tf.constant( [ 1e-7 ], dtype = tf.float64 ) )\neps_reciproc = Input( tensor = tf.constant( [ 1e+7 ], dtype = tf.float64 ) )\n\na0 = network( Subtract()( [ fixed_input, epsilon ] ), double )\na1 = network(               fixed_input,              double )\na2 = network(      Add()( [ fixed_input, epsilon ] ), double )\n\nd0 = Subtract()( [ a1, a0 ] )\nd1 = Subtract()( [ a2, a1 ] )\n\ndv0 = Multiply()( [ d0, eps_reciproc ] )\ndv1 = Multiply()( [ d1, eps_reciproc ] )\n\ndd0 = Multiply()( [ Subtract()( [ dv1, dv0 ] ), eps_reciproc ] )\n\nmodel = Model( inputs = [ fixed_input, double, epsilon, eps_reciproc ], outputs = [ a0, dv0, dd0 ] )\n\nprint( model.predict( x=None, steps = 1 ) )\n
\n\n

Outputs:

\n\n
\n

[array([1.09861226]), array([0.33333334]), array([-0.1110223])]

\n
\n\n

(This only gets to the second derivative.)

\n", + "system": "" + }, + { + "instruction": "Advanced Activation layers in Keras Functional API", + "input": "", + "output": "
    \n
  1. No, you forgot to connect the LeakyReLU to the dense layer:

    \n\n

    LR = LeakyReLU(alpha=0.1)(dense_1)

  2. \n
  3. Usually the advanced activations have tunable or learnable parameters, and these have to be stored somewhere; it makes more sense for them to be layers, as you can then access and save these parameters.

  4. \n
  5. Do it only if there is an advantage, such as tunable parameters.
  6. \n
\n", + "system": "" + }, + { + "instruction": "How to configure input shape for bidirectional LSTM in Keras", + "input": "", + "output": "

This is a well-known problem and it concerns both ordinary and bidirectional RNNs. This discussion on GitHub might help you. In essence, here are the most common options:

\n\n\n", + "system": "" + }, + { + "instruction": "Estimating high resolution images from lower ones using a Keras model based on ConvLSTM2D", + "input": "", + "output": "

This is happening because LSTMs require temporal data, but your first one was declared as a many-to-one model, which outputs a tensor of shape (batch_size, 300, 400, 16). That is, batches of images:

\n\n
model.add(ConvLSTM2D(16, kernel_size=(3,3), activation='sigmoid',padding='same',input_shape=input_shape))     \nmodel.add(ConvLSTM2D(8, kernel_size=(3,3), activation='sigmoid',padding='same'))\n
\n\n

You want the output to be a tensor of shape (batch_size, 8, 300, 400, 16) (i.e. sequences of images), so they can be consumed by the second LSTM. The way to fix this is to add return_sequences in the first LSTM definition:

\n\n
model.add(ConvLSTM2D(16, kernel_size=(3,3), activation='sigmoid',padding='same',input_shape=input_shape,\n                     return_sequences=True))\nmodel.add(ConvLSTM2D(8, kernel_size=(3,3), activation='sigmoid',padding='same'))\n
\n\n

You mentioned classification. If what you intend is to classify entire sequences, then you need a classifier at the end:

\n\n
model.add(ConvLSTM2D(16, kernel_size=(3,3), activation='sigmoid',padding='same',input_shape=input_shape,\n                     return_sequences=True))\nmodel.add(ConvLSTM2D(8, kernel_size=(3,3), activation='sigmoid',padding='same'))\nmodel.add(GlobalAveragePooling2D())\nmodel.add(Dense(10, activation='softmax'))  # output shape: (None, 10)\n
\n\n

But if you are trying to classify each image within the sequences, then you can simply reapply the classifier using TimeDistributed:

\n\n
x = Input(shape=(300, 400, 8))\ny = GlobalAveragePooling2D()(x)\ny = Dense(10, activation='softmax')(y)\nclassifier = Model(inputs=x, outputs=y)\n\nx = Input(shape=(data1.shape[0], data1.shape[1], data1.shape[2], 1))\ny = ConvLSTM2D(16, kernel_size=(3, 3),\n               activation='sigmoid',\n               padding='same',\n               return_sequences=True)(x)\ny = ConvLSTM2D(8, kernel_size=(3, 3),\n               activation='sigmoid',\n               padding='same',\n               return_sequences=True)(y)\ny = TimeDistributed(classifier)(y)  # output shape: (None, 8, 10)\n\nmodel = Model(inputs=x, outputs=y)\n
\n\n

Finally, take a look at the examples in keras repository. There's one for a generative model using ConvLSTM2D.

\n\n
\n\n

Edit: to estimate data2 from data1...

\n\n

If I got it right this time, X_train should be 1 sample of a stack of 8 (300, 400, 1) images, not 8 samples of a stack of 1 image of shape (300, 400, 1).
\nIf that's true, then:

\n\n
X_train = data1.reshape(data1.shape[0], 1, data1.shape[1], data1.shape[2], 1)\nY_train = data2.reshape(data2.shape[0], 1, data2.shape[1], data2.shape[2], 1)\n
\n\n

Should be updated to:

\n\n
X_train = data1.reshape(1, data1.shape[0], data1.shape[1], data1.shape[2], 1)\nY_train = data2.reshape(1, data2.shape[0], data2.shape[1], data2.shape[2], 1)\n
\n\n

Also, accuracy doesn't usually make sense when your loss is mse. You can use other metrics such as mae.

\n\n

Now you just need to update your model to return sequences and to have a single unit in the last layer (because the images you are trying to estimate have a single channel):

\n\n
model = Sequential()\ninput_shape = (data1.shape[0], data1.shape[1], data1.shape[2], 1)\nmodel.add(ConvLSTM2D(16, kernel_size=(3, 3), activation='sigmoid', padding='same',\n                     input_shape=input_shape,\n                     return_sequences=True))\nmodel.add(ConvLSTM2D(1, kernel_size=(3, 3), activation='sigmoid', padding='same',\n                     return_sequences=True))\n\nmodel.compile(loss='mse', optimizer='adam')\n
\n\n

After that, model.fit(X_train, Y_train, ...) will start training normally:

\n\n
Using TensorFlow backend.\n(8, 300, 400)\n(8, 300, 400)\nEpoch 1/10\n\n1/1 [==============================] - 5s 5s/step - loss: 2993.8701\nEpoch 2/10\n\n1/1 [==============================] - 5s 5s/step - loss: 2992.4492\nEpoch 3/10\n\n1/1 [==============================] - 5s 5s/step - loss: 2991.4536\nEpoch 4/10\n\n1/1 [==============================] - 5s 5s/step - loss: 2989.8523\n
\n", + "system": "" + }, + { + "instruction": "Keras Lambda layer and variables : "TypeError: can't pickle _thread.lock objects"", + "input": "", + "output": "

The problem is with lambda arguments: minimum and span. They are deduced from the input, but when you define the lambda layer like this:

\n\n
x = Lambda(preprocess,arguments={'minimum':minimum,'span':span})(inputs)\n
\n\n

... they are considered independent arguments that need to be serialized (as context for the lambda). This results in an error, because both of them are TensorFlow tensors, not static values or numpy arrays.

\n\n

Change your code to this:

\n\n\n\n
# `preprocess` encapsulates all intermediate values in itself.\ndef preprocess(X):\n  maximum = K.max(X)\n  minimum = K.min(X)\n  span = maximum - minimum\n  output = (X - minimum) / span\n  return output\n\ninputs = Input(shape=(n_inputs,), name='input_tensor')\nx = Lambda(preprocess)(inputs)\n
\n", + "system": "" + }, + { + "instruction": "Keras - visualize classes on a CNN network", + "input": "", + "output": "

What worked for me was the following:

\n\n

To avoid installing all dependencies and caffe on my machine, I've pulled this Docker Image with all Deep Learning frameworks in it.

\n\n

Within minutes I had caffe (as well as keras, tensorflow, CUDA, theano, lasagne, torch, openCV) installed in a container with a shared folder in my host machine.

\n\n

I then ran this caffe script --> \nDeep Dream, and voil\u00e0.

\n\n

Models generated by Caffe are more resourceful and allow classes, as stated above, to be 'printed' onto input images or generated from noise.

\n", + "system": "" + }, + { + "instruction": "Sequential has no attribution "validation_data"", + "input": "", + "output": "

Try using self.model.predict(self.validation_data[0]). That is what worked for me.

\n", + "system": "" + }, + { + "instruction": "Keras load_model returning Unexpected keyword argument passed to optimizer: amsgrad", + "input": "", + "output": "

AMSGrad was added to Keras on 11.12.2017 (https://github.com/keras-team/keras/pull/8693), whereas Keras 2.1.2 was built on 01.12.2017 (https://github.com/keras-team/keras/releases/tag/2.1.2)

\n\n

So you either need to wait for the next release or install Keras from source if the model uses it.

\n", + "system": "" + }, + { + "instruction": "getting ValueError : "Can only tuple-index with a MultiIndex "", + "input": "", + "output": "

You have problem with i:(i+window_size), 0 in dataset[i:(i+window_size), 0].

\n\n

In your code dataset means train_data['seq'], which is a single column (a one-dimensional Series), but you use i:(i+window_size), 0 as if it were a two-dimensional DataFrame.

\n\n

You can use only a single integer like dataset[0] or a slice like dataset[i:(i+window_size)]

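A minimal sketch reproducing the difference (a toy Series standing in for train_data['seq']; the exact exception type can vary between pandas versions, hence the broad catch):

```python
import pandas as pd

s = pd.Series([10.0, 20.0, 30.0, 40.0, 50.0], name='seq')

# Slicing a one-dimensional Series works:
window = s[1:4]
assert list(window) == [20.0, 30.0, 40.0]

# Two-dimensional indexing like s[i:(i+w), 0] is only valid for a
# MultiIndex / DataFrame, so on a plain Series it raises an error:
raised = False
try:
    s[1:4, 0]
except Exception:
    raised = True
assert raised
```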
\n", + "system": "" + }, + { + "instruction": "Keras: ImportError: `save_model` requires h5py even thought the code already imported h5py", + "input": "", + "output": "

Make sure you use the latest version of Keras.

\n\n

Also, this error has been reported here in the keras github:\nhttps://github.com/fchollet/keras/issues/3426

\n\n

on linux:

\n\n
sudo apt-get install libhdf5-dev\nsudo pip install h5py\n
\n", + "system": "" + }, + { + "instruction": "how to save val_loss and val_acc in Keras", + "input": "", + "output": "

From Keras documentation, we have for models.fit method:

\n
fit(x=None, y=None, \n    batch_size=None, \n    epochs=1, \n    verbose=1, \n    callbacks=None, \n    validation_split=0.0, validation_data=None, \n    shuffle=True, \n    class_weight=None, \n    sample_weight=None, \n    initial_epoch=0, \n    steps_per_epoch=None, \n    validation_steps=None\n)\n
\n

'val_loss' is recorded if validation is enabled in fit, and val_acc is recorded if validation and accuracy monitoring are enabled. - This is from the keras.callbacks.Callback() object, if used for the callbacks parameter in the above fit method.

\n

Instead of using the history callback, which you've used, it can be used as follows:

\n
    from keras.callbacks import Callback\n    logs = Callback()\n    model.fit(train_data, \n                train_labels,\n                epochs = 64, \n                batch_size = 10,\n                shuffle = True,\n                validation_split = 0.2, \n                callbacks=[logs]\n           ) \n
\n

'val_loss' is recorded if validation is enabled in fit means: when using the model.fit method, you are using either the validation_split parameter or the validation_data parameter to specify the tuple (x_val, y_val) or the tuple (x_val, y_val, val_sample_weights) on which to evaluate the loss and any model metrics at the end of each epoch.

\n
\n

A History object. Its History.history attribute is a record of\ntraining loss values and metrics values at successive epochs, as well\nas validation loss values and validation metrics values (if\napplicable). - Keras Documentation ( Return value for model.fit method)

\n
\n

You are using the History callback, in your model as follows:

\n
model.fit(train_data, \n            train_labels,\n            epochs = 64,\n            batch_size = 10,\n            shuffle = True,\n            validation_split = 0.2, \n            callbacks=[history]\n       )\n
\n

history.history will output a dictionary for you with the keys loss, acc, val_loss and val_acc, if you use a variable to save the return value of model.fit like below:

\n
history = model.fit(\n     train_data, \n     train_labels,\n     epochs = 64,\n     batch_size = 10,\n     shuffle = True,\n     validation_split = 0.2\n)\nhistory.history\n
\n

The output will be like the following:

\n
{'val_loss': [14.431451635814849,\n              14.431451635814849,\n              14.431451635814849,\n              14.431451635814849,\n              14.431451635814849,\n              14.431451635814849,\n              14.431451635814849,\n              14.431451635814849,\n              14.431451635814849,\n              14.431451635814849],\n 'val_acc':  [0.1046428571712403,\n              0.1046428571712403,\n              0.1046428571712403,\n              0.1046428571712403,\n              0.1046428571712403,\n              0.1046428571712403,\n              0.1046428571712403,\n              0.1046428571712403,\n              0.1046428571712403,\n              0.1046428571712403],\n 'loss': [14.555215610322499,\n          14.555215534028553,\n          14.555215548560733,\n          14.555215588524229,\n          14.555215592157273,\n          14.555215581258137,\n          14.555215575808571,\n          14.55521561940511,\n          14.555215563092913,\n          14.555215624854679],\n 'acc': [0.09696428571428571,\n         0.09696428571428571,\n         0.09696428571428571,\n         0.09696428571428571,\n         0.09696428571428571,\n         0.09696428571428571,\n         0.09696428571428571,\n         0.09696428571428571,\n         0.09696428571428571,\n         0.09696428571428571]}\n
\n

You can save the data both by using csvlogger like below as given in the comments or by using the longer method of writing a dictionary to a csv file as given here writing a dictionary to a csv

\n
csv_logger = CSVLogger('training.log')\nmodel.fit(X_train, Y_train, callbacks=[csv_logger])\n
\n", + "system": "" + }, + { + "instruction": "very low accuracy in Neural networks using keras and validation accuracy 0.0000e+00", + "input": "", + "output": "

I decided to summarize our \"chat\".

\n\n

So, what to do if your test accuracy is low (around \u2248 0.1%), here are some general recommendations:

\n\n\n", + "system": "" + }, + { + "instruction": "How to add attention layer to seq2seq model on Keras", + "input": "", + "output": "

You can use this repo,

\n\n
    \n
  1. you will need to pip install keras-self-attention
  2. \n
  3. import layer from keras_self_attention import SeqSelfAttention\n\n
  4. \n
  5. Since you are using keras functional API,

    \n\n
    enc_out, h, c = lstm()(enc_in)\natt = SeqSelfAttention()(enc_out)\ndec_in = Input(shape=(None, in_alphabet_len))(att)\n
    \n\n

    I hope this answers your question and helps future readers

  6. \n
\n", + "system": "" + }, + { + "instruction": "Euclidean distance transform in tensorflow", + "input": "", + "output": "

I don't see any problem with using the distance transform in Keras; basically, all you need is tf.py_func, which wraps an existing Python function into a TensorFlow operator.

\n\n

However, I think the fundamental issue here is backpropagation. Your model won't have any problem in the forward pass, but what gradient do you expect to propagate? Or perhaps you simply don't care about its gradient at all.

\n", + "system": "" + }, + { + "instruction": "How does validation_data affect learning in Keras?", + "input": "", + "output": "
\n

Is e.g. the learning rate of back-propagation is automatically optimized by the validation dataset?

\n
\n\n

No, it is not.

\n\n

Your model fits the weights of the learned function to your training data using your preset hyperparameters. That function is then validated on your validation set. Based on the performance of your function on the validation set, you can set the hyperparameters to good values, either by hand or by using heuristics.

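 \n\n

      As a small illustration (a sketch with made-up data, not from the original question): validation_data only produces the reported val_loss and never contributes gradients.

      \n\n
      import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense\n\nmodel = Sequential([Dense(1, input_dim=4)])\nmodel.compile(loss='mse', optimizer='sgd')\n\nX_train, y_train = np.random.rand(100, 4), np.random.rand(100)\nX_val, y_val = np.random.rand(20, 4), np.random.rand(20)\n\n# weights are updated from X_train only; val_loss is just reported for tuning\nhistory = model.fit(X_train, y_train, validation_data=(X_val, y_val),\n                    epochs=2, verbose=0)\nprint(history.history['val_loss'])\n
        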
\n", + "system": "" + }, + { + "instruction": "Something wrong with Keras code Q-learning OpenAI gym FrozenLake", + "input": "", + "output": "

 Besides setting use_bias=False as @Maldus mentioned in the comments, another thing you can try is to start with a higher epsilon value (e.g. 0.5 or 0.75). A trick might be to only decrease the epsilon value IF you reach the goal, i.e. don't decrease epsilon at the end of every episode. That way your player can keep exploring the map randomly until it starts to converge on a good route, and then it's a good idea to reduce the epsilon parameter. 

\n\n

I've actually implemented a similar model in keras in this gist using Convolutional layers instead of Dense layers. Managed to get it to work in under 2000 episodes. Might be of some help to others :)

\n", + "system": "" + }, + { + "instruction": "Keras Training warm_start", + "input": "", + "output": "

 Yes - it's possible, but rather cumbersome. You need to use the train_on_batch function, which keeps all model parameters (including the optimizer's). 

\n\n

 This is cumbersome because you need to divide your dataset into batches on your own, and you also lose the ability to apply callbacks and to use the automatic progress bar. I hope that in a future Keras version this option will be added to the fit method. 

\n", + "system": "" + }, + { + "instruction": "How to initialize a convolution layer with an arbitrary kernel in Keras?", + "input": "", + "output": "

A few items to fix. Let's start with the kernel initializer. From the documentation:

\n\n
\n

If passing a custom callable, then it must take the argument shape (shape of the variable to initialize) and dtype (dtype of generated values)

\n
\n\n

So the signature should become:

\n\n
def init_f(shape, dtype=None)\n
\n\n

 The function will work without the dtype, but it's good practice to keep it there. That way you can pass the dtype on to calls inside your function, e.g.: 

\n\n
np.zeros(shape, dtype=dtype)\n
\n\n

This also addresses your second issue: the shape argument is a tuple, so you just need to pass it straight to np.zeros and don't need to make another tuple.

\n\n

I'm guessing you're trying to initialize the kernel with a 1 in the middle, so you could also generalize your function to work with whatever shape it receives:

\n\n
ker[tuple(map(lambda x: int(np.floor(x/2)), ker.shape))]=1\n
\n\n

Putting it all together:

\n\n
def init_f(shape, dtype=None):\n    ker = np.zeros(shape, dtype=dtype)\n    ker[tuple(map(lambda x: int(np.floor(x/2)), ker.shape))]=1\n    return ker\n
\n\n

One last problem. You need to pass the function to the layer, not the result of the call:

\n\n
model.add(Conv2D(filters=32, kernel_size=(3,3),\n                  kernel_initializer=init_f))\n
\n\n

The layer function will pass the arguments to init_f.

\n", + "system": "" + }, + { + "instruction": "How to train a network in Keras for varying output size", + "input": "", + "output": "

 A recurrent neural network (RNN) would be much more appropriate; these models are typically called seq2seq, that is, sequence to sequence. Recurrent nets use layers like LSTM and GRU, and can input and output variable-length sequences. Just look at things like machine translation done with RNNs. 

\n\n

This can be done directly with keras, and there are many examples lying around the internet, for example this one.

\n", + "system": "" + }, + { + "instruction": "Keras slow on GPU with LSTM Networks", + "input": "", + "output": "

 If you are using ReLU, try using a tanh activation instead. 

\n

 I was using ReLU for an LSTM-DNN model at 100 epochs. Each epoch went from about 14 seconds down to 2. 

\n

 Hopefully this solves your problem. I was running on a GPU too, so it could just be your processor, unfortunately. Use Google Colab if you are bottlenecked - you get access to free GPU usage. 

\n", + "system": "" + }, + { + "instruction": "Serving Keras Models With Tensorflow Serving", + "input": "", + "output": "

 Very recently TensorFlow changed the way it exports the model, so the majority of the tutorials available on the web are outdated. I honestly don't know how deeplearning4j works, but I use Keras quite often. I managed to create a simple example that I already posted in this issue on the TensorFlow Serving GitHub. 

\n\n

 I'm not sure whether this will help you, but I'd like to share how I did it, and maybe it will give you some insights. My first trial, prior to creating my custom model, was to use a trained model available in Keras such as VGG19. I did this as follows. 

\n\n

Model creation

\n\n
import keras.backend as K\nfrom keras.applications import VGG19\nfrom keras.models import Model\n\n# very important to do this as a first thing\nK.set_learning_phase(0)\n\nmodel = VGG19(include_top=True, weights='imagenet')\n\n# The creation of a new model might be optional depending on the goal\nconfig = model.get_config()\nweights = model.get_weights()\nnew_model = Model.from_config(config)\nnew_model.set_weights(weights)\n
\n\n

Exporting the model

\n\n
from tensorflow.python.saved_model import builder as saved_model_builder\nfrom tensorflow.python.saved_model import utils\nfrom tensorflow.python.saved_model import tag_constants, signature_constants\nfrom tensorflow.python.saved_model.signature_def_utils_impl import     build_signature_def, predict_signature_def\nfrom tensorflow.contrib.session_bundle import exporter\n\nexport_path = 'folder_to_export'\nbuilder = saved_model_builder.SavedModelBuilder(export_path)\n\nsignature = predict_signature_def(inputs={'images': new_model.input},\n                                  outputs={'scores': new_model.output})\n\nwith K.get_session() as sess:\n    builder.add_meta_graph_and_variables(sess=sess,\n                                         tags=[tag_constants.SERVING],\n                                         signature_def_map={'predict': signature})\n    builder.save()\n
\n\n

Some side notes

\n\n\n\n

With respect to serving different models within the same server, I think that something similar to the creation of a model_config_file might help you. To do so, you can create a config file similar to this:

\n\n
model_config_list: {\n  config: {\n    name: \"my_model_1\",\n    base_path: \"/tmp/model_1\",\n    model_platform: \"tensorflow\"\n  },\n  config: {\n     name: \"my_model_2\",\n     base_path: \"/tmp/model_2\",\n     model_platform: \"tensorflow\"\n  }\n}\n
\n\n

 Finally, you can run the server like this: 

\n\n
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --config_file=model_config.conf\n
\n", + "system": "" + }, + { + "instruction": "Neural-net Regression predicts same value for all test samples", + "input": "", + "output": "

You should try scaling your data. This normally happens when features are not scaled.

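 \n\n

      For example, with scikit-learn's StandardScaler (a sketch with made-up numbers; any comparable scaler works):

      \n\n
      import numpy as np\nfrom sklearn.preprocessing import StandardScaler\n\nX_train = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train)  # zero mean, unit variance per column\n\n# at prediction time, reuse the statistics fitted on the training set:\n# X_test_scaled = scaler.transform(X_test)\n
        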
\n", + "system": "" + }, + { + "instruction": "keras val very slow when use model.fit_generator", + "input": "", + "output": "

 @Yanning As you mentioned in your comment, the first epoch is slow because the ImageDataGenerator is reading data from disk into RAM. This part is very slow. Once the data has been read into RAM, it is just a matter of transferring it from RAM to the GPU. 

\n\n

 Therefore, if your dataset is not huge and can fit into your RAM, you can try making a single numpy file out of the whole dataset and reading this data at the beginning. This will save a lot of disk seek time. 

\n\n

 Please check out this post for a comparison of the time taken by different operations: 

\n\n

Latency Numbers Every Programmer Should Know

\n\n

Latency Comparison Numbers

\n\n
Main memory reference                         100 ns\nRead 1 MB sequentially from memory        250,000 ns \nRead 1 MB sequentially from SSD         1,000,000 ns\nRead 1 MB sequentially from disk       20,000,000 ns
\n", + "system": "" + }, + { + "instruction": "Conv1D on 2D input", + "input": "", + "output": "

Here is a visual illustration

\n\n
kernel_size = (2, )\n\n------------- \n| 1 1 1 1 1 |                <---- kernel dim = kernel_size X 5\n| 2 2 2 2 2 |\n-------------\n  3 3 3 3 3 \n\n\n\n--------------------------\n| 1 1 1 1 1 1 1 1 1 1 1 1 |  <---- kernel dim = kernel_length X 12\n| 2 2 2 2 2 2 2 2 2 2 2 2 |        i.e more params! but after \n--------------------------         you apply say MaxPool1D(pool_size=(2,2))\n  3 3 3 3 3 3 3 3 3 3 3 3          in both cases, then layer shapes from here\n                                   on out are the same, thus same outputs!\n
\n", + "system": "" + }, + { + "instruction": "Panel data in Keras LSTM", + "input": "", + "output": "

Try:

\n\n
mod = Sequential()\nmod.add(LSTM(30, input_shape=(timesteps, features), return_sequences = True))\nmod.add(LSTM(10, return_sequences = True))\nmod.add(TimeDistributed(Dense(1, activation='sigmoid')))\n# In newest Keras version you can change the line above to mod.add(Dense(1, ..))\n\nmod.compile(loss='binary_crossentropy', optimizer='rmsprop')\n
\n", + "system": "" + }, + { + "instruction": "patch-wise training and fully convolutional training in FCN", + "input": "", + "output": "

 The term \"Fully Convolutional Training\" just means replacing the fully-connected layers with convolutional layers so that the whole network contains just convolutional layers (and pooling layers). 

\n\n

 The term \"Patchwise training\" is intended to avoid the redundancies of full-image training.\nIn semantic segmentation, given that you are classifying each pixel in the image, using the whole image adds a lot of redundancy to the input. A standard approach to avoid this when training segmentation networks is to feed the network batches of random patches (small image regions surrounding the objects of interest) from the training set instead of full images. This \"patchwise sampling\" ensures that the input has enough variance and is a valid representation of the training dataset (the mini-batch should have the same distribution as the training set). This technique also helps the network converge faster and balances the classes. In this paper, they claim that it is not necessary to use patch-wise training, and that if you want to balance the classes you can weight or sample the loss.\nFrom a different perspective, the problem with full-image training in per-pixel segmentation is that the input image has a lot of spatial correlation. To fix this, you can either sample patches from the training set (patchwise training) or sample the loss across the whole image. That is why the subsection is called \"Patchwise training is loss sampling\".\nAs the paper puts it, \"restricting the loss to a randomly sampled subset of its spatial terms excludes patches from the gradient computation.\" They tried this \"loss sampling\" by randomly ignoring cells from the last layer so the loss is not calculated over the whole image. 

\n", + "system": "" + }, + { + "instruction": "Keras Callback EarlyStopping comparing training and validation loss", + "input": "", + "output": "

You can create a custom callback class for your purpose.

\n\n

I have created one that should correspond to your need :

\n\n
 class CustomEarlyStopping(Callback):\n def __init__(self, ratio=0.0,\n patience=0, verbose=0):\n super(CustomEarlyStopping, self).__init__()\n\n self.ratio = ratio\n self.patience = patience\n self.verbose = verbose\n self.wait = 0\n self.stopped_epoch = 0\n self.monitor_op = np.greater\n\n def on_train_begin(self, logs=None):\n self.wait = 0 # Allow instances to be re-used\n\n def on_epoch_end(self, epoch, logs=None):\n current_val = logs.get('val_loss')\n current_train = logs.get('loss')\n if current_val is None:\n warnings.warn('Early stopping requires val_loss available!',\n RuntimeWarning)\n\n # If ratio current_loss / current_val_loss > self.ratio\n if self.monitor_op(np.divide(current_train, current_val), self.ratio):\n self.wait = 0\n else:\n if self.wait &gt;= self.patience:\n self.stopped_epoch = epoch\n self.model.stop_training = True\n self.wait += 1\n\n def on_train_end(self, logs=None):\n if self.stopped_epoch > 0 and self.verbose > 0:\n print('Epoch %05d: early stopping' % (self.stopped_epoch))\n 
\n\n

 I took the liberty of interpreting your requirement as: stop if the ratio between the train_loss and the validation_loss goes under a certain threshold. This ratio argument should be between 0.0 and 1.0. However, 1.0 is dangerous, as the validation loss and the training loss might fluctuate a lot in an erratic way at the beginning of training. 

\n\n

You can add a patience argument which will wait to see if the breaking of your threshold is staying for a certain number of epochs.

\n\n

 The way to use this is, for example: 

\n\n
callbacks = [CustomEarlyStopping(ratio=0.5, patience=2, verbose=1), \n            ... Other callbacks ...]\n...\nmodel.fit(..., callbacks=callbacks)\n
\n\n

In this case it will stop if the training loss stays lower than 0.5*val_loss for more than 2 epochs.

\n\n

Does that help you?

\n", + "system": "" + }, + { + "instruction": "Keras correct input shape for multilayer perceptron", + "input": "", + "output": "

From keras MLP example, https://github.com/fchollet/keras/blob/master/examples/mnist_mlp.py

\n\n
# the data, shuffled and split between train and test sets\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\nX_train = X_train.reshape(60000, 784)\nX_test = X_test.reshape(10000, 784)\n
\n\n

And the model input

\n\n
model = Sequential()\nmodel.add(Dense(512, input_shape=(784,)))\n
\n\n

So you should reshape your train and test to (2000,75*75) and (500,75*75) with

\n\n
train_data = train_data.reshape(2000, 75*75)\ntest_data = test_data.reshape(500, 75*75)\n
\n\n

and then set the model input shape as you did

\n\n
model.add(Dense(512, input_shape=(75*75,)))\n
\n", + "system": "" + }, + { + "instruction": "Change input tensor shape for VGG16 application", + "input": "", + "output": "

 The VGG16 model in itself is just a set of weights for a fixed sequence of layers, fixed convolution kernel sizes, etc. That doesn't mean that those convolution kernels cannot be applied to images of other sizes. 

\n\n

For example in your case:

\n\n
from keras.models import Model\nfrom keras.layers import Dense,Flatten\nfrom keras.applications import vgg16\nfrom keras import backend as K\n\nmodel = vgg16.VGG16(weights='imagenet', include_top=False, input_shape=(160,320,3))\nmodel.summary(line_length=150)\n\nflatten = Flatten()\nnew_layer2 = Dense(10, activation='softmax', name='my_dense_2')\n\ninp2 = model.input\nout2 = new_layer2(flatten(model.output))\n\nmodel2 = Model(inp2, out2)\nmodel2.summary(line_length=150)\n
\n\n

According to here the minimum image size can be 48x48x3 anything above than that is fine.

\n\n

 Now, it's true the original weights were learnt on 224x224x3 images, but the filter weights act as a very good starting point for new tasks with a new set of images. You do need to re-train the network, but it will converge very quickly. This is the basis of transfer learning. 

\n", + "system": "" + }, + { + "instruction": "Keras model.to_json() error: 'rawunicodeescape' codec can't decode bytes in position 94-98: truncated \\uXXXX", + "input": "", + "output": "

 It seems like your code is in a directory like \"C:\\Users\\python\\u{...}.py\". This kind of error is related to Python 3, where we get the special character sequence \u and can't decode it on Windows machines. You can change the filename or the full path to the file so that it does not contain special characters, or patch the function 'func_dump' in generic_utils.py (reached via the path 'keras/utils/generic_utils.py'). You should replace the line code = marshal.dumps(func.__code__).decode('raw_unicode_escape') with code = marshal.dumps(func.__code__).replace(b'\\\\',b'/').decode('raw_unicode_escape'). 

\n", + "system": "" + }, + { + "instruction": "Is it possible to export python and its necessary libraries into a environment independent file?", + "input": "", + "output": "

Python 3 includes a tool called zipapp that allows you to build single archives out of Python projects, bundling in all (or some) dependencies. The tool is only bundled with Python 3, but the generated archive will work with any version of Python >= 2.6 you are targeting.

\n\n

Supposing you have all your source files in src/, including the mentioned Tensor Flow library:

\n\n
$ python3 -m zipapp -o yourapp.pyz -m \"your.entry.point.module:main_function\" src/\n
\n\n

 The -m flag allows you to specify a module and a function inside it, separated with a :, that will be executed when you run the .pyz file. About running it, it's just a matter of: 

\n\n
$ python ./yourapp.pyz\n
\n\n

Again, you will need Python >= 2.6 in order for this to work.

\n\n

If you are targeting unix platforms, you might also add a shebang:

\n\n
$ echo \"#! /usr/bin/python\" > yourapp2.pyz\n$ cat yourapp.pyz >> yourapp2.pyz\n$ chmod +x yourapp2.pyz\n
\n\n

so that you can run the file as:

\n\n
$ ./yourapp2.pyz\n
\n\n

As a side note, a .pyz is nothing more than a .zip file. You might create one yourself without zipapp, just include a __main__.py file at top level. That will be the entry point of your bundle.

\n", + "system": "" + }, + { + "instruction": "GridSearch with Keras Neural Networks", + "input": "", + "output": "

 Here the error indicates that build_fn needs to have 2 arguments, as indicated by the number of parameters in param_grid. 

\n\n

 So you need to explicitly define a new function and use that as build_fn=make_model: 

\n\n
def make_model(batch_size, nb_epoch):\n    model = Sequential()\n    model.add(Dense(in_size, input_dim=in_size, init='uniform', activation='relu'))\n    model.add(Dense(8, init='uniform', activation='relu'))\n    model.add(Dense(1, init='uniform', activation='sigmoid'))\n    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n    return model\n
\n\n

Also check keras/examples/mnist_sklearn_wrapper.py where GridSearchCV is used for hyper-parameter search.

\n", + "system": "" + }, + { + "instruction": "Installing Keras package with conda install", + "input": "", + "output": "

One solution could be creating a conda environment:

\n\n
conda create -n keras python=3.5\n
\n\n

Now activate it:

\n\n
conda activate keras\n
\n\n

and install keras:

\n\n
(keras)$ conda install keras\n
\n\n

Try if it works:

\n\n
(keras)$ python\n>>> import keras\n
\n", + "system": "" + }, + { + "instruction": "what is the meaning of border_mode in keras?", + "input": "", + "output": "

 When you have a two-dimensional image with m rows and n cols, and use an a*b kernel to convolve with the input image, this is what happens: 

\n\n

 If border_mode is 'full', it returns a (m+a-1)x(n+b-1) image; \n if border_mode is 'same', it returns an image with the same dimensions as the input; if border_mode is 'valid', it returns a (m-a+1)x(n-b+1) image.\n For example: 

\n\n

 Input:\nGiven the following 4x4 image 

\n\n

A = [12 13 14 15;1 2 3 4;16 17 18 19;5 6 7 8], and a 3x3 kernel B = [1 2 3;4 5 6;7 8 9],

\n\n

if border_mode is 'full', then returns a 6x6 matrix;
\nif border_mode is 'same', then returns a 4x4 matrix;
\nif border_mode is 'valid', then returns a 2x2 matrix.

\n\n

 You can also use the function conv2(A,B,shape) in MATLAB to verify the output matrix. 

\n\n

Hope this answer could help.

\n", + "system": "" + }, + { + "instruction": "Machine Learning: Image classification into 3 classes (Dog or Cat or Neither) using Convolutional NN", + "input": "", + "output": "

Question 2

\n\n

I'll take the \"killer\" heuristic first. Yes, use the existing trained model. Simply conglomerate all of the dog classifications into your class 1, the cats into class 2, and everything else into class 0. This will solve virtually all of your problem.

\n\n

Question 1

\n\n

 The problem is that your initial model has been trained so that everything in the world (all 40,000 images) is either a dog or a cat. Yes, you have to train on a third set, unless your training method is a self-limiting algorithm, such as a one-class SVM (run once per classification). Even then, I expect that you'd have some trouble excluding a lynx or a wolf. 

\n\n

You're quite right that you'll need plenty of examples for the \"neither\" class, given the high dimension of the input space: it's not so much the quantity of images, but their placement just \"over the boundary\" from a cat or dog. I'd be interested in a project to determine how to do this with minimal additional input.

\n\n

 In short, don't simply grab 50K images from an ImageNet-type source; choose those that will give your model the best discrimination: other feline and canine examples, and other objects found in similar environments (end tables, field rodents, etc.). 

\n", + "system": "" + }, + { + "instruction": "Stateful LSTM with Embedding Layer (shapes don't match)", + "input": "", + "output": "

The batch_input_shape parameter of the Embedding layer should be (batch_size, time_steps), where time_steps is the length of the unrolled LSTM / number of cells and batch_size is the number of examples in a batch.

\n\n
model = Sequential()\nmodel.add(Embedding(\n   input_dim=input_dim, # e.g, 10 if you have 10 words in your vocabulary\n   output_dim=embedding_size, # size of the embedded vectors\n   input_length=time_steps,\n   batch_input_shape=(batch_size,time_steps)\n))\nmodel.add(LSTM(\n   10, \n   batch_input_shape=(batch_size,time_steps,embedding_size),\n   return_sequences=False, \n   stateful=True)\n)\n
\n\n

There is an excellent blog post which explains stateful LSTMs in Keras. Also, I've uploaded a gist which contains a simple example of a stateful LSTM with Embedding layer.

\n", + "system": "" + }, + { + "instruction": "How to use Keras to build a Part-of-Speech tagger?", + "input": "", + "output": "

 There are many variations on how to do this, and they depend on the amount of data you have and the time you want to invest. I'll try to give you the mainstream path, which you can improve upon yourself, while citing some of the alternatives. I will not assume prior knowledge of text modeling with deep learning. 

\n\n

 One way is to model the problem as multi-class classification, where the classes/label types are all possible POS tags. There are two common ways to frame this with a deep learning model: one is a window model; the other is a sequence tagger using a recurrent unit. 

\n\n

 Let's assume the simpler of the two, the window model. Then you can do the following: 

\n\n

Structuring the data

\n\n
    \n
 1. Chop your corpus into windows of W words (e.g. 3 words), where the center word is the one you want to classify and the other ones are context. Let's call this part of the data X. 
  2. \n
  3. For each window, get the POS tag for the center word. Let's call this part of data y
  4. \n
\n\n

Encoding the data

\n\n

Encoding X as vectors

\n\n

Now neural nets need X encoded as a sequence of vectors. A common choice is to encode each word as a word embedding.

\n\n

 To do so, first you tokenize your text and encode each word as an integer word id (e.g. every occurrence of \"cat\" will be the number 7). If you don't have your own tokenizer, you can use the one bundled with Keras. It takes text and returns a sequence of integers/word ids. 

\n\n

 Second, you may want to pad and truncate each sequence of word ids so that every instance has the same length (note: there are other ways of handling this). An example from imdb_lstm.py is 

\n\n
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=max_features)\nprint(len(X_train), 'train sequences')\nprint(len(X_test), 'test sequences')\nprint('Pad sequences (samples x time)')\nX_train = sequence.pad_sequences(X_train, maxlen=maxlen)\nX_test = sequence.pad_sequences(X_test, maxlen=maxlen)\nprint('X_train shape:', X_train.shape)\nprint('X_test shape:', X_test.shape)\n
\n\n

Then you can use an Embedding layer to convert the sequence of padded/truncated word ids to a sequence of word embeddings. Example from imdb_lstm.py:

\n\n
model = Sequential()\nmodel.add(Embedding(max_features, 128, dropout=0.2))\nmodel.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))  # try using a GRU instead, for fun\n
\n\n

 Here the output of Embedding is fed to an LSTM. I list other model options at the end. 

\n\n

Encoding y

\n\n

 To do multi-class classification with Keras one usually uses categorical_crossentropy, which expects the label to be a one-hot vector as long as the number of possible categories (the number of possible POS tags in your case). You can use keras' to_categorical. Note that it expects a vector of integers where each integer represents a class (e.g. NNP could be 0, VBD could be 1 and so on): 

\n\n
def to_categorical(y, nb_classes=None):\n    '''Convert class vector (integers from 0 to nb_classes) to binary class matrix, for use with categorical_crossentropy.\n    # Arguments\n        y: class vector to be converted into a matrix\n        nb_classes: total number of classes\n    # Returns\n        A binary matrix representation of the input.\n    '''\n
\n\n

Model options

\n\n

 Since in this line of solution you would basically be doing multi-class classification, you can follow any of the imdb_ examples from the Keras repository. These are actually binary text classification examples. To make them multi-class you need to use a softmax instead of a sigmoid as the final activation function, and categorical_crossentropy instead of binary_crossentropy, like in the mnist_ examples: 

\n\n
model.add(Flatten())\nmodel.add(Dense(128))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(nb_classes))\nmodel.add(Activation('softmax'))\n\nmodel.compile(loss='categorical_crossentropy',\n              optimizer='adadelta',\n              metrics=['accuracy'])\n
\n", + "system": "" + }, + { + "instruction": "Why do tensorflow and keras SimpleRNN layers have a default activation of tanh", + "input": "", + "output": "

 RNNs can suffer from both exploding and vanishing gradient problems. When the sequence to learn is long, this can be a very delicate balance that tips into one or the other quite easily. Both problems are caused by exponentiation - each step multiplies by the weight matrix and the derivative of the activation, so if either the matrix magnitude or the activation derivative is different from 1.0, there will be a tendency towards exploding or vanishing. 

\n\n

 ReLUs do not help with exploding gradient problems. In fact, they can be worse than activation functions that are naturally limited when weights are large, such as sigmoid or tanh. 

\n\n

ReLUs do help with vanishing gradient problems. However, the designs of LSTM and GRU cells are also intended to address the same problem (of dealing with learning from potentially weak signals many time steps away), and do so very effectively.

\n\n

For a simple RNN with short time series, there should be nothing wrong working with ReLU activation. To address the possibility of exploding gradients when training, you could look at gradient clipping (treating gradients outside of allowed range as being the min or max of that range).

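 \n\n

      For instance, a sketch of gradient clipping with a Keras optimizer (the layer sizes here are arbitrary):

      \n\n
      from keras.models import Sequential\nfrom keras.layers import SimpleRNN, Dense\nfrom keras.optimizers import RMSprop\n\nmodel = Sequential([\n    SimpleRNN(16, activation='relu', input_shape=(10, 4)),\n    Dense(1),\n])\n# clipvalue clips each gradient element to [-1, 1];\n# clipnorm=1.0 would instead rescale gradients by their norm\nmodel.compile(loss='mse', optimizer=RMSprop(clipvalue=1.0))\n
        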
\n", + "system": "" + }, + { + "instruction": "Fully convolutional autoencoder for variable-sized images in keras", + "input": "", + "output": "

Yes, there is.

\n\n

You can use three methods

\n\n\n\n

Padding will only work to increase the dimensions. Not beneficial for reducing the size.

\n\n

 Resizing is more costly but is the optimal solution for each case (up- or downsampling). It will keep all the values in range and will simply resample them to resize to a given dimension. 

\n\n

 Crop or pad works like resize and is more compute-efficient, as there is no interpolation in this method. However, if you want to resize to a smaller dimension, it will crop from the edges. 

\n\n

 By using these 3 methods, you can arrange your layers' dimensions. 

\n", + "system": "" + }, + { + "instruction": "Keras: LSTM with class weights", + "input": "", + "output": "

I think you should use sample_weight with sample_weight_mode='temporal'.

\n\n

From the Keras docs:

\n\n
\n

sample_weight: Numpy array of weights for the training samples, used\n for scaling the loss function (during training only). You can either\n pass a flat (1D) Numpy array with the same length as the input samples\n (1:1 mapping between weights and samples), or in the case of temporal\n data, you can pass a 2D array with shape (samples, sequence_length),\n to apply a different weight to every timestep of every sample. In this\n case you should make sure to specify sample_weight_mode=\"temporal\" in\n compile().

\n
\n\n

In your case you would need to supply a 2D array with the same shape as your labels.

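 \n\n

      A minimal sketch of what that can look like (the shapes and the weighting rule are illustrative, not from the question):

      \n\n
      import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import LSTM, TimeDistributed, Dense\n\nn_samples, timesteps, features = 8, 5, 3\nmodel = Sequential([\n    LSTM(4, return_sequences=True, input_shape=(timesteps, features)),\n    TimeDistributed(Dense(1, activation='sigmoid')),\n])\nmodel.compile(loss='binary_crossentropy', optimizer='adam',\n              sample_weight_mode='temporal')\n\nX = np.random.rand(n_samples, timesteps, features)\ny = np.random.randint(0, 2, (n_samples, timesteps, 1))\n# one weight per timestep of every sample, e.g. upweighting the rare class:\nw = np.where(y[:, :, 0] == 1, 5.0, 1.0)  # shape (samples, timesteps)\nmodel.fit(X, y, sample_weight=w, epochs=1, verbose=0)\n
        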
\n", + "system": "" + }, + { + "instruction": "Keras/Tensorflow predict: error in array shape", + "input": "", + "output": "

You have to reshape the input image to have a shape of [?, 3, 32, 32] where ? is the batch size. In your case, since you have 1 image the batch size is 1, so you can do:

\n\n
img = np.array(img)\nimg = img.reshape((1, 3, 32, 32))\n
\n", + "system": "" + }, + { + "instruction": "Creating a generator from list of sequences for RNN", + "input": "", + "output": "

It's not exactly clear what you are looking for. A small sample of input and desired output would help. Nevertheless, I'll take a stab at what I think you are asking:

\n\n
def dataIterator(data, batchSize=batchSize):\n    for patient_data in data:\n        for n in range(0, len(patient_data), batchSize):\n            yield patient_data[n:n+batchSize]\n
\n", + "system": "" + }, + { + "instruction": "The right way to define a function in theano?", + "input": "", + "output": "

theano.tensor.zeros(...) can't take any other value than 0.

\n\n

Unless you add nodes to the graph of course and modify parts of the zeros tensor using theano.tensor.set_subtensor.

\n\n

The input tensor theano.tensor.fmatrix can take any value you input.

\n", + "system": "" + }, + { + "instruction": "How to train a network in Keras for varying output size", + "input": "", + "output": "

 A recurrent neural network (RNN) would be much more appropriate; these models are typically called seq2seq, that is, sequence to sequence. Recurrent nets use layers like LSTM and GRU, and can input and output variable-length sequences. Just look at things like machine translation done with RNNs. 

\n\n

This can be done directly with keras, and there are many examples lying around the internet, for example this one.

\n", + "system": "" + }, + { + "instruction": "Error: No module named 'tensorflow.keras'", + "input": "", + "output": "

As mentioned by @Shishu Kumar Choudhary,

\n

 There is a TensorFlow and Keras version mismatch in your system environment. (TF=2.9.1, Keras=2.2) 

\n

 You need to install the version of Keras (keras==2.9) that matches the installed version of TensorFlow (tensorflow==2.9) to fix this issue. 

\n

 Please use the code below to install keras==2.9 and try running your code again: 

\n
!pip install keras==2.9\n\n!pip install patchify    #To install and import other mentioned libraries  in code\n!pip install segmentation_models\n
\n", + "system": "" + }, + { + "instruction": "Evaluating (model.evaluate) with a triplet loss Siamese neural network model - tensorflow", + "input": "", + "output": "

 Considering that eval_model is trained to produce embeddings, I think it is a good approach to evaluate the similarity between two embeddings using cosine similarity. 

\n

Following the TF documentation, the cosine similarity is a number between -1 and 1. When it is a negative number closer to -1, it indicates greater similarity. When it is a positive number closer to 1, it indicates greater dissimilarity.

\n

We can simply calculate the cosine similarity between Positive and Negative inputs for all the samples at our disposal. When the cosine similarity is < 0 we can say that the two inputs are similar (1 = similar, 0 = not similar). In the end, it is possible to calculate the binary accuracy as a final metric.

\n

We can make all the calculations using TF and without the need of using model.evaluate.

\n
eval_model = Model(inputs=anchor_input, outputs=encoded_anchor)\neval_model.load_weights('weights.hdf5')\n\ncos_sim = tf.keras.losses.cosine_similarity(\n    eval_model(X1), eval_model(X2)\n).numpy().reshape(-1,1)\n\naccuracy = tf.reduce_mean(tf.keras.metrics.binary_accuracy(Y, -cos_sim, threshold=0))\n
\n

Another approach consists in computing the cosine similarity between the anchor and positive images and comparing it with the similarity between the anchor and the negative images.

\n
eval_model = Model(inputs=anchor_input, outputs=encoded_anchor)\neval_model.load_weights('weights.hdf5')\n\npositive_similarity = tf.keras.losses.cosine_similarity(\n    eval_model(X_anchor), eval_model(X_positive)\n).numpy().mean()\n\nnegative_similarity = tf.keras.losses.cosine_similarity(\n    eval_model(X_anchor), eval_model(X_negative)\n).numpy().mean()\n
\n

We should expect the similarity between the anchor and positive images to be larger than the similarity between the anchor and the negative images.

\n", + "system": "" + }, + { + "instruction": "BatchDataSet: get img array and labels", + "input": "", + "output": "

Just unbatch your dataset and convert the data to lists:

\n
import tensorflow as tf\nimport pathlib\n\ndataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" \ndata_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True)\ndata_dir = pathlib.Path(data_dir) \nbatch_size = 32 \ntrain_ds = tf.keras.utils.image_dataset_from_directory( \n     data_dir, validation_split=0.2, subset="training", \n     seed=123, batch_size=batch_size) \n\ntrain_ds = train_ds.unbatch()\nimages = list(train_ds.map(lambda x, y: x))\nlabels = list(train_ds.map(lambda x, y: y))\n\nprint(len(labels))\nprint(len(images))\n
\n
Found 3670 files belonging to 5 classes.\nUsing 2936 files for training. \n2936 \n2936\n
\n", + "system": "" + }, + { + "instruction": "Keras ImageDataGenerator brightness range", + "input": "", + "output": "

As this article shows, a brightness value of 0 means no brightness (a completely black image), 1 means the original brightness (no change), and anything above 1 makes the image brighter.

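So for random darkening and brightening you would pass a range that straddles 1.0; a small sketch (the range values here are arbitrary):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Values below 1 darken, values above 1 brighten; 1.0 leaves the image as-is.
datagen = ImageDataGenerator(brightness_range=[0.5, 1.5])

# A dummy mid-gray image batch, just to show the generator running
image = np.ones((1, 8, 8, 3), dtype=np.float32) * 100.0
batch = next(datagen.flow(image, batch_size=1))
print(batch.shape)  # (1, 8, 8, 3)
```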
\n", + "system": "" + }, + { + "instruction": "Kernel and Recurrent Kernel in Keras LSTMs", + "input": "", + "output": "

The kernels are basically the weights handled by the LSTM cell

\n

units = neurons, like the classic multilayer perceptron

\n

It is not shown in your diagram, but the input is a vector X with 1 or more values, and each value is fed into a neuron with its own weight w (which we are going to learn with backpropagation)

\n

The four matrices are these (expressed as Wf, Wi, Wc, Wo):

\n

\"The

\n

When you add a neuron, you are adding another 4 weights/kernels

\n

So for your input vector X you have four matrices. Therefore:

\n
1 * 4 * units = kernel\n
\n

Regarding the recurrent_kernel, here you can find the answer.\nBasically, in keras the input and the hidden state are not concatenated as in the example diagrams (W[ht-1, t]); they are split and handled with four other matrices called U:

\n

\n

Because there is one hidden-state value per neuron, the weights U (all four U matrices) amount to:

\n
units * (4 * units) = recurrent kernel\n
\n
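You can verify these counts directly in Keras (a small sketch; the input size and number of units here are arbitrary):

```python
import tensorflow as tf

# For input_dim features and `units` neurons:
#   kernel           -> (input_dim, 4 * units)
#   recurrent_kernel -> (units, 4 * units)
input_dim, units = 1, 8
layer = tf.keras.layers.LSTM(units)
layer.build(input_shape=(None, None, input_dim))

kernel, recurrent_kernel, bias = layer.get_weights()
print(kernel.shape)            # (1, 32)
print(recurrent_kernel.shape)  # (8, 32)
```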

ht-1 comes back in a recurrent way from all your neurons. As in a multilayer perceptron, the output of each neuron goes into all the neurons of the recurrent layer at the next step

\n

source: http://colah.github.io/posts/2015-08-Understanding-LSTMs/

\n", + "system": "" + }, + { + "instruction": "What is the difference between the file extensions .h5 .hdf5 and .ckpt and which one should I use?", + "input": "", + "output": "
\n

What is the difference between the file extensions .h5, .hdf5 and .ckpt ?

\n
\n

.h5 and .hdf5

\n

According to this, both .h5 and .hdf5 are basically the same: a data file saved in the Hierarchical Data Format (HDF), containing multidimensional arrays of scientific data.

\n

And according to this saving a model using that format results in saving the model with the following:

\n
    \n
  1. The weight values.
  2. \n
  3. The model's architecture.
  4. \n
  5. The model's training configuration (what you pass to the .compile() method)
  6. \n
  7. The optimizer and its state, if any (this enables you to restart training where you left off)
  8. \n
\n

.ckpt

\n

It is short for checkpoint; as the name suggests, it is used to save the state of the model during training once a certain condition is met (e.g. loss lower than a given value, or accuracy higher than a given value).

\n

Saving a model as .ckpt has a drawback: it only saves the weights of the variables, not the full graph, so you will need the complete architecture and the functions used to build the model (basically the code) in order to load those weights into it and use the model.

\n

This format is mainly used when you want to resume training; it lets you customize the saved checkpoints and load them as well (so you can keep improving the model, change parameters according to results, and even create different models from different checkpoints).

\n
\n

Which extension should i use ?

\n
\n

It depends on your goal. If you are still in the training process and experimenting a lot, I would suggest saving the model in the .ckpt format.

\n

If you're done experimenting and are finalizing the model, I would suggest saving it in the .h5 format so that you can load and use it without needing the code that created the model architecture.

\n
\n

Also would I need to call model.save(filepath) at the end of the code or would my model be saved automatically by ModelCheckpoint()?

\n
\n

You can call both, but I would suggest using the .ckpt extension in ModelCheckpoint() so that you save the best possible model state during training, and when you are done training, call model.save(filepath) with the .h5 format so that the final model can be saved and used anywhere without the original architecture code.

\n
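A minimal sketch of that workflow; the model, the dummy data and the filenames are placeholders:

```python
import numpy as np
import tensorflow as tf

# A throwaway model just to illustrate the two save paths
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# During training: keep the best state as a checkpoint (weights only)
ckpt = tf.keras.callbacks.ModelCheckpoint(
    'best.ckpt', save_weights_only=True, save_best_only=True, monitor='loss')

X, y = np.random.rand(64, 4), np.random.rand(64, 1)
model.fit(X, y, epochs=2, callbacks=[ckpt], verbose=0)

# After training: save the full, self-contained model as HDF5
model.save('final_model.h5')
restored = tf.keras.models.load_model('final_model.h5')
```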

That way you keep the option to resume training from the .ckpt model, or, if you are satisfied with the final result, use the .h5 model as the final version.

\n", + "system": "" + }, + { + "instruction": "Python Keras code out of memory for no apparent reason", + "input": "", + "output": "

I was able to replicate your issue in a notebook on a Google Colab GPU instance with 12GB of memory. After 6 iterations my memory spiked to ~2.5GB, then to ~6GB at 50 iterations, and then the kernel died.

\n

By just calling the garbage collector in every inner loop, memory stabilized at ~1GB and I was able to continue past 2 outer iterations (then I canceled it).

\n

My suspicion as to why this happens is that tensorflow creates references across iterations faster than the garbage collector collects them by default.

\n
import gc\nimport tensorflow as tf\nfrom sklearn.datasets import fetch_openml\nfrom sklearn.utils import shuffle\nimport numpy as np\n\ndata, targets = shuffle(*fetch_openml('CIFAR_10', version=1, return_X_y=True))\n\ntrain_sz = 50000\nX_train, X_test, y_train, y_test = data[:train_sz, :], data[train_sz:, :], np.asarray(targets[:train_sz], dtype=np.int), np.asarray(targets[train_sz:], dtype=np.int)\n\nmodel = tf.keras.Sequential()\nmodel.add(tf.keras.Input(shape=(X_train.shape[1],)))\nmodel.add(tf.keras.layers.Dense(64, activation='relu'))\nmodel.add(tf.keras.layers.Dense(10))\nmodel.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer='adam')\n\ns = 0\nfor _ in range(500):\n    for i in range(100):\n        gc.collect()\n        layers = []\n        for layer in model.get_weights():\n            layers.append(np.random.normal(0, 1, layer.shape))\n        model.set_weights(layers)\n        eval = model.evaluate(X_train, y_train)\n        s += eval\n        print(f'Done {i} eval {eval}')\ns\n
\n

PS: officially, calling the garbage collector manually is something you want to avoid, but sometimes it gets the job done.

\n", + "system": "" + }, + { + "instruction": "Inverse of keras.Flatten for building autoencoder", + "input": "", + "output": "

Why do you want specifically a (10, 1) shape for the vector?\nYou are then trying to do convolutions on it with a kernel of size 3x3, which does not really make sense.

\n\n

The shape a convolutional layer takes in has height, width and channels. The output of the dense layer has to be reshaped, which can be done with the Reshape layer.\nYou can then reshape it to, for example, 5x2 with a single channel.

\n\n
encoded = Reshape((5, 2, 1))(encoded)\n
\n", + "system": "" + }, + { + "instruction": "Difference between using import keras and import tensorflow.keras?", + "input": "", + "output": "

tensorflow.keras imports use TensorFlow repository code, whereas keras imports use Keras repository code. The two use independent method/class implementations, even if keras imports from tensorflow.

\n\n

Depending on your use, functionality may differ significantly or not much. tensorflow.keras is recommended as it is better maintained and more up-to-date - unless you use TF <2, where the two are mostly on par, apart from performance considerations.

\n", + "system": "" + }, + { + "instruction": "Define action values in keras-rl", + "input": "", + "output": "

I am not sure why self.action_space = spaces.Discrete(3) is giving you actions as 0,2,4, since I cannot reproduce your error with the code snippet you posted, but I would suggest the following for defining your action space

\n
self.action_space = gym.spaces.Box(low=np.array([1]),high= np.array([3]), dtype=np.int)\n
\n

And this what I get when I sample from the action space.

\n
actions= gym.spaces.Box(low=np.array([1]),high= np.array([3]), dtype=np.int)\nfor i in range(10):\n    print(actions.sample())\n\n[1]\n[3]\n[2]\n[2]\n[3]\n[3]\n[1]\n[1]\n[2]\n[3]\n
\n", + "system": "" + }, + { + "instruction": "Convert model.fit_generator to model.fit", + "input": "", + "output": "

You just have to change model.fit_generator() to model.fit().

\n\n

As of TensorFlow 2.1, model.fit() also accepts generators as input. As simple as that.

\n\n
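A small sketch of the change (the model, the dummy generator and its sizes are made up for illustration):

```python
import numpy as np
import tensorflow as tf

def gen(batch_size=8):
    # Hypothetical data generator: yields (features, labels) batches forever
    while True:
        yield np.random.rand(batch_size, 4), np.random.rand(batch_size, 1)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

# Before TF 2.1: model.fit_generator(gen(), steps_per_epoch=10, epochs=1)
# TF >= 2.1: the exact same call works with plain fit()
history = model.fit(gen(), steps_per_epoch=10, epochs=1, verbose=0)
```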

From TensorFlow's official documentation:

\n\n
\n

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future\n version. Instructions for updating: Please use Model.fit, which\n supports generators.

\n
\n", + "system": "" + }, + { + "instruction": "keras variational autoencoder loss function", + "input": "", + "output": "

first_one: CE + mean(kl, axis=-1) = CE + sum(kl, axis=-1) / d

\n\n

second_one: d * CE + sum(kl, axis=-1)

\n\n

So:\nfirst_one = second_one / d

\n\n

And note that the second one returns the mean loss over all the samples, but the first one returns a vector of losses for all samples.

\n", + "system": "" + }, + { + "instruction": "setting bias for multiclass classification python tensorflow keras", + "input": "", + "output": "

Considering that you're using 'softmax':

\n\n
softmax = exp(neurons) / sum(exp(neurons))\n
\n\n

And that you want the results of the classes to be:

\n\n
frequency = [0.4 , 0.3 , 0.2 , 0.08 , 0.02]\n
\n\n

Biases should be given by the equation (elementwise):

\n\n
frequency = exp(biases) / sum(exp(biases))\n
\n\n

This forms a system of equations:

\n\n\n\n

If you can solve this system of equations, you get the biases you want.

\n\n

I used Excel and a trial-and-error method to determine that for the frequencies you wanted, your biases should respectively be:

\n\n
[1.1 , 0.81 , 0.4 , -0.51 , -1.9] \n
\n\n

I don't really know how to solve that system easily, but you can keep experimenting with Excel or another tool until you reach the solution.

\n\n
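Actually, the system has a simple closed-form solution: since softmax is invariant to adding the same constant to all inputs, biases = log(frequency) + c works for any constant c. A quick check with numpy:

```python
import numpy as np

frequency = np.array([0.4, 0.3, 0.2, 0.08, 0.02])

# c = 2.0 is chosen only so the numbers roughly match the values above
biases = np.log(frequency) + 2.0

softmax = np.exp(biases) / np.sum(np.exp(biases))
print(biases.round(2))    # close to [1.1, 0.81, 0.4, -0.51, -1.9]
print(softmax.round(2))   # recovers [0.4, 0.3, 0.2, 0.08, 0.02]
```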
\n\n

Adding the biases to the layer - method 1.

\n\n

Use a name when defining the layer, like:

\n\n
self.last_dense = layers.Dense(units=3, activation=\"softmax\", name='last_layer')\n
\n\n

You may need to build the model first, so:

\n\n
dummy_predictions = model.predict(np.zeros((1,) + input_shape))\n
\n\n

Then you get the weights:

\n\n
weights_and_biases = model.get_layer('last_layer').get_weights()\nw, b = weights_and_biases\nnew_biases = np.array([-0.45752, 0.51344, 0.30730])\nmodel.get_layer('last_layer').set_weights([w, new_biases])\n
\n\n

Method 2

\n\n
def bias_init(bias_shape):\n    return K.variable([-0.45752, 0.51344, 0.30730])\n\nself.last_dense = layers.Dense(units=3, activation=\"softmax\", bias_initializer=bias_init)\n
\n", + "system": "" + }, + { + "instruction": "how to handle unbalanced data for multilabel classification using CNN in Keras?", + "input": "", + "output": "

I am not sure you need to handle the imbalance issue with Keras in particular, rather than with some intuition. One simple way is to use the same amount of data for each class. Of course, that causes another problem: you filter out a lot of samples. But it is still something you can check. Also, when you have imbalanced data it is not a very good idea to just compute overall classification performance, since it does not show how well each class performs individually.

\n\n

You should further calculate the confusion matrix in order to visualize how well each class performs individually. A more detailed approach to imbalanced-data issues can be found in this blog and in here.

\n\n
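One common remedy discussed in such resources is to weight the loss by inverse class frequency. In Keras this can be sketched with the class_weight argument of fit(); this is a single-label toy example, and the data and layer sizes are made up:

```python
import numpy as np
import tensorflow as tf

# Hypothetical imbalanced data: class 0 is 9x more frequent than class 1
y = np.array([0] * 900 + [1] * 100)
X = np.random.rand(1000, 8)

# Inverse-frequency weights: rare classes contribute more to the loss
counts = np.bincount(y)
class_weight = {c: len(y) / (len(counts) * n) for c, n in enumerate(counts)}

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=1, class_weight=class_weight, verbose=0)
```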

The most important thing is to use the right tools to evaluate the performance of your classifier, and to handle the input data as proposed in the links I mentioned.

\n", + "system": "" + }, + { + "instruction": "Using conv1D \u201cError when checking input: expected conv1d_input to have 3 dimensions, but got array with shape (213412, 36)\u201d", + "input": "", + "output": "

There are a couple of problems I notice with your code.

\n\n\n\n
# Here I'm assuming some dummy data\n# Xtrain => [213412, 36, 1] (Note that you need Xtrain to be 3D not 2D - So we're adding a channel dimension of 1)\nXtrain = np.expand_dims(np.random.normal(size=(213412, 36)),axis=-1)\n# Ytrain => [213412, 10]\nYtrain = np.random.choice([0,1], size=(213412,10))\n\nn_timesteps, n_features, n_outputs =Xtrain.shape[0], Xtrain.shape[1], Ytrain.shape[1]\n\nmodel = Sequential()\nmodel.add(Conv1D(filters=64, kernel_size=1, \nactivation='relu',input_shape=(n_features,1)))\n\nmodel.add(Conv1D(filters=64, kernel_size=1, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(MaxPooling1D(pool_size=2))\nmodel.add(Flatten())\nmodel.add(Dense(100, activation='relu'))\nmodel.add(Dense(n_outputs, activation='softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n# fit network\nmodel.fit(Xtrain, Ytrain, epochs=10, batch_size=32, verbose=0)\n
\n", + "system": "" + }, + { + "instruction": "Bert Embedding Layer raises `Type Error: unsupported operand type(s) for +: 'None Type' and 'int'` with BiLSTM", + "input": "", + "output": "

First of all, the result of \"mean\" or \"first\" pooling is not over all the tokens, so you have to change this in the call() function:

\n\n
elif self.pooling == \"mean\": \n    result = self.bert(inputs=bert_inputs, signature=\"tokens\", as_dict=True)[\"sequence_output\" ] \n    pooled = result\n
\n\n

In build_model, change to:

\n\n
embedding_size = 768\nin_id = Input(shape=(max_seq_length,), name=\"input_ids\") \nin_mask = Input(shape=(max_seq_length,), name=\"input_masks\")\nin_segment = Input(shape=(max_seq_length,), name=\"segment_ids\")\n\nbert_inputs = [in_id, in_mask, in_segment] \nbert_output = BertLayer(n_fine_tune_layers=12, pooling=\"mean\")(bert_inputs) \nbert_output = Reshape((max_seq_length, embedding_size))(bert_output) \n\nbilstm = Bidirectional(LSTM(128, dropout=0.2,recurrent_dropout=0.2,return_sequences=True))(bert_output)\noutput = Dense(output_size, activation=\"softmax\")(bilstm)\n\n
\n", + "system": "" + }, + { + "instruction": "How to generate CNN heatmaps using built-in Keras in TF2.0 (tf.keras)", + "input": "", + "output": "

At the end of the GradientTape block, conv_output and grads already hold the values. The iterate function is no longer needed to compute them.

\n\n

Working example below:

\n\n
from tensorflow.keras.applications.vgg16 import preprocess_input\nfrom tensorflow.keras.preprocessing.image import load_img\nfrom tensorflow.keras.models import load_model\n\nfrom tensorflow.keras import preprocessing\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras import models\n\nimport tensorflow as tf\nimport numpy as np\n\nimage_size = 224\n\n# Load pre-trained Keras model and the image to classify\nmodel = tf.keras.applications.vgg16.VGG16()\nimage = np.random.random((image_size, image_size, 3))\nimg_tensor = preprocessing.image.img_to_array(image)\nimg_tensor = np.expand_dims(img_tensor, axis=0)\nimg_tensor = preprocess_input(img_tensor)\n\nconv_layer = model.get_layer(\"block5_conv3\")\nheatmap_model = models.Model([model.inputs], [conv_layer.output, model.output])\n\n# Get gradient of the winner class w.r.t. the output of the (last) conv. layer\nwith tf.GradientTape() as gtape:\n    conv_output, predictions = heatmap_model(img_tensor)\n    loss = predictions[:, np.argmax(predictions[0])]\n    grads = gtape.gradient(loss, conv_output)\n    pooled_grads = K.mean(grads, axis=(0, 1, 2))\n\nheatmap = tf.reduce_mean(tf.multiply(pooled_grads, conv_output), axis=-1)\nheatmap = np.maximum(heatmap, 0)\nmax_heat = np.max(heatmap)\nif max_heat == 0:\n    max_heat = 1e-10\nheatmap /= max_heat\n\nprint(heatmap.shape)\n
\n", + "system": "" + }, + { + "instruction": "When should we inherits keras.Model instead of keras.layers.Layer even if we don't use model.fit?", + "input": "", + "output": "

Update

\n\n

So the comment was: Yes, I know training and serialization utilities exist in Model as I wrote in the question. My question is why TF tutorials need to use Model though they don't use these methods.

\n\n

The best answer can be provided by the authors in this case, because your question asks why they chose one method over the other when both can do the job equally well. Why equally well? Because a Model is just like a Layer, but with added training and serialization utilities.

\n\n

We can argue that using Model when just a Layer can do the job is overkill, but then it may be a matter of taste.

\n\n

Hope it helps

\n\n

PS.

\n\n

In the eager example and custom layer writing tutorials that you provided we cannot replace model with layer, so these tutorials do not apply to your question

\n\n
\n\n

With model you can train but with layer only you cannot. See list of their methods below (excluding inner and inherited ones):

\n\n

tf.keras.layers.Layer

\n\n
activity_regularizer\nactivity_regularizer\nadd_loss\nadd_metric\nadd_update\nadd_variable\nadd_weight\napply\nbuild\ncall\ncompute_mask\ncompute_output_shape\ncount_params\ndtype\ndynamic\nfrom_config\nget_config\nget_input_at\nget_input_mask_at\nget_input_shape_at\nget_losses_for\nget_output_at\nget_output_mask_at\nget_output_shape_at\nget_updates_for\nget_weights\ninbound_nodes\ninput\ninput_mask\ninput_shape\nlosses\nmetrics\nname\nnon_trainable_variables\nnon_trainable_weights\noutbound_nodes\noutput\noutput_mask\noutput_shape\nset_weights\ntrainable\ntrainable\ntrainable_variables\ntrainable_weights\nupdates\nvariables\nweights\n
\n\n

See? There is no fit or evaluate method there.\ntf.keras.Model:

\n\n
\ncompile\nevaluate\nevaluate_generator\nfit\nfit_generator\nget_weights\nload_weights\nmetrics\nmetrics_names\npredict\npredict_generator\npredict_on_batch\nreset_metrics\nrun_eagerly\nrun_eagerly\nsample_weights\ntest_on_batch\ntrain_on_batch\n
\n", + "system": "" + }, + { + "instruction": "What's the difference between tensorflow.python.keras and tensorflow.keras?", + "input": "", + "output": "

Anything under tf.python.* is private, intended for development only, rather than for public use.

\n\n

Importing from tensorflow.python or any other modules (including import tensorflow_core...) is not supported, and can break unannounced.

\n\n

So, it is suggested not to use anything with tf.python.*.

\n", + "system": "" + }, + { + "instruction": "unable to execute plot_model in keras", + "input": "", + "output": "

Try importing from tensorflow directly:

\n\n

from tensorflow.keras.utils import plot_model

\n", + "system": "" + }, + { + "instruction": "Counting CNN layers", + "input": "", + "output": "

When calculating the depth of a CNN network, we only consider the layers that have tunable/trainable weights/parameters. In a CNN, only convolutional layers and fully connected layers have trainable parameters. If you want to label layers, consider only convolutional, fully connected and output layers (Conv2D and Dense).\nMax pooling layers are generally taken together with convolutional layers as one layer.

\n", + "system": "" + }, + { + "instruction": "How to load data from a downloaded tar.gz file in tensorflow/keras?", + "input": "", + "output": "

I think that the best you can do is first extract the tar file:

\n
import tarfile\n\nif fname.endswith("tar.gz"):\n    tar = tarfile.open(fname, "r:gz")\n    tar.extractall()\n    tar.close()\nelif fname.endswith("tar"):\n    tar = tarfile.open(fname, "r:")\n    tar.extractall()\n    tar.close()\n
\n

and then access the model data and load it using keras:

\n

https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model

\n", + "system": "" + }, + { + "instruction": "# of Units for Dense Layer in TensorFlow", + "input": "", + "output": "

Those are called hyperparameters and should be tuned on a validation/test set to tweak your model and get a higher accuracy.

\n\n

Tuning just means trying different combinations of parameters and keep the one with the lowest loss value or better accuracy on the validation set, depending on the problem.

\n\n

There are two basic methods:

\n\n

Grid search: For each parameter, decide a range and steps within that range, like 8 to 64 neurons in powers of two (8, 16, 32, 64), and try each combination of the parameters. This obviously requires an exponential number of models to be trained and tested and takes a lot of time.

\n\n

Random search: Do the same, but just define a range for each parameter and try a random set of parameters drawn from a uniform distribution over each range. You can try as many parameter sets as you want, for as long as you can. This is just an informed random guess.

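The two strategies can be sketched as follows; train_and_evaluate is a placeholder for fitting a model and returning its validation accuracy, and the parameter ranges are made up:

```python
import itertools
import random

def train_and_evaluate(units, lr):
    # Placeholder: build a model with `units` neurons, train with learning
    # rate `lr`, return validation accuracy. Faked here for illustration.
    return random.random()

units_options = [8, 16, 32, 64]      # powers of two, as suggested above
lr_options = [1e-2, 1e-3, 1e-4]

# Grid search: every combination (grows exponentially with the parameter count)
grid_results = {(u, lr): train_and_evaluate(u, lr)
                for u, lr in itertools.product(units_options, lr_options)}

# Random search: a fixed budget of uniform random draws from the same ranges
budget = 5
random_results = {}
for _ in range(budget):
    u, lr = random.choice(units_options), random.choice(lr_options)
    random_results[(u, lr)] = train_and_evaluate(u, lr)

# Keep whichever parameter set scored best
best = max(grid_results, key=grid_results.get)
```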
\n", + "system": "" + }, + { + "instruction": "Using different sample weights for each output in a multi-output Keras model", + "input": "", + "output": "

First of all, let's clear up a misunderstanding:

\n\n

If your model has one output/input layer then you can use Sequential API to construct your model, regardless of the number of neurons in the output and input layers. On the other hand, if your model has multiple output/input layers, then you must use Functional API to define your model (no matter how many neurons the input/output layers might have).

\n\n

Now, you have stated that your model has two output values and for each output value you want to use a different sample weighting. To be able to do that, your model must have two output layers, and then you can set the sample_weight argument as a dictionary containing two weight arrays corresponding to two output layers.

\n\n

To make it more clear, consider this dummy example:

\n\n
from keras import layers\nfrom keras import models \nimport numpy as np\n\ninp = layers.Input(shape=(5,))\n# assign names to output layers for more clarity\nout1 = layers.Dense(1, name='out1')(inp)\nout2 = layers.Dense(1, name='out2')(inp)\n\nmodel = models.Model(inp, [out1, out2])\nmodel.compile(loss='mse',\n              optimizer='adam')\n\n# create some dummy training data as well as sample weight\nn_samples = 100\nX = np.random.rand(n_samples, 5)\ny1 = np.random.rand(n_samples,1)\ny2 = np.random.rand(n_samples,1)\n\nw1 = np.random.rand(n_samples,)\nw2 = np.random.rand(n_samples,)\n\nmodel.fit(X, [y1, y2], epochs=5, batch_size=16, sample_weight={'out1': w1, 'out2': w2})\n
\n", + "system": "" + }, + { + "instruction": "Loss is NaN on image classification task", + "input": "", + "output": "

In my case, I used sparse_categorical_crossentropy with labels numbered from [1,2,3] (3 classes). In this case it produced NaNs from the start.

\n

When I changed the labels from [1,2,3] to [0,1,2] the problem has disappeared.

\n", + "system": "" + }, + { + "instruction": "Target array shape different to expected output using Tensorflow", + "input": "", + "output": "

You need to uncomment the Flatten layer when creating your model. Essentially, this layer takes a 4D input (batch_size, height, width, num_filters) and unrolls it into a 2D one (batch_size, height * width * num_filters). This is needed to get the output shape you want.

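The shape sizes below are arbitrary, but they illustrate what Flatten does:

```python
import tensorflow as tf

x = tf.random.normal((2, 4, 4, 8))      # (batch_size, height, width, num_filters)
y = tf.keras.layers.Flatten()(x)
print(y.shape)                          # (2, 128), since 4 * 4 * 8 = 128
```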
\n", + "system": "" + }, + { + "instruction": "Unexpected results with CuDNNLSTM (instead of LSTM) layer", + "input": "", + "output": "

So here there are two problems.
\nUse of CuDNNLSTM and parameter tuning.
\nBasically, the network overfits on your dataset, which leads to the output being only one sentence for every input. This is neither the fault of CuDNNLSTM nor of LSTM.

\n\n

Firstly,
\nCuDNN uses slightly different maths from the regular LSTM to make it CUDA-compatible and faster. With the same code, the LSTM takes 11 sec per epoch on the eng-hindi file, while CuDNNLSTM takes 1 sec per epoch.

\n\n

In the CuDNNLSTM time_major param is set to false. For this reason the network overfits. You can check it here.
\nYou can clearly see that for small datasets like eng-hin or eng-marathi the val_loss increases after 30 epochs. There is no point in running the network further once your training loss is decreasing while val_loss is increasing. The same applies to LSTM.

\n\n

Here you need param tuning for small datasets.

\n\n

Here are a few links which can help:

\n\n
    \n
  1. Eng-Mar
  2. \n
  3. Pytorch translation tutorial
  4. \n
5. Similar Question 1 and Similar Question 2
  6. \n
  7. NMT-keras
  8. \n
\n", + "system": "" + }, + { + "instruction": "Why do we have to import keras from tensorflow if we have already imported tensorflow in python?", + "input": "", + "output": "

When you import a module in Python, you import only that module (along with the modules it imports itself); any submodules of the imported package need to be imported on their own.

\n\n

example

\n\n
$ ptree\n.\n\u251c\u2500\u2500 imports.py\n\u2514\u2500\u2500 pmod\n    \u251c\u2500\u2500 cmod.py\n    \u2514\u2500\u2500 __init__.py\n\n1 directory, 3 files\n\n$ cat imports.py \nimport pmod\n\npmod.do()\npmod.cmod.do()\n\n$ python3 imports.py \nhello from pmod\nTraceback (most recent call last):\n  File \"imports.py\", line 4, in <module>\n    pmod.cmod.do()\nAttributeError: module 'pmod' has no attribute 'cmod'\n
\n\n

however if I import it explicitly it does work

\n\n
$ cat imports.py \nimport pmod\nimport pmod.cmod\n\npmod.do()\npmod.cmod.do()\n\n$ python3 imports.py \nhello from pmod\nhello from cmod\n
\n", + "system": "" + }, + { + "instruction": "Does keras-tokenizer perform the task of lemmatization and stemming?", + "input": "", + "output": "

There might be some confusion about what a tokenizer does and what tokenization is. Tokenization splits a string into smaller entities such as words or single characters, which are therefore referred to as tokens. Wikipedia provides a nice example:

\n\n

The quick brown fox jumps over the lazy dog becomes:

\n\n
<sentence>\n  <word>The</word>\n  <word>quick</word>\n  ...\n  <word>dog</word>\n</sentence>\n
\n\n

Lemmatization (grouping together the inflected forms of a word -> link) or stemming (process of reducing inflected (or sometimes derived) words to their word stem -> link) is something you do during preprocessing. Tokenization can be a part of a preprocessing process before or after (or both) lemmatization and stemming.

\n\n

Anyhow, Keras is not a framework for fully fledged text preprocessing. Hence, you feed already cleaned, lemmatized, etc. data into Keras. Regarding your first question: no, Keras does not provide functionality like lemmatization or stemming.

\n\n

What Keras understands by text preprocessing, as here in the docs, is the functionality to prepare data to be fed to a Keras model (like a Sequential model). This is, for example, why the Keras Tokenizer does this:

\n\n
\n

This class allows to vectorize a text corpus, by turning each text\n into either a sequence of integers (each integer being the index of a\n token in a dictionary) or into a vector where the coefficient for each\n token could be binary, based on word count, based on tf-idf...

\n
\n\n

By vectorizing your input strings and transforming them into numeric data, you can, in the case of Keras, feed them as input to a neural network.

\n\n

What text_to_sequence means can be extracted from this: [...]sequence of integers (each integer being the index of a token in a dictionary)[...].\nThis means that your former strings can afterwards be a sequence (e.g. array) of numeric integers instead of actual words.

\n\n
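A tiny illustration of that vectorization; note there is no lemmatization or stemming involved, only lowercasing and splitting:

```python
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ["The quick brown fox", "The lazy dog"]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)

# Each word is mapped to an integer index (most frequent word first)...
print(tokenizer.word_index)
# ...and each text becomes a sequence of those indices
print(tokenizer.texts_to_sequences(texts))
```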

Regarding this, you should also take a look at what Keras Sequential models are (e.g. here), since they take sequences as input.

\n\n

Additionally, text_to_word_sequence() (docs) also provides such tokenization, but it does not vectorize your data into numeric vectors; it returns an array of your tokenized strings.

\n\n
\n

Converts a text to a sequence of words (or tokens).

\n
\n", + "system": "" + }, + { + "instruction": "Keras multi-label image classification with F1-score", + "input": "", + "output": "

Based on user706838 answer ...

\n\n

use the f1_score in https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric

\n\n
import tensorflow as tf\nimport keras.backend as K\n\ndef f1_loss(y_true, y_pred):\n\n    tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)\n    tn = K.sum(K.cast((1-y_true)*(1-y_pred), 'float'), axis=0)\n    fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)\n    fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)\n\n    p = tp / (tp + fp + K.epsilon())\n    r = tp / (tp + fn + K.epsilon())\n\n    f1 = 2*p*r / (p+r+K.epsilon())\n    f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)\n    return 1 - K.mean(f1)\n
\n", + "system": "" + }, + { + "instruction": "Custom Keras Data Generator with yield", + "input": "", + "output": "

You are using the Sequence API, which works a bit different than plain generators. In a generator function, you would use the yield keyword to perform iteration inside a while True: loop, so each time Keras calls the generator, it gets a batch of data and it automatically wraps around the end of the data.

\n\n

But in a Sequence, there is an index parameter to the __getitem__ function, so no iteration or yield is required, this is performed by Keras for you. This is made so the sequence can run in parallel using multiprocessing, which is not possible with old generator functions.

\n\n
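A minimal Sequence sketch showing the difference (the data and batch size here are arbitrary):

```python
import numpy as np
from tensorflow.keras.utils import Sequence

class MySequence(Sequence):
    """Keras calls __getitem__(index) itself, so no yield is needed."""
    def __init__(self, X, y, batch_size=32):
        self.X, self.y, self.batch_size = X, y, batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.X) / self.batch_size))

    def __getitem__(self, index):
        # simply return batch number `index`
        s = slice(index * self.batch_size, (index + 1) * self.batch_size)
        return self.X[s], self.y[s]

seq = MySequence(np.arange(10).reshape(10, 1), np.arange(10), batch_size=4)
print(len(seq))         # 3 batches per epoch
print(seq[2][0].shape)  # the last batch holds the remaining 2 samples
```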

So you are doing things the right way, there is no change needed.

\n", + "system": "" + }, + { + "instruction": "Convolving Across Channels in Keras CNN: Conv1D, Depthwise Separable Conv, CCCP?", + "input": "", + "output": "

Let me explain the operations you mentioned in a bit of detail so you understand the differences between their intuition and usage:

\n

Cascaded cross-channel parametric pooling:

\n

This is introduced in the Network-in-Network paper and is implemented in Keras as GlobalAveragePooling2D(). This operation averages over the output of each feature map in the previous layers.

\n

It is a structural regularizer that enforces correspondence between feature maps\nand categories, so feature maps can be interpreted as category confidence. It reduces parameter count and sums up spatial information and hence, it is more robust to spatial translations of the input.

\n

GlobalAveragePooling2D() is generally used without Dense() layers in the model before it.

\n

Conv1D:

\n

Conv1D() is a convolution operation analogous to Conv2D(), but it applies along only one dimension. Conv1D() is generally used on sequences or other 1D data, not so much on images.

\n

Depthwise Separable Convolution:

\n

Quoting from the Keras documentation

\n
\n

Separable convolutions consist in first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. The depth_multiplier argument controls how many output channels are generated per input channel in the depthwise step.

\n
\n

This blog explains the depthwise separable convolution pretty well.

\n

Conv2D(num_filters, (1, 1)):

\n

This is generally known as 1x1 convolution, introduced in the Network-in-Network paper.

\n

The 1x1 convolutional filters are used to reduce/increase dimensionality in the filter dimension, without affecting the spatial dimensions. This is also used in the Google Inception architecture for dimensionality reduction in filter space.

\n
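To see why a 1x1 convolution only mixes channels, here is a rough numpy sketch (the sizes are arbitrary): it is a per-pixel linear map across channels, i.e. (H, W, C_in) -> (H, W, C_out) through a single (C_in, C_out) weight matrix, leaving the spatial dimensions untouched.

```python
import numpy as np

H, W, C_in, C_out = 4, 4, 64, 8
x = np.random.rand(H, W, C_in)
kernel = np.random.rand(C_in, C_out)

# What Conv2D(C_out, (1, 1)) computes, ignoring the bias term:
y = x @ kernel
print(y.shape)  # (4, 4, 8)
```

This is exactly the "dimensionality reduction in filter space" use: 64 channels are compressed to 8 without touching H and W.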

In your particular case, I am not exactly sure which of these techniques you should use. I do not think Conv1D would be of much use. You can definitely use GlobalMaxPooling or GlobalAveragePooling as long as you do not use Dense before them; this helps summarize spatial information. Depthwise separable convolution can also be used in place of your Conv2D layers. Conv2D(num_filters, (1, 1)) is very helpful for dimensionality reduction in filter space, mostly towards the end of your model architecture.

\n

If you follow the resources above, you will get a better understanding of these operations and see how they apply to your problem.

\n", + "system": "" + }, + { + "instruction": "How to feed Bert embeddings to LSTM", + "input": "", + "output": "

You can create a model that first uses an Embedding layer, followed by an LSTM and then a Dense layer, such as here:

\n\n
deep_inputs = Input(shape=(length_of_your_data,))\nembedding_layer = Embedding(vocab_size, output_dim = 3000, trainable=True)(deep_inputs)\nLSTM_Layer_1 = LSTM(512)(embedding_layer) \ndense_layer_1 = Dense(number_of_classes, activation='softmax')(LSTM_Layer_1) \nmodel_AdGroups = Model(inputs=deep_inputs, outputs=dense_layer_1) \n
\n", + "system": "" + }, + { + "instruction": "Raise 'Unable to create group (name already exists)' Error when save model in Keras", + "input": "", + "output": "

If I need to use an op like tf.tile, as in your case, I call it with a Lambda layer. The code that works is as follows:

\n
import numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.layers import Input, Lambda\nfrom tensorflow.keras import Model\n\ndef my_fun(a):\n  out = tf.tile(a, (1, tf.shape(a)[0]))\n  return out\n\na = Input(shape=(10,))\n#out = tf.tile(a, (1, tf.shape(a)[0]))\nout = Lambda(lambda x : my_fun(x))(a)\nmodel = Model(a, out)\n\nx = np.zeros((50,10), dtype=np.float32)\nprint(model(x).numpy())\n\nmodel.save('my_model.h5')\n\n#load the model\nnew_model=tf.keras.models.load_model("my_model.h5")\n
\n

Anyone facing a similar issue, please follow the GitHub issue related to this problem for a final solution. Thanks!

\n

Edit on 02/04/2020

\n

With the recent code modifications, you could use tf-nightly to save the model in "h5" format without any issue as shown below.

\n
import numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.layers import Input\nfrom tensorflow.keras import Model\n\na = Input(shape=(10,))\nout = tf.tile(a, (1, tf.shape(a)[0]))\nmodel = Model(a, out)\n\nx = np.zeros((50,10), dtype=np.float32)\nprint(model(x).numpy())\n\nmodel.save('./my_model', save_format='h5')\n
\n", + "system": "" + }, + { + "instruction": "Why is TimeDistributed not needed in my Keras LSTM?", + "input": "", + "output": "

In your case the 2 models you define are identical.

\n\n

This is caused by the fact that you use the return_sequences=True parameter, which means that the Dense layer is applied to every timestep, just like TimeDistributedDense. But if you switch it to False, then the two models are no longer identical: an error is raised in the TimeDistributedDense version, though not in the Dense one.

\n\n

A more thorough explanation is provided here also to a similar situation.

\n", + "system": "" + }, + { + "instruction": "Batch Normalization in tf.keras does not calculate average mean and average variance", + "input": "", + "output": "

This is because tf.keras.layers.BatchNormalization inherits from tf.keras.layers.Layer. The Keras API handles update ops as part of its fit and evaluate loops, which means it does not add its update ops to the tf.GraphKeys.UPDATE_OPS collection on its own.

\n\n

So in order to make it work, you need to update it manually

\n\n
hidden = tf.keras.layers.Dense(units, activation=None)(out)\nbatch_normed = tf.keras.layers.BatchNormalization(trainable=True) \nlayer = batch_normed(hidden)\n
\n\n

This creates a separate class instance whose update ops can be referenced.

\n\n
tf.add_to_collection(tf.GraphKeys.UPDATE_OPS, batch_normed.updates)\n
\n\n

And this adds the updates to the needed collection. Also take a look at https://github.com/tensorflow/tensorflow/issues/25525

\n", + "system": "" + }, + { + "instruction": "what's the meaning of input_length in Keras.ctc_batch_cost()", + "input": "", + "output": "

CTC loss for one example is calculated on a 2D array of shape (T, C). C must equal the number of characters + 1 (for the blank character). Each row of C values contains the probability distribution over characters at one time step. T is the number of time steps.

\n\n

T should be at least 2 * max_string_length, so there is room for blanks between repeated characters. All possible encodings of y_true of length T will be used in the negative log loss calculation.

\n\n
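As a shape sketch (all sizes here are made up for illustration): with an alphabet of 5 characters plus 1 blank, C = 6, and input_length just records how many of the T time steps are valid for each sample:

```python
import numpy as np

B, T, C = 2, 10, 6                            # batch, time steps, chars + blank
y_pred = np.random.rand(B, T, C)
y_pred /= y_pred.sum(axis=-1, keepdims=True)  # per-step distribution over C
input_length = np.full((B, 1), T)             # valid time steps per sample
label_length = np.array([[4], [3]])           # true string lengths per sample
print(y_pred.shape, input_length.shape, label_length.shape)
```

These are the shapes ctc_batch_cost expects for y_pred, input_length and label_length respectively.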

input_length is usually taken from the time dimension of the previous layer's output.

\n", + "system": "" + }, + { + "instruction": "'tuple' object is not an iterator", + "input": "", + "output": "

As the error says, you're passing a tuple instead of a generator object. fit_generator() expects a generator object. Internally, it would call next() on the generator object to get a batch of data.

\n\n

If I call next() on a tuple, I get the same error:

\n\n
>>> next((1,2))\nTraceback (most recent call last):\n  File \"<input>\", line 1, in <module>\n    next((1,2))\nTypeError: 'tuple' object is not an iterator\n\n>>> sample_generator = ((1,2) for i in range(3))\n>>> x,y = next(sample_generator)\n>>> x,y\n(1, 2)\n
\n", + "system": "" + }, + { + "instruction": "different prediction after load a model in keras", + "input": "", + "output": "

Are you pre-processing your data in the same way when you load your model?

\n\n

And if yes, did you set the seed of your pre-processing functions?\nIf you build a dictionary with Keras, do the sentences come in the same order?

\n", + "system": "" + }, + { + "instruction": "Keras ImageDataGenerator: problem with data and label shape", + "input": "", + "output": "

It is very likely that cats['images'] and cats['labels'] are Python lists. First convert them to arrays using np.array and then pass them to the flow method:

\n\n
cats['images'] = np.array(cats['images'])\ncats['labels'] = np.array(cats['labels'])\n
\n", + "system": "" + }, + { + "instruction": "ValueError: You are trying to load a weight file containing 6 layers into a model with 0", + "input": "", + "output": "
    \n
  1. Drop InputLayer and use input_shape in the first layer. Your code will be similar to:

    \n\n
    model = Sequential()\nmodel.add(Conv2D(filters=20,..., input_shape=(36, 120, 1)))
    \n\n

    It seems models with InputLayer are not serialized to HDF5 correctly.

  2. \n
  3. Upgrade your Tensorflow and Keras to the latest version

  4. \n
  5. Fix the interpreter problem as explained here

  6. \n
\n", + "system": "" + }, + { + "instruction": "Custom Hebbian Layer Implementation in Keras - input/output dims and lateral node connections", + "input": "", + "output": "

Okay, I think I figured it out, sort of. There were many small problems, but the biggest one was that I needed to add the compute_output_shape function, which makes the layer able to modify the shape of its input, as explained here:\nhttps://keras.io/layers/writing-your-own-keras-layers/

\n\n

So here is the code with all the changes I made. It will compile and modify the input shape just fine. Note that this layer computes weight changes inside the layer itself, and there may be some issues with that if you try to actually use the layer (I'm still ironing those out), but that is a separate issue.

\n\n
class Hebbian(Layer):\n\n\ndef __init__(self, output_dim, lmbda=1.0, eta=0.0005, connectivity='random', connectivity_prob=0.25, **kwargs):\n    '''\n    Constructor for the Hebbian learning layer.\n\n    args:\n        output_dim - The shape of the output / activations computed by the layer.\n        lambda - A floating-point valued parameter governing the strength of the Hebbian learning activation.\n        eta - A floating-point valued parameter governing the Hebbian learning rate.\n        connectivity - A string which determines the way in which the neurons in this layer are connected to\n            the neurons in the previous layer.\n    '''\n    self.output_dim = output_dim\n    self.lmbda = lmbda\n    self.eta = eta\n    self.connectivity = connectivity\n    self.connectivity_prob = connectivity_prob\n\n    super(Hebbian, self).__init__(**kwargs)\n\n\n\ndef random_conn_init(self, shape, dtype=None):\n    A = np.random.normal(0, 1, shape)\n    A[self.B] = 0\n    return tf.constant(A, dtype=tf.float32)\n\n\ndef zero_init(self, shape, dtype=None):\n    return np.zeros(shape)\n\n\ndef build(self, input_shape):\n    # create weight variable for this layer according to user-specified initialization\n    if self.connectivity == 'random':\n        self.B = np.random.random(input_shape[0]) < self.connectivity_prob\n    elif self.connectivity == 'zero':\n        self.B = np.zeros(self.output_dim)\n\n    if self.connectivity == 'all':\n        self.kernel = self.add_weight(name='kernel', shape=(np.prod(input_shape[1:]), \\\n                    np.prod(self.output_dim)), initializer='uniform', trainable=False)\n    elif self.connectivity == 'random':\n        self.kernel = self.add_weight(name='kernel', shape=(np.prod(input_shape[1:]), \\\n                    np.prod(self.output_dim)), initializer=self.random_conn_init, trainable=False)\n    elif self.connectivity == 'zero':\n        self.kernel = self.add_weight(name='kernel', shape=(np.prod(input_shape[1:]), \\\n    
                np.prod(self.output_dim)), initializer=self.zero_init, trainable=False)\n    else:\n        raise NotImplementedError\n\n\n    # call superclass \"build\" function\n    super(Hebbian, self).build(input_shape)\n\n\ndef call(self, x):  # x is the input to the network\n    x_shape = tf.shape(x)\n    batch_size = tf.shape(x)[0]\n\n    # reshape to (batch_size, product of other dimensions) shape\n    x = tf.reshape(x, (tf.reduce_prod(x_shape[1:]), batch_size))\n\n    # compute activations using Hebbian-like update rule\n    activations = x + self.lmbda * tf.matmul(self.kernel, x)  \n\n\n    # compute outer product of activations matrix with itself\n    outer_product = tf.matmul(tf.expand_dims(x, 1), tf.expand_dims(x, 0)) \n\n    # update the weight matrix of this layer\n    self.kernel = self.kernel + tf.multiply(self.eta, tf.reduce_mean(outer_product, axis=2)) \n    self.kernel = tf.multiply(self.kernel, self.B)\n    return K.reshape(activations, x_shape)\n\ndef compute_output_shape(self, input_shape):\n    return (input_shape[0], self.output_dim)\n
\n", + "system": "" + }, + { + "instruction": "What does keras normalize axis argument does?", + "input": "", + "output": "

The normalize function just performs a regular normalization to improve performance:

\n
\n

Normalization is a rescaling of the data from the original range so\nthat all values are within the range of 0 and 1.

\n
\n

There is a nice explanation of the axis argument in another post:

\n
\n

What is the meaning of axis=-1 in keras.argmax?

\n
\n

For example:

\n
\n

Your data has some shape (19,19,5,80). This means:

\n\n
\n

Also, for those who want to go deeper, there is an explanation from Fran\u00e7ois Chollet (Keras\u2019 author) on GitHub:

\n
\n\n

https://github.com/fchollet/keras/issues/1921

\n
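For reference, keras.utils.normalize performs (by default) an L2 rescaling along the chosen axis, so each slice along that axis ends up with unit norm. A small numpy sketch of the same computation:

```python
import numpy as np

x = np.array([[3.0, 4.0]])
# L2-normalize along the last axis: divide each row by its Euclidean norm (5.0)
normalized = x / np.linalg.norm(x, axis=-1, keepdims=True)
print(normalized)  # [[0.6 0.8]]
```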
\n", + "system": "" + }, + { + "instruction": "Different results for tensorflowjs and keras on same model and tensor", + "input": "", + "output": "

Upgrading to the latest version of tfjs (currently 0.13.3) solves the issue.\nMore context to the question can be viewed here and there

\n\n
<script src=\"https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.13.3\"></script>\n
\n", + "system": "" + }, + { + "instruction": "How to name keras model based on functional api", + "input": "", + "output": "

Well, I experimented a little and it worked: to name a functional-API-based Keras model, just pass the name argument to Model as follows:

\n\n
Model(inputs=inputs, outputs=dense_out, name=name)\n
\n\n

and its name can be accessed in the same way we access a Sequential model's, i.e.

\n\n
model.name\n
\n\n

Naming models comes in handy when you are working with a bunch of models and want to save them all as well.

\n", + "system": "" + }, + { + "instruction": "Correct payload for TensorFlow Serving REST API", + "input": "", + "output": "

I give an example here with the Estimator API; I hope it can help someone who runs into similar problems.

\n\n

To export a SavedModel with Estimator, you need an input_receiver_fn to accept inputs when serving. The input_receiver_fn in my application is as follows:

\n\n
def _serving_input_receiver_fn():\n  serialized_tf_sample = tf.placeholder(dtype=tf.string,\n                                        shape=None, name='input_example_tensor')\n  receiver_tensors = {'example': serialized_tf_sample}\n  # example_proto: {'feature_name': tf.VarLenFeature(tf.int64),}\n  features = tf.parse_example(serialized_tf_sample, example_proto)\n  return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)\n
\n\n

You can put the following code under train_and_evaluate to export a SavedModel

\n\n
estimator.export_savedmodel(model_export_path, _serving_input_receiver_fn)\n
\n\n

To serve the model you can pull the tensorflow/serving Docker image; you can refer to https://www.tensorflow.org/serving/docker for help. (I suggest you pull the image with the devel tag, since it's better for debugging.)

\n\n

Simply run the below command to start serving

\n\n

/usr/local/bin/tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=my_model --model_base_path my_model_path

\n\n

The client code is simple but needs care, because the serialized example should be base64-encoded and wrapped under a b64 key.

\n\n
import requests\nresp = requests.post('http://host:8501/v1/models/my_model:predict', json={\n        'instances': [\n            {'example': {'b64':base64.b64encode(single_example.SerializeToString())}}\n        ]})\nresp.json()\n
\n\n

If you have any question, just comment below.

\n", + "system": "" + }, + { + "instruction": "Keras try save and load model error You are trying to load a weight file containing 16 layers into a model with 0 layers", + "input": "", + "output": "

I had the same problem and, looking around for a solution, loaded the model in a different way to apply my trained model: I used VGG16 as the model, loaded the .h5 weights I had trained on my own, and it worked!

\n\n
weights_model='C:/Anaconda/weightsnew2.h5'  # my already trained \nweights .h5\nvgg=applications.vgg16.VGG16()\ncnn=Sequential()\nfor capa in vgg.layers:\n    cnn.add(capa)\ncnn.layers.pop()\nfor layer in cnn.layers:\n    layer.trainable=False\ncnn.add(Dense(2,activation='softmax'))  \n\ncnn.load_weights(weights_model)\n\ndef predict(file):\n    x = load_img(file, target_size=(longitud, altura)) \n    x = img_to_array(x)                            \n    x = np.expand_dims(x, axis=0)\n    array = cnn.predict(x)     \n    result = array[0]\n    respuesta = np.argmax(result) \n    if respuesta == 0:\n        print(\"Gato\")\n    elif respuesta == 1:\n        print(\"Perro\")\n
\n", + "system": "" + }, + { + "instruction": "In Keras, why must the loss function be computed based upon the output of the neural network?", + "input": "", + "output": "

It turns out that it's a bit tricky. In fact, Keras does not need y_pred to be in the loss function. However, it needs all trainable variables to be referenced in the loss function.

\n\n

When you call m3.fit(), Keras will perform a gradient computation between your loss function and the trainable weights of your layers. If your loss function does not reference the same elements that you have in the trainable_variables collection, some of the gradient computation operations will not be possible.

\n\n

So how do you avoid it? Well, you could reference y_pred, even if it doesn't do anything. Or you could freeze the layers that won't be impacted by the optimizer (as you don't compute their loss anyway).

\n\n

So in your case, you just have to freeze your output layer :

\n\n
output = Dense(1, trainable = False)(hidden_a)\n
\n", + "system": "" + }, + { + "instruction": "Multiple Embedding layers for Keras Sequential model", + "input": "", + "output": "

This can be done easily if you switch to the functional API, first have a read. Then you can build a model with multiple inputs that represent different columns:

\n\n
col1, col2, col3 = Input(shape=(10,)), Input(shape=(10,)), ...\ncol1_embeded = Embedding(500, 64)(col1)\ncol2_embedded = Embedding(100, 64)(col2)\n# ...\n
\n\n

The gist of this is that layers are callable objects that build the computation graph. You can also, for example, share embedding layers between columns by simply using the same Embedding layer.

\n", + "system": "" + }, + { + "instruction": "Tensorflow flatten vs numpy flatten function effect on machine learning training", + "input": "", + "output": "

The biggest difference between np.flatten and tf.layers.flatten (or tf.contrib.layers.flatten) is that numpy operations are applicable only to static nd arrays, while tensorflow operations can work with dynamic tensors. Dynamic in this case means that the exact shape will be known only at runtime (either training or testing).

\n\n

So my recommendation is pretty simple:

\n\n\n", + "system": "" + }, + { + "instruction": "How to vertically stack trained models in keras?", + "input": "", + "output": "
inputA = Input(input_shape_for_A)\noutputA = modelA(inputA)\noutputB = modelB(outputA)\n\nmodelC = Model(inputA, outputB)\n
\n", + "system": "" + }, + { + "instruction": "How ensure that Keras is using GPU with tensorflow backend?", + "input": "", + "output": "

You need to run your network with log_device_placement = True set in the TensorFlow session (the second-to-last line in the sample code below). Interestingly enough, if you set that in a session, it will still apply when Keras does the fitting. So the code below (tested) does output the placement for each tensor. Please note, I've short-circuited the data reading because your data wasn't available, so I'm just running the network with random data. This way the code is self-contained and runnable by anyone. Another note: if you run this from a Jupyter Notebook, the log_device_placement output will go to the terminal where the Jupyter Notebook was started, not to the notebook cell's output.

\n\n
from tensorflow.contrib.keras.api.keras.models import Sequential\nfrom tensorflow.contrib.keras.api.keras.layers import Dense\nfrom tensorflow.contrib.keras.api.keras.layers import Dropout\nfrom tensorflow.contrib.keras.api.keras import utils as np_utils\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\n# Read data\n#pddata=pd.read_csv('data/data.csv', delimiter=';')\npddata = \"foobar\"\n\n# Helper function (prepare & test data)\ndef split_to_train_test (data):\n\n    return (\n        np.random.uniform( size = ( 100, 101 ) ),\n        np.random.uniform( size = ( 100, 101 ) ),\n        np.random.randint( 0, size = ( 100 ), high = 3 ),\n        np.random.randint( 0, size = ( 100 ), high = 3 )\n    )\n\n    trainLenght = len(data) - len(data)//10\n\n    trainData = data.loc[:trainLenght].sample(frac=1).reset_index(drop=True)\n    testData = data.loc[trainLenght+1:].sample(frac=1).reset_index(drop=True)\n\n    trainLabels = trainData.loc[:,\"Label\"].as_matrix()\n    testLabels = testData.loc[:,\"Label\"].as_matrix()\n\n    trainData = trainData.loc[:,\"Feature 0\":].as_matrix()\n    testData  = testData.loc[:,\"Feature 0\":].as_matrix()\n\n    return (trainData, testData, trainLabels, testLabels)\n\n# prepare train & test data\n(X_train, X_test, y_train, y_test) = split_to_train_test (pddata)\n\n# Convert labels to one-hot notation\nY_train = np_utils.to_categorical(y_train, 3)\nY_test  = np_utils.to_categorical(y_test, 3)\n\n# Define model in Keras\ndef create_model(init):\n    model = Sequential()\n    model.add(Dense(101, input_shape=(101,), kernel_initializer=init, activation='tanh'))\n    model.add(Dense(101, kernel_initializer=init, activation='tanh'))\n    model.add(Dense(101, kernel_initializer=init, activation='tanh'))\n    model.add(Dense(101, kernel_initializer=init, activation='tanh'))\n    model.add(Dense(3, kernel_initializer=init, activation='softmax'))\n    return model\n\n# Train the model\nuniform_model = 
create_model(\"glorot_normal\")\nuniform_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])\nwith tf.Session( config = tf.ConfigProto( log_device_placement = True ) ):\n    uniform_model.fit(X_train, Y_train, batch_size=1, epochs=300, verbose=1, validation_data=(X_test, Y_test)) \n
\n\n

Terminal output (partial, it was way too long):

\n\n
\n

...
\n VarIsInitializedOp_13: (VarIsInitializedOp): /job:localhost/replica:0/task:0/device:GPU:0
\n 2018-04-21 21:54:33.485870: I tensorflow/core/common_runtime/placer.cc:884]
\n VarIsInitializedOp_13: (VarIsInitializedOp)/job:localhost/replica:0/task:0/device:GPU:0
\n training/SGD/mul_18/ReadVariableOp: (ReadVariableOp): /job:localhost/replica:0/task:0/device:GPU:0
\n 2018-04-21 21:54:33.485895: I tensorflow/core/common_runtime/placer.cc:884]
\n training/SGD/mul_18/ReadVariableOp: (ReadVariableOp)/job:localhost/replica:0/task:0/device:GPU:0
\n training/SGD/Variable_9/Read/ReadVariableOp: (ReadVariableOp): /job:localhost/replica:0/task:0/device:GPU:0
\n 2018-04-21 21:54:33.485903: I tensorflow/core/common_runtime/placer.cc:884]
\n training/SGD/Variable_9/Read/ReadVariableOp: (ReadVariableOp)/job:localhost/replica:0/task:0/device:GPU:0
\n ...

\n
\n\n

Note the GPU:0 at the end of many lines.

\n\n

Tensorflow manual's relevant page: Using GPU: Logging Device Placement.

\n", + "system": "" + }, + { + "instruction": "Create model using one - hot encoding in Keras", + "input": "", + "output": "

Cool, you cleaned up the question. You want to classify a sentence, and I am assuming you want to do better than a bag-of-words encoding by placing importance on the word sequence.

\n\n

We'll choose a new model then: an RNN (the LSTM version). This model effectively sums over the importance of each word (in sequence) as it builds up a representation of the sentence that best fits the task.

\n\n

But we're going to have to handle the preprocessing a bit differently. For efficiency (so that we can process more sentences together in a batch, as opposed to a single sentence at a time), we want all sentences to have the same number of words. So we choose a max_words, say 20, pad shorter sentences to reach max_words, and cut sentences longer than 20 words down.

\n\n

Keras is going to help with that. We'll encode every word as an integer.

\n\n
from keras.preprocessing.sequence import pad_sequences\nfrom keras.utils import to_categorical\nfrom keras.models import Sequential\nfrom keras.layers import Embedding, Dense, LSTM\n\nnum_classes = 5 \nmax_words = 20\nsentences = [\"The cat is in the house\",\n                           \"The green boy\",\n            \"computer programs are not alive while the children are\"]\nlabels = np.random.randint(0, num_classes, 3)\ny = to_categorical(labels, num_classes=num_classes)\n\nwords = set(w for sent in sentences for w in sent.split())\nword_map = {w : i+1 for (i, w) in enumerate(words)}\nsent_ints = [[word_map[w] for w in sent] for sent in sentences]\nvocab_size = len(words)\n
\n\n

So \"the green boy\" might be [1, 3, 5] now. \nThen we'll pad and one-hot encode with

\n\n
# pad to max_words length and encode with len(words) + 1  \n# + 1 because we'll reserve 0 add the padding sentinel.\nX = np.array([to_categorical(pad_sequences((sent,), max_words),  \n       vocab_size + 1)  for sent in sent_ints])\nprint(X.shape) # (3, 20, 16)\n
\n\n

Now to the model: we'll add a Dense layer to convert those one-hot words to dense vectors. Then we'll use an LSTM to convert the word vectors in a sentence to a dense sentence vector. And finally we'll use softmax activation to produce a probability distribution over the classes.

\n\n
model = Sequential()\nmodel.add(Dense(512, input_shape=(max_words, vocab_size + 1)))\nmodel.add(LSTM(128))\nmodel.add(Dense(5, activation='softmax'))\nmodel.compile(loss='categorical_crossentropy',\n          optimizer='adam',\n          metrics=['accuracy'])\n
\n\n

That should compile. You can then carry on with training.

\n\n
model.fit(X,y)\n
\n\n

EDIT:

\n\n

this line:

\n\n
# we need to split the sentences into words; right now it reads every\n# letter -- notice the sent.split() in the correct version below.\nsent_ints = [[word_map[w] for w in sent] for sent in sentences]\n
\n\n

should be:

\n\n
sent_ints = [[word_map[w] for w in sent.split()] for sent in sentences]\n
\n", + "system": "" + }, + { + "instruction": "Keras ModelCheckpoint monitor multiple values", + "input": "", + "output": "

I am afraid you will have to do it with separate instances. Think about what is happening here -

\n\n
checkpointer = ModelCheckpoint(filepath='checkpoints/weights-{epoch:02d}.hdf5', monitor='val_O1_categorical_accuracy' , verbose=1, save_best_only=True, mode='max')\n
\n\n

When you are saving a model by monitoring val_O1_categorical_accuracy, here is what it will do in pseudocode -

\n\n
for each epoch:\n    check the val_O1_categorical_accuracy after updating weights\n    if this metric is better in this epoch than the previous ones:\n        save the model\n    else\n        pass\n
\n\n
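The pseudocode above can be sketched in plain Python (make_checkpointer is a hypothetical helper, not a Keras API, and appending the epoch stands in for "save the model"):

```python
def make_checkpointer(mode):
    # track the best metric seen so far, like ModelCheckpoint's save_best_only
    best = float('-inf') if mode == 'max' else float('inf')
    saved = []

    def on_epoch_end(epoch, metric):
        nonlocal best
        improved = metric > best if mode == 'max' else metric < best
        if improved:
            best = metric
            saved.append(epoch)  # stand-in for "save the model"

    return on_epoch_end, saved

cb, saved = make_checkpointer('max')
for epoch, acc in enumerate([0.5, 0.7, 0.6, 0.8]):
    cb(epoch, acc)
print(saved)  # [0, 1, 3]
```

Notice that a single "best" slot exists per monitored metric, which is why two metrics require two checkpointer instances.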

So specifying multiple monitors is really out of scope. In this case it has to be an either/or choice, as based on a monitored metric only one model among the conflicting candidates can be the best one.

\n", + "system": "" + }, + { + "instruction": "Issue with plot_model in keras and pydot", + "input": "", + "output": "

I solved it with

\n\n

sudo apt-get install graphviz

\n", + "system": "" + }, + { + "instruction": "Keras fit_generator: Unexpected usage of __getitem__ method", + "input": "", + "output": "

The below code will work for you

\n\n
def gen(train_data):\n    print('generator initiated')\n    #Define a batch size\n    batch_size = 64\n\n    #Complete length of data\n    data_size = len(train_data)\n\n    #Total number of batches will be created\n    num_batches = int(data_size / batch_size)\n\n\n    if (num_batches*batch_size) < data_size:\n         num_batches += 1\n    while True:\n        cnt=0\n        for i in range(num_batches):\n            start_index = cnt * batch_size\n            end_index = min((cnt + 1) * batch_size, data_size)\n            cnt +=1\n\n            # Do some preprocessing\n            x_train_padded = add_pad(x_train,3,pad)\n            x_train_padded = np.array(x_train_padded)\n\n            yield (x_train_padded,y_train_padded)\n\n\nfun_model.fit_generator(gen(train_data),steps_per_epoch =int(len(train_data)/64),nb_epoch=50,callbacks=callbacks_list, verbose=2,shuffle=True)\n
\n", + "system": "" + }, + { + "instruction": "Keras MSE definition", + "input": "", + "output": "

K.mean(a, axis=-1) and also np.mean(a, axis=-1) is just taking the mean across the final dimension. Here a is an array with shape (10, 1) and in this case, taking the mean across the final dimension happens to be the same as flattening it to a 1d array of shape (10,). Implementing it like so supports the more general case of e.g. multiple linear regression.

\n\n
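As a small numpy illustration of taking the mean across the final dimension (the values here are made up), note that the result is one loss value per sample, not a single scalar:

```python
import numpy as np

y_true = np.array([[1.0], [2.0], [3.0]])
y_pred = np.array([[1.5], [2.0], [2.0]])

# mean over the last axis of a (3, 1) array -> one squared error per sample
per_sample = np.mean(np.square(y_pred - y_true), axis=-1)
print(per_sample.shape)  # (3,)
```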

Also, you can inspect the value of nodes in the computation graph at run-time using keras.backend.print_tensor. See answer: Is there any way to debug a value inside a tensor while training on Keras?

\n\n

Edit: You question appears to be about why the loss doesn't return a single scalar value but instead returns a scalar value for each data-point in the batch. To support sample weighting, Keras losses are expected to return a scalar for each data-point in the batch. See losses documentation and the sample_weight argument of fit for more information. Note specifically: \"The actual optimized objective is the [weighted] mean of the output array across all data points.\"

\n", + "system": "" + }, + { + "instruction": "Computational GPU (Tesla K40c) is slower than graphics GPU (GTX 960)", + "input": "", + "output": "

TL;DR: Measure again with a larger batch size.

\n\n

Those results do not surprise me much. It's a common mistake to think that an expensive Tesla card (or any GPU, for that matter) will automatically do everything faster. You have to understand how GPUs work in order to harness their power.

\n\n

If you compare the base clock speeds of your devices, you will find that your Xeon CPU has the fastest one:

\n\n\n\n

This gives you a hint of the speeds at which these devices operate and gives a very rough estimate of how fast they can crunch numbers if they would only do one thing at a time, that is, with no parallelization.

\n\n

So as you see, GPUs are not fast at all (for some definition of fast); in fact, they're quite slow. Also note how the K40c is in fact slower than the GTX 960. \n However, the real power of a GPU comes from its ability to process a lot of data simultaneously! If you now check how much parallelization is possible on these devices, you will find that your K40c is not so bad after all:

\n\n\n\n

Again, these numbers give you a very rough estimate of how many things these devices can do simultaneously.

\n\n

Note: I am severely simplifying things: In absolutely no way is a CPU core comparable to a cuda core! They are very very different things. And in no way can base clock frequencies be compared like this! It's just to give an idea of what's happening.

\n\n

So, your devices need to be able to process a lot of data in parallel in order to maximize their throughput. Luckily tensorflow already does this for you: it will automatically parallelize all those heavy matrix multiplications for maximum throughput. However, this is only going to be fast if the matrices have a certain size. Your batch size is set to 128, which means that almost all of these matrices will have the first dimension set to 128. I don't know the details of your model, but if the other dimensions are not large either, then I suspect that most of your K40c is staying idle during those matrix multiplications. Try to increase the batch size and measure again. You should find that larger batch sizes make the K40c faster in comparison with the GTX 960. The same should be true for increasing the model's capacity: increase the number of units in the fully-connected layers and the number of filters in the convolutional layers. Adding more layers will probably not help here. The output of the nvidia-smi tool is also very useful to see how busy a GPU really is.

\n\n

Note, however, that changing the model's hyper-parameters and/or the batch size will of course have a huge impact on how well the model is able to train, and naturally you might also hit memory limitations.

\n\n

Perhaps if increasing the batch size or changing the model is not an option, you could also try to train two models on the K40c at the same time to make use of the idle cores. However I have never tried this, so it might not work at all.

\n", + "system": "" + }, + { + "instruction": "Merge weights of same model trained on 2 different computers using tensorflow", + "input": "", + "output": "

There is literally no way to merge weights: you cannot average or combine them in any way, as the result would not mean anything. What you could do instead is combine predictions, but for that the training classes have to be the same.

\n\n

This is not a programming limitation but a theoretical one.

\n", + "system": "" + }, + { + "instruction": "Submitting Google Cloud ML Engine Jobs from Python Directly", + "input": "", + "output": "

There is indeed a way of submitting jobs to Cloud ML Engine from a Python script. You can use the Google Python API Client Library for that purpose, and in the link I shared you can have a look at a close explanation of how the API calls work. There's a command-by-command explanation, and at the end an example of how to put everything together. In order to work with the library, you will have to install it first, as explained in this other page.

\n

Then, the method you are interested in (for submitting jobs) is cloudml.projects.jobs.create(), and you can find detailed information on how to call it in the developers page. I think you might be interested in playing around with the REST API first, in order to get familiar with how it works; you can do so through the APIs Explorer. Below there's an example of a body used to make the API call:

\n
training_inputs = {'scaleTier': 'CUSTOM',\n    'masterType': 'complex_model_m',\n    'workerType': 'complex_model_m',\n    'parameterServerType': 'large_model',\n    'workerCount': 9,\n    'parameterServerCount': 3,\n    'packageUris': ['gs://<YOUR_TRAINER_PATH>/package-0.0.0.tar.gz'],\n    'pythonModule': 'trainer.task',\n    'args': ['--arg1', 'value1', '--arg2', 'value2'],\n    'region': '<REGION>',\n    'jobDir': 'gs://<YOUR_TRAINING_PATH>',\n    'runtimeVersion': '1.4'}\n\njob_spec = {'jobId': my_job_name, 'trainingInput': training_inputs}\n
\n

You should adapt it to the specifications of your model. Once it is ready, you can have a look at this page explaining how to submit a training job using Python, but in short, it should be something like this:

\n
import logging\n\nfrom oauth2client.client import GoogleCredentials\nfrom googleapiclient import discovery\nfrom googleapiclient import errors\n\nproject_name = 'my_project_name'\nproject_id = 'projects/{}'.format(project_name)\n\ncredentials = GoogleCredentials.get_application_default()\n\ncloudml = discovery.build('ml', 'v1', credentials=credentials)\n\nrequest = cloudml.projects().jobs().create(body=job_spec, parent=project_id)\n\ntry:\n    response = request.execute()\n    # Handle a successful request\n\nexcept errors.HttpError as err:\n    logging.error('There was an error creating the training job.'\n                  ' Check the details:')\n    logging.error(err._get_reason())\n
\n

You should be able to run this code in order to submit a Cloud ML Engine job through a Python script.

\n
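Once the job is submitted, you will usually want to check on it. As a small sketch (the project and job names are the placeholders from above), the fully-qualified resource name used when polling a job follows the `projects/<project>/jobs/<job_id>` format:

```python
# Sketch: build the resource name used by cloudml.projects().jobs().get(name=...)
# to poll a submitted job's state.
def job_resource_name(project_name, job_id):
    return 'projects/{}/jobs/{}'.format(project_name, job_id)

name = job_resource_name('my_project_name', 'my_job_name')
print(name)

# With the discovery client built as above (commented out here because it
# needs credentials):
# state = cloudml.projects().jobs().get(name=name).execute()['state']
```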

I hope this could be of help and lighten a bit the opacity of the documentation you mentioned.

\n", + "system": "" + }, + { + "instruction": "Gradient clipping in keras", + "input": "", + "output": "

According to the official documentation, any optimizer can have optional arguments clipnorm and clipvalue. If clipnorm is provided, gradients will be clipped whenever the gradient norm exceeds the threshold.
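To illustrate what clipnorm does, here is the rescaling rule in plain NumPy (a sketch of the semantics, not Keras internals):

```python
import numpy as np

def clip_by_norm(grad, clipnorm):
    # If the gradient's L2 norm exceeds the threshold, rescale it so its
    # norm equals the threshold; otherwise leave it untouched.
    norm = np.linalg.norm(grad)
    if norm > clipnorm:
        grad = grad * (clipnorm / norm)
    return grad

g = np.array([3.0, 4.0])        # norm 5
clipped = clip_by_norm(g, 1.0)  # rescaled to norm 1 -> [0.6, 0.8]
```

In Keras itself you would just pass the arguments to the optimizer, e.g. `optimizers.SGD(lr=0.01, clipnorm=1., clipvalue=0.5)`.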

\n", + "system": "" + }, + { + "instruction": "Getting Keras trained model predictions in c#", + "input": "", + "output": "

This question is somewhat vague, but I'll try to answer it the best I can:

\n\n

Tensorflow core is written in c++ and exposes an interface, which can be consumed by a client written in whichever language (although yes, the Python one is the most developed so far).

\n\n

What could you do:

\n\n\n", + "system": "" + }, + { + "instruction": "Use "Flatten" or "Reshape" to get 1D output of unknown input shape in keras", + "input": "", + "output": "

You can try K.batch_flatten() wrapped in a Lambda layer.\nThe output shape of K.batch_flatten() is dynamically determined at runtime.

\n\n
model.add(Lambda(lambda x: K.batch_flatten(x)))\nmodel.summary()\n\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nconv2d_5 (Conv2D)            (None, 4, None, 32)       4128      \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, 4, None, 32)       128       \n_________________________________________________________________\nleaky_re_lu_3 (LeakyReLU)    (None, 4, None, 32)       0         \n_________________________________________________________________\nconv2d_6 (Conv2D)            (None, 1, None, 1)        65        \n_________________________________________________________________\nactivation_3 (Activation)    (None, 1, None, 1)        0         \n_________________________________________________________________\nlambda_5 (Lambda)            (None, None)              0         \n=================================================================\nTotal params: 4,321\nTrainable params: 4,257\nNon-trainable params: 64\n_________________________________________________________________\n\n\nX = np.random.rand(32, 4, 256, 1)\nprint(model.predict(X).shape)\n(32, 256)\n\nX = np.random.rand(32, 4, 64, 1)\nprint(model.predict(X).shape)\n(32, 64)\n
\n", + "system": "" + }, + { + "instruction": "keras + scikit-learn wrapper, appears to hang when GridSearchCV with n_jobs >1", + "input": "", + "output": "

Are you using a GPU? If so, you can't have multiple threads running each variation of the params because they won't be able to share the GPU.

\n\n

Here's a full example on how to use keras, sklearn wrappers in a Pipeline with GridsearchCV: Pipeline with a Keras Model

\n\n

If you really want to have multiple jobs in the GridSearchCV, you can try to limit the GPU fraction used by each job (e.g. if each job only allocates 0.5 of the available GPU memory, you can run 2 jobs simultaneously)

\n\n

See these issues:

\n\n\n", + "system": "" + }, + { + "instruction": "Implementing a complicated activation function in keras", + "input": "", + "output": "

Here, we will need one of these two:

\n\n

The Lambda layer:

\n

If your parameters are not trainable, you can define your function for a lambda layer. The function takes one input tensor, and it can return anything you want:

\n
import keras.backend as K\n\ndef customFunction(x):\n\n    #x can be either a single tensor or a list of tensors\n    #if a list, use the elements x[0], x[1], etc.\n\n    #Perform your calculations here using the keras backend\n    #If you could share which formula exactly you're trying to implement, \n        #it's possible to make this answer better and more to the point    \n\n    #dummy example\n    alphaReal = K.variable([someValue])    \n    alphaImag = K.variable([anotherValue]) #or even an array of values   \n\n    realPart = alphaReal * K.someFunction(x) + ... \n    imagPart = alphaImag * K.someFunction(x) + ....\n\n    #You can return them as two outputs in a list (requires the functional API model)\n    #Or you can find backend functions that join them together, such as K.stack\n\n    return [realPart,imagPart]\n\n    #I think the separate approach will give you a better control of what to do next. \n
\n

For what you can do, explore the backend functions.

\n

For the parameters, you can define them as keras constants or variables (K.constant or K.variable), either inside or outside the function above, or even transform them in model inputs. See details in this answer

\n

In your model, you just add a lambda layer that uses that function.

\n\n

If you're going to pass more inputs to the function, you'll need the functional model API.
\nIf you're using Tensorflow, the output_shape will be computed automatically, I believe only Theano requires it. (Not sure about CNTK).

\n

The custom layer:

\n

A custom layer is a new class you create. This approach will only be necessary if you're going to have trainable parameters in your function. (Such as: optimize alpha with backpropagation)

\n

Keras teaches it here.

\n

Basically, you have an __init__ method where you pass the constant parameters, a build method where you create the trainable parameters (weights), a call method that will do the calculations (exactly what would go in the lambda layer if you didn't have trainable parameters), and a compute_output_shape method so you can tell the model what the output shape is.

\n
class CustomLayer(Layer):\n\n    def __init__(self, alphaReal, alphaImag, **kwargs):\n\n        self.alphaReal = alphaReal\n        self.alphaImag = alphaImag\n        super(CustomLayer, self).__init__(**kwargs)\n\n    def build(self,input_shape):\n\n        #weights may or may not depend on the input shape\n        #you may use it or not...\n\n        #suppose we want just two trainable values:\n        weightShape = (2,)\n\n        #create the weights:\n        self.kernel = self.add_weight(name='kernel', \n                                  shape=weightShape,\n                                  initializer='uniform',\n                                  trainable=True)\n\n        super(CustomLayer, self).build(input_shape)  # Be sure to call this somewhere!\n\n    def call(self,x):\n\n        #all the calculations go here:\n\n        #dummy example using the constant inputs\n        realPart = self.alphaReal * K.someFunction(x) + ... \n        imagPart = self.alphaImag * K.someFunction(x) + ....\n\n        #dummy example taking elements of the trainable weights\n        realPart = self.kernel[0] * realPart    \n        imagPart = self.kernel[1] * imagPart\n\n        #all the comments for the lambda layer above are valid here\n\n        #example returning a list\n        return [realPart,imagPart]\n\n    def compute_output_shape(self,input_shape):\n\n        #if you decide to return a list of tensors in the call method, \n        #return a list of shapes here, twice the input shape:\n        return [input_shape,input_shape]    \n\n        #if you stacked your results somehow in a single tensor, compute a single tuple, maybe with an additional dimension equal to 2:\n        return input_shape + (2,)\n
\n", + "system": "" + }, + { + "instruction": "Slicing-based assignment in Keras / Tensorflow?", + "input": "", + "output": "

I think slice assignment can be avoided for this custom loss. If you want to adjust the loss value for sqerr[:, 4:-1:7], you can subtract the original value from the total loss, and then add back the adjusted loss value.

\n\n
def custom_loss_keras(y_true, y_pred):\n    # first sum up the squared error column-wise\n    sqerr = K.square(y_true - y_pred)\n    loss = K.sum(sqerr, axis=-1)\n\n    # subtract the loss for the sliced part\n    loss -= K.sum(sqerr[:, 4:-1:7], axis=-1)\n\n    # add back the adjusted loss for the sliced part\n    denominator = K.maximum(y_true[:, 2:-1:7], K.epsilon())  # handle zero-division\n    numerator = y_true[:, 2:-1:7] - y_true[:, 3:-1:7]\n    loss += K.sum(sqerr[:, 4:-1:7] * K.square(numerator / denominator), axis=-1)\n\n    # take average\n    ncols = K.int_shape(y_pred)[-1]\n    loss /= ncols\n    return K.mean(loss)\n
\n\n

You can verify this function by comparing it to your original numpy version:

\n\n
def custom_loss_numpy(y_true, y_pred):\n    sqerr = (y_true - y_pred)**2\n    sqerr[:,4:-1:7] = sqerr[:,4:-1:7] * ((y_true[:,2:-1:7]-y_true[:,3:-1:7])/y_true[:,2:-1:7])**2\n    return sqerr.mean()\n\ny_true = np.random.rand(50, 1000)\ny_pred = np.random.rand(50, 1000)\n\nprint(custom_loss_numpy(y_true, y_pred))\n889.992075384\n\nprint(K.eval(custom_loss_keras(K.variable(y_true), K.variable(y_pred))))\n889.992\n
\n", + "system": "" + }, + { + "instruction": "Keras VGG16 predict speed slow", + "input": "", + "output": "

There are many issues with your model. The main issue is of course really slow machine, but as you cannot change that here I will state some pieces of advice on how you could speed up your computations:

\n\n
    \n
  1. VGG16 is a relatively old architecture. The main issue here is that the so-called volume of tensors (area of feature maps times number of features) decreases really slowly. I would advise you to use more modern architectures like e.g. ResNet50 or Inception v3, as they have a so-called stem which makes the inner tensors much smaller really fast. Your speed should benefit from that. There is also a really light architecture called MobileNet which seems perfect for your task.

  2. \n
  3. Downsample your images - with a size of (480, 640) your image is 6 times bigger than default VGG input. This makes all computations 6 times slower. You could try to first downsample images and then use a feature extractor.

  4. \n
\n", + "system": "" + }, + { + "instruction": "keras thread safe generator for model.fit_generator with Python 3.6.x", + "input": "", + "output": "

In Python 3, you should use next(self.it) instead of self.it.next().

\n\n

You could also try using Keras Sequences; they seem safer, since they're indexed to preserve the correct order of data when multiprocessing.

\n\n

Finally, it seems that workers affect only the generator itself, not the model. In my tests (I'm not good in threading either...) the only difference I could see with more workers was a bigger queue of preloaded data waiting for their turn into the model.

\n", + "system": "" + }, + { + "instruction": "Keras LSTM multiclass classification", + "input": "", + "output": "

Yes, you need a one-hot target. You can use to_categorical to encode your target, or take a shorter way:

\n\n

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

\n\n

here is the full code:

\n\n
from keras.models import Sequential\nfrom keras.layers import *\n\nmodel = Sequential()\nmodel.add(Embedding(5000, 32, input_length=500))\nmodel.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))\nmodel.add(Dense(7, activation='softmax'))\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.summary()\n
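To make the two options above concrete, here is a small NumPy sketch of what to_categorical does with integer labels; the sparse loss lets you skip this step and feed the integers directly:

```python
import numpy as np

# Sketch of what to_categorical does: integer labels -> one-hot rows.
# With sparse_categorical_crossentropy you feed the integer array (shape
# (n,)) as-is instead.
def to_one_hot(labels, num_classes):
    return np.eye(num_classes)[labels]

y = np.array([0, 2, 6, 1])
one_hot = to_one_hot(y, num_classes=7)
print(one_hot.shape)       # (4, 7)
print(one_hot.argmax(-1))  # recovers the integer labels
```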
\n\n

Summary

\n\n
Using TensorFlow backend.\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\nembedding_1 (Embedding)      (None, 500, 32)           160000    \n_________________________________________________________________\nlstm_1 (LSTM)                (None, 100)               53200     \n_________________________________________________________________\ndense_1 (Dense)              (None, 7)                 707       \n=================================================================\nTotal params: 213,907\nTrainable params: 213,907\nNon-trainable params: 0\n_________________________________________________________________\n
\n", + "system": "" + }, + { + "instruction": "How can I calculate the loss without the weight decay in Keras?", + "input": "", + "output": "

Yes, weight decay losses are included in the loss value printed on the screen.

\n\n

The value you want to monitor is the total loss minus the sum of regularization losses.

\n\n\n\n

The following lines can be found in the source code of model.compile():

\n\n
# Add regularization penalties\n# and other layer-specific losses.\nfor loss_tensor in self.losses:\n    total_loss += loss_tensor\n
\n\n

To get the loss without weight decay, you can reverse the above operations. I.e., the value to be monitored is model.total_loss - sum(model.losses).

\n\n

Now, how to monitor this value is a bit tricky. Fortunately, the list of metrics used by a Keras model is not fixed until model.fit() is called. So you can append this value to the list, and it'll be printed on the screen during model fitting.

\n\n

Here's a simple example:

\n\n
input_tensor = Input(shape=(64, 64, 3))\nhidden = Conv2D(32, 1, kernel_regularizer=l2(0.01))(input_tensor)\nhidden = GlobalAveragePooling2D()(hidden)\nout = Dense(1)(hidden)\nmodel = Model(input_tensor, out)\nmodel.compile(loss='mse', optimizer='adam')\n\nloss_no_weight_decay = model.total_loss - sum(model.losses)\nmodel.metrics_tensors.append(loss_no_weight_decay)\nmodel.metrics_names.append('loss_no_weight_decay')\n
\n\n

When you run model.fit(), something like this will be printed to the screen:

\n\n
Epoch 1/1\n100/100 [==================] - 0s - loss: 0.5764 - loss_no_weight_decay: 0.5178\n
\n\n

You can also verify whether this value is correct by computing the L2 regularization manually:

\n\n
conv_kernel = model.layers[1].get_weights()[0]\nprint(np.sum(0.01 * np.square(conv_kernel)))\n
\n\n

In my case, the printed value is 0.0585, which is indeed the difference between loss and loss_no_weight_decay (with some rounding error).

\n", + "system": "" + }, + { + "instruction": "TypeError: Unrecognized keyword arguments: {'show_accuracy': True} #yelp challenge dataset", + "input": "", + "output": "

If you want to print the accuracy while training your model, you do not specify it in model.fit but in model.compile, where you can set the metrics=['accuracy'] argument.

\n\n

Example

\n\n
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\nmodel.fit(X, Y, verbose=1)\n
\n\n
\n\n

EDIT:

\n\n

In older versions of Keras (< 1) the argument show_accuracy=True does exist.

\n\n
\n\n

EDIT 2:

\n\n

With TensorFlow 2.0, Keras is included. With this version you should use metrics=['acc'].

\n", + "system": "" + }, + { + "instruction": "Do I need to install keras 2.0 seprately after installing tensorflow 1.3?", + "input": "", + "output": "

Nope, you don't need to install keras 2.0 separately. (See: https://www.tensorflow.org/guide/keras)

\n\n

Do this:

\n\n
import tensorflow as tf\n\nmodel = tf.keras.Sequential()\n
\n\n

Don't do this (Unless you really need framework independent code):

\n\n
import keras\n\nmodel = keras.Sequential()\n
\n", + "system": "" + }, + { + "instruction": "How exactly does Keras take dimension argumentsfor LSTM / time series problems?", + "input": "", + "output": "

Keras convention is that the batch dimension is typically omitted in the input_shape arguments. From the guide:

\n\n
\n

Pass an input_shape argument to the first layer. This is a shape tuple (a tuple of integers or None entries, where None indicates that any positive integer may be expected). In input_shape, the batch dimension is not included.

\n
\n\n

So changing model = build_model([132300,2]) should solve the problem.
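A handy rule of thumb (shown here with a hypothetical placeholder array of the shapes from the question) is that input_shape is simply everything except the batch axis of your training data:

```python
import numpy as np

# Hypothetical data: (batch, timesteps, features). Keras's input_shape is
# the per-sample shape, i.e. everything after the batch axis.
x_train = np.zeros((8, 132300, 2))
input_shape = x_train.shape[1:]   # this is what build_model should receive
print(input_shape)                # (132300, 2)
```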

\n", + "system": "" + }, + { + "instruction": "Keras Concatenate TypeError: __init__() got multiple values for argument 'axis'", + "input": "", + "output": "

I have found a solution that seems to work!

\n\n

I have done two changes to the code.

\n\n
    \n
  1. Instead of using keras.layers.Concatenate I use keras.layers.concatenate
  2. \n
  3. I \"excluded\" the Conv2dTranspose step from the concatenation
  4. \n
\n\n

The relevant code snippet now looks like this

\n\n
trans5 = Conv2DTranspose(256, (2,2), strides=(2,2),padding='same')(conv4)\nup5 = keras.layers.concatenate([trans5, conv3], axis=3)\n
\n\n

Might this be some kind of bug in keras? Should I report that issue?

\n\n

Thanks so much for your help anyway. Appreciate it!

\n", + "system": "" + }, + { + "instruction": "Keras embedding layer with variable length in functional API", + "input": "", + "output": "

Use shape=(None,) in the input layer, in your case:

\n\n
charinput = Input(shape=(None,),name=\"input\",dtype='int32')\n
\n", + "system": "" + }, + { + "instruction": "Keras: Cannot Import Name np_utils", + "input": "", + "output": "

That tutorial was written on June 9th, 2016. Keras 2 was released in March 2017. Try installing the old version, using pip install keras==1.2.2.

\n", + "system": "" + }, + { + "instruction": "Unpredictable CUDNN_STATUS_NOT_INITIALIZED on Windows", + "input": "", + "output": "

Try limiting your GPU memory usage by setting the GPU option per_process_gpu_memory_fraction.

\n\n

Fiddle around with it to see what works and what doesn't.

\n\n

I recommend using .7 as a starting baseline.

\n", + "system": "" + }, + { + "instruction": "Pytorch nn.functional.batch_norm for 2D input", + "input": "", + "output": "

The key is that 2D batchnorm performs the same normalization for each channel. i.e. if you have a batch of data with shape (N, C, H, W) then your mu and stddev should be shape (C,). If your images do not have a channel dimension, then add one using view.

\n\n

Warning: if you set training=True then batch_norm computes and uses the appropriate normalization statistics for the given batch (this means we don't need to calculate the mean and std ourselves). The mu and stddev you pass in are supposed to be the running mean and running std over all training batches. These tensors are updated with the new batch statistics in the batch_norm function.

\n\n
# inp is shape (N, C, H, W)\nn_chans = inp.shape[1]\nrunning_mu = torch.zeros(n_chans) # zeros are fine for first training iter\nrunning_std = torch.ones(n_chans) # ones are fine for first training iter\nx = nn.functional.batch_norm(inp, running_mu, running_std, training=True, momentum=0.9)\n# running_mu and running_std now have new values\n
\n\n

If you want to just use your own batch statistics, try this:

\n\n
# inp is shape (N, C, H, W)\nn_chans = inp.shape[1]\nreshaped_inp = inp.permute(1,0,2,3).contiguous().view(n_chans, -1) # shape (C, N*W*H)\nmu = reshaped_inp.mean(-1)\nstddev = reshaped_inp.std(-1)\nx = nn.functional.batch_norm(inp, mu, stddev, training=False)\n
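The per-channel statistics in the snippet above can be sanity-checked in plain NumPy (a sketch, independent of PyTorch): mean and std are taken over the (N, H, W) axes for each channel, and after normalizing, every channel should have roughly zero mean and unit std.

```python
import numpy as np

inp = np.random.rand(4, 3, 8, 8)              # (N, C, H, W)
mu = inp.mean(axis=(0, 2, 3), keepdims=True)  # per-channel mean, shape (1, C, 1, 1)
std = inp.std(axis=(0, 2, 3), keepdims=True)  # per-channel std
normed = (inp - mu) / std

# Each channel is now normalized over the batch and spatial axes.
print(normed.mean(axis=(0, 2, 3)))  # ~[0, 0, 0]
print(normed.std(axis=(0, 2, 3)))   # ~[1, 1, 1]
```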
\n", + "system": "" + }, + { + "instruction": "Customize Keras' loss function in a way that the y_true will depend on y_pred", + "input": "", + "output": "

I believe this is what you looking for.

\n\n
import theano\nfrom keras import backend as K\nfrom keras.layers import Dense\nfrom keras.models import Sequential\n\ndef customized_loss(y_true, y_pred):\n    loss = K.switch(K.equal(y_true, -1), 0, K.square(y_true-y_pred))\n    return K.sum(loss)\n\nif __name__ == '__main__':\n    model = Sequential([ Dense(3, input_shape=(4,)) ])\n    model.compile(loss=customized_loss, optimizer='sgd')\n\n    import numpy as np\n    x = np.random.random((1, 4))\n    y = np.array([[1,-1,0]])\n\n    output = model.predict(x)\n    print output\n    # [[ 0.47242549 -0.45106074  0.13912249]]\n    print model.evaluate(x, y)  # keras's loss\n    # 0.297689884901\n    print (output[0, 0]-1)**2 + 0 +(output[0, 2]-0)**2 # double-check\n    # 0.297689929093\n
\n", + "system": "" + }, + { + "instruction": "Unsupervised loss function in Keras", + "input": "", + "output": "

Write your loss function as if it had two arguments:

\n\n
    \n
  1. y_true
  2. \n
  3. y_pred
  4. \n
\n\n

If you don't have y_true, that's fine: you don't need to use it to compute the loss, but leave a placeholder in your function prototype so Keras won't complain.

\n\n
def custom_loss(y_true, y_pred):\n    # do things with y_pred\n    return loss\n
\n\n
\n\n

Adding custom arguments

\n\n

You may also need to use another parameter, like margin, inside your loss function; even then, your custom function should only take those two arguments. But there is a workaround: use lambda functions

\n\n
def custom_loss(y_pred, margin):\n    # do things with y_pred\n    return loss\n
\n\n

but use it like

\n\n
model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin), ...)\n
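The wrapping trick can be demonstrated without Keras at all; this pure-NumPy sketch (with a made-up hinge-style loss) shows the lambda presenting the (y_true, y_pred) signature while the closure carries margin:

```python
import numpy as np

# Hypothetical loss with an extra argument; not the questioner's actual loss.
def custom_loss(y_pred, margin):
    return np.maximum(0.0, margin - y_pred).mean()

margin = 1.0
# The lambda has the two-argument signature Keras expects; y_true is ignored.
keras_style_loss = lambda y_true, y_pred: custom_loss(y_pred, margin)

y_pred = np.array([0.2, 1.5])
print(keras_style_loss(None, y_pred))  # mean of [0.8, 0.0] -> 0.4
```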
\n", + "system": "" + }, + { + "instruction": "Attaching class labels to a Keras model", + "input": "", + "output": "

So I tried my hand at a solution myself and this seems to work. I was hoping for something simpler though.

\n

Opening the model file a second time is not really optimal I think. If anyone can do better, by all means, do.

\n
import h5py\n\nfrom keras.models import load_model\nfrom keras.models import save_model\n\n\ndef load_model_ext(filepath, custom_objects=None):\n    model = load_model(filepath, custom_objects=None)\n    f = h5py.File(filepath, mode='r')\n    meta_data = None\n    if 'my_meta_data' in f.attrs:\n        meta_data = f.attrs.get('my_meta_data')\n    f.close()\n    return model, meta_data\n   \n\ndef save_model_ext(model, filepath, overwrite=True, meta_data=None):\n    save_model(model, filepath, overwrite)\n    if meta_data is not None:\n        f = h5py.File(filepath, mode='a')\n        f.attrs['my_meta_data'] = meta_data\n        f.close()\n
\n

Since h5 files do not accept python containers, you should consider converting the meta data into a string. Assuming that your meta data exists in the form of a dictionary or a list, you can use json to do the conversion. This would also allow you to store more complex data structures within your model.

\n

Full usage example:

\n
import json\nimport keras\n\n# prepare model and label lookup\nmodel = keras.Sequential();\nmodel.add(keras.layers.Dense(10, input_dim=8, activation='relu'));\nmodel.add(keras.layers.Dense(3, activation='softmax'))\nmodel.compile()\n\nfilepath = r".\\mymodel.h5"\n\nlabels = ["dog", "cat", "automobile"]\n\n# save\nlabels_string = json.dumps(labels)\nsave_model_ext(model, filepath, meta_data=labels_string)\n\n# load\nloaded_model, loaded_labels_string = load_model_ext(filepath)\nloaded_labels = json.loads(loaded_labels_string)\n\n# label of class 0: "dog"\nprint(loaded_labels[0])\n
\n

If you prefer to have a dictionary for your classes, be aware that json will convert numeric dictionary keys to strings, so you will have to convert them back to numbers after loading.
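This caveat is easy to demonstrate with a quick round trip (using the same label names as the example above):

```python
import json

# json serializes integer dict keys as strings, so they come back as strings.
class_map = {0: 'dog', 1: 'cat', 2: 'automobile'}
restored = json.loads(json.dumps(class_map))
print(list(restored))    # ['0', '1', '2'] -- keys are now strings

# Convert the keys back to integers after loading.
restored = {int(k): v for k, v in restored.items()}
print(restored[0])       # 'dog'
```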

\n", + "system": "" + }, + { + "instruction": "Keras LSTM state vs. feed forward network with sliding window", + "input": "", + "output": "

The main difference between a Feed Forward NN (FFNN) and any Recurrent Net (RNN, LSTM...) is the presence of recurrent connections through time.

\n

Using an FFNN with sliding windows might get you somewhere, but its internal representation will only be based on the input at time “t”, whereas a Recurrent Net will also make use of previously seen data.

\n

Stateless vs Stateful:\nI won't go into detail; there are already a lot of good posts on SO covering this topic. What's important, though, is that states reset between batches, not after each sample, so a stateful net carries some information across samples beyond the sequence length, which a regular FFNN won't.

\n", + "system": "" + }, + { + "instruction": "skipping layer in backpropagation in keras", + "input": "", + "output": "

UPDATE 2

\n\n

In addition to tf.py_func, there is now an official guide on how to add a custom op.

\n\n
\n\n

UPDATE

\n\n

See this question for an example of writing a custom op with gradient purely in Python without needing to rebuild anything. Note that there are some limitations to the method (see the documentation of tf.py_func).

\n\n
\n\n

Not exactly a solution to the problem, but still kind of an answer and too long for comments.

\n\n

That's not even a Keras issue, but a TensorFlow one. Each op defines its own gradient computation that is used during backpropagation. If you really wanted to do something like that, you would need to implement the op into TensorFlow yourself (no easy feat) and define the gradient that you want - because you can't have \"no gradient\", if anything it would be 1 or 0 (otherwise you can't go on with backpropagation). There is a tf.NoGradient function in TensorFlow which causes an op to propagate zeros, but I don't think it is meant to / can be used outside of TensorFlow's own internals.

\n\n

UPDATE

\n\n

Okay so a bit more of context. TensorFlow graphs are built of ops, which are implemented by kernels; this is basically a 1-to-1 mapping, except that there may be for example a CPU and a GPU kernel for an op, hence the differentiation. The set of ops supported by TensorFlow is usually static, I mean it can change with newer versions, but in principle you cannot add your own ops, because the ops of a graph go into the Protobuf serialized format, so if you made your own ops then you would not be able to share your graph. Ops are then defined at C++ level with the macro REGISTER_OP (see for example here), and kernels with REGISTER_KERNEL_BUILDER (see for example here).

\n\n

Now, where do gradients come into play? Well, the funny thing is that the gradient of an op is not defined at C++ level; there are ops (and kernels) that implement the gradient of other ops (if you look at the previous files you'll find ops/kernels with the name ending in Grad), but (as far as I'm aware) these are not explicitly \"linked\" at this level. It seems that the associations between ops and their gradients are defined in Python, usually via tf.RegisterGradient or the aforementioned tf.NoGradient (see for example here, Python modules starting with gen_ are autogenerated with the help of the C++ macros); these registrations inform the backpropagation algorithm about how to compute the gradient of the graph.

\n\n

So, how to actually work this out? Well, you need to create at least one op in C++ with the corresponding kernel/s implementing the computation that you want for your forward pass. Then, if the gradient computation that you want to use can be expressed with existing TensorFlow ops (which is most likely), you would just need to call tf.RegisterGradient in Python and do the computation there in \"standard\" TensorFlow. This is quite complicated, but the good news is it's possible, and there's even an example for it (although I think they kinda forgot the gradient registration part in that one)! As you will see, the process involves compiling the new op code into a library (btw I'm not sure if any of this may work on Windows) that is then loaded from Python (obviously this involves going through the painful process of manual compilation of TensorFlow with Bazel). A possibly more realistic example can be found in TensorFlow Fold, an extension of TensorFlow for structured data that registers (as of now) one custom operation here through a macro defined here that calls REGISTER_OP, and then in Python it loads the library and registers its gradient here through their own registration function defined here that simply calls tf.NotDifferentiable (another name for tf.NoGradient).

\n\n

tldr: It is rather hard, but it can be done and there are even a couple of examples out there.

\n", + "system": "" + }, + { + "instruction": "slice/split a layer in keras as in caffe", + "input": "", + "output": "

It seems that you want to use a Lambda layer. In this case you may do the following:

\n\n
sliced = Lambda(lambda x: x[:,slicing_indices], output_shape=sliced_shape)(input)\n
\n\n

Note that in x you need to take into account the samples axis whereas in output_shape it's not needed anymore.
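The NumPy analogue below (with hypothetical indices) shows why: the first, samples axis is kept untouched and only the feature axis gets sliced, so the per-sample output shape drops the batch dimension:

```python
import numpy as np

x = np.arange(24).reshape(4, 6)   # (samples, features)
slicing_indices = slice(1, 4)     # hypothetical slicing indices
sliced = x[:, slicing_indices]    # samples axis preserved, features sliced
print(sliced.shape)               # (4, 3)
```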

\n", + "system": "" + }, + { + "instruction": "How to convert images color space in Keras?", + "input": "", + "output": "

If you import tensorflow, you can use the tf.image.rgb_to_hsv() function in the lambda:

\n\n
def hsv_conversion(x):\n    import tensorflow as tf    \n    return tf.image.rgb_to_hsv(x)\n\nmodel.add(Lambda(hsv_conversion, input_shape=(160, 320, 3)))\n
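For intuition about what the conversion produces, the standard library's colorsys applies the same RGB-to-HSV mapping for a single pixel with channels in [0, 1], which is also the range tf.image.rgb_to_hsv expects:

```python
import colorsys

# Pure red in [0, 1] channel range -> hue 0, full saturation, full value.
r, g, b = 1.0, 0.0, 0.0
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # 0.0 1.0 1.0
```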
\n", + "system": "" + }, + { + "instruction": "How to use max pooling to gather information from LSTM nodes", + "input": "", + "output": "

One may use GlobalMaxPooling1D described here:

\n\n
gru_out = Bidirectional(GRU(hiddenlayer_num, return_sequences=True))(embedded)\nmax_pooled = GlobalMaxPooling1D()(gru_out)\n
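In NumPy terms, GlobalMaxPooling1D just takes the maximum over the time axis, collapsing (batch, timesteps, features) to (batch, features) - a sketch with a hypothetical GRU output:

```python
import numpy as np

gru_like = np.random.rand(2, 50, 128)  # hypothetical (batch, timesteps, features)
pooled = gru_like.max(axis=1)          # max over time -> (batch, features)
print(pooled.shape)                    # (2, 128)
```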
\n", + "system": "" + }, + { + "instruction": "How do I check the order in which Keras' flow_from_directory method processes folders?", + "input": "", + "output": "

I looked into the source code. I should note that since I posted this question, Keras was updated to version 2.0. So the answer is based on that version.

\n\n

ImageDataGenerator delegates to the DirectoryIterator class (via flow_from_directory). In it, I find the following lines:

\n\n
    if not classes:\n        classes = []\n        for subdir in sorted(os.listdir(directory)):\n            if os.path.isdir(os.path.join(directory, subdir)):\n                classes.append(subdir)\n    self.num_class = len(classes)\n    self.class_indices = dict(zip(classes, range(len(classes))))\n\n    def _recursive_list(subpath):\n        return sorted(os.walk(subpath, followlinks=follow_links), key=lambda tpl: tpl[0])\n\n    for subdir in classes:\n        subpath = os.path.join(directory, subdir)\n        for root, _, files in _recursive_list(subpath):\n            for fname in files:\n                is_valid = False\n                for extension in white_list_formats:\n                    if fname.lower().endswith('.' + extension):\n                        is_valid = True\n                        break\n                if is_valid:\n                    self.samples += 1\n    print('Found %d images belonging to %d classes.' % (self.samples, self.num_class))\n
\n\n

Note line 3, where it says \"sorted(os.listdir(directory))\". The generator traverses all folders in alphabetical order.

\n\n

Later on, the definition of _recursive_list applies the same sorted-traversal logic to the substructures as well.

\n\n

So the answer is: folders are processed in alphabetical order, which makes sense.
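The sorting behaviour is easy to reproduce outside Keras; here is a self-contained sketch using throwaway temp directories (the class names are made up):

```python
import os
import tempfile

# Throwaway directory tree standing in for a flow_from_directory root.
root = tempfile.mkdtemp()
for name in ["dogs", "cats", "birds"]:          # created out of order
    os.mkdir(os.path.join(root, name))

# Same logic as the Keras snippet: alphabetical traversal of subfolders.
classes = sorted(
    d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d))
)
class_indices = dict(zip(classes, range(len(classes))))
```

Regardless of creation order, the class index mapping comes out alphabetical.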

\n", + "system": "" + }, + { + "instruction": "Tensorboard Cannot find .runfiles directory error", + "input": "", + "output": "

It seems they didn't consider that someone would be pip-installing TensorBoard in a user directory. Below is my hack to get it to work:

\n\n

In the ~/.local/bin/tensorboard script, there's a section that looks like this:

\n\n
def FindModuleSpace():\n  # Follow symlinks, looking for my module space\n  stub_filename = os.path.abspath(sys.argv[0])\n  while True:\n    # Found it?\n    module_space = stub_filename + '.runfiles'\n    if os.path.isdir(module_space):\n      break\n    for mod in site.getsitepackages():\n      module_space = mod + '/tensorboard/tensorboard' + '.runfiles'\n      if os.path.isdir(module_space):\n        return module_space\n
\n\n

(just above the assertion with the \"Cannot find .runfiles directory\" error).

\n\n

The directory it's looking for is

\n\n
~/.local/lib/python2.7/site-packages/tensorboard/tensorboard.runfiles\n
\n\n

which you can discover by running find ~/.local -name '*runfiles*'.

\n\n

I simply added it to the for loop over directories and all is well:

\n\n
    for mod in site.getsitepackages() + [os.path.expanduser(\"~/.local/lib/python2.7/site-packages\")]\n
\n\n

This is a hack because:

\n\n\n", + "system": "" + }, + { + "instruction": "Keras Semantic Segmentation Weighted Loss Pixel Map", + "input": "", + "output": "

If you know how to do this on a 2D map, you could always use multiple outputs and apply the custom pixel mask in addition to the cross-entropy loss. An example implementation of multiple losses for a U-shaped network can be found here: https://github.com/EdwardTyantov/ultrasound-nerve-segmentation

\n", + "system": "" + }, + { + "instruction": "Why does my keras neural network model output different values on a different machine?", + "input": "", + "output": "

Upon searching the net, I found the problem. It seems that Keras over TensorFlow, when running on a GPU, tends to produce results that are not reproducible when transferred to a non-GPU machine. This most likely has something to do with the installed cuDNN: cuDNN's max pooling and some convolution backward algorithms are non-deterministic, as noted on a forum.

\n\n

The solutions I found suggest using numpy.random.seed(seed_no) right before importing any Keras libraries. This works when you run the code on a CPU, with both Keras/Theano and Keras/TensorFlow.

\n\n

The solution for GPU users running Keras over Theano involves adding the following to the .theanorc file:

\n\n
[dnn.conv]\nalgo_bwd_filter = deterministic\nalgo_bwd_data = deterministic\n
\n\n

Or using theano flags: THEANO_FLAGS=\"dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic\" python rnn_model.py

\n\n

However, I haven't yet found any clear instructions on how to produce reproducible results with Keras on a TensorFlow back end running on a GPU.

\n", + "system": "" + }, + { + "instruction": "How to change Keras optimizer code", + "input": "", + "output": "

I think your approach is more complicated than it needs to be. Let's say you implement your own optimizer by subclassing keras.optimizers.Optimizer:

\n\n
class MyOptimizer(Optimizer):\n    optimizer functions here.\n
\n\n

Then to instantiate it in your model you can do this:

\n\n
myOpt = MyOptimizer()\nmodel.compile(loss='binary_crossentropy', optimizer=myOpt, metrics= ['accuracy'])\n
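If it helps to see the pattern stripped of Keras entirely, here is a framework-free sketch of the "optimizer as a configurable object" idea; note this is not the real keras.optimizers.Optimizer interface, just an illustration:

```python
import numpy as np

class MySGD:
    """Framework-free sketch of what a custom optimizer object
    encapsulates: hyperparameter state plus an update rule.
    (Not the actual keras.optimizers.Optimizer API.)"""
    def __init__(self, lr=0.1):
        self.lr = lr

    def update(self, params, grads):
        # Plain gradient-descent step: p <- p - lr * g
        return [p - self.lr * g for p, g in zip(params, grads)]

opt = MySGD(lr=0.5)
new_params = opt.update([np.array([1.0, 2.0])], [np.array([0.2, -0.4])])
```

The framework only needs the configured instance; all optimizer-specific logic lives inside the object, which is exactly why passing it to model.compile is enough.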
\n\n

Just pass an instance of your optimizer as the optimizer parameter of model.compile and that's it, Keras will now use your optimizer.

\n", + "system": "" + }, + { + "instruction": "How to adjust labels with ImageDataGenerator in Keras?", + "input": "", + "output": "

You might do something like:

\n\n
import numpy\n\ndef fliping_gen(image_generator, flip_p=0.5):\n    for x, y in image_generator:\n        flip_selector = numpy.random.binomial(1, flip_p, size=x.shape[0]) == 1\n        x[flip_selector,:,:,:] = x[flip_selector,:,::-1,:]\n        y[flip_selector] = (-1) * y[flip_selector]\n        yield x, y\n
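Here is the same idea as a self-contained sketch, exercised on a toy batch with flip_p=1.0 so the result is deterministic (the generator name and array shapes are illustrative):

```python
import numpy as np

def flipping_gen_demo(image_generator, flip_p=0.5, seed=None):
    # Mirror a random subset of each batch horizontally and negate the
    # matching labels (e.g. steering angles for a driving model).
    rng = np.random.default_rng(seed)
    for x, y in image_generator:
        flip = rng.binomial(1, flip_p, size=x.shape[0]) == 1
        x[flip] = x[flip, :, ::-1, :]   # reverse the width axis
        y[flip] = -y[flip]
        yield x, y

# Toy batch of 2 "images" with shape (height=1, width=3, channels=1).
x0 = np.arange(6, dtype=float).reshape(2, 1, 3, 1)
y0 = np.array([0.5, -0.2])
x, y = next(flipping_gen_demo(iter([(x0, y0)]), flip_p=1.0))
```

With flip_p=1.0 every sample is mirrored and every label negated, which makes the effect easy to verify by eye.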
\n", + "system": "" + }, + { + "instruction": "How to code a sequence to sequence RNN in keras?", + "input": "", + "output": "

The problem lies in:

\n\n
model.add(Embedding(max_features, 128, input_length=max_len, dropout=0.2))\n
\n\n

In the Embedding documentation you can see that its first argument should be set to the vocabulary size + 1. That's because there always needs to be a place for a null word, whose index is 0. Because of that, you need to change this line to:

\n\n
model.add(Embedding(max_features + 1, 128, input_length=max_len, dropout=0.2))\n
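The off-by-one is easy to see with a plain numpy lookup table standing in for the embedding matrix (sizes are illustrative):

```python
import numpy as np

max_features = 3                      # vocabulary size: words indexed 1..3
embedding = np.random.randn(max_features + 1, 128)  # row 0 = null/padding

padded_sequence = [0, 0, 1, 3]        # zero-padded sequence of word indices
vectors = embedding[padded_sequence]  # every lookup stays in range
```

With only max_features rows, looking up index 3 (or the padding index 0 together with word index max_features) would fall out of range.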
\n", + "system": "" + }, + { + "instruction": "Where does keras store its data sets when using a docker container?", + "input": "", + "output": "

Seems like I solved it. To find where .keras is in the container, once logged in to the container with bash (e.g. docker run -it --rm -v /local:/container image_name bash), run:

\n\n
find -type d -name .keras \n
\n\n

to discover that the location is

\n\n
./root/.keras\n
\n\n

Now just mount the data set location as a volume:

\n\n
docker run -it --rm -v /Users/folder1:/Users/folder1 -v /Users/Pinocchio/.keras:/root/.keras tf_cpu cifar10_cnn.py\n
\n\n

Make sure not to make any typos, especially since it's .keras and not keras.

\n", + "system": "" + }, + { + "instruction": "Error when checking model input: expected convolution2d_input_1 to have shape (None, 3, 32, 32) but got array with shape (50000, 32, 32, 3)", + "input": "", + "output": "

If you print x_train.shape you will see the shape being (50000, 32, 32, 3) whereas you have given input_shape=(3, 32, 32) in the first layer. The error simply says that the expected input shape and data given are different.

\n\n

All you need to do is give input_shape=(32, 32, 3). Also, if you use this shape, you must use tf as your image dim ordering: backend.set_image_dim_ordering('tf').

\n\n

Otherwise, you can permute the axes of the data:

\n\n
x_train = x_train.transpose(0,3,1,2)\nx_test = x_test.transpose(0,3,1,2)\nprint x_train.shape\n
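The same permutation on a small dummy array, to make the axis order explicit (10 samples instead of 50000 to keep it light):

```python
import numpy as np

# Dummy channels-last batch, as loaded by the dataset helper.
x_train = np.zeros((10, 32, 32, 3))

# Move the channel axis to the front: (N, H, W, C) -> (N, C, H, W).
x_train_cf = x_train.transpose(0, 3, 1, 2)
```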
\n", + "system": "" + }, + { + "instruction": "Out of memory using Keras on GPU", + "input": "", + "output": "

It says that your GPU is out of memory. So reduce the batch size, and don't load all the data onto the GPU at once through a shared variable; iterate over it in batches instead. Otherwise, find another GPU with higher memory capacity.

\n", + "system": "" + }, + { + "instruction": "Keras - Fusion of a Dense Layer with a Convolution2D Layer", + "input": "", + "output": "

I was working on an image colorization project and ended up facing the same fusion-layer problem; then I found a model containing a fusion layer.\nHere it is. Hope that answers your question to some extent.

\n\n
    embed_input = Input(shape=(1000,))\n    encoder_input = Input(shape=(256, 256, 1,))\n\n    #Encoder\n    encoder_output = Conv2D(64, (3,3), activation='relu', padding='same', strides=2,\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(encoder_input)\n    encoder_output = Conv2D(128, (3,3), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(encoder_output)\n    encoder_output = Conv2D(128, (3,3), activation='relu', padding='same', strides=2,\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(encoder_output)\n    encoder_output = Conv2D(256, (3,3), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(encoder_output)\n    encoder_output = Conv2D(256, (3,3), activation='relu', padding='same', strides=2,\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(encoder_output)\n    encoder_output = Conv2D(512, (3,3), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(encoder_output)\n    encoder_output = Conv2D(512, (3,3), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(encoder_output)\n    encoder_output = Conv2D(256, (3,3), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(encoder_output)\n\n    #Fusion\n    fusion_output = RepeatVector(32 * 32)(embed_input)\n    fusion_output = Reshape(([32, 32, 1000]))(fusion_output)\n    fusion_output = concatenate([encoder_output, fusion_output], axis=3)\n    fusion_output = Conv2D(256, (1, 1), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(fusion_output)\n\n    #Decoder\n    decoder_output = Conv2D(128, (3,3), 
activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(fusion_output)\n    decoder_output = UpSampling2D((2, 2))(decoder_output)\n    decoder_output = Conv2D(64, (3,3), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(decoder_output)\n    decoder_output = UpSampling2D((2, 2))(decoder_output)\n    decoder_output = Conv2D(32, (3,3), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(decoder_output)\n    decoder_output = Conv2D(16, (3,3), activation='relu', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(decoder_output)\n    decoder_output = Conv2D(2, (3, 3), activation='tanh', padding='same',\n                            bias_initializer=TruncatedNormal(mean=0.0, stddev=0.05))(decoder_output)\n    decoder_output = UpSampling2D((2, 2))(decoder_output)\n\n    model = Model(inputs=[encoder_input, embed_input], outputs=decoder_output)\n
\n\n

here is the source link: https://github.com/hvvashistha/Auto-Colorize

\n", + "system": "" + }, + { + "instruction": "How to create a loss function which changes over epoch in Keras", + "input": "", + "output": "

Looking at their documentation, they mention that you can use Theano/TF symbolic functions that return a scalar for each data point.\nSo you could do something like this:

\n\n
loss = tf.contrib.losses.softmax_cross_entropy(x, x_pred) * \n       (beta * current_epoch / 20 ) +  \n       tf.contrib.losses.mean_squared_error\n
\n\n
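Since the snippet above is pseudocode, here is the epoch-dependent weighting idea in plain numpy; the 20-epoch ramp mirrors the snippet, everything else (names, values) is illustrative:

```python
import numpy as np

def combined_loss(x, x_pred, epoch, beta=1.0):
    # Cross-entropy weight ramps up linearly over the first 20 epochs;
    # the MSE term stays constant.
    eps = 1e-12                               # avoid log(0)
    ce = -np.mean(x * np.log(x_pred + eps))   # cross-entropy term
    mse = np.mean((x - x_pred) ** 2)          # mean squared error term
    return ce * (beta * epoch / 20.0) + mse

x = np.array([[0.0, 1.0]])
x_pred = np.array([[0.2, 0.8]])
```

At epoch 0 only the MSE term contributes; by epoch 20 the cross-entropy term carries its full beta weight.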

You would have to pass x and x_pred as tf.placeholders.\nI think for model creation you could use Keras, but then again you would have to run the computational graph with sess.run().

\n\n

References:\nhttps://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html#using-keras-models-with-tensorflow

\n", + "system": "" + }, + { + "instruction": "Keras, best way to save state when optimizing", + "input": "", + "output": "
from keras.models import load_model\n\nmodel.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'\ndel model  # deletes the existing model\n\n# returns a compiled model\n# identical to the previous one\nmodel = load_model('my_model.h5')\n
\n\n
\n

You can use model.save(filepath) to save a Keras model into a single\n HDF5 file, which will contain the architecture of the model, its weights, the training configuration (loss, optimizer), and the state of the optimizer, allowing you to resume training exactly where you left off.

\n \n \n \n

You can then use keras.models.load_model(filepath) to reinstantiate your model. load_model will also take care of compiling the model using the saved training configuration (unless the model was never compiled in the first place).

\n
\n\n

Keras FAQ: How can I save a Keras model?

\n", + "system": "" + }, + { + "instruction": "Keras freezes when using ImageDataGenerator", + "input": "", + "output": "

Try verbose=2 as a parameter in fit_generator()

\n", + "system": "" + }, + { + "instruction": "How to load local files in Keras?", + "input": "", + "output": "

The purpose of get_file is downloading a file over the network.\nIf you want to access something on your file system, you don't need it.

\n\n

In the case of the text generator, just use something like:

\n\n
text = open(\"localfile.txt\").read().lower()\n
\n\n

and delete the line with path = get_file(...).

\n", + "system": "" + }, + { + "instruction": "How to save LambdaLR scheduler in pytorch with lambda function?", + "input": "", + "output": "

If one wishes to stay with the default behavior of torch.save and torch.load, the lambda function can be replaced with a class, for example:

\n\n
class LRPolicy(object):\n    def __init__(self, rate=30):\n        self.rate = rate\n\n    def __call__(self, epoch):\n        return epoch // self.rate\n
\n\n

The scheduler is now

\n\n
scheduler = LambdaLR(optimizer, lr_lambda=LRPolicy(rate=30))\n
\n\n
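That the class survives pickling (which torch.save uses by default) can be checked without PyTorch at all:

```python
import pickle

class LRPolicy:
    """Picklable replacement for `lambda epoch: epoch // rate`."""
    def __init__(self, rate=30):
        self.rate = rate

    def __call__(self, epoch):
        return epoch // self.rate

# Instances round-trip through pickle, unlike a lambda, which
# pickle cannot serialize by reference.
policy = pickle.loads(pickle.dumps(LRPolicy(rate=30)))
```

The restored object behaves exactly like the original callable.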

Now the scheduler can be torch.saved and torch.loaded without altering the pickling module.

\n", + "system": "" } ] \ No newline at end of file