Eren5717 committed on
Commit
1f4e140
·
1 Parent(s): cd1ed13
Files changed (1)
  1. data/dataset.json +90 -0
data/dataset.json CHANGED
@@ -37492,5 +37492,95 @@
37492
  "input": "",
37493
  "output": "<p>If one wishes to stay with default behavior of <code>torch.save</code> and <code>torch.load</code>, the lambda function can be replaced with a class, for example:</p>\n\n<pre><code>class LRPolicy(object):\n def __init__(self, rate=30):\n self.rate = rate\n\n def __call__(self, epoch):\n return epoch // self.rate\n</code></pre>\n\n<p>The scheduler is now</p>\n\n<pre><code>scheduler = LambdaLR(optimizer, lr_lambda=LRPolicy(rate=30))\n</code></pre>\n\n<p>Now the scheduler can be <code>torch.save</code>ed and <code>torch.load</code> without alternating the pickling module.</p>\n",
37494
  "system": ""
37495
+ },
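As a brief aside on why the class works where the lambda does not: the default pickler serializes a class instance by name but cannot serialize a lambda. A minimal stdlib-only sketch (no PyTorch needed for the illustration):

```python
import pickle

class LRPolicy:
    """Picklable replacement for a lambda LR schedule."""
    def __init__(self, rate=30):
        self.rate = rate

    def __call__(self, epoch):
        return epoch // self.rate

# pickle.dumps(lambda epoch: epoch // 30) would raise a PicklingError;
# the class instance round-trips cleanly:
policy = pickle.loads(pickle.dumps(LRPolicy(rate=30)))
print(policy(65))  # -> 2  (65 // 30)
```

This is exactly why `torch.save`/`torch.load`, which use `pickle` by default, accept the class-based policy unchanged.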
37496
+ {
37497
+ "instruction": "Incorporating user feedback in a ML model",
37498
+ "input": "",
37499
+ "output": "<p>There could be couple of ways to do this:</p>\n\n<p>1) You can incorporate the feedback that you get from the user to only train the last layer of your model, keeping the weights of all other layers intact. Intuitively, for example, in case of CNN this means you are extracting the features using your model but slightly adjusting the classifier to account for the peculiarities of your specific user.</p>\n\n<p>2) Another way could be to have a global model ( which was trained on your large training set) and a simple logistic regression which is user specific. For final predictions, you can combine the results of the two predictions. See <a href=\"http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36955.pdf\" rel=\"nofollow noreferrer\">this paper</a> by google on how they do it for their priority inbox.</p>\n",
37500
+ "system": ""
37501
+ },
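The "combine the results" step in option 2 can be sketched as a weighted blend of the two models' probabilities. Note the function name and the `user_weight` coefficient below are hypothetical illustrations, not taken from the cited paper:

```python
def combined_score(p_global, p_user, user_weight=0.3):
    """Blend a global model's probability with a user-specific model's.

    user_weight is a made-up mixing coefficient; in practice it might
    grow as the user provides more feedback.
    """
    return (1.0 - user_weight) * p_global + user_weight * p_user

# Global model is confident, user-specific model disagrees:
score = combined_score(0.8, 0.2)
print(round(score, 2))  # -> 0.62
```

A natural refinement is to anneal `user_weight` from 0 toward some cap as the per-user feedback count grows, so new users fall back to the global model.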
37502
+ {
37503
+ "instruction": "LSTM with Keras for mini-batch training and online testing",
37504
+ "input": "",
37505
+ "output": "<p>If I understand you correctly you are asking if you can enable statefulness after training. This should be possible, yes. For example:</p>\n\n<pre><code>net = Dense(1)(SimpleRNN(stateful=False)(input))\nmodel = Model(input=input, output=net)\n\nmodel.fit(...)\n\nw = model.get_weights()\nnet = Dense(1)(SimpleRNN(stateful=True)(input))\nmodel = Model(input=input, output=net)\nmodel.set_weights(w)\n</code></pre>\n\n<p>After that you can predict in a stateful way.</p>\n",
37506
+ "system": ""
37507
+ },
37508
+ {
37509
+ "instruction": "How to plot a learning curve for a keras experiment?",
37510
+ "input": "",
37511
+ "output": "<p>To get accuracy values, you need to request that they are calculated during <code>fit</code>, because accuracy is not an objective function, but a (common) metric. Sometimes calculating accuracy does not make sense, so it is not enabled by default in Keras. However, it is a built-in metric, and easy to add.</p>\n\n<p>To add the metric, <a href=\"https://keras.io/models/sequential/\" rel=\"nofollow noreferrer\">use <code>metrics=['accuracy']</code> parameter to <code>model.compile</code></a>.</p>\n\n<p>In your example: </p>\n\n<pre><code>history = model.fit(X_train, y_train, batch_size = 512, \n nb_epoch = 5, validation_split = 0.05)\n</code></pre>\n\n<p>You can then access validation accuracy as <code>history.history['val_acc']</code></p>\n",
37512
+ "system": ""
37513
+ },
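For context, `history.history` is a plain dict of per-epoch lists, so it can be inspected or plotted directly. A sketch with invented numbers (key names as in older Keras, where validation accuracy is `val_acc`; newer versions use `val_accuracy`):

```python
# Shape of the dict returned by model.fit(...).history (values invented):
history = {
    "loss":    [0.90, 0.60, 0.40, 0.30, 0.25],
    "acc":     [0.55, 0.70, 0.80, 0.85, 0.88],
    "val_acc": [0.50, 0.65, 0.74, 0.78, 0.80],
}

epochs = list(range(1, len(history["val_acc"]) + 1))
# Epoch with the best validation accuracy:
best_epoch = max(epochs, key=lambda e: history["val_acc"][e - 1])
print(best_epoch)  # -> 5
```

With matplotlib, plotting `history["acc"]` and `history["val_acc"]` against `epochs` on the same axes gives the usual learning curve.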
37514
+ {
37515
+ "instruction": "Keras. ValueError: I/O operation on closed file",
37516
+ "input": "",
37517
+ "output": "<p>Change your verbose level in \n<code>model.fit()</code>\nto \n <code>verbose=0.</code><br>\nSee <a href=\"http://github.com/fchollet/keras/issues/2110\" rel=\"noreferrer\">github.com/fchollet/keras/issues/2110</a></p>\n\n<p>It's not a straight-on \"fix\", but it should help alleviate a race condition associated with updating of the iPython console.</p>\n",
37518
+ "system": ""
37519
+ },
37520
+ {
37521
+ "instruction": "keras BLSTM for sequence labeling",
37522
+ "input": "",
37523
+ "output": "<p>Solved. The main issue was reshaping the data for the classification categories as binary arrays. Also used TimeDistributed and set return_sequences to True.</p>\n",
37524
+ "system": ""
37525
+ },
37526
+ {
37527
+ "instruction": "Keras IndexError: indices are out-of-bounds",
37528
+ "input": "",
37529
+ "output": "<p>Answer from the comment - trainx and trainy should be numpy arrays. You can convert the data frame to numpy array using <code>as_matrix()</code> method. I also faced this issue. It's weird that Keras does not give meaningful error message.</p>\n",
37530
+ "system": ""
37531
+ },
37532
+ {
37533
+ "instruction": "Keras: How to feed input directly into other hidden layers of the neural net than the first?",
37534
+ "input": "",
37535
+ "output": "<p>You didn't show which kind of model you use to me, but I assume that you initialized your model as <code>Sequential</code>. In a <code>Sequential</code> model you can only stack one layer after another - so adding a &quot;short-cut&quot; connection is not possible.</p>\n<p>For this reason authors of Keras added option of building &quot;graph&quot; models. In this case you can build a graph (DAG) of your computations. It's a more complicated than designing a stack of layers, but still quite easy.</p>\n<p>Check the documentation <a href=\"http://keras.io/models/#using-the-graph-model\" rel=\"nofollow noreferrer\">site</a> to look for more details.</p>\n",
37536
+ "system": ""
37537
+ },
37538
+ {
37539
+ "instruction": "keras giving same loss on every epoch",
37540
+ "input": "",
37541
+ "output": "<p>I ran into this issue as well. After much deliberation, I figured out that it was my <strong>activation function on my output layer</strong>. </p>\n\n<p>I had this model to predict a binary outcome:</p>\n\n<pre><code>model = Sequential()\nmodel.add(Dense(16,input_shape=(8,),activation='relu'))\nmodel.add(Dense(32,activation='relu'))\nmodel.add(Dense(32,activation='relu'))\nmodel.add(Dense(1, activation='softmax'))\n</code></pre>\n\n<p>and I needed this for binary cross entropy</p>\n\n<pre><code>model = Sequential()\nmodel.add(Dense(16,input_shape=(8,),activation='relu'))\nmodel.add(Dense(32,activation='relu'))\nmodel.add(Dense(32,activation='relu'))\nmodel.add(Dense(1, activation='sigmoid'))\n</code></pre>\n\n<p>I would look towards the problem you are trying to solve and the output needed to ensure that your activation functions are what they need to be.</p>\n",
37542
+ "system": ""
37543
+ },
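The root cause generalizes: softmax normalizes over the units of a layer, so over a single unit it always outputs exactly 1, and the loss can never move. A quick stdlib check:

```python
import math

def softmax(xs):
    """Softmax over a list of logits."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Softmax over one unit is constant, regardless of the logit:
print(softmax([3.7]))    # -> [1.0]
print(softmax([-50.0]))  # -> [1.0]

# Sigmoid actually responds to the logit, so gradients can flow:
print(sigmoid(0.0))      # -> 0.5
```

This is why a 1-unit output layer for binary classification needs `sigmoid` (with binary cross-entropy), while `softmax` only makes sense with 2+ units.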
37544
+ {
37545
+ "instruction": "Multiplying the output of two layers in keras",
37546
+ "input": "",
37547
+ "output": "<p>You can use this</p>\n\n<p>main_branch.add(Merge([branch_1, branch_2], mode = 'dot'))</p>\n",
37548
+ "system": ""
37549
+ },
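For reference, `Merge` with `mode` strings is the old Keras 1.x API; newer versions expose layers such as `multiply` and `dot` in `keras.layers`. What the two modes compute, sketched in plain Python on made-up branch outputs:

```python
a = [0.5, 2.0, 3.0]   # output of branch_1 (invented values)
b = [4.0, 0.5, 2.0]   # output of branch_2

elementwise = [x * y for x, y in zip(a, b)]  # what mode='mul' computes
dot = sum(x * y for x, y in zip(a, b))       # what mode='dot' computes

print(elementwise)  # -> [2.0, 1.0, 6.0]
print(dot)          # -> 9.0
```

So pick `'mul'` if the next layer should see a vector, `'dot'` if it should see a single similarity score per sample.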
37550
+ {
37551
+ "instruction": "Train and predict on variable length sequences",
37552
+ "input": "",
37553
+ "output": "<p>I don't have the specific of your model, but TF implementation of LSTM usually expect <code>(batch, seq, features)</code> as input.</p>\n<p>Now lest assume this is one of your batch_id:</p>\n<pre><code>data = np.zeros((15,5))\n\narray([[0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0.]])\n</code></pre>\n<p>You could reshape it with <code>(1, 15, 5</code>) and feed it to the model, but anytime your <code>batch_id</code> length vary your sequence length will vary too and your model expect a fix sequence.</p>\n<p>Instead you could reshape your data before training so that the batch_id length is passed as the batch size:</p>\n<pre><code>data = data[:,np.newaxis,:] \n\narray([[[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]],\n\n [[0., 0., 0., 0., 0.]]])\n</code></pre>\n<p>Same data, with shape (15, 1, 5) but your model would now be looking at a fix length of <code>1</code> and the number of sample would vary.</p>\n<p>Make sure to reshape your <code>label</code> as well.</p>\n<p>To my knowledge, RNN and LSTM being applied for each time steps and state being reset between bacthes only this should not impact the model behavior.</p>\n",
37554
+ "system": ""
37555
+ },
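The reshape at the heart of that answer, `(15, 5)` to `(15, 1, 5)`, can be sketched without NumPy so the index bookkeeping is explicit:

```python
# 15 time steps of 5 features, as in the answer's zero array
data = [[0.0] * 5 for _ in range(15)]   # shape (15, 5)

# Equivalent of data[:, np.newaxis, :]: each time step becomes its own
# length-1 sequence, so the variability moves into the batch dimension
reshaped = [[row] for row in data]      # shape (15, 1, 5)

print(len(reshaped), len(reshaped[0]), len(reshaped[0][0]))  # -> 15 1 5
```

The total number of values is unchanged; only which axis the model treats as "batch" versus "sequence" moves.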
37556
+ {
37557
+ "instruction": "Data Augmentation on tf.dataset.Dataset",
37558
+ "input": "",
37559
+ "output": "<p>You can use <a href=\"https://www.tensorflow.org/api_docs/python/tf/image\" rel=\"noreferrer\">tf.image</a> functions. The <code>tf.image</code> module contains various functions for image processing.</p>\n\n<p><strong>For example:</strong></p>\n\n<p>You can add below functionality in your function <code>def get_dataset</code>.</p>\n\n<ul>\n<li>convert each image to <code>tf.float64</code> in the <code>0-1</code> range.</li>\n<li><code>cache()</code> results as those can be re-used after each <code>repeat</code></li>\n<li>randomly flip left_to_right each image using <code>random_flip_left_right</code>.</li>\n<li>randomly change contrast of image using <code>random_contrast</code>.</li>\n<li>Number of images increased by twice by <code>repeat</code> which repeat all the steps.</li>\n</ul>\n\n<p><strong>Code -</strong></p>\n\n<pre><code>mnist_train = mnist_train.map(\n lambda image, label: (tf.image.convert_image_dtype(image, tf.float32), label)\n).cache(\n).map(\n lambda image, label: (tf.image.random_flip_left_right(image), label)\n).map(\n lambda image, label: (tf.image.random_contrast(image, lower=0.0, upper=1.0), label)\n).shuffle(\n 1000\n).\nbatch(\n batch_size\n).repeat(2)\n</code></pre>\n\n<p>Similarly you can use other functionalities like <code>random_flip_up_down</code>, <code>random_crop</code> functions to Randomly flips an image vertically (upside down) and Randomly crop a tensor to a given size respectively.</p>\n\n<hr>\n\n<p>Your <code>get_dataset</code> function will look like below - </p>\n\n<pre><code>def get_dataset(batch_size=200):\n datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True,\n try_gcs=True)\n mnist_train, mnist_test = datasets['train'], datasets['test']\n\n train_dataset = mnist_train.map(\n lambda image, label: (tf.image.convert_image_dtype(image, tf.float32),label)\n ).cache(\n ).map(\n lambda image, label: (tf.image.random_flip_left_right(image), label)\n ).map(\n lambda image, label: 
(tf.image.random_contrast(image, lower=0.0, upper=1.0), label)\n ).shuffle(\n 1000\n ).batch(\n batch_size\n ).repeat(2)\n\n test_dataset = mnist_test.map(scale).batch(batch_size)\n\n return train_dataset, test_dataset\n</code></pre>\n\n<p>Adding the link suggested by @Andrew H that gives end-to-end example on <a href=\"https://www.tensorflow.org/tutorials/images/data_augmentation#augment_a_dataset_and_train_a_model_with_it\" rel=\"noreferrer\">data augmentation</a> that also uses <code>mnist</code> dataset. </p>\n\n<p>Hope this answers your question. Happy Learning.</p>\n",
37560
+ "system": ""
37561
+ },
37562
+ {
37563
+ "instruction": "How to use keras.utils.Sequence data generator with tf.distribute.MirroredStrategy for multi-gpu model training in tensorflow?",
37564
+ "input": "",
37565
+ "output": "<p>I used <code>tf.data.Dataset.from_generator</code> with my <code>keras.utils.sequence</code> in the same situation, and it solved my issues!</p>\n\n<pre><code>train_generator = SegmentationMultiGenerator(datasets, folder) # My keras.utils.sequence object\n\ndef generator():\n multi_enqueuer = OrderedEnqueuer(train_generator, use_multiprocessing=True)\n multi_enqueuer.start(workers=10, max_queue_size=10)\n while True:\n batch_xs, batch_ys, dset_index = next(multi_enqueuer.get()) # I have three outputs\n yield batch_xs, batch_ys, dset_index\n\ndataset = tf.data.Dataset.from_generator(generator,\n output_types=(tf.float64, tf.float64, tf.int64),\n output_shapes=(tf.TensorShape([None, None, None, None]),\n tf.TensorShape([None, None, None, None]),\n tf.TensorShape([None, None])))\n\nstrategy = tf.distribute.MirroredStrategy()\n\ntrain_dist_dataset = strategy.experimental_distribute_dataset(dataset)\n</code></pre>\n\n<p>Note that this is my first working solution - at the moment I have found it most convenient to just put 'None' in the place of the real output shapes, which I have found to work. </p>\n",
37566
+ "system": ""
37567
+ },
37568
+ {
37569
+ "instruction": "What is the best activation function to use for time series prediction",
37570
+ "input": "",
37571
+ "output": "<p>Although there is no best activation function as such, I find <a href=\"https://arxiv.org/abs/1710.05941\" rel=\"nofollow noreferrer\"><code>Swish</code></a> to work particularly well for Time-Series problems. AFAIK keras doesn't provide <code>Swish</code> builtin, you can use:</p>\n\n<pre><code>from keras.utils.generic_utils import get_custom_objects\nfrom keras import backend as K\nfrom keras.layers import Activation\n\ndef custom_activation(x, beta = 1):\n return (K.sigmoid(beta * x) * x)\n\nget_custom_objects().update({'custom_activation': Activation(custom_activation)})\n</code></pre>\n\n<p>Then use it in model:</p>\n\n<pre><code>model.add(Activation(custom_activation,name = \"Swish\"))\n</code></pre>\n",
37572
+ "system": ""
37573
+ },
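The custom activation above is Swish with `beta=1`, i.e. `x * sigmoid(beta * x)`. A framework-free sketch of the same function, useful for sanity-checking its shape:

```python
import math

def swish(x, beta=1.0):
    """Swish: x * sigmoid(beta * x), equal to x / (1 + e^(-beta * x))."""
    return x / (1.0 + math.exp(-beta * x))

print(swish(0.0))             # -> 0.0 (zero maps to zero)
print(round(swish(10.0), 2))  # -> 10.0 (approximately identity for large x)
```

Unlike ReLU, Swish is smooth and slightly negative for small negative inputs, which is one reason it is often tried as a drop-in replacement.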
37574
+ {
37575
+ "instruction": "batch normalization, yes or no?",
37576
+ "input": "",
37577
+ "output": "<p><strong>Yes.</strong></p>\n\n<p>The behavior you're observing is a bug - and you don't need BN to see it; plot to the left is for <code>#V1</code>, to the right is for <code>#V2</code>:</p>\n\n<p><a href=\"https://i.sstatic.net/xSP2v.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.sstatic.net/xSP2v.png\" alt=\"enter image description here\"></a></p>\n\n<pre class=\"lang-py prettyprint-override\"><code>#V1\nmodel = Sequential()\nmodel.add(Dense(8, input_shape=(12,)))\n#model.add(Activation('linear')) &lt;-- uncomment == #V2\nmodel.add(Dense(12))\nmodel.compile(optimizer='adam', loss='mean_squared_error')\n</code></pre>\n\n<p>Clearly nonsensical, as <code>Activation('linear')</code> after a layer with <code>activation=None</code> (==<code>'linear'</code>) is an <em>identity</em>: <code>model.layers[1].output.name == 'activation/activation/Identity:0'</code>. This can be confirmed further by fetching and plotting intermediate layer outputs, which are identical for <code>'dense'</code> and <code>'activation'</code> - will omit here.</p>\n\n<p>So, the activation does literally nothing, except it doesn't - somewhere along the commit chain between 1.14.0 and 2.0.0, this was fixed, though I don't know where. Results w/ BN using TF 2.0.0 w/ Keras 2.3.1 below:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>val_loss = 0.840 # without BN\nval_loss = 0.819 # with BN\n</code></pre>\n\n<p><a href=\"https://i.sstatic.net/fdynR.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.sstatic.net/fdynR.png\" alt=\"enter image description here\"></a></p>\n\n<hr>\n\n<p><strong>Solution</strong>: update to TensorFlow 2.0.0, Keras 2.3.1.</p>\n\n<p><strong>Tip</strong>: use <a href=\"https://anaconda.org/john.muradeli/dashboard\" rel=\"nofollow noreferrer\">Anaconda</a> w/ virtual environment. 
If you don't have any virtual envs yet, run:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>conda create --name tf2_env --clone base\nconda activate tf2_env\nconda uninstall tensorflow-gpu\nconda uninstall keras\nconda install -c anaconda tensorflow-gpu==2.0.0\nconda install -c conda-forge keras==2.3.1\n</code></pre>\n\n<p>May be a bit more involved than this, but that's subject of another question.</p>\n\n<hr>\n\n<p><strong>UPDATE</strong>: importing from <code>keras</code> instead of <code>tf.keras</code> also solves the problem. </p>\n\n<hr>\n\n<p><strong>Disclaimer</strong>: BN remains a 'controversial' layer in Keras, yet to be fully fixed - see <a href=\"https://github.com/keras-team/keras/issues/12400\" rel=\"nofollow noreferrer\">Relevant Git</a>; I plan on investigating it myself eventually, but for your purposes, this answer's fix should suffice. </p>\n\n<p>I also recommend familiarizing yourself with BN's underlying theory, in particular regarding its train vs. inference operation; in a nutshell, batch sizes under 32 is a pretty bad idea, and dataset should be sufficiently large to allow BN to accurately approximate test-set <code>gamma</code> and <code>beta</code>.</p>\n\n<hr>\n\n<p><strong>Code used:</strong></p>\n\n<pre class=\"lang-py prettyprint-override\"><code>x_train=np.random.normal(0, 1, (100, 12))\n\nmodel = Sequential()\nmodel.add(Dense(8, input_shape=(12,)))\n#model.add(Activation('linear'))\n#model.add(tf.keras.layers.BatchNormalization())\nmodel.add(Dense(12))\nmodel.compile(optimizer='adam', loss='mean_squared_error')\n\nW_sum_all = [] # fit rewritten to allow runtime weight collection\nfor _ in range(20):\n for i in range(9):\n x = x_train[i*10:(i+1)*10]\n model.train_on_batch(x, x)\n\n W_sum_all.append([])\n for layer in model.layers:\n if layer.trainable_weights != []:\n W_sum_all[-1] += [np.sum(layer.get_weights()[0])]\nmodel.evaluate(x[-10:], x[-10:])\n\nplt.plot(W_sum_all)\nplt.title(\"Sum of weights (#V1)\", 
weight='bold', fontsize=14)\nplt.legend(labels=[\"dense\", \"dense_1\"], fontsize=14)\nplt.gcf().set_size_inches(7, 4)\n</code></pre>\n\n<p><strong>Imports/pre-executions:</strong></p>\n\n<pre class=\"lang-py prettyprint-override\"><code>import numpy as np\nnp.random.seed(1)\nimport random\nrandom.seed(2)\nimport tensorflow as tf\nif tf.__version__[0] == '2':\n tf.random.set_seed(3)\nelse:\n tf.set_random_seed(3)\n\nimport matplotlib.pyplot as plt\nfrom tensorflow.keras.models import Model, Sequential\nfrom tensorflow.keras.layers import Input, Dense, Activation\n</code></pre>\n",
37578
+ "system": ""
37579
+ },
37580
+ {
37581
+ "instruction": "Tensorflow Keras load_model from Memory or Variable?",
37582
+ "input": "",
37583
+ "output": "<p>Depending on the format that outputs the function <code>decrypt_func</code> it is possible to use <code>h5py</code> for loading the decriypted stream and then use the <code>keras.models.load_model</code> function to load the model, which supports <code>h5py.File</code> object type as input model apart from your mentioned string, path to the saved model.</p>\n\n<pre><code>with open('model.hdf5', 'rb') as f_hdl:\n h5 = decrypt_func(f_hdl.read())\n with h5py.File(h5, 'r') as h5_file:\n model = keras.models.load_model(h5_file)\n</code></pre>\n",
37584
+ "system": ""
37585
  }
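One caveat on the answer above: if `decrypt_func` returns raw bytes rather than a file-like object, they can be wrapped in `io.BytesIO` first, since `h5py.File` accepts file-like objects (h5py 2.9+). A stdlib-only sketch where `decrypt_func` is a hypothetical stand-in for the real decryption routine:

```python
import io

def decrypt_func(ciphertext: bytes) -> bytes:
    # Stand-in for the real decryption routine (hypothetical):
    # here it just returns its input unchanged.
    return ciphertext

# First bytes of the HDF5 file signature, standing in for a model file
decrypted = decrypt_func(b"\x89HDF\r\n\x1a\n")
buf = io.BytesIO(decrypted)  # file-like, never written to disk in the clear

# h5py.File would accept buf here, e.g.:
#   model = keras.models.load_model(h5py.File(buf, "r"))
print(buf.read()[:4])  # -> b'\x89HDF'
```

The advantage of the `BytesIO` route is that the decrypted model never touches disk.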
37586
  ]